InfoQ have recently published a video of a very interesting interview with Rich Hickey, creator of the Clojure programming language. Conveniently, they also offer a transcript of the entire interview, but as is customary with their transcripts, it is placed inside a very small text field, making for very awkward reading. Here's a copy of that transcript, extracted from http://www.infoq.com/interviews/hickey-clojure
Q: We are here at QCon London 2009, with Rich Hickey. Rich, who are you?
A: I'm the author of Clojure. I'm an independent developer, contractor, consultant and,
basically, a practitioner. I'm not a researcher.
Q: What brought you to create a Lisp? Do you go out in the morning and say "Hey, I'm bored!
Let's create the Lisp!"? What's the intention behind it?
A: I don't think I necessarily set out to create a Lisp specifically. I set out to create a language to deal with the problems I was contending with in Java and C# for the kinds of applications that I write, which are broadcast automation and scheduling and elections systems and things like that, where there is a lot of concurrency. I found that object oriented programming and the approaches to concurrency of those languages are just not good enough for those kinds of problems - they are too hard. I'm a big fan of Lisp and the other functional languages and what I wanted to do was solve those problems, make a practical language, and not have to program in Java anymore.
Q: It sounds like a good idea. You mentioned a thing you had a problem with: object
oriented programming. Is that the right way to say that?
A: Yes. Not with the idea behind it, but the realization of object oriented programming in the pragmatic languages that are used today has some problems, definitely for concurrency,
because the way objects are done encourages mutable state and that's just - I think - in
direct conflict with the solutions we need for concurrency.
Q: Are you more interested in an object oriented model that is closer to what Erlang does,
sort of an actor style, or like Common Lisp, or what would you be interested in? Would you
say you use object oriented programming?
A: I have, sure. A lot of Clojure's abstractions are defined with Java interfaces, so I
think interfaces are a neat thing and one of the better parts of Java and of object
orientation. I haven't thought a lot about how I would do objects in Clojure or immutable
objects, but it's possible, for instance right now, in Clojure to define Java objects that
are, in fact, only mutable inside transactions. It's possible to bridge the 2 worlds, but
the big problem is less with objects than it is with mutability. Even if you set
concurrency aside, when you build a large program with objects, you end up with this giant
graph of interconnected mutable things and that's just a very hard thing to understand and
maintain - that's what I want to get away from.
Q: You mentioned transactions, which brings us to STM - Software Transactional Memory.
What's your elevator pitch for STM?
A: I don't know that I have one. I don't actually want to be an evangelist for STM. I think
one of the problems we have today in talking about STM is that we say "STM" and presume
that means something in particular, but there are many flavors of STM, there are many
different objectives in STM designs. I think there is a fair amount of research oriented at
making STMs that allow you to essentially program the way you do today and magically put
transaction start and end around it and everything will be OK. I don't have high hopes for
those approaches. Then, I think there are language specific approaches, like that in
Haskell, that are really nice and elegant. Clojure is quite different from both of those. I
don't think STM labels anything other than the general approach of trying to say "We are
going to contain mutation inside some sort of a scope." What does that mean? That's not an
elevator pitch, is it?
Q: What's your elevator explanation of STM? What would be a short explanation of what
goes on in your STM implementation? What happens?
A: Good things happen. Essentially, what STM allows you to do is to have the language
provide a construct that handles the hard parts of coordinating change. Fundamentally,
transactions are about the harder class of changes. It's pretty easy to make language
features that deal with atomic change. We change this one thing and it doesn't have to be
coordinated with anything else, but when you have coordinated change, you want to say "I
want to move this object from one collection to another and not have it appear in both and
not have it disappear from both", you need to coordinate that change into 2 things within a
single logical unit of work. Typically, in the absence of something like transactions, the
management of that falls on the programmer, with locks or some technique like that. I guess
the short pitch for STM is it allows you to do coordinated change without the complexity of
locking.
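For illustration, a minimal Clojure sketch of that kind of coordinated change, using refs and dosync (the inbox/archive names are invented):

    ;; Two refs whose changes must be coordinated.
    (def inbox   (ref #{:msg-1 :msg-2}))
    (def archive (ref #{}))

    (defn move-to-archive [msg]
      ;; dosync runs the body as one transaction: the message leaves inbox
      ;; and enters archive as a single logical unit of work, without the
      ;; programmer managing any locks.
      (dosync
        (alter inbox disj msg)
        (alter archive conj msg)))

    (move-to-archive :msg-1)
    ;; @inbox   => #{:msg-2}
    ;; @archive => #{:msg-1}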
Q: Can STM span threads or should it be on the same thread all the time?
A: A transaction happens in one thread, but multiple transactions can be occurring on
multiple threads.
Q: Is it possible to implement it on several threads because it could solve some problems
in the web? We have several requests and, if we could span transactions to several threads,
it could solve some problems when we have partial states on several requests coming from
the web. Do you think it could be possible to span several threads with the transaction?
A: A single transaction? You could always take some of your work in a transaction and
parallelize it - that's possible right now - but the other threads are not coordinated
with the transactional data. You can calculate your answer in multiple threads and then use
the transactional thread to commit it.
Q: About STM again: is there any label for your STM? Are there any keywords that describe the STM? I saw you had the acronym MVCC.
A: There are a couple of things that I think distinguish Clojure STM. The first is it's not
designed to solve the problem of "Let me do what I've always been doing, which is I've
designed an object and it has 6 fields in it and I'm going to independently access those
fields at different times", with STM transactions encapsulating any access to those pieces and making it a single unit of work. Instead, the Clojure model is definitely
oriented towards programming with values, so it's a functional style model. What we'd like
to do is say that what would have been an object is going to be an immutable value and
instead the STM cell is merely a reference to an immutable value. Once you do that, you can
do a couple of things. First of all, that's a more coarse-grained STM and Clojure is
definitely more coarse-grained. The other thing is that, once you start working with immutable data structures that are persistent, because they are persistent, making copies is inexpensive and keeping all the versions around is also inexpensive. What multi-version concurrency control does is it will actually keep older versions of some references around in order to satisfy read-transactions and allow write-transactions to progress. Multi-version concurrency control is a database technique: it is used by databases where they keep records around in order to satisfy longer running transactions and allow new writes to continue.
The advantages of multi-version concurrency control are you don't need to do read-tracking,
so you don't need to keep a record in the transaction of what was read, you only need to
keep a record of what was written. The other thing is that writers don't impede readers, so
there is actually a lot more concurrency, I think, in multi-version concurrency control.
That's what distinguishes Clojure's STM, because your references are to the persistent data
structures. It's inexpensive for me to keep older revisions and I determine how many to
keep dynamically; therefore, you get a lot more concurrency.
Q: You talked about readers not impeding writers. Does that mean if I read from a
reference, which is the STM construct you use, there is no locking? Is that right?
A: No, there is locking inside the implementation. What there isn't is sequencing. It isn't like a reader has to wait for a writer, or a reading transaction is going to have to restart due to the activity of a writer. That will only happen if there is insufficient history, and as that happens, the history grows to accommodate that read pattern.
Q: You already mentioned persistent data structures. Maybe you could give a short
explanation?
A: They are immutable. Generally you talk about them as collections, so you consider it an immutable collection where making copies is inexpensive: instead of changing the collection in place, operations produce a new collection, so it's functional, but
there are a couple of characteristics that have to be guaranteed by that. First is that the
old version still has to be accessible after you've made a changed version. The other is
that both the old and the new version meet the performance guarantees you would expect from
that data structure. There can't be full copying going on because that would violate the
performance guarantees of log(n) or whatever your expectation was for that data structure.
Most functional data structures are persistent in that way, but you could make a naïve one
that was just copy on write - that wouldn't be persistent.
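As a small sketch of what "persistent" means in practice (the values are arbitrary):

    (def v1 [1 2 3])
    (def v2 (conj v1 4))   ;; "changing" v1 produces a new vector
    ;; v1 => [1 2 3]        the old version is still intact and usable
    ;; v2 => [1 2 3 4]      the new version shares structure with the old one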
Q: You offer persistent data structures for vectors?
A: Vectors and HashMaps and Sets and all the good stuff.
Q: Do they have the same access characteristics as the Java collections?
A: They have similar access characteristics. I mean, technically, a Java collection HashMap
or ArrayList is going to have constant time access and Clojure access is not technically
constant time, but the trees that are used are extremely shallow. You can say "OK,
technically it's O(log n), but if it's never more than three hops from my huge data
structure, maybe I don't care anymore about that." In other words, constant factors matter.
Q: You mentioned in your talk yesterday about Clojure that you had agents and references
and atoms. What would you call them? Concurrency primitives?
A: They are all called reference types in Clojure, but they are essentially all boxes that refer to an immutable thing. Each has a different way of managing time and how
things are shared.
Q: You said you had 4 right now and you were going to add a 5th? Could we have the
exclusive information: what's the 5th one?
A: The 5th will probably be some flavor of a Safe Mutex. Right now the way atoms work is
they work a little bit like transactions, in that you are going to have atomic access to an
individual thing, so you are going to maybe get to run some work and attempt to put the
results of that work back in the atom, but you may not succeed because someone has
intervened. Your work will get re-done. There are cases in which that's fine. It's actually
very fast to do, but there are other cases in which you really only want that work to ever
happen once, which means that you not only have to encapsulate the change to the cell, but
encapsulate the work within some sort of a boundary.
That's what locks are traditionally used for, but the problem with locks is they can nest
and then you have lock order acquisition problems. I have a few ideas for doing mutexes.
The simplest of which would be just to disallow a subsequent acquisition, so a non-nestable mutex - it's a handy primitive thing. It just keeps you from making that mistake. Other possible flavors of that kind of reference cell would be ones where you have a declared order. The API could enforce lock acquisition order. I haven't really thought about the more involved deadlock-detecting ones, but that is also a possible thing, to make a primitive that would have deadlock detection.
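Going back to how atoms retry, a small sketch (the hits counter is invented): swap! applies a function to the current value and simply re-runs it if another thread intervened.

    (def hits (atom 0))
    (swap! hits inc)    ;; => 1   (re-run automatically on contention)
    (swap! hits + 10)   ;; => 11
    @hits               ;; => 11, read directly, no transaction needed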
Q: One of your constructs is agents. What's a short explanation for agents? What do you use
them for?
A: Agents are similar to actors, somewhat close to the actor model. The difference is that they are designed, as all the rest of the concurrency primitives are, for the same-process problem. By not being distributable, you have one additional feature you can add over actors, which is direct access to the state, without sending and waiting for a message cycle. I think that's
an important facility to have in process, because otherwise you are competing with
concurrency primitives that allow that kind of access.
I don't think readers should have to wait for writers as a general rule, I like constructs
that don't enforce that. An agent is something that is asynchronous. You say it manages its
own state, it's completely independent; you can't coordinate change amongst agents. Like
the other constructs, you make a change by sending it a function and maybe some arguments
and that function is going to be applied to the state of the agent and become the new state
of the agent. The thing about agents is that they are asynchronous. You ask for this change
to be made and you return right away; at some point in the future, the change will be made
in a thread pool.
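A small sketch of an agent: send returns immediately, the function runs later on a thread pool, and the state can always be read directly (await is used here only to make the example deterministic; the counter is invented):

    (def counter (agent 0))
    (send counter inc)   ;; asynchronous: returns right away
    (send counter + 5)
    (await counter)      ;; block until the queued actions have run
    @counter             ;; => 6, direct access to the state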
Q: So agents are reactive?
A: You can build systems that appear like reactive systems. You can use them for event
style programming - things like that. Once you embrace concurrency and asynchrony, this is
a nice model to have, where things are genuinely independent.
Q: I saw a thing on the newsgroup or your mailing list that some people are working on an
implementation of cells, which is - I think - a GUI framework using data flow methods?
A: Cells is a spreadsheet-like, reactive-programming, dataflow thing from Common Lisp. That kind of programming is really easy to do in Clojure. The traditional models have been highly synchronous - you make a change to data and it flows through a network. What I'm excited about is the potential for concurrent designs for those kinds of things. The infrastructure is in place so that you can attach a notification function to any of the reference types and be told when they change - that sort of low-level plumbing that you can use for any number of different reactive programming techniques, because you can find out about change. In addition, it allows you to make orthogonal decisions about the management of something and then, who cares about that? They don't need to know about each other.
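That notification plumbing can be sketched with the add-watch function found in later Clojure releases (the :logger key and the temperature atom are invented):

    (def temperature (atom 20))

    ;; The watch function is called with the key, the reference, and the
    ;; old and new values whenever the reference changes.
    (add-watch temperature :logger
      (fn [key ref old new]
        (println "changed from" old "to" new)))

    (reset! temperature 25)   ;; prints: changed from 20 to 25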
Q: You said "concurrent implementation of cells or of these features". What do you mean by
that?
A: What I mean is being amenable to the fact that multiple threads of control could be
modifying the network of cells at the same time.
Q: Let's move on to some other language features in Clojure. I saw a very interesting
feature called metadata. What do you do with it? How do you use it?
A: Clojure uses it specifically for type hints and for communicating to the compiler. The
general problem it solves is this: Clojure programs are represented as data structures. You
write those data structures in a file, potentially, and that gets read by the reader and
the data structure is presented to the compiler. That model is really nice, but there are
often other things you would like to communicate. For instance, it would be nice to
communicate the line numbers from that file, which would have to have some place to go on
those data structures, but they are not part of the data, they are really about the data.
I got this data structure from line 27. So that creates the need for metadata, so
automatically Clojure needs metadata. It ends up being a really handy way to extend the way
the programmer can communicate about the program without really changing the syntax of the
language. For instance, they can also use metadata to adorn some symbols in their code to
provide type hints to the compiler, which can be used to optimize calls to Java to make
sure they are not reflective. What I did is, instead of just making language syntax for annotations to talk to the compiler - something very specific - I made a general way to put metadata on data. Now you can use it for line numbers, you can use it to communicate with
the compiler and it's generally available to programmers to use for whatever they invent.
The use of metadata is open, it's not dictated by a language and there are lots of
interesting things you can do because you always have metadata. I got this piece of
information from the Internet, from a trusted source, from an untrusted source. It's not
going to be valid after this date. Those are all things about data that are not the data itself; now they have a place to live.
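A small sketch of metadata in use (the map contents are illustrative, and the ^String hint shorthand is the syntax of later Clojure versions):

    ;; Metadata is about the data, not part of it.
    (def data (with-meta [1 2 3] {:source "internet" :trusted false}))
    (meta data)   ;; => {:source "internet", :trusted false}

    ;; A type hint is metadata on a symbol that lets the compiler avoid reflection.
    (defn shout [^String s]
      (.toUpperCase s))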
Q: That's an interesting feature, because languages like Perl or Ruby have the "tainted" feature. If you get a string or data from the net, it's marked as "tainted". It's a special feature of String or wherever it's defined, so you could do that with metadata.
A: Right! Not for strings, though.
Q: Do you currently support metadata on symbols?
A: Symbols and Clojure collection types.
Q: Do you plan to add it to more constructs or will that be limited?
A: I guess you do have metadata on the reference types and you also have metadata on namespaces, so I've added it to a couple of things. I can't add it to the things I can't control. String would be one of those things.
Q: Because string is the native string type in Java.
A: Yes, it's the native string from Java. I don't own it.
Q: Would that be the same for numbers?
A: Correct - the same thing for numbers.
Q: Metadata is always just data, it never directly influences the evaluation of the code,
at least for Clojure?
A: Like I said, you can use metadata to communicate with the compiler if you adorn your
original source forms with metadata, but during the running program, right now, the
metadata is not used. There actually is a new library function that uses some metadata if
it's available. A lot of people are using metadata as a convention to tag things with what
would have been types in other languages. There is some library support for finding out
those types and if there are other common usages of metadata, they may work their way into
libraries, but no, generally it's transparent and it's certainly transparent to the
operational semantics of the value types themselves. In other words, metadata does not
participate in value comparisons - equality, for instance.
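Concretely, as a one-line sketch:

    ;; Metadata does not participate in equality.
    (= (with-meta [1 2] {:trusted false}) [1 2])   ;; => true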
Q: Moving on to another feature that brings polymorphism to Clojure, multimethods - to an
object oriented programmer, how would you explain multimethods?
A: When you think about polymorphism, generally you are thinking about some Runtime
dispatch. Something different happens at Runtime, depending on some characteristic of the
object. It ends up that in object oriented programming languages like Java and C#, there is
only one thing that can be a criterion for something different happening and that's the
class or type of the object. A general way of thinking about that is saying you have a
method call and it involves an object and some arguments and the actual dispatch is going
to differ, depending on the type of this.
You can generalize that, and that was done in Common Lisp, to say it might be interesting to have things be polymorphic based upon more than just the first argument. So, we are no longer considering the first argument to be special and we'll allow you to take some call
and look at all the arguments and do something based upon all the arguments. There too,
though, there are some things that are hardwired, for instance, in Common Lisp you can only
dispatch on either the type of arguments or their values. You can say "If it's equal to
this or if it's of this type do this", but you can do it for any or all the arguments.
Again, we are talking about dispatch based upon some function of the arguments.
Clojure multimethods are just another level of that same logic, in fact they are a
realization of the last sentence I just said. They are dispatch based upon an arbitrary
function of the arguments. You define a multimethod and you say "Here is a function of the
arguments I'd like you to use." You could look at the first argument, you could look at the
5th, you could look at all of them, you could look inside them, some member of an argument,
it could look at the types or not or the values. Now, you could look at relationships
between arguments, you have dispatch based upon an arbitrary function of the arguments and
you have a vastly wider set of polymorphic possibilities than you had before and it's quite
powerful. In particular, it allows you to do Runtime dispatch on Runtime attributes. You
don't usually represent something like being hungry as part of something's type, it's some
attribute that it acquires while the program is running or being outdated or things like
that. Now you can access those things and you can do things polymorphically based upon that
and take a lot of switch statements out of your code.
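For illustration, a hedged sketch of a multimethod dispatching on a runtime attribute rather than on a type (the :mood key and the animal maps are invented):

    (defmulti feed (fn [animal] (:mood animal)))   ;; arbitrary dispatch function

    (defmethod feed :hungry  [animal] (str (:name animal) " eats."))
    (defmethod feed :sleepy  [animal] (str (:name animal) " naps instead."))
    (defmethod feed :default [animal] (str (:name animal) " is fine."))

    (feed {:name "Rex" :mood :hungry})   ;; => "Rex eats."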
Q: How does the dispatch function work? What does it return? Does it find a function to
execute or how does it work?
A: It does not go directly; there is a dual dispatch. The dispatch function is determined at the time you write a multimethod: you say "This is the function that will be applied to the arguments." Its value - the return value of that function - gets mapped to the function that will actually execute. The dispatch function gets called on the arguments, it returns a dispatch value, and when people make different instances of the multimethod - actual methods - they say "When the dispatch function returns this value, here is the code to call." That's generally how it works.
The only additional feature that's in there is that the dispatch values are not matched with plain equality; instead they are matched with a sort of generic hierarchical isa? test, which also gives you the hierarchical capabilities that you expect with polymorphism. Instead of just saying "Is the dispatch value this exact value?" you can say "Is the dispatch value of this type of value?" Then Clojure has an a la carte hierarchy system where you can define hierarchies of names, essentially. The thing here is, as is typical of Clojure, lots of nice features that are bundled together in other languages are offered separately. You have object systems that do hierarchy and do dispatch and it's all munged together, so you can't get polymorphism independent of types and hierarchy. Clojure delivers polymorphism a la carte, hierarchy a la carte. You can put them together how you like.
Q: You mentioned this isa? function, which is basically for checking equality between these
symbols?
A: It's a hierarchical test. For instance, when used with Java classes, it does the typical test for class relationships - whether one is derived from another. Because you have these a la carte hierarchies, you can just compose arbitrary hierarchies of names and say "Fred is a plumber". Then, if you say "You do this when I get a plumber" and it actually is Fred, he'll do the plumber job. It's effectively just hierarchical classification. Because it's not bound to types, you can have as many taxonomies as you like - different taxonomies for different class libraries and situations.
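The "Fred is a plumber" idea can be sketched with derive and namespaced keywords (all the names below are invented):

    (derive ::plumber ::tradesperson)
    (isa? ::plumber ::tradesperson)    ;; => true

    (defmulti greet (fn [person] (:role person)))
    (defmethod greet ::tradesperson [person] (str "Hello, " (:name person)))

    (greet {:name "Fred" :role ::plumber})   ;; => "Hello, Fred"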
Q: Can I modify this isa? too or is that hard coded for these hierarchies and Java classes?
A: It's hard coded for the hierarchies and the Java classes right now.
Q: What kind of hierarchies can you define? Can you define multi-inheritance?
A: Yes.
Q: Can you define graphs, too or just trees?
A: No cycles.
Q: When you dispatch on several parameters of the function and you have hierarchies, you get more chances to find more than one candidate for the call. How do you reason about which one to choose and what to do with it?
A: If it can't be resolved, then nothing will be chosen, but the programmer has a way to
say "Prefer this to that". If you create relationships that inherently have
non-resolvability and then you call with this set of parameters that can't be resolved - I
can't tell it will be this situation or that situation, neither dominates the other - the
programmer can say "Prefer this situation to that. Given these 2, this is preferred." and,
therefore, it answers explicitly what happens in that situation and that becomes part of
the dispatch logic. You say it once and it remembers that.
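A sketch of stating that explicit preference with prefer-method (the shape names are invented):

    (derive ::rect ::shape)
    (derive ::rect ::parallelogram)    ;; ::rect isa? both parents

    (defmulti area :kind)
    (defmethod area ::shape         [x] :generic-area)
    (defmethod area ::parallelogram [x] :base-times-height)

    ;; Without this, (area {:kind ::rect}) is ambiguous and throws.
    (prefer-method area ::parallelogram ::shape)
    (area {:kind ::rect})   ;; => :base-times-height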
Q: Can the dispatch function see all the dispatch values that have been defined for a multimethod, so it can build a more efficient way of lookup?
A: That's premature optimization because, hopefully, I'm doing a good job of caching the
results of lookup and you wouldn't need to do that, but there are reflective mechanisms to
look at the contents of the dispatch table. I don't know if it would be a good idea to do
that inside the dispatch function, but it would be possible.
Q: Clojure currently works on the JVM, but I think there are other implementations of Clojure, or ports?
A: Yes, there are a couple of ports inside the Clojure community, which are approved and being nurtured. One is a port to JavaScript; it's called ClojureScript and it was done by a contributor, and it essentially compiles Clojure to JavaScript. He's done a really nice job. It actually compiles Clojure's core bootstrap library directly, and there is support for some or most of the persistent data structures in JavaScript implementations - not all of them, but your source code can run there. The only thing that's not there is obviously the concurrency, because there are no threads. The other is a port of the Java core underpinnings to C#, which then lets Clojure run on the CLR.
Q: The Java core or the Clojure core?
A: The Java part that underlies Clojure needs to be ported in order to run on a
different platform, so that's been ported to C#, but directly from my sources. That's a
good port and it's called ClojureCLR and it's still early days there, but the guy who's
working on that has got some impressive results - he's got the ant concurrency demo running
on .NET in Windows Forms, so that's pretty neat.
Q: Is Clojure written in Clojure or in Java source code?
A: Both. There is a set of the collection classes and some of the interfaces and the
compiler itself written in Java and that was used to bootstrap the language and the
language is primarily defined in terms of itself from there. Like any Lisp there are 7
primitives and everything else is built with macros and other functions on top of those
primitives. That's the current architecture. Maybe, in some parallel universe, where I have
more hours in a day, I'll go back and reimplement the parts that are in Java in Clojure to
make it easier to do ports and easier to change. Obviously, the Java side is kind of hard
to work with, but the advantage of the Java stuff being in Java is that a lot of Clojure is very accessible. Even if you never end up programming in Clojure, the data structures and the STM and the reference types are all accessible from Java, and there are Java interfaces for everything, so there are a lot of advantages to doing that. Even if I re-did it in Clojure, I'd
make sure the accessibility of those things from Java was as good.
Q: You have ahead-of-time compilation - AOT - with Clojure. What happens if you AOT compile something? Do you still need a Clojure runtime for that, or are the class files that it creates self-contained?
A: In most Lisps, the whole notion of a runtime is intimately there. Right now, Clojure's packaged in a single jar and you would need that jar any place you delivered Clojure. What you would not need if you ahead-of-time compile is the compiler - right now, there isn't a way to extract the compiler. The other thing you wouldn't need is the dynamic class loader. What ends up happening is they sit there, but you don't use them. The advantage of not using them is in environments where you can't use them. For instance, if you want to run on Android, you don't really have loading of Java bytecode there, or if you want to run in an app where you are not allowed to do that if you are not signed. You can get to both of those targets, but there is not yet a way to take out the parts that you wouldn't need.
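As a minimal sketch, ahead-of-time compilation can be driven from a REPL, assuming a hypothetical my.app.core namespace on the classpath and an existing classes/ directory:

    (binding [*compile-path* "classes"]
      (compile 'my.app.core))
    ;; .class files land under classes/, but running them still needs the
    ;; Clojure jar on the classpath; only the compiler and the dynamic
    ;; class loader go unused.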
Q: What kind of capabilities do you need for implementing Clojure? If you have things like
STM, do you need special concurrency features, do you need some kind of atomic instructions
or anything like that?
A: If you wanted to move Clojure to another platform, other than Java or .NET, you would
need some substantially similar capabilities to Java. Clojure certainly relies on some
threads, it uses java.util.concurrent.Atomic quite a bit and locks, of course. You need the
basics there. It's nice to have a memory model or something that determines what happens;
is there a notion of volatile, is there a notion of final? - some equivalents to those
things to get full Clojure.
Q: Currently, at the time of this recording, Clojure is not 1.0 yet. What would be the
features that would make it 1.0? what are you going to add, what are you going to modify to
go to 1.0? When do you declare it 1.0, to put it this way?
A: I'm still working on making 1.0 mean something other than making people happy to see
1.0. Generally, Clojure is pretty stable between releases and I think I've gotten the set of changes that would be breaking behind me. Now that we are going to enter a long
period of stability, it might be worth labeling what happens now as 1.0, but it doesn't
have a very stepped growth pattern. It's a very round growth pattern, so I expect that to
continue. One of the nice things about Lisp and about Clojure is that you don't really go
back to the compiler and change things substantially, you change just syntax and you don't
need to be there. Most of the changes and most of the enhancements come from libraries and most of them take the form of just additions. Today you have something new that you didn't
have yesterday, but it doesn't break anything that you are already doing. I am interested
in labeling 1.0 sooner rather than later to help out. There is a book coming out that
Stuart Halloway is writing, which refers to Clojure 1.0.
Q: You were recently doing some work on redefining the way laziness works, streams work.
What's the current status of that? What's the current idea of laziness in Clojure?
A: Clojure is not generally lazy, and the only thing that it has is a sequence model that
supports sequences being defined that are lazy. It ended up that that model came from an
abstraction of Lisp's list model, the cons cell model, saying just first and rest - just a kind of nice model, and you can define very many things on top of those two. But it ends up that
in Lisp, rest is eager because it returns nil if there's no more stuff, which means it
doesn't return a logical list, it returns a concrete thing. So my first implementation
worked that way, which meant that, while you could make the ultimate production of the
entire list lazy, in other words, overall in aggregate you're not going to produce more
than you need, you do produce at least one cell in order to know whether or not you are
empty. That ends up making some things awkward. It is a big trade-off though, because in Lisp the ability to know there is nothing is testable in conditionals, so you can write some really elegant and succinct code by using that - it's called nil punning. It ends up being just a hard trade-off: you can either have nil punning or you can make lists and sequences fully lazy. In other words, I have something, maybe there is a first
item, maybe there isn't, and I won't force the issue about that until I examine it. After a
year of experience and some feedback, I decided to go for fully lazy and essentially lose
nil punning in a lot of contexts. The advantage of that is it becomes a lot easier to
reason about things and you can concatenate things and not worry about being one ahead of
yourself. Essentially, the only enhancement was that, now, that can be fully lazy. You need
not consume any resources that you don't desire.
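A small sketch of a fully lazy sequence built with lazy-seq (naturals-from is an invented helper):

    (defn naturals-from [n]
      (lazy-seq (cons n (naturals-from (inc n)))))   ;; nothing computed yet

    (take 5 (naturals-from 0))   ;; => (0 1 2 3 4)

    ;; Not even the first element is produced until something asks for it:
    (def evens (map #(* 2 %) (naturals-from 0)))   ;; still no work done
    (first evens)                                  ;; => 0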
Q: I think you previously mentioned type hints on functions. I think I saw something about
Clojure being able to do fast math or compile to fast math in Java - how does that work?
How fast can you get?
A: Inside a function, so not across a function call boundary, you can get exactly as fast
as Java. Essentially, the function call boundaries in Clojure are object based, so anything that crosses a function call boundary is going to be boxed, and there are no optimizations around that currently. Within a function, you can declare that a local variable is an int or a long and it will have the Java representation of int or long. Once you do that, all the arithmetic that you use within that will now use the primitive operations. The one enhancement to that is that, by default, declaring it primitive does not give you bad arithmetic. In other words, the math is still checked, so overflow checks are there. Then finally, if you want to go pedal to the metal, you can explicitly call the big ugly unchecked-add function, which will do a wrapping add, but you get substantial speed even with the overflow checking on, because it is still primitive in there. That's very fast, and it can be exactly as fast as Java if you use unchecked - it's just the same bytecode.
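A hedged sketch of primitive arithmetic inside a function, written against later Clojure versions where primitive locals are longs (sum-squares is an invented example):

    (defn sum-squares [n]
      (let [n (long n)]                        ;; primitive within the body
        (loop [i (long 0) acc (long 0)]
          (if (< i n)
            (recur (inc i) (+ acc (* i i)))    ;; checked primitive math
            acc))))

    ;; Checked math throws on overflow; unchecked-add wraps like plain Java:
    (unchecked-add Long/MAX_VALUE 1)   ;; => -9223372036854775808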
Q: Say some variable is an int, a 32-bit int, but you still do the overflow check. Do you have a numeric tower like in Lisp? Does it expand the width of the type or do you just have type int?
A: Normally yes, there is automatic growth to the next size. If you overflow ints you
normally get long, if you overflow long you normally get BigInteger and you don't do
anything. That's the normal way arithmetic works in Clojure. So in this case, when you say "I want an int", what happens is that I can't represent an int and a BigInteger in the
same type in Java. It now is an int, but if you do an operation that would overflow, it
throws an exception instead of promoting it. If you are going to say "I want to use ints"
you have to make sure you don't overflow but you'll get an exception by default instead of
bad numbers. There are three different flavors: automatic growth, checked, and Java free-for-all math.
Q: What do you use to implement Clojure? What kind of tools do you use?
A: Nothing very fancy. I'm a big fan of IntelliJ and that's what I use for all the Java
parts. Then, for Clojure code editing right now I'm still using Aquamacs Emacs. There is a
nice Clojure mode for that; I don't use SLIME or any of the fancier things that people
have. And that's basically it. I'm looking at both the plug-ins for IntelliJ and NetBeans
and both are growing rapidly, and I think they are really neat and will quickly pass
Aquamacs in terms of being able to integrate with the Java part of the job. For instance,
both have completion that does Java completion in addition to Clojure completion in the
same space. That's an absolute win. As soon as they have that stuff working, that's
going to be a better place to write Clojure code.