The goal of this language is to include all the things I like and to build them axiomatically. The core language has:
- it's a lisp, so it has s-exprs
- fexprs, with vau, wrap, and unwrap
- lexical scope; expressions can mutate immediate lexical environment under TBD conditions
- no shared state
- primitive integer, floating point, boolean, and unicode types
- lists, and corresponding functions (`car`, `cdr`, `cadr`, `cons`, etc.)
- an operator to do a key lookup on associative lists or get the nth element of a list (think Perl)
- minimal builtins: `map`, `reduce`, `fold`, `zip`, `cond`, `and`, `or`, `not`, and a lexical symbol mutator `:=`
- a function yielding the signature of a function
- a function which takes a list of expressions and evaluates them concurrently
- IO, system utilities, foreign function interface. The usual.
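To make the fexpr machinery concrete, here's a sketch in the language's own (hypothetical) syntax: an `if`-like form built on `vau`, which receives its operands unevaluated along with the caller's environment. Names like `my-if` are made up; `vau`, `eval`, and `cond` are from the core list above.

```lisp
; sketch only: vau binds the operands unevaluated, plus the caller's
; environment as env, so the untaken branch is never evaluated
(def my-if
  (vau (test then else) env
    (cond ((eval test env) (eval then env))
          ('true           (eval else env)))))
```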
From this core, and fexprs in particular, we can then build:
- lambdas, from fexprs
- destructuring assignment
- automatic currying
- a modified let* form
- `!=`, syntax sugar to update a list in lexical scope
- `seq`: takes a list of expressions, wraps them in lambdas, and uses type checking to see if they're composable; if so, they're composed
- `<-` and `object`, syntax sugar around lambdas which enables the kind of object orientation Alan Kay was talking about; lots of syntax sugar here, mostly for readability, defined in the standard library
```lisp
(def make-obj ()
  (let ((parameter 'value))
    (object
      ('msg1 (seq (:= parameter (! argv 1))
                  ('success!)))
      ('msg2 'failValue))))

(let ((obj (make-obj)))
  (obj <- (msg1 val1 val2) :returnsym
    (print "I got ~A in return!" returnsym)))
```
`<-` really just calls the object function in parallel and returns. Objects are the only way to handle state; they have their own unique type.
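The "lambdas, from fexprs" item above is roughly the Kernel construction: a lambda is a `vau` that ignores its dynamic environment, passed through `wrap` so that arguments get evaluated. A hypothetical sketch (the `_ignore` placeholder is assumed notation):

```lisp
; sketch: build a vau expression that ignores its dynamic environment,
; evaluate it in the definition environment to close over it, then
; wrap it so the arguments are evaluated before the call
(def lambda
  (vau (formals body) env
    (wrap (eval (list vau formals '_ignore body) env))))
```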
A `par` operator evaluates a list of expressions concurrently (I know, misnomer by design). The semantics are such that you can't know which one will return first. A runtime flag lets you set the number of cores to use.
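A hypothetical use of `par` (the function names are made up): all three expressions run concurrently, and nothing may depend on which finishes first.

```lisp
; sketch: all three evaluate concurrently; completion order is
; deliberately unknowable
(par ((fetch-url "http://example.com")
      (fold + 0 big-list)
      (+ 1 2)))
```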
The compiler does static analysis and annotates each expression with its type. Flags control whether a failed check is a deal breaker, but the information is still available at compile time and run time.
I don't want state, and I don't want monads, but I also want a way to reason without Hoare triples. Two options are sequence and unity.
The standard library provides a function, `seq`, which accepts a list of expressions and wraps them in lambdas. Then, using type inference, `seq` composes the functions if possible. Some sugar lets you put a symbol of the form `:symbol` after an expression, which the next expression can use to refer to the previous computation. Still working on this one conceptually, but that's the basic gist.
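A hedged sketch of how `seq` with the `:symbol` sugar might read (all helper names hypothetical):

```lisp
; sketch: each expression is wrapped in a lambda; if the inferred
; types line up, seq composes them. :text and :lines name the
; previous computation for the next expression.
(seq ((read-file "words.txt") :text
      (split-lines text)      :lines
      (map count-chars lines)))
```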
Based on this article. `let*` operates a lot like `let`, except that all the lexical symbol assignments in the first argument are evaluated randomly and repeatedly until they all reach a fixpoint. The remaining expressions in the `let*` form must be assignments (normal `:=` or special-purpose ones; see the article), and they, too, will be re-evaluated randomly and repeatedly until they reach fixpoints. This is built on `par` and is written in the standard library. It'd be cool if the compiler could take shortcuts here; more studying is necessary.
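A hypothetical sketch of the fixpoint `let*` (the `union`, `step`, and `graph` names are made up): the binding is re-evaluated until re-running it changes nothing, which only terminates because the computation is monotone and bounded.

```lisp
; sketch: reach starts from 'start and is re-evaluated (in random
; order with any sibling bindings) until it stops growing, i.e. the
; set of reachable nodes hits a fixpoint
(let* ((reach (union '(start)
                     (step graph reach))))
  reach)
```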
Basically, if `!` is never applied to a list, it'll be stored internally as a list. If it is, though, it's an efficient hash implementation. Fast, immutable data structures are a must. Through syntax sugar, operators like `!=` can be written to simulate updates. Other data structures can be built out of these.
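A sketch of the lookup operator and its update sugar, with the behavior assumed from the description above:

```lisp
; sketch: ! does keyed lookup on an associative list (or nth on a
; plain list); != is sugar that rebinds the symbol in lexical scope
; to an updated copy, so the underlying structure stays immutable
(let ((user '((name "ada") (age 36))))
  (! user 'age)       ; keyed lookup
  (!= user 'age 37)   ; rebind user with age replaced
  (! user 'age))
```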
Too tired, will write about this later. On one hand, the idea intrigues me. On the other hand, I'm not entirely certain