@frenchy64
Last active August 16, 2022 19:04
Schema generative testing

(This is a really long answer to feedback on plumatic/schema#445)

> Sorry, I don't understand from here down.

My bad, there are a ton of implicit decisions here and I was only explaining the tiny details :) I'll try writing this again.

> Are you basing this design off an existing language or library that I can read about to get a better understanding?

This is a combination of several ideas and tradeoffs.

System F

Polymorphic binders resemble those in System F. However, there are key differences:

  1. there is no runtime representation for Λ (we just expand polymorphic functions to monomorphic ones--more on why below). To illustrate:
;; # System F
(s/defn :all [T] id :- T [x :- T] x)
; expands to
(def id (fn [T] (s/fn :- T [x :- T] x)))
; is called like
((id s/Int) 1)

;; # Schema
(s/defn :all [T] id :- T [x :- T] x)
; expands to
(def id (s/fn :- T [x :- T] x))
; is called like
(id 1)
  2. the funky part is that T is still in scope in the schema expansion (def id (s/fn :- T [x :- T] x))! We need to erase these variables because s/fn can't do anything with T (in System F, T is instantiated at runtime when id is called: ((id s/Int) 1)). A fair heuristic is to replace each variable with its most general (i.e., most permissive) value (for T that is s/Any).
;; # Schema
(s/defn :all [T] id :- T [x :- T] x)
; expands to
(def id (s/fn :- s/Any [x :- s/Any] x))
; is called like
(id 1)

This is why calculating the most general bindings for polymorphic variables is important for the approach in this PR.
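A toy version of this erasure step (my sketch of the idea, not the PR's actual implementation) is just a substitution over the schema form:

```clojure
(require '[clojure.walk :as walk])

;; Hypothetical sketch: erase polymorphic variables by substituting
;; each one's most general schema (s/Any for unbounded variables).
(defn erase-to-most-general
  "Replace each polymorphic variable symbol with s/Any in a schema form."
  [schema-form variables]
  (walk/postwalk-replace (zipmap variables (repeat 's/Any)) schema-form))

(erase-to-most-general '(s/fn :- T [x :- T] x) '[T])
;; => (s/fn :- s/Any [x :- s/Any] x)
```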

Existing work on checking polymorphic fns at runtime

If this was the complete story, it would seem a bit pointless. The polymorphic syntax provides some documentation, but we've erased the polymorphic relationships between inputs and outputs so they are never checked!

Instead, we will lean on generative testing to verify polymorphic schemas (see =>/check in the original sketch), but it's not in this PR. It can be added separately (perhaps in schema-generators). I elaborate on this further down when explaining dotted variables.

Other systems (besides System F) do manage to check polymorphic function contracts at runtime, but they come with their own tradeoffs, which IMO don't work well for schema.

Runtime sealing

Racket's contract system uses runtime sealing to track polymorphic values. See Section 2.1 of Chaperones and Impersonators. However, the cost is huge (both in expressivity and perf): polymorphic values must be wrapped in opaque containers, then unwrapped.

In schema, this sealing strategy would be equivalent to:

(defrecord Sealer [val variable-name])

(s/defn :all [T] id :- T [x :- T] x)
; expands to
(def id (fn [x]
          (let [x (->Sealer x 'T)
                body x
                _ (assert (= 'T (:variable-name body)))]
            (:val body))))

Wrapping in general is a non-starter in Clojure because many classes are final, and sealing goes a step further (e.g., once you seal a Long, you can't use it!). I abandoned gradual typing for Typed Clojure because of wrapping issues (Chaperones and Impersonators were added to the Racket VM as primitives; we don't have that luxury).

Runtime tracking

Another strategy might be to track values of a particular polymorphic variable at input positions, then assert that output positions correspond to one of the inputs.

(s/defn :all [T] id :- T [x :- T] x)
; expands to
(def id (fn [x]
          (let [Ts #{x}
                body x
                _ (assert (contains? Ts body))]
            body)))

I'm not aware of systems that use this strategy for checking, but it's an interesting one when inferring polymorphic types for functions (I use it in typed.clj.annotator). It would be expensive to track deeply nested values, and it would require wrapping function arguments in order to track their behavior. It's probably not sound either: I know the Racket team considered and discarded this approach in favor of runtime sealing, worried that implementations could do tricky things like cache sealed values from previous invocations (ruling that out is probably important to guarantee parametricity).

Dotted polymorphism

The weird triple dot syntax comes from Typed Racket. It was originally developed in this paper. It's useful for "Non-Uniform Variable-Arity Functions" (Section 3.3).

The essential problem is that normal ("uniform") polymorphism is not expressive enough to specify many Lisp functions like map. The triple dots are effectively a templating language for types. map has an infinite number of types of this form:

;; 2 arg `map`
forall x,    s. (x    ->s) [x]         -> [s]
;; 3 arg `map`
forall x,y,  s. (x,y  ->s) [x] [y]     -> [s]
;; 4 arg `map`
forall x,y,z,s. (x,y,z->s) [x] [y] [z] -> [s]
...etc

However (without dots), the best you can do is the following type that almost completely loses the relationship between the function arguments and the collection arguments:

;; 2+ args `map` in one (uniform) type
forall x,s. (x,Any*->s) [x] [Any]* -> [s]

Dotted polymorphic variables allow you recover this lost precision and package these functions in a single type:

;; 2+ args `map` in one (non-uniform) type
forall x,y...,s. (x,y...y->s) [x] [y] ... y -> [s]

y now represents a vector of types, unlike x and s, which each represent a single type.

Once you start instantiating this type with the triple dots, the pattern becomes apparent.

Instantiating with:
  x=In1
  y=[]
  s=Out
gives:
  (In1->Out) [In1] -> [Out]

Instantiating with:
  x=In1
  y=[In2]
  s=Out
gives:
  (In1,In2->Out) [In1] [In2] -> [Out]

Instantiating with:
  x=In1
  y=[In2,In3]
  s=Out
gives:
  (In1,In2,In3->Out) [In1] [In2] [In3] -> [Out]
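The templating is mechanical; here's a toy renderer (illustrative only, not part of any library) that produces instantiations of the dotted map type, binding x to the first input type, y to the remaining input types, and s to the output type:

```clojure
(require '[clojure.string :as str])

;; Hypothetical sketch: render an instantiation of the dotted `map` type
;;   forall x,y...,s. (x,y...y->s) [x] [y] ... y -> [s]
(defn instantiate-map-type
  "x is the first input type, ys the types bound to the dotted variable,
   s the output type."
  [x ys s]
  (let [ins (cons x ys)]
    (format "(%s->%s) %s -> [%s]"
            (str/join "," ins)                         ; function argument types
            s
            (str/join " " (map #(str "[" % "]") ins))  ; one collection per input
            s)))

(instantiate-map-type "In1" [] "Out")
;; => "(In1->Out) [In1] -> [Out]"
(instantiate-map-type "In1" ["In2" "In3"] "Out")
;; => "(In1,In2,In3->Out) [In1] [In2] [In3] -> [Out]"
```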

I hope that explains the motivation for dotted polymorphism from the type-theoretic perspective (increasing the expressivity of types to meet the actual semantics of Lisp).

In terms of the relevance of dotted polymorphism to schema checking, the motivation is similar. Functions like map have an infinite number of function schemas:

;; 2 arg `map`
(all [x     s] (=> [s] (=> s x    ) [x]        ))
;; 3 arg `map`
(all [x y   s] (=> [s] (=> s x y  ) [x] [y]    ))
;; 4 arg `map`
(all [x y z s] (=> [s] (=> s x y z) [x] [y] [z]))
...etc

Similarly, using rest schemas loses the essence of this pattern:

;; 2+ args `map` in one (uniform) schema
(all [x s] (=> [s] (=> s x & [Any]) [x] & [[Any]]))

And dotted variables capture it perfectly:

;; 2+ args `map` in one (non-uniform) schema
(all [x y :.. s] (=> [s] (=> s x y :.. y) [x] [y] :.. y))

> I'm not following what the dotted variables mean or when I would want to use them.

Hopefully I've answered the first part (they are a templating tool for schemas). The second part is a very good question: why bother with all this ceremony if we're just going to throw away all that precision and compile down to the uniform schema (& [Any]) when expanding s/defn?

In fact, the real motivation is not part of this PR (it can be added later), but instead something I alluded to in the sketch (via =>/check). We can use the precise structure of dotted variables to generatively test (probabilistically) "all" of the schemas a dotted schema represents.

map is a good motivating example. Let me demonstrate how generative testing might look in 3 stages:

  1. without polymorphic schemas
  2. with polymorphic schemas, but without dotted polymorphism
  3. with dotted polymorphic schemas

To set the stage: the motivation for generatively testing functions against schemas is to identify function definitions that don't implement the schemas they claim to. The closer the schema is to the intended semantics, the more precisely we can check that the implementation is "correct".

First, without polymorphic schemas, this is the schema for map:

;; 1
(=> [Any] (=> Any Any & [Any]) [Any] & [[Any]])

For the function arg, we will be forced to generate a function like (fn [x & ys] (generate s/Any)) thanks to the (=> Any Any & [Any]), and for the collections we will just pick a random number of them thanks to the & [[Any]]. The generative testing will be so imprecise that the following definitions of map will pass testing:

(defn map [f xs & yss] [1]) ;; pass (bad!)
(defn map [f xs & yss] (clojure.core/map f xs)) ;; pass (bad!)
(defn map [f xs & yss] (apply clojure.core/map f xs (reverse yss))) ;; pass (bad!)

Second, let's add polymorphic schemas without dotted polymorphism:

;; 2
(all [x s] (=> [s] (=> s x & [Any]) [x] & [[Any]]))

A really simple approach is to instantiate each variable to a unique schema, and then generatively test the resulting monomorphic schema:

(=> [(eq :s)] (=> (eq :s) (eq :x) & [Any]) [(eq :x)] & [[Any]])

Now this schema will at least assert that the function argument is used to build the result (thanks to the (eq :s) in both return positions). The first "bad" implementation is now correctly identified as wrong! The other 2 bad implementations don't violate the schema, because we still need to generate f as (s/fn [x :- (eq :x) & ys] (generate (eq :s))) (the ys are never checked):

(defn map [f xs & yss] [1]) ;; fail! 1 is not (eq :s) (good!)
(defn map [f xs & yss] (clojure.core/map f xs)) ;; pass (bad!)
(defn map [f xs & yss] (apply clojure.core/map f xs (reverse yss))) ;; pass (bad!)
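As an aside, this instantiation step itself can be sketched as a plain tree walk. This is a toy helper of my own (an assumption about how it might look, not the PR's implementation):

```clojure
(require '[clojure.walk :as walk])

;; Hypothetical sketch: replace each polymorphic variable symbol with a
;; unique (eq :<var>) schema form before generative testing.
(defn instantiate-unique
  [schema-form variables]
  (let [vars (set variables)]
    (walk/postwalk
      (fn [x]
        (if (contains? vars x)
          (list 'eq (keyword (name x)))
          x))
      schema-form)))

(instantiate-unique '(=> [s] (=> s x & [Any]) [x] & [[Any]]) '[x s])
;; => (=> [(eq :s)] (=> (eq :s) (eq :x) & [Any]) [(eq :x)] & [[Any]])
```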

Third, let's use dotted polymorphism:

;; 3
(all [x y :.. s] (=> [s] (=> s x y :.. y) [x] [y] :.. y))

There's tons of structure here to work with. You can build all of these schemas from this dotted template to generatively test implementations:

; By instantiating the dotted schema with:
; 1. x=(eq :x), y=[                          ], s=(eq :s)
; 2. x=(eq :x), y=[(eq :y1)                  ], s=(eq :s)
; 3. x=(eq :x), y=[(eq :y1) (eq :y2)         ], s=(eq :s)
; 4. x=(eq :x), y=[(eq :y1) (eq :y2) (eq :y3)], s=(eq :s)
; ..etc
;
; you get:
1. (=> [(eq :s)] (=> (eq :s) (eq :x)                           ) [(eq :x)]                                 )
2. (=> [(eq :s)] (=> (eq :s) (eq :x) (eq :y1)                  ) [(eq :x)] [(eq :y1)]                      )
3. (=> [(eq :s)] (=> (eq :s) (eq :x) (eq :y1) (eq :y2)         ) [(eq :x)] [(eq :y1)] [(eq :y2)]           )
4. (=> [(eq :s)] (=> (eq :s) (eq :x) (eq :y1) (eq :y2) (eq :y3)) [(eq :x)] [(eq :y1)] [(eq :y2)] [(eq :y3)])
...etc

By testing a good selection of these schemas, we will catch errors in all the following implementations.

(defn map [f xs & yss] [1]) ;; fail! 1 is not (eq :s) (good! caught by schema 1)
(defn map [f xs & yss] (clojure.core/map f xs)) ;; fail! f called with wrong number of arguments (good! caught by schema 2)
(defn map [f xs & yss] (apply clojure.core/map f xs (reverse yss))) ;; fail! :y2 is not a (eq :y1) (good! caught by schema 3)
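A future =>/check could drive this loop roughly as follows. This is a hypothetical sketch, not part of this PR; instantiate-dotted and passes-generative-check? are assumed helpers:

```clojure
;; Hypothetical sketch: exercise a dotted schema by instantiating it at
;; several arities and generatively testing each resulting schema.
(defn check-dotted
  "Check `f` against `dotted-schema` instantiated with 0..max-extra types
   bound to the dotted variable. Returns true if every arity passes."
  [dotted-schema f max-extra]
  (every? (fn [n]
            ;; n = number of unique (eq ...) schemas bound to the dotted variable
            (passes-generative-check? (instantiate-dotted dotted-schema n) f))
          (range (inc max-extra))))
```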

I'm not aware of existing work that can check dotted polymorphic functions in this way, besides my own work on typed.clj.spec, from which this is ported (though I don't think anyone has tried it, since it targets spec2).

To conclude, this PR in itself is not super compelling, because the real meat of polymorphic (dotted) schemas is in the generative testing story, which can be built on top. I will try and elaborate on this in the docs.
