@ptaoussanis
Last active August 29, 2015 14:25
Clojure 1.8.0-alpha2 tuple performance
;;
;; LATEST UPDATE: 25 July 2015
;;
;; ****************************************************************
;; ** NB false alarm! My original benchmarks showing large perf  **
;; ** improvements with tuples turned out to be noise,           **
;; ** unfortunately. Current (+more reliable) numbers seem[1] to **
;; ** show no consistent significant advantage using currently   **
;; ** available tuple implementations against real-world code.   **
;; **                                                            **
;; ** Sincere apologies for jumping the gun + publishing dodgy   **
;; ** results! - Peter Taoussanis                                **
;; **                                                            **
;; ** [1] Not a final word; think this'll need more time to      **
;; ** become clear one way or the other.                         **
;; ****************************************************************
;; ** Latest results table on Google Docs: https://goo.gl/khgT83 **
;; ****************************************************************
*clojure-version* ; + -server jvm
(require '[criterium.core :as criterium :refer [bench quick-bench]
           :rename {quick-bench qb}])
;; [1] 1.7.0
;; [2] 1.8.0-alpha2
;; [3] 1.8.0-snapshot as of b8607d587
;; [4] mikera's fork at https://github.com/mikera/clojure/tree/clj-1517
(let [v  [1 2 3 4]
      v2 [1 2 3 "foo"]
      x  1
      y  2
      z  3]
                                      ; [1]  ; [2]  ; [3]  ; [4]
  (qb (conj v 5))                     ; 59   ; 24   ; 21   ; 22
  (qb (assoc v 1 3))                  ; 44   ; 74   ; 64   ; 68
  (qb (assoc {} :v1 v :v2 v :v3 v))   ; 405  ; 491  ; 388  ; 446
  (qb (subvec v 1 3))                 ; 27   ; 24   ; 25   ; 26
  (qb [x y z])                        ; 38   ; 22   ; 21   ; 22
  (qb [[[x y z] y z] y z])            ; 91   ; 46   ; 53   ; 49
  (qb (let [[x y z] v]))              ; 22   ; 103  ; 97   ; 97
  (qb (peek v))                       ; 16   ; 24   ; 26   ; 27
  (qb (pop v))                        ; 46   ; 64   ; 65   ; 69
  (qb (seq v))                        ; 35   ; 35   ; 34   ; 31
  (qb (rseq v))                       ; 23   ; 19   ; 18   ; 20
  (qb (count v))                      ; 16   ; 14   ; 15   ; 17
  (qb (reduce (fn [acc in]) nil v))   ; 62   ; 182  ; 184  ; 202
  (qb (mapv inc v))                   ; 361  ; 599  ; 565  ; 559
  (qb (into [0] v))                   ; 468  ; 560  ; 536  ; 783
  (qb (hash [[[x y z] y z] y z]))     ; 697  ; 465  ; 468  ; 454
  (qb (hash [x y z]))                 ; 249  ; 238  ; 220  ; 173
  (qb (= v v2))                       ; 265  ; 306  ; 85   ; 82
  )
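
;; Not part of the original gist: a quick REPL sanity check for whether the
;; build under test actually backs small vector literals with a tuple type.
;; On 1.7.0 both forms should report clojure.lang.PersistentVector; on a
;; tuple-enabled build the small literal may report a tuple class instead
;; (the exact class name is an assumption and varies by implementation).
(comment
  (class [1 2 3])          ; small literal: tuple class on tuple builds?
  (class [1 2 3 4 5 6 7])) ; larger literal: plain PersistentVector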
@ptaoussanis (Author):

Okay, so I ran some larger benchmarks on a real system I have on hand that uses plenty of 2- and 3-element vectors.

Clojure 1.8.0-snapshot was consistently a little slower than 1.7.0. Not by much: 5% over a suite of 5 tests.
Clojure 1.8.0 with clj-1517 was statistically indistinguishable from 1.7.0 after JIT warmup.

This was with the same environment as above: -server mode again, and lots of JIT warmup time.
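
Roughly what I mean by that, as a sketch only (`run-workload` is a hypothetical stand-in for the real system's entry point, not something included here):

```clojure
(require '[criterium.core :as criterium])

(defn bench-workload [run-workload]
  (dotimes [_ 10000] (run-workload))  ; extra JIT warmup before measuring
  (criterium/bench (run-workload)))   ; Criterium then adds its own warmup + stats
```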

It looks to me like the JIT may end up optimising away much of the difference between the tuple and vector implementations. It's of course quite possible that my results are wrong or that my system is somehow unusual.

@mikera I think you mentioned that you'd run some larger core.matrix benchmarks, yes? Do you maybe have any more info on those handy?
