@danidiaz
Last active June 5, 2021 19:13

React Server Side Rendering with GraalVM for Clojure. tweet

Note that you aren't tied to Java 8, which is quite old by now, but can also run GraalJS with OpenJDK 11, 12, 13, and enjoy the recent performance improvements to the GC (using ZGC or Shenandoah) and startup time (using Application CDS):

Graal-dev here,

We are working with a very high priority on JDK 11 support. We should be able to ship it in a few months from now.

  • So is graal polyglot already compatible with newer JDKs and it's just native image generation that's blocking the upgrade?

Not just. We also need to upgrade the JVMCI version we use in order to support libgraal (Graal compiler as native-image in HotSpot). Also, modules make everything a little bit more complicated as well

https://github.com/graalvm/graal-js-jdk11-maven-demo

Introducción a GraalVM - codemotion

Substrate VM – A framework that allows AOT compilation of Java applications

Truffle is basically a toolkit for building language runtimes. It provides a framework for creating AST interpreters. Generally AST interpreters are easy to work with and reason about, but slow. Truffle addresses the speed issue by using Graal's API to perform partial evaluation on the interpreter. So, you get very fast runtimes without having to build a custom VM. But, it also exposes some compiler details by way of an annotation-based DSL if you want to take advantage of it. This includes control for PIC sizes, loop exploding, and explicit branch profiling amongst others.
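As a concrete (if toy) illustration of what an AST interpreter is — all names below are invented for this sketch, no Truffle APIs involved — each node simply knows how to execute itself, and evaluation is a recursive walk. This is the easy-to-reason-about-but-slow structure that Truffle's partial evaluation turns into fast machine code:

```java
// Minimal AST-interpreter sketch: each node executes itself recursively.
// Truffle nodes work on the same principle, plus specialization/rewriting.
interface Node {
    long execute();
}

final class Lit implements Node {          // literal value
    final long value;
    Lit(long value) { this.value = value; }
    public long execute() { return value; }
}

final class Add implements Node {          // addition node
    final Node left, right;
    Add(Node left, Node right) { this.left = left; this.right = right; }
    public long execute() { return left.execute() + right.execute(); }
}

class AstDemo {
    public static void main(String[] args) {
        // (1 + 2) + 40
        Node program = new Add(new Add(new Lit(1), new Lit(2)), new Lit(40));
        System.out.println(program.execute()); // prints 43
    }
}
```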

This is awesome! Substrate VM not being open source was a major reason I wasn't more excited about Truffle/Graal. Without Substrate VM, startup times for Truffle-based interpreters are terrible

Now anyone outside of Oracle can write cool JIT compilers using Truffle, and have them start quickly! I think it can also be used to compile normal Java programs to native executables.

http://lafo.ssw.uni-linz.ac.at/papers/2017_PLDI_GraalTutorial.pdf

Do Graal native images run faster than the Java bytecode ordinarily produced?

AFAIK currently executables produced by native-image have lower peak performance than the original Java code running under GraalVM JIT. OTOH native-image starts faster and can have much lower memory overhead for simple applications.

Devirtualization, escape analysis, inlining

GraalVM docs

GraalVM is an ecosystem and shared runtime offering performance advantages not only to JVM-based languages such as Java, Scala, Groovy, and Kotlin, but also to other programming languages such as JavaScript, Ruby, Python, and R. Additionally, it enables the execution of native code on the JVM via an LLVM front-end. GraalVM 19.1.0 is based on JDK version 8u212.

SubstrateVM limitations

Substrate VM is a framework that allows ahead-of-time (AOT) compilation of Java applications under closed-world assumption into executable images or shared objects (ELF-64 or 64-bit Mach-O).
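A minimal sketch of that workflow, assuming native-image has been installed with gu (as shown later in these notes):

```shell
# Compile normally, then AOT-compile into a standalone executable
# (closed-world assumption: all reachable code must be known now):
javac HelloWorld.java
native-image HelloWorld      # typically produces ./helloworld
./helloworld                 # starts in milliseconds, no JVM required
```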

https://aboullaite.me/understanding-jit-compiler-just-in-time-compiler/ https://en.wikipedia.org/wiki/Just-in-time_compilation https://javapapers.com/core-java/jvm-server-vs-client-mode/ https://stackoverflow.com/questions/198577/real-differences-between-java-server-and-java-client

Understanding How Graal Works - a Java JIT Compiler Written in Java

https://www.oracle.com/technetwork/java/jvmls2015-wimmer-2637907.pdf

self-hosting

SimpleLanguage. truffle-material. Introduction to SimpleLanguage.

Performance tuning Twitter services with GraalVM and Machine Learning

Many services work below optimality

Baeldung - Deep Dive Into the New Java JIT Compiler – Graal.

Java JIT compiler inlining. Method Inlining in the JVM. Java JIT compiler inlining. JVM JIT loop unrolling. What are some of the most useful JVM JIT optimizations and how to use them?.

As you know the Java Virtual Machine (JVM) optimizes the java bytecode at runtime using a just-in-time-compiler (JIT). However the exact behavior of the JIT is hard to predict and documentation is scarce. You probably know that the JIT will try to inline frequently called methods in order to avoid the overhead of method invocation. But you may not realize that the heuristic it uses depends on both how often a method is invoked and also on how big it is. Methods that are too big can not be inlined without bloating the call sites.
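A sketch of the frequency-plus-size heuristic in action (my example; the -XX flags named in the comments are real HotSpot flags):

```java
// Inlining depends on how hot a call site is AND how big the callee is.
// Run with: java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineDemo
// A tiny method like tiny() inlines easily; methods whose bytecode exceeds
// -XX:MaxInlineSize (cold) or -XX:FreqInlineSize (hot) are rejected.
class InlineDemo {
    static long tiny(long x) { return x + 1; }   // few bytecodes: easy inline

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum = tiny(sum);                     // hot call site
        }
        System.out.println(sum); // prints 1000000
    }
}
```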

Confirm this: the JIT can "take guesses" and backtrack if they turn to have lower performance

Theoretically, javac could do something similar [loop unrolling] when translating Java to bytecode, but the JIT compiler has better knowledge of the actual CPU that the code will be running on and knows how to fine tune the code in a much more efficient way.

Understanding JIT Compilation and Optimizations

A close look at Java's JIT: Don't Waste Your Time on Local Optimizations

If there is no method call, but the program spends a lot of time in a method, the method is compiled. After that the program is stopped, the variables are transferred to the machine code version of the method and the machine code version is started. This is a difficult operation called On Stack Replacement. Usually, the JIT tries to optimize everything else before starting OSR.
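To watch OSR happen, a long-running loop like the sketch below can be run with -XX:+PrintCompilation, where OSR compilations are marked with a '%' in the output:

```java
// A single long-running loop with no method calls: the JIT compiles the
// method while it is still executing and swaps the running frame onto the
// compiled version mid-loop (On Stack Replacement).
class OsrDemo {
    static long sumTo(long n) {
        long s = 0;
        for (long i = 0; i < n; i++) s += i;  // hot loop body, OSR candidate
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sumTo(10_000_000L)); // prints 49999995000000
    }
}
```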

Øredev 2011 - JVM JIT for Dummies (What the JVM Does With Your Bytecode When You're Not Looking). video

-XX:+UnlockDiagnosticVMOptions

-XX:+PrintCompilation

-XX:+PrintAssembly

-XX:+PrintInlining

-XX:+LogCompilation

A CALL operation in the assembly indicates that something failed to inline

Optimization by Java Compiler

javac will only do a very little optimization, if any.

The point is that the JIT compiler does most of the optimization - and it works best if it has a lot of information, some of which may be lost if javac performed optimization too. If javac performed some sort of loop unrolling, it would be harder for the JIT to do that itself in a general way - and it has more information about which optimizations will actually work, as it knows the target platform.
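A small illustration of that split (my own example): javac does fold compile-time constant expressions, but leaves loops untouched for the JIT:

```java
// javac folds constant expressions at compile time: the bytecode for
// SECONDS_PER_DAY contains the literal 86400, not a multiplication chain.
// Loop unrolling, inlining, etc. are left to the JIT, which knows the CPU.
class JavacDemo {
    static final int SECONDS_PER_DAY = 60 * 60 * 24; // folded by javac

    static int sumSquares(int n) {
        int s = 0;
        for (int i = 0; i < n; i++) s += i * i;      // left as-is for the JIT
        return s;
    }
}
```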

Loop Transformations in the Ahead-of-Time Optimization of Java Bytecode (paper). vectorization in hotspot VM

New tricks of the GraalVM

Snippets. Finally we arrive at the concept that really makes having a separate JIT compiler written in a high level language possible - snippets. Snippets can be seen as the glue that ties the compiler and the JVM together.

What a difference a JVM makes?

You don't need a JVM and the binary can be as small as a few megabytes. The SubstrateVM uses Graal to do this compilation. In some configurations, the SubstrateVM can also compile Graal into itself so that it can compile code at runtime as well, just-in-time. So Graal is ahead-of-time compiling itself.

https://chrisseaton.com/truffleruby/jokerconf17/ https://www.reddit.com/r/java/comments/8zbqfo/getting_to_know_graal_and_graalvm/ https://www.beyondjava.net/graal-towards-the-holy-grail-of-polyglot-programming https://www.graalvm.org/docs/reference-manual/aot-compilation/

GraalVM Native Image allows you to ahead-of-time compile Java code to a standalone executable, called a native image. This executable includes the application, the libraries, the JDK and does not run on the Java VM, but includes necessary components like memory management and thread scheduling from a different virtual machine, called “Substrate VM”. Substrate VM is the name for the runtime components (like the deoptimizer, garbage collector, thread scheduling etc.). The resulting program has faster startup time and lower runtime memory overhead compared to a Java VM.

https://gist.github.com/smarr/d1f8f2101b5cc8e14e12 One Compiler: Deoptimization to Optimized Code (work on SubstrateVM, AOT compilation of Truffle code) http://doi.org/10.1145/3033019.3033025

Key Papers

oracle/graal#1069

https://github.com/oracle/graal/tree/master/substratevm https://medium.com/de-bijenkorf-techblog/speed-up-application-launch-time-with-graalvm-3d629131adb1 https://news.ycombinator.com/item?id=16070262 https://medium.com/graalvm/understanding-class-initialization-in-graalvm-native-image-generation-d765b7e4d6ed

tl;dr: Classes reachable for a GraalVM native image are initialized at image build time. Objects allocated by class initializers are in the image heap that is part of the executable. The new option --delay-class-initialization-to-runtime= delays initialization of listed classes to image run time.
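A sketch of what that means in practice (class name and values invented for illustration; the option name is the one quoted above):

```java
// Under native-image, a reachable class like this is initialized at image
// BUILD time by default: buildTable() runs on the build machine and the
// resulting array is baked into the image heap. Listing the class in
// --delay-class-initialization-to-runtime=StartupCache would defer it.
class StartupCache {
    static final long[] TABLE = buildTable();

    static long[] buildTable() {
        long[] t = new long[1024];
        for (int i = 0; i < t.length; i++) t[i] = (long) i * i;
        return t;
    }

    public static void main(String[] args) {
        System.out.println(TABLE[10]); // prints 100
    }
}
```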

https://hackernoon.com/why-the-java-community-should-embrace-graalvm-abd3ea9121b5 https://www.oracle.com/technetwork/java/jvmls2015-wimmer-2637907.pdf https://nirvdrum.com/2017/02/15/truffleruby-on-the-substrate-vm.html https://www.codacy.com/blog/scala-faster-and-slimmer-with-graalvm/ http://lafo.ssw.uni-linz.ac.at/papers/2017_PLDI_GraalTutorial.pdf https://www.complang.tuwien.ac.at/lehre/ubvo/substrate.pdf substrate vm

an embeddable VM with fast startup and low footprint, written in a subset of Java, optimized to execute Truffle languages ahead-of-time compiled using Graal, integrating with native development tools.

https://developers.redhat.com/blog/2018/07/30/natively-compile-java-code-for-better-startup-time/ https://stackoverflow.com/questions/tagged/graalvm https://e.printstacktrace.blog/ratpack-graalvm-how-to-start/ Java for Serverless: Ahead-of-Time compilation with Micronaut and GraalVM https://www.infoq.com/presentations/graal-jit-c2/

https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/java-substratevm.html

Graal How to use the new JVM JIT compiler in real life (C. Thalinger)

Down the rabbit hole

https://medium.com/graalvm/graalvms-javascript-engine-on-jdk11-with-high-performance-3e79f968a819

Written in Java to lower the entry barrier – Graal compiling and optimizing itself is also a good optimization opportunity

https://stackoverflow.com/questions/2719469/why-is-the-jvm-stack-based-and-the-dalvik-vm-register-based https://stackoverflow.com/questions/3315938/is-it-possible-to-view-bytecode-of-class-file https://stackoverflow.com/questions/30928786/how-do-i-check-assembly-output-of-java-code

Shenandoah and Concurrent GCs

revirivueltas

building a DSL with GraalVM

gu install native-image

bash-4.2# gu list
ComponentId              Version             Component name      Origin
--------------------------------------------------------------------------------
graalvm                  19.1.1              GraalVM Core
native-image             19.1.1              Native Image        github.com

GraalVM on Goldman Sachs. tweet

Thomas Wuerthinger on GraalVM and Optimizing Java with Ahead-of-Time Compilation

one vm to rule them all?

graalvm for Java developers 2019

Graal: How to use the new JVM JIT compiler in real life video 2019

The JVMCI code that Graal depends on was added to Java 9 and is inbuilt. However, all Graal and Graal-downstream projects use a custom built Java 8 release. This is also the plan for the "foreseeable future"

Clojure Interop with R and Python on GraalVM

clojureD 2019: "Native Clojure with GraalVM" by Jan Stępień

https://clojureverse.org/t/why-is-graalvm-so-fast/2079 https://discourse.purescript.org/t/you-should-check-out-graalvm/814 GraalVM: Run Programs Faster Anywhere https://www.reddit.com/r/haskell/comments/8dr3xw/could_haskell_benefit_from_graalvm/ https://stackoverflow.com/questions/53712580/does-graalvm-jvm-support-java-11 https://stackoverflow.com/questions/48252830/does-java-9-include-graal https://markmail.org/message/42jds3ktwib6jn6r?q=net%2Ejava%2Eopenjdk Substrate VM – A framework that allows AOT compilation of Java applications. https://assets.ctfassets.net/oxjq45e8ilak/3VZgJf2jLWaQQGKaeSsecc/a015330e94f964d96df0b366321ec068/Dmitry_Chuyko_AOT.pdf https://www.reddit.com/r/java/comments/73c9vb/has_anyone_of_you_used_aot_compilation_on_java9/

As a language toolkit, Truffle also gives you an instrumentation framework so you get things like a debugger and profiler for minimal effort. It provides a polyglot system so you can call into other Truffle languages. Since they all inherit from the same base Node class, the compiler can even inline nodes across language boundaries. Truffle takes advantage of that itself by shipping functionality, such as its own native function interface, as its own Truffle language.

With Substrate VM you get way faster startup for your Truffle language and potentially overall less dynamic footprint. Graal VM is based on HotSpot, which is sort of "bloated" for the specific use case that Truffle has.

What's the difference between substrate vm and jaotc? https://mjg123.github.io/2017/10/02/JVM-startup.html

Substrate VM is more mature? Was jaotc a response to the closed-source Substrate VM?

jaotc integrates with the HotSpot VM, while Substrate VM provides its own runtime (GC etc.) and intentionally supports only a subset of the JVM features. FWIW, both use Graal as a compiler.

Graal: Not just a new JIT for the JVM

todo list compiled with Graal for fast startup

Polyglot exception

GraalVM updater https://www.graalvm.org/docs/reference-manual/graal-updater/ https://medium.com/graalvm/graalvm-19-2-new-tools-b78a70f54b06

https://www.reddit.com/r/Kotlin/comments/bnl8qk/q_is_the_aotc_in_graalvm_native_image_comparable/

Q: Is the AOTC in GraalVM Native Image comparable to / able to replace Kotlin Native?

Baeldung - Deep Dive Into the New Java JIT Compiler – Graal - Very good! Graal: How to use the new JVM JIT compiler in real life https://openjdk.java.net/projects/graal/ https://news.ycombinator.com/item?id=16346773 oracle/truffleruby#556 (comment) https://github.com/graalvm/truffleruby/blob/master/doc/user/using-graalvm.md http://lafo.ssw.uni-linz.ac.at/javadoc/graalvm/jdk.internal.jvmci.compiler/javadoc/index.html https://www.infoq.com/articles/Graal-Java-JIT-Compiler/

This goal has been achieved as evidenced by the inclusion of the Graal compiler in JDK 9 as the basis for jaotc and in JDK 10 as an experimental tier 4 just-in-time compiler. To use the latter, simply add these VM options to the java command line:

There are several key advantages of writing a compiler in Java. First of all, safety, meaning no crashes but exceptions instead and no real memory leaks. Furthermore, we’ll have a good IDE support and we’ll be able to use debuggers or profilers or other convenient tools. Also, the compiler can be independent of the HotSpot and it would be able to produce a faster JIT-compiled version of itself.

What this means is that we can run a simple program in three different ways: with the regular tiered compilers, with the JVMCI version of Graal on Java 10 or with the GraalVM itself.

The JVMCI code that Graal depends on was added to Java 9 and is inbuilt. However, all Graal and Graal-downstream projects use a custom built Java 8 release. This is also the plan for the "foreseeable future" (oracle/truffleruby#556 (comment)...)

Additionally, "mx", the build system that Graal uses, is fairly funky - and everyone including Chris (the engineer that made this video) is trying to get the team to move on

That's not a perfect solution I'm afraid. We recommend using GraalVM and won't support anything else at the moment (beyond attempting to be helpful).

GraalVM is sticking with a custom Java 8 build that includes JVMCI, for the foreseeable future. I'm not responsible for that and there are many projects which need to co-ordinate beyond TruffleRuby to upgrade - that's why it's not trivial to do.

The GraalVM is the way we package up everything and distribute it for end-users. If you want to use a standard JDK you need to build everything from scratch.

Graal is used for many more things than dynamically compiling Java. Besides others it's used for SubstrateVM (closed world AOT compilation) and Truffle languages like TruffleRuby, Graal.JS, FastR, Sulong(llvm integration) and our newest member Graal.Python.

Graal comes out of Oracle Labs. The build tool mx is short for Maxine, the spiritual predecessor of the Graal project. If you don't know mx it's quite intimidating and ugly. It's Python written by Java developers. But it does its job coping with our not so standard building and testing requirements. No, we don't run mx on Graal.Python yet, but we are working on it for full build tool metacircularity ;).

JEP 295: Ahead-of-Time Compilation

AOT compilation is done by a new tool, jaotc [...] It uses Graal as the code-generating backend.

Understanding How Graal Works - a Java JIT Compiler Written in Java 2017

things to never do again: write a vm in C++

introduction to simple language

jvmci

Java Code Examples for jdk.vm.ci.runtime.JVMCI

JVMCI examples for Java Day Tokyo 2017 Very good, re-watch this!

very interesting, talks about (explicit) bootstrapping

perhaps give an example of the JVMCI being compiled itself

it takes 10 seconds for compiling xxxx methods...

so the JVMCI can slow down startup times

CompileGraalWithC1Only "we just compile with C1 and that's usually fine"

sometimes (long running programs) explicit bootstrap (despite slow startup) can be better than C1

either upfront or on demand during runtime

if Graal compiles a method, it uses heap memory to do that

JVMCI javadocs?

Package that defines the interface between a Java application that wants to install code and the runtime.

http://lafo.ssw.uni-linz.ac.at/javadoc/graalvm/all/jdk/internal/jvmci/runtime/JVMCI.html

https://neomatrix369.wordpress.com/tag/jit/ https://csl.name/post/python-jit/ not Graal https://github.com/spencertipping/jit-tutorial not Graal http://blog.reverberate.org/2012/12/hello-jit-world-joy-of-simple-jits.html not Graal https://jaxenter.com/jep-draft-java-jit-compiler-158398.html

Compiler Interface and Experimentations

Project Metropolis was created to explore Java-on-Java technologies; its results arrived with Java 9 in the form of the Java-Level JVM Compiler Interface (JVMCI), which targeted AOT compilation for improving the start-up time of both small and large Java applications by compiling classes to native code prior to launching the virtual machine.

As explained by Doug in an OpenJDK thread, one can use the following code block to verify this on their system:

Determining if JVMCI is enabled and whether it is being used by the JVM

test code for JVMCI?
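The snippet itself isn't reproduced in these notes; as a stand-in (my reconstruction, not necessarily Doug's code), these HotSpot flags expose the JVMCI state:

```shell
# Does this JVM know about JVMCI at all? Dump final flag values and look:
java -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI \
     -XX:+PrintFlagsFinal -version | grep JVMCI
# Ask the JVM to actually use the JVMCI compiler (Graal) as its JIT:
java -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI \
     -XX:+UseJVMCICompiler -Djvmci.Compiler=graal -version
```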

https://www.dynatrace.com/news/blog/new-ways-introducing-compiled-code-java-9/ <- useful https://medium.com/@nigamnaman/compiler-interface-and-experimentations-f511e34b2b6b <- useful https://github.com/jruby/jruby-graal/blob/master/src/main/java/org/jruby/jvmci/JRubyJVMCI.java <- useful oracle/graal#306

-Djvmci.Compiler=graal

I have been working on a wrapper for Graal that can plug into JVMCI but allow us to tweak compilation specific to JRuby's workloads. The code for this is at https://github.com/jruby/jruby-graal.

Originally, the code simply wrapped and delegated all the Graal JVMCI endpoints, and this worked correctly to decorate Graal's high tier with a special pass for JRuby.

https://github.com/graalvm/graal-jvmci-8/tree/master/jvmci <- good javadocs https://github.com/oracle/graal/tree/master/compiler/src/org.graalvm.compiler.nodes/src/org/graalvm/compiler/nodes/java https://github.com/oracle/graal/tree/master/compiler/src/org.graalvm.compiler.nodes/src/org/graalvm/compiler/nodes https://github.com/oracle/graal/blob/master/compiler/src/org.graalvm.compiler.nodes/src/org/graalvm/compiler/nodes/StructuredGraph.java https://github.com/graalvm/graal-jvmci-8/blob/master/jvmci/jdk.vm.ci.runtime/src/jdk/vm/ci/runtime/JVMCIRuntime.java https://github.com/graalvm/graal-jvmci-8/blob/master/jvmci/jdk.vm.ci.runtime/src/jdk/vm/ci/runtime/JVMCIBackend.java https://github.com/graalvm/graal-jvmci-8/blob/master/jvmci/jdk.vm.ci.code/src/jdk/vm/ci/code/TargetDescription.java code cache https://github.com/graalvm/graal-jvmci-8/blob/master/jvmci/jdk.vm.ci.runtime/src/jdk/vm/ci/runtime/JVMCICompiler.java https://github.com/graalvm/graal-jvmci-8/tree/master/jvmci

http://lafo.ssw.uni-linz.ac.at/javadoc/graalvm/all/jdk/internal/jvmci/compiler/package-tree.html http://lafo.ssw.uni-linz.ac.at/javadoc/graalvm/all/jdk/internal/jvmci/common/package-tree.html http://lafo.ssw.uni-linz.ac.at/javadoc/graalvm/all/jdk/internal/jvmci/code/stack/package-tree.html http://lafo.ssw.uni-linz.ac.at/javadoc/graalvm/all/jdk/internal/jvmci/code/package-tree.html http://lafo.ssw.uni-linz.ac.at/javadoc/graalvm/all/jdk/internal/jvmci/runtime/package-tree.html http://lafo.ssw.uni-linz.ac.at/javadoc/graalvm/all/jdk/internal/jvmci/service/package-tree.html http://lafo.ssw.uni-linz.ac.at/javadoc/graalvm/all/jdk/internal/jvmci/hotspot/package-tree.html http://lafo.ssw.uni-linz.ac.at/javadoc/graalvm/all/jdk/internal/jvmci/code/TargetDescription.html

Does Java 9 include Graal? https://stackoverflow.com/questions/49505629/jdk-vm-ci-common-jvmcierror-no-jvmci-compiler-selected-with-java9-but-not-in-ja?noredirect=1&lq=1

GraalVM: the holy graal of polyglot JVM?

29 GraalVM related talks at @OracleCodeOne and @oracleopenworld. article

Quarkus and GraalVM: Booting Hibernate at Supersonic Speed, Subatomic Size

Graal: How to Use the New JVM JIT Compiler in Real Life

Maximizing Performance with GraalVM

interesting commentary towards the end on annotations & SubstrateVM

I don't get the difference between SubstrateVM and Graal. more

In every application I've tried, AOT actually decreases startup performance because of the time it takes to load the shared library. Class Data Sharing (CDS or AppCDS) might be a better choice

Optimizing Java: Practical Techniques for Improving JVM Application Performance

Graal JIT vs C2

slides

Graal isn’t just a JIT. Many parts of a compiler are the same whether you do the compilation ahead of time or just in time. You can do ahead-of-time compilation using jaotc.

check the JEP

AOT compilation is done by a new tool, jaotc:

jaotc --output libHelloWorld.so HelloWorld.class
jaotc --output libjava.base.so --module java.base

It uses Graal as the code-generating backend.

slides - implementing ruby - interesting rationale for truffle

investigate "bootstrapping"

native-image limitations. some other info

master your java applications on K8s

Cons: worse performance of long-running applications

GraalVM allows you to compile your programs ahead-of-time into a native executable. The resulting program does not run on the Java HotSpot VM, but uses necessary components like memory management, thread scheduling from a different implementation of a virtual machine, called Substrate VM. Substrate VM is written in Java and compiled into the native executable. The resulting program has faster startup time and lower runtime memory overhead compared to a Java VM.

Don’t overlook Serverless!! GraalVM and other projects are focusing on low footprint & fast startup.

The application life cycle has changed: no longer dominated by uptime; startup is now critical to your application.

The GraalVM frenzy

Graal requires more iterations than C2 to reach peak performance; with un-tiered compilation (e.g., Graal without C1), execution is slower until Graal kicks in.

Today they can be combined, so that code is compiled with C1 first and then C2 later if they are still being executed a lot and it looks worth spending the extra time. This is called tiered compilation.
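For reference, the tier numbers HotSpot uses: 0 = interpreter, 1-3 = C1 variants, 4 = C2 (or Graal). A toy program to observe the promotion with -XX:+PrintCompilation (my example):

```java
// Run with: java -XX:+PrintCompilation TieredDemo
// Expect compute() to appear first at a C1 tier (1-3), then again at
// tier 4 once profiling shows the method stays hot enough to be worth C2.
class TieredDemo {
    static double compute(double x) {
        double r = 0;
        for (int i = 1; i <= 100; i++) r += x / i;  // partial harmonic sum
        return r;
    }

    public static void main(String[] args) {
        double acc = 0;
        for (int i = 0; i < 200_000; i++) acc += compute(i % 7);
        System.out.println(acc > 0);
    }
}
```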

what I learned writing my own JIT language

Graal(VM) – Versuch eines Überblicks ("an attempt at an overview")

jaotc comes with a whole pile of restrictions that have effectively prevented its use in practice – at least at scale. At any rate, Java 10 brought some efforts to lift several severe restrictions such as "works only on Linux-x64". GraalVM, however, goes one big step further today with "native-image".

libgraal is the attempt to turn the Graal JIT compiler into a library by means of GraalVM native-image (more on that later), in order to solve the performance problem of the Graal JIT compiler compiling itself inside the JVM. Since everything is already available as native code, the JIT compiler only has to take care of the application code and no longer of itself. In principle this turns Graal into a JIT compiler that – as far as runtime behavior is concerned – is very similar to the classic C1 and C2 in the HotSpot JVM. Slow JIT compilation during the JVM warm-up phase is thereby avoided.

How do I run a class compiled with jaotc? Aha! Is this the main difference with native-image?

libgraal: GraalVM compiler as a precompiled GraalVM native image. HN "but still no windows support of position-independent code".

This has several advantages, libgraal improves startup times and completely avoids interfering with the heap usage and profiling of the application code. That is, the compiler now “codes like Java, runs like C++”. More specifically in the context of HotSpot, libgraal executes like C2 while preserving most of the advantages of a managed runtime. Libgraal significantly contributes to improving the compilation speed and performance on shorter and medium length workloads in GraalVM 19.1 release. Keep reading to learn more.

The primary benefit of libgraal is that compilations are fast from the start. This is because the compiler is running compiled from the get-go, by-passing the HotSpot interpreter altogether. Furthermore, it’s compiled by itself. By contrast, jargraal is compiled by C1. The result is that the compiled code of the compiler is more optimized with libgraal than with jargraal.

They're still keeping performance optimizations out of CE (free) version. Unheard of for a language runtime and quite crappy

Oracle GraalVM Enterprise Edition 19 Guide - Compiler Options - good info!

There are two operating modes of the GraalVM compiler when used as a HotSpot JIT compiler:

libgraal: the GraalVM compiler is compiled ahead of time into a native shared library. In this operating mode, the shared library is loaded by the HotSpot VM. The compiler uses memory separate from the HotSpot heap and it runs fast from the start since it does not need to warm-up. This is the default and recommended mode of operation.

jargraal: the GraalVM compiler goes through the same warm-up phase that the rest of Java application does. That is, it is first interpreted before its hot methods are compiled. This mode is selected with the -XX:-UseJVMCINativeLibrary command line option. This will delay the time to reach peak performance as the compiler itself needs to be compiled before it produces code quickly. This mode allows you to debug the GraalVM compiler with a Java debugger.
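A quick way to compare the two modes on a GraalVM install, using the flag quoted above:

```shell
# Default on GraalVM: libgraal (compiler precompiled as a native library,
# fast from the start, own memory outside the HotSpot heap):
java -version
# Force jargraal instead: the compiler runs as Java code and must warm up
# itself; useful for debugging the compiler with a Java debugger:
java -XX:-UseJVMCINativeLibrary -version
```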

Run JVM-based Languages with GraalVM Enterprise

The compiler depends on a JVM that supports a compatible version of the JVM Compiler Interface (JVMCI). GraalVM Enterprise includes a version of the HotSpot JVM that supports JVMCI. JVMCI is a privileged low-level interface to the JVM. It can read metadata from the VM such as method bytecode and install machine code into the VM. GraalVM Enterprise includes the Truffle Language Implementation framework. For improved performance of Truffle Abstract Syntax Trees (AST), they are compiled to machine code by the GraalVM compiler. When such a Truffle AST is hot (i.e., called many times), it is scheduled for compilation by the compiler. The pipeline for such a compilation is:

Truffle Framework code and data (AST) is partially evaluated to produce a compilation graph. The compilation graph is optimized by the GraalVM compiler to produce machine code. The JVMCI API is used to install this machine code in the VM’s code cache. The AST will automatically redirect execution to the installed machine code once it is available. [NOTE: look more into this]

If an uncaught exception is thrown by the compiler, the compilation is simply discarded and execution continues. However, the compiler can instead produce diagnostic data (such as compiler intermediate representation graphs) that can be submitted along with a bug report. This is enabled with -Dgraal.CompilationFailureAction=Diagnose. The default location of the diagnostics output is in graal_dumps/ under the current working directory of the process but can be changed with the -Dgraal.DumpPath option. During VM shutdown, the location of the archive containing the diagnostic data is printed to the console.

Running JVM-based Apps good resource about options

Difference between running the GraalVM compiler in a Native Image vs on the JVM When running the GraalVM compiler on the JVM, it goes through the same warmup phase that the rest of Java application does. That is, it is first interpreted before its hot methods are compiled. This can translate into slightly longer times until the application reaches peak performance when compared to the native compilers in the JVM such as C1 and C2.

To address the issue of taking longer to reach to peak performance, libgraal was introduced – a shared library, produced using Native Image framework to ahead-of-time compile the compiler itself. That means the GraalVM compiler is deployed as a native shared library. In this mode, the compiler uses memory separate from the HotSpot heap and it runs compiled from the start. That is, it has execution properties similar to other native HotSpot compilers such as C1 and C2. Currently, this is the default mode of operation in both GraalVM Community and Enterprise images. It can be disabled with -XX:-UseJVMCINativeLibrary.

Under the hood of GraalVM JIT optimizations very good

really small java apps

GraalVM native-image is not a listed option, but I regularly use it to produce binaries that are competitive with (often better than) Go in terms of size and (start time) perf. A Clojure tool that parses, traverses and processes JSON (caro, see below) worked out to 3.2MB. I'm sure C and Rust can do better, but it's not bad compared to a JVM, and it's an order of magnitude better than the best option in the article. It's also small enough that while comparisons like "that's three floppies!" are interesting historical perspective, you can't reasonably complain about having a 3 MB binary in ~/.local/bin.

Everything you need to know about GraalVM 3h!

A race of two compilers: GraalVM JIT versus HotSpot JIT C2 slides

The sea of nodes and the HotSpot JIT

Not directly related to Graal, but cool stuff:

Devoxx Ukraine 2019: The Sea of Nodes and the HotSpot JIT https://www.youtube.com/watch?v=98lt45Aj8mo&feature=youtu.be https://www.youtube.com/watch?v=-vizTDSz8NU A JVM Does That? JIT and AOT in the JVM https://www.youtube.com/watch?v=gx8DVVFPkcQ Understanding the Tricks Behind the JIT https://www.youtube.com/watch?v=oH4_unx8eJQ

Quarkus 1.0.0.Final

Compiling Native Projects via the GraalVM LLVM Toolchain

mixed interactive debugging

lambdas

2019 year in review

native-image

Profile-guided Optimizations

GraalVM also supports monitoring and generating heap dumps of the Native Image processes.

Warning: This functionality is available in the Enterprise Edition of GraalVM.

For additional performance gain and higher throughput in GraalVM ahead-of-time (AOT) mode, make use of profile-guided optimizations (PGO). With PGO, you can collect the profiling data in advance and then feed it to the GraalVM native-image utility, which will use this information to optimize the performance of the resulting binary.

Warning: Profile-guided optimizations (PGO) is a GraalVM Enterprise feature.
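A hedged sketch of the PGO workflow described above, using the documented `--pgo-instrument` and `--pgo` options (`app.jar` and `com.example.Main` are placeholders):

```shell
# 1. Build an instrumented image that records profiling data (Enterprise only).
native-image --pgo-instrument -cp app.jar com.example.Main

# 2. Exercise the instrumented binary with representative workloads;
#    on exit it writes the profile to default.iprof.
./com.example.main

# 3. Rebuild the image, feeding the collected profile back in.
native-image --pgo=default.iprof -cp app.jar com.example.Main
```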

To achieve such aggressive ahead-of-time optimizations, we run an aggressive static analysis that requires a closed-world assumption. We need to know all classes and all bytecodes that are reachable at run time. Therefore, it is not possible to load new classes that have not been available during ahead-of-time-compilation.

about inlining. Java on Steroids: 5 Super Useful JIT Optimization Techniques. compiler inlining messages. the meaning of non-entrant. baeldung

hot code is faster code - addressing JVM warmup

GraalVM: Run Programs Faster Everywhere - 2019

Can GraalVM combine ahead of time compilation with adaptive optimization?

AOT libraries can be compiled in two modes controlled by the --compile-for-tiered flag:

Non-tiered AOT compiled code behaves similarly to statically compiled C++ code, in that no profiling information is collected and no JIT recompilations will happen.

Tiered AOT compiled code does collect profiling information. The profiling done is the same as the simple profiling done by C1 methods compiled at Tier 2. If AOT methods hit the AOT invocation thresholds then these methods are recompiled by C1 at Tier 3 first in order to gather full profiling information. This is required for C2 JIT recompilations in order to produce optimal code and reach peak application performance.
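As a sketch of the two modes using jaotc (from JEP 295, linked below; the choice of java.base as the compiled module is illustrative):

```shell
# Tiered AOT: compiled code still collects C1-style profiles, so hot methods
# can later be recompiled by the JIT to reach peak performance.
jaotc --output libjava.base.so --module java.base --compile-for-tiered

# Load the AOT library; -XX:+PrintAOT shows which AOT-compiled methods are used.
java -XX:AOTLibrary=./libjava.base.so -XX:+PrintAOT -version
```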

Graal IR: An Extensible Declarative Intermediate Representation

GraalVM: the holy graal of polyglot JVM?

Learn about the trade-offs between GraalVM's JIT mode and its AOT mode. GraalVM-Native Images: The Best Startup Solution for Your Applications

...custom JVMCI implementation.... reddit

Graal native compilations are expensive

Native Clojure with Graal

Graal criticism

Lots of GraalVM internship positions available all over the world

JEP 295: Ahead-of-Time Compilation

The logical compilation mode for java.base is tiered AOT since JIT recompilation of java.base methods is desired to reach peak performance. Only in certain scenarios does a non-tiered AOT compilation make sense. This includes applications which require predictable behavior, when footprint is more important than peak performance, or for systems where dynamic code generation is not allowed. In these cases, AOT compilation needs to be done on the entire application and is thus experimental in JDK 9.

Tika

The lower memory footprint is achieved for a number of reasons. One of them is that reading the configuration files, particularly in XML format, results in loading a lot of class instances required to parse the configuration. Yet another reason is that Substrate VM does not need to aggressively optimize the way the traditional Java VM does, so there is no need to keep the extra metadata in memory.

tweet

Graal native images will work well for the very smallest, self-contained apps, but the build time is rather long and the lack of dynamic code loading limits use cases. For local development tasks, inability to rapidly iterate code and load libraries is a showstopper.

Java JIT vs Java AOT vs Go for small, short-lived processes notes about memory usage. more

The optimization capabilities of a JIT compiler make the executable run faster than in any other implementation.

Quarkus & Graal VM 2020

Truffle

Investigate partial evaluation and its exact relationship with JVMCI.

This Truffle language exposes GPUs to the polyglot GraalVM. The goal is to

Safe and sandboxed execution of native code

Will the JVM ever inline an object's instance variables and methods? Is this up-to-date?

Futamura projections

publications

Partial Evaluation and Automatic Program Generation

• Truffle Optimizer: this includes the partial evaluation, and is implemented on top of the API that the Graal compiler [45] provides. [partial evaluation depends on GRAAL!]

Self-Specialising Interpreters and Partial Evaluation (2016)

The main overhead in our interpreter comes from dynamic dispatch between nodes. However, the targets of those dynamic dispatches are constant except when rewriting occurs. We count the number of invocations of a tree and reset the counter in the event of a node replacement. When the number of invocations on a stable tree exceeds a threshold, we speculate that the tree has reached its final state.

We then start a special compilation for a specific tree where we assume every node in the tree remains unmodified. This way, the virtual dispatch between the execute nodes can be converted to a direct call, because the receiver is a constant. These direct calls are all inlined, forming one combined unit of compilation for a whole tree.

Because every node of the tree is assumed constant, many values in the tree can also be treated as constants. This helps the compiler produce efficient code for nodes that have constant parameters. Examples include the actual value of a constant node, the index of a local variable access (see Section 4.2), and the target of a direct call. This special compilation of the interpreter by assuming the nodes in the tree to be constants is an application of partial evaluation to generate compiled code from an interpreter definition [21].

One VM to Rule Them All by Thomas Wuerthinger. One VM to Rule Them All? Lessons Learned with GraalVM

Tracing vs. Partial Evaluation Comparing Meta-Compilation Approaches for Self-Optimizing Interpreters

Best practice: Use Java exceptions for inter-node control flow

To investigate performance issues, we recommend the Ideal Graph Visualizer (here and after “IGV”)

Practical Partial Evaluation for High-Performance Dynamic Language Runtimes. One VM to Rule Them All paper

Our approach significantly reduces the implementation effort for a dynamic language by decoupling the language semantics and the optimization system. However, this comes at the cost of longer warmup times compared to a runtime system specialized for a single language. Our measurements show that warmup times are an order of magnitude longer than in a specialized runtime.

Graal compiles Java bytecode to optimized machine code, and can serve as a replacement for the Java HotSpot server compiler. We extended it with a frontend that performs PE, and added intrinsic methods that support our core primitives.

Applying Futamura Projections to Compose Languages and Tools in GraalVM (Invited Talk)

Benchmarking Partial Evaluation in Truffle

Speed up partial evaluation: by implementing second Futamura projection

https://github.com/graalvm/simplelanguage https://chrisseaton.com/truffleruby/metass16/metass.pdf https://github.com/oracle/graal/tree/master/truffle/docs https://github.com/oracle/graal/tree/master/truffle#using-truffle https://github.com/oracle/graal/blob/master/truffle/docs/LanguageTutorial.md https://github.com/oracle/graal/blob/master/truffle/docs/TruffleLibraries.md https://github.com/oracle/graal/blob/master/truffle/docs/InteropMigration.md

For an excellent, in-depth presentation on how to implement your language with Truffle, have a look at the three-hour walkthrough presented at PLDI 2016 (Conference on Programming Language Design and Implementation).

crafting interpreters

Why is 2 * (i * i) faster than 2 * i * i

["Language-Independent Development Environment Support for Dynamic Runtimes"](https://twitter.com/grashalm_/status/1177514061963022337)

Haskell & Truffle

https://github.com/ekmett/core

How does PE take place for a Truffle interpreter AOT-compiled with native-image?

Origins of IGV

Deep dive into using GraalVM for Java and JavaScript developers

@ExplodeLoop

Why use truffle

JVM flags and properties

-XX:+UnlockExperimentalVMOptions 
-XX:+EnableJVMCI 
-XX:+UseJVMCICompiler 
-XX:-TieredCompilation 
-XX:+PrintCompilation 
-XX:CompileOnly=Demo::workload 

-XX:+PrintCompilation -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining

-XX:+LogCompilation
-XX:LogFile=foo.log

-Djvmci.Compiler=graal
GraalCompileOnly
CompileGraalWithC1Only "we just compile with C1 and that's usually fine" relationship with libgraal

AOTLibrary

--tool:chromeinspector
Run a java program in JIT mode with a -Dgraal.PGOInstrument

--language:python to make sure Python is available as a language for the image;

-XX:FreqInlineSize=10

-XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+JVMCIPrintProperties

GraalVM supports debugging of guest language applications and provides a built-in implementation of the Chrome DevTools Protocol. This allows you to attach compatible debuggers such as Chrome Developer Tools to GraalVM.

HotSpot Graal compiler factory

For Metropolis with libgraal, CompileGraalWithC1Only doesn't make sense to me

TruffleRuby and deoptimization

Other

Build Great Native CLI Apps in Java with Graalvm and Picocli

Interview on GraalVM

graalvm tips and tips for quarkus

Spring Tips: Spring and GraalVM (pt. 2)

Truffle Tutorial: Adding 1 and 1 Together

Java frameworks for the cloud: Establishing the bounds for rapid startups

Announcing GraalVM 20.2.0

How the HotSpot and Graal JVMs execute Java Code

Java on Truffle – Going Fully Metacircular

Cadenza Building Fast Functional Languages Fast

Nix

Nix discourse

Is there much difference between using nix-shell and docker for local development?

You need to see a Docker image as a snapshot of a machine. It will stay like it is, and thus anything you do with it will keep working. However, making changes to such an image is where nondeterminism seeps in. Docker images are built using shell commands; usually an image definition contains commands like apt-get update, apt-get install ... or wget .... These fetch information from outside sources, but because the outside sources change over time, they will result in different files being fetched at different times. In addition, there are no checks that the files being fetched are actually the ones you intended to download for your Docker image.

So, Docker images will stay the same, but building the Docker image will produce a different result each time you build it. No guarantees there.

Nix is different in the sense that the build process itself is restricted so that no outside sources can change without that change being detected. Builds will result in functionally the same thing each time they are run. That also makes making changes safer, as a change requires a rebuild.

Another area where nix-shell is better is composability. You can potentially maintain common nix expressions among your many projects, and the shared build dependencies will only need to be built once. This could greatly speed up your ability to work on a lot of small projects, if the alternative is having to build a separate Docker container for each one.

With Docker, you can share and cache the layers of containers, but you have to figure out how to order your layers to maximize sharing, and sometimes it is not possible to get all the sharing you need because two dependencies are kind of at the same layer.

Nix, Docker and Haskell

cachix

Nix builds a package in isolation from your system. This ensures that build process is reproducible and doesn’t have undeclared dependencies, so if a package is built on one machine, it will build identically on another machine.

Setting Up A Private Nix Cache

Nix compiling Haskell packages instead of downloading from Hydra

Snack: incremental Nix builds for Haskell

How I develop with Nix

http://cache.nixos.org/

Nix command reference

Nixos docker image

Getting Started Haskell Project with Nix tried this

Compared to Stack, Nix also has the advantage that we can host private Haskell packages, or even private distributions, with Nix. The expressiveness of Nix makes it possible to add or remove packages, apply private patches to packages, use upstream or unpublished packages, and pin some packages to specific versions. When working on multi-language projects, Nix also makes it easy to integrate Haskell projects into a larger system and manage them as a whole.

Haskell Nix Vim

hacking on GHC but on Nix

all hies

GHC HEAD with Nix

Various ways to use ghc in nix-shell

Stack Nix integration

Gabriel Gonzalez’s guide to using Haskell with Nix

Nix is a language-independent build tool. This means you can use Nix to also build and customize non-Haskell dependencies (like gtk). This uniform language simplifies build tooling and infrastructure.

Both Nix and stack use curated package sets instead of version bounds for dependency management. stack calls these package sets "resolvers" whereas Nix calls these package sets "channels". Nix provides stable channels with names like NixOS-18.09 (analogous to stack's LTS releases) and then an unstable channel named nixpkgs-unstable (analogous to stack's nightly releases)

Nixpkgs guide to the Haskell infrastructure

Nix integration with Cabal

User's guide to the Haskell infrastructure

Cabal - Nix local build overview

Make Cabal v3 Nix-friendly. nix integration.

However, I'm also a member of another group of users: those who make heavy use of Nix. For a number of reasons, Nix is very popular amongst the Haskell community, and nixpkgs builds a huge number of Haskell packages. And many of the industrial users are quite tightly wedded to it - for things like cross-compilation there are few compelling alternatives.

Now, most people using Nix to build Haskell packages do use Cabal. nixpkgs uses the lower-level Setup.hs machinery, but of course that still depends heavily on Cabal as a library.

A truly reproducible scientific paper. hn

Is cabal2nix the (only) recommend way to convert/create small Haskell project as of today?

cabal2nix converts Haskell projects to nix expressions based on the currently available package set in nixpkgs. This usually follows stackage. If your Haskell project works with this, then it is probably the most convenient way to get started. You get the benefit of using the binary cache, so you don’t even need to build much.

It quickly gets frustrating when your dependencies don’t match what is available in the default package set. You begin to start needing to override the packages to get the exact dependencies you want, and this can quickly get out of hand.

I’ve recently been playing with https://github.com/input-output-hk/haskell.nix which takes the approach of conjuring up the package set you need using cabal's solver (or stack's). It then builds a nix expression which matches exactly what cabal or stack would do at that point in time, and allows you to reproduce it whenever. This is much easier to work with, as you can just use your normal tools to come up with the build plan, and simply use nix for caching and reproducibility (check in the generated nix files much like you would a cabal lock file).

The papercut thread

Backport ghc-8.6.5 to 19.03

https://discourse.nixos.org/t/why-not-use-yaml-for-configuration-and-package-declaration/1333

https://discourse.nixos.org/t/declarative-package-management-for-normal-users/1823

https://discourse.nixos.org/t/nixos-pain-points-newbie-gone-intermediate-experience-report/452

“to pin or not to pin”: when staying with a NixOS release, I quickly started to miss newer packages; when following nightly, I was afraid of things breaking and the config being not really reproducible; I tried to research nixpkgs pinning, but didn’t manage to find a lot about this (reportedly doable, but haven’t found good examples I would be able to reuse);

https://vaibhavsagar.com/blog/2018/05/27/quick-easy-nixpkgs-pinning/

This might be desirable in many cases, but for us it means a lot of waiting for no benefit. We can avoid this by pinning nixpkgs to a known-good commit. One way to do this is by setting the NIX_PATH environment variable, which is where Nix looks for the location of nixpkgs. We could do this as follows:
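For example, a minimal sketch of pinning via NIX_PATH (the archive revision below is a placeholder; substitute a known-good commit hash):

```shell
# Point <nixpkgs> at a fixed tarball instead of a mutable channel.
export NIX_PATH=nixpkgs=https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz

# Subsequent commands that resolve <nixpkgs> now use the pinned tree.
nix-build '<nixpkgs>' -A hello
```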


Nix hash collision

Nix pills

Nix non-determinism https://gitlab.haskell.org/ghc/ghc/issues/4012 https://nixos.org/nix-dev/2015-June/017429.html

What is nix-instantiate good for ? What is a store-derivation?

Nix PhD thesis

From the thesis: "Nix expressions are [...] translated to [...] store derivations, which encode single component build actions. [Just as] compilers generally do the bulk of their work on simpler intermediate representations of the code being compiled, rather than on a full-blown language with all its complexities. [...] Nix expressions are first translated to store derivations that live in the Nix store and that each describe a single build action with all variability removed. These store derivations can then be built, which results in derivation outputs that also live in the Nix store."

scripting with nix

Environments with Nix Shell - Learning Nix pt 1

Failing to Learn Nix

not feeling worthwhile on the desktop

remote Nix shells

nix-instantiate man page. nix-store man page. nix-env man page

HN Nix misgivings. HN on Nix 2.0 release. more misgivings

Needing to pass -A to a lot of commands to use them the "right way" smells of a poorly thought out design.

Nix isolates packages, such that updating one package has no impact on any other packages (with unavoidable exceptions like the graphics driver, presumably).

You're right that package isolation isn't much of a problem on non-rolling distros. One of the benefits of Nix is that you get some of the stability and predictability of an LTS distro with the freshness of a rolling distro when desired, without having to deal with package conflicts.

Incidentally, Nix doesn't need to be used in a deterministic manner. In fact, I don't think most desktop users of Nix care too much about determinism for most packages they run. I certainly don't; I'm happy to follow along with whatever arrives in my channel. Nix has features that support determinism, and I'm certainly glad they exist for when I end up needing them, but they're not necessarily why people use Nix.

How we use Nix at IOHK

how does nix know what binary a machine needs?

managing rust dependencies with Nix

NIX + BAZEL = FULLY REPRODUCIBLE, INCREMENTAL BUILDS

Overlays

Overlays provide a method to extend and change nixpkgs. They replace constructs like packageOverride and overridePackages.

NixOS: The DOs and DON’Ts of nixpkgs overlays with video

NixCon

Nixpkgs Overlays – A place for all excluded packages

the ability to compose different modules

NixOS: declarative, composable. Nixpkgs: not declarative, not composable (before overlays)

Introduction to Nix

scons (not Nix)

"Nix does a lot of fixpoints"

Dhall and bidirectional type checking

complex setup. generating package sets for Cabal projects

GHC user guide: packages

phases

How to install java in NixOS?. issue. issue.

Using Nix and CI for great good!

Graal VM packages wanted

Learning to love Nix

another take on Docker vs Nix

NixOS channels, profiles and packages

nix-env -q will only report packages that are installed into imperative 'environments', like those created by nix-env -i.

nix-env is a tool for imperative package management that is a thin layer over the otherwise declarative and immutable Nix system. The profiles mechanism provides a means for mutability and nix-env creates manifest.nix in the profile to record the set of packages that are in the environment.

Quick and Easy Nixpkgs Pinning

What does the string / value mean in Nix?

Note that the Nix search path is impractical in many situations. You can only pass it from the outside, and it easily creates impurity. In my experience, problems are better solved with explicit argument passing or the functions relating to fix-points like callPackage and the overlay system.

Self and super in nix overlays

Can I replicate what nix-build does with nix-shell and cabal build?

In general, you are not guaranteed that the output of nix-build and the same steps in a nix-shell will be equivalent.

nix-shell can be invoked without the --pure flag, so that many more environment variables will be set (this lets you use GUI applications for example)

Where is callPackage defined in the Nixpkgs repo (or how to find Nix lambda definitions in general)?

nix repl can tell you the location where a lambda is defined.

not exactly Nix related but

Nix .defexpr

mutable NIX_PATH stuff, channel stuff, and the inconsistent plethora of CLI tools

Getting started with Nix

Nix by example

porcelain commands from 2.0

Compare Nix derivations using nix-diff

These *.drv files use the ATerm file format and are Nix-independent. Conceptually, Nix is just a domain-specific language for generating these ATerm files.

Nix cookbook

Nix function parameters

https://nixos.org/nixos/nix-pills/nixpkgs-parameters.html In this case, Nix does a trick: if the expression is a derivation, we'll build it. If the expression is a function, call it and build the resulting derivation.

Nix + Cabal example

nix-env -qa not showing latest packages

nix for cross-compiling

Packaging a Haskell library for artefact evaluation using nix

Nix cookbook

Videos

Intro to Nix

Py Munich

Using Nix as a build tool - Part 1

Nix the functional Package Manager and NixOS the declarative Linux distribution

Nix: Under the hood by Gabriel Gonzalez

Nix as HPC package management system

Configuration management for your house

Nix: The Purely Functional Package Manager

Introduction to NixOS.

Nix Flake MVP

Example of default.nix for use with nix-build. tweet.

Putting your own derivations in Nix Profile

nix-store --query --deriver $(readlink -f $(which vim))
nix-env --query --drv-path --file '<nixpkgs>' vim

Setting up a C++ project environment with nix. Managing libraries with Nix

Haskell Nix tip

Nix like

Backup using Nix

more on Cabal Nix integration

Package Management Walkthrough: apt, yum, dnf, pkg

Optimising Docker Layers for Better Caching with Nix

Generic Level Polymorphic N-ary Functions

Nix typeclasses

Flakes – Proposed mechanism to package Nix expressions into composable entities

I’d love for a way to write nix expressions that easily lead to reproducible builds. Of course this is possible right now, but it requires substantial legwork to pin the Nixpkgs version or add a SHA for every tarball.

Functional DevOps in a Dysfunctional World

arion. tweet.

Arion is a tool for building and running applications that consist of multiple docker containers using NixOS modules. It has special support for docker images that are built with Nix, for a smooth development experience and improved performance.

stop using docker/containers as a replacement for reproducible build tools (like nix)

another nix+haskell tweet

nix-copy-closure

Using Nix for Repeatable Python Environments | SciPy 2019 | Daniel Wheeler

python section of NixOS docs

article about Nix and Haskell

how to fix broken #Haskell packages in #Nix

ociTools in NixOS

#Nix code that can be used to find or build all reverse dependencies of a given #Haskell package.

nix on small computers

nix at shopify

some of the most commonly useful things about the nix repl

https://twitter.com/ProgrammerDude/status/1188184013556604928

Building our project using Nix for - JS, Statically (Musl + integer-simple), IHaskell notebook, Reflex webapp

testing Cabal packages

Nix recipes for Haskellers

recursive Nix

authoring Nix derivations

recursive Nix

Incremental GHC builds with Recursive Nix

Recursive Nix #13. This allows Nix builders to call Nix to build derivations, with some limitations

Eelco Dolstra - content-addressable store

the holy grail for many years

non-root users could install stuff from unverified binary caches

NixCon

The runtime dependencies are a subset of the build-time dependencies that Nix determines automatically by scanning the generated output for the hash part of each build-time dependency's store path.
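The scanning idea can be illustrated with a toy sketch (the store path and hash below are made up; this is not an invocation of Nix itself):

```shell
# Nix records a package as a runtime dependency iff the 32-character hash part
# of its store path appears somewhere in the build output.
dep="/nix/store/abcd1234efgh5678ijkl9012mnop3456-openssl-1.1.1"
hash_part=$(basename "$dep" | cut -c1-32)

# Simulate a build output (e.g. a binary) that embeds the dependency's path.
printf 'ELF...%s/lib/libssl.so...' "$dep" > output.bin

# The scan finds the hash, so the dependency would be retained.
grep -q "$hash_part" output.bin && echo "runtime dependency detected"
```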

nix for bash scrips

NixOS for developers

It’s good the author addresses the value of pinning channels, such as nixpkgs. Otherwise the dependencies aren’t really deterministic as nix will use the latest version.

thoughts on Nix

experiences with Haskell and Nix

haskell nix doubt

I was Wrong about Nix mentions Nix with Go, construction of docker images. Tools niv, dive, vgo2nix

I Was Wrong about Nix https://news.ycombinator.com/item?id=22295102 https://www.reddit.com/r/programming/comments/f23oox/i_was_wrong_about_nix/ http://lethalman.blogspot.com/2016/04/cheap-docker-images-with-nix_15.html

How I start: Nix

Running a haskell script without GHC

https://www.reddit.com/r/haskell/comments/fp0g4n/nix_haskell_development_2020_howto/ pin nixpkgs

Building a reproducible blog with Nix

Erase your darlings: immutable infrastructure for mutable systems

NixOS – Building a web app with functional programming

My experience with NixOS

Nix documentation shouldn't even mention the existence of nix-env. It's a complete anti-pattern.

society if Nix were typed

nix yubykey

My NixOS Desktop Flow

Nix as a Homebrew replacement

ghcide and nix

A Gentle Introduction to the Nix Family (2019)

tweet

NIX FLAKES, PART 1: AN INTRODUCTION AND TUTORIAL

A Nix terminology primer by a newcomer. HN

Getting ghcide into nixpkgs

other package mangers

Nix(OS) Thoughts

Python with Nix

NixOS: How it works and how to install it!

nix flakes 1 2 3

When and how should default.nix, shell.nix and release.nix be used?

stuff about nix-shell

If path is not given, nix-shell defaults to shell.nix if it exists, and default.nix otherwise.

creating your Nix channel

good video

TOWARDS A CONTENT-ADDRESSED MODEL FOR NIX. cachix interview

https://haskell4nix.readthedocs.io/

Distributing Haskell programs in a multi-platform zip file

Creating a Haskell development environment with LSP on NixOS

Ubuntu 20.10 now packages Nix

Scrive Nix Workshop

DERIVATION OUTPUTS IN A CONTENT-ADDRESSED WORLD

Offloading NixOS builds to a faster machine

overlays based on recursive knots

The TLA+ Home Page

The TLA+ Video Course by Lamport himself!

notice on Lobste.rs

Review by Lamport himself, with some corrections

Industrial Use of TLA+

review at "path sensitive"

Reducing the system to a model requires another idea unfamiliar to many programmers: how all the questions about timeouts and schedulers and inputs boil down to “and here a nondeterministic action may occur.”

There are also many temporal properties that TLA+ cannot express, such as “Until an action is committed, a user may abort the action at any point.”

new paradigms are valuable when they take concepts that are implicit in software design and give them syntax and compiler errors. For TLA+, these concepts are: modeling, nondeterminism, and temporal correctness properties.

On HN: https://news.ycombinator.com/item?id=18163470 https://news.ycombinator.com/item?id=19661329 https://news.ycombinator.com/item?id=14373359 https://news.ycombinator.com/item?id=13918648 https://news.ycombinator.com/item?id=22496287 https://news.ycombinator.com/item?id=21662484 https://news.ycombinator.com/item?id=18357550 https://news.ycombinator.com/item?id=18937214 https://news.ycombinator.com/item?id=14221848 https://news.ycombinator.com/item?id=19821272 https://news.ycombinator.com/item?id=18814350 https://news.ycombinator.com/item?id=14563219 https://news.ycombinator.com/item?id=21003470 https://news.ycombinator.com/item?id=16569653 https://news.ycombinator.com/item?id=14528072 https://news.ycombinator.com/item?id=19839388 https://news.ycombinator.com/item?id=14432754 https://news.ycombinator.com/item?id=24591131 https://news.ycombinator.com/item?id=14475791 https://news.ycombinator.com/item?id=10313386

formal methods at amazon
