Truffle/Graal at SPLASH'16

Truffle and Graal-related Presentations at SPLASH'16


  • AST Specialisation and Partial Evaluation for Easy High-Performance Metaprogramming (PDF)
    Chris Seaton, Oracle Labs
    Sun 30 Oct 2016, 11:30-12:00 - Meta'16

    The Ruby programming language has extensive metaprogramming functionality. But unlike most similar languages, the use of these features is idiomatic and much of the Ruby ecosystem uses metaprogramming operations in the inner loops of libraries and applications.

    The foundational techniques to make most of these metaprogramming operations efficient have been known since the work on Smalltalk and Self, but their implementation in practice is difficult enough that they are not widely applied in existing Ruby implementations.

    The Truffle framework for writing self-specialising AST interpreters, and the Graal dynamic compiler have been designed to make it easy to make high-performance implementations of languages. We have found that the tools they provide also make it dramatically easier to implement efficient metaprogramming. In this paper we present metaprogramming patterns from Ruby, show that with Truffle and Graal their implementation can be easy, concise, elegant and high-performance, and highlight the key tools that were needed.
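As a hedged illustration (not taken from the paper), here is the kind of idiomatic Ruby metaprogramming the abstract refers to: a dynamic call via #send sitting in an inner loop, where the invoked method name is only known at run time.

```ruby
# Illustrative only: idiomatic Ruby metaprogramming in an inner loop.

class Point
  attr_accessor :x, :y
  def initialize(x, y)
    @x = x
    @y = y
  end
end

# Summing a dynamically chosen attribute over many objects: the method
# name is a run-time value, so each iteration goes through #send.
def sum_attribute(points, attribute)
  total = 0
  points.each { |p| total += p.send(attribute) }
  total
end

points = (1..100).map { |i| Point.new(i, -i) }
puts sum_attribute(points, :x)  # 5050
```

In a naive implementation every iteration pays for a dynamic method lookup; techniques like those the paper describes let such calls be cached and inlined so they cost no more than a direct call.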


  • Towards Advanced Debugging Support for Actor Languages: Studying Concurrency Bugs in Actor-based Programs (PDF)
    Carmen Torres Lopez, Stefan Marr, Hanspeter Mössenböck, Elisa Gonzalez Boix
    Sun 30 Oct 2016, 14:10-14:30 - Agere'16

    With the ubiquity of multicore hardware, concurrent and parallel programming has become a fundamental part of software development. If writing concurrent programs is hard, debugging them is even harder. The actor model is attractive for developing concurrent applications because actors are isolated concurrent entities that communicate through asynchronous message passing and do not share state, which lets them avoid common concurrency bugs such as race conditions. However, they are not immune to bugs. This paper presents initial work on a taxonomy of concurrency bugs for actor-based applications. Based on this study, we propose debugging tooling to assist the development process of actor-based applications.
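A minimal actor sketch in Ruby (illustrative only, not the authors' tooling): each actor owns its state and a mailbox, and other threads interact with it only by enqueuing messages.

```ruby
# A tiny actor: private state, a mailbox, and one thread draining it.
class CounterActor
  def initialize
    @mailbox = Queue.new          # asynchronous message sends land here
    @count = 0                    # private state, never shared directly
    @thread = Thread.new { run }
  end

  def send_message(msg)
    @mailbox << msg               # fire-and-forget, never blocks on the actor
  end

  def ask_count
    reply = Queue.new
    @mailbox << [:get, reply]
    reply.pop                     # wait for the actor's answer
  end

  private

  def run
    loop do
      msg = @mailbox.pop          # messages are processed one at a time,
      case msg                    # so @count needs no locks
      when :increment then @count += 1
      when :stop      then break
      else
        kind, reply = msg
        reply << @count if kind == :get
      end
    end
  end
end

actor = CounterActor.new
10.times { actor.send_message(:increment) }
puts actor.ask_count  # 10
actor.send_message(:stop)
```

Because only the actor's own thread touches `@count`, data races on it are impossible by construction; the bugs the paper catalogues are higher-level ones, such as message-ordering problems, that this isolation does not prevent.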


  • Bringing Low-Level Languages to the JVM: Efficient Execution of LLVM IR on Truffle (PDF)
    Manuel Rigger, Matthias Grimmer, Christian Wimmer, Thomas Würthinger, Hanspeter Mössenböck
    Mon 31 Oct 2016, 15:40-16:05 - VMIL

    Although the Java platform has been used as a multi-language platform, most of the low-level languages (such as C, Fortran, and C++) cannot be executed efficiently on the JVM. We propose Sulong, a system that can execute LLVM-based languages on the JVM. By targeting LLVM IR, Sulong is able to execute C, Fortran, and other languages that can be compiled to LLVM IR. Sulong combines LLVM’s static optimizations with dynamic compilation to reach a peak performance that is close to the performance achievable with static compilers. For C benchmarks, Sulong’s peak runtime performance is on average 1.53× slower (0.80× to 2.45×) compared to the performance of executables compiled by Clang O3. For Fortran benchmarks, Sulong is 3.05× slower (1.45× to 5.30×) than the performance of executables compiled by GCC O3. This low overhead makes Sulong an alternative to Java’s native function interfaces. More importantly, it also allows other JVM language implementations to use Sulong for implementing their native interfaces.


  • Building Efficient and Highly Run-time Adaptable Virtual Machines (PDF)
    Guido Chari, Diego Garbervetsky, Stefan Marr
    Tue 1 Nov 2016 13:55-14:20 - DLS

    Programming language virtual machines (VMs) realize language semantics, enforce security properties, and execute applications efficiently. Fully Reflective Execution Environments (EEs) are VMs that additionally expose their whole structure and behavior to applications. This enables developers to observe and adapt VMs at run time. However, there is a belief that reflective EEs are not viable for practical usages because such flexibility would incur a high performance overhead.

    To refute this belief, we built a reflective EE on top of a highly optimizing dynamic compiler. We introduced a new optimization model that, based on the conjecture that the variability of low-level (EE-level) reflective behavior is low in many scenarios, mitigates the most significant sources of the performance overhead related to the reflective capabilities of the EE. Our experiments indicate that reflective EEs can reach peak performance on the order of standard VMs. Concretely: a) if reflective mechanisms are not used, the execution overhead is negligible compared to standard VMs; b) VM operations can be redefined at the language level without incurring significant overhead; c) for several software adaptation tasks, applying reflection at the VM level is not only lightweight in terms of engineering effort but also competitive in performance with ad-hoc solutions.
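A toy sketch of the general idea behind a reflective EE (not the authors' system): the execution environment's own operations live in a table the running program can observe and redefine, here swapping the field-read operation for a logging variant at run time.

```ruby
# Hypothetical mini-interpreter whose "VM-level" operations are
# reified and replaceable from the running program.
class ToyInterpreter
  attr_reader :log

  def initialize
    @log = []
    # The operation table, exposed to language-level code.
    @operations = {
      read_field: ->(obj, name) { obj.instance_variable_get("@#{name}") }
    }
  end

  def redefine(op, &impl)          # language-level access to VM behaviour
    @operations[op] = impl
  end

  def read_field(obj, name)
    @operations[:read_field].call(obj, name)
  end
end

class Pair
  attr_reader :a, :b
  def initialize(a, b)
    @a = a
    @b = b
  end
end

interp = ToyInterpreter.new
pair = Pair.new(1, 2)

interp.read_field(pair, :a)                  # default behaviour
interp.redefine(:read_field) do |obj, name|  # adapt the VM at run time
  interp.log << name
  obj.instance_variable_get("@#{name}")
end
interp.read_field(pair, :b)
puts interp.log.inspect  # [:b]
```

The paper's contribution is making this kind of indirection cheap: the compiler speculates that the operation table rarely changes, so the default fast path carries essentially no cost until a redefinition actually happens.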

  • Optimizing R Language Execution via Aggressive Speculation (DOI)
    Lukas Stadler, Adam Welc, Christian Humer, Mick Jordan
    Tue 1 Nov 2016 14:45-15:10 - DLS

    The R language, from the point of view of language design and implementation, is a unique combination of various programming language concepts. It has functional characteristics like lazy evaluation of arguments, but also allows expressions to have arbitrary side effects. Many runtime data structures, for example variable scopes and functions, are accessible and can be modified while a program executes. Several different object models allow for structured programming, but the object models can interact in surprising ways with each other and with the base operations of R.

    R works well in practice, but it is complex, and it is a challenge for language developers trying to improve on the current state of the art, which is the reference implementation, GNU R. The goal of this work is to demonstrate that, given the right approach and the right set of tools, it is possible to create an implementation of the R language that provides significantly better performance while keeping compatibility with the original implementation.

    In this paper we describe novel optimizations backed up by aggressive speculation techniques and implemented within FastR, an alternative R language implementation, utilizing Truffle - a JVM-based language development framework developed at Oracle Labs. We also provide experimental evidence demonstrating the effectiveness of these optimizations in comparison with GNU R, as well as with the Renjin and TERR implementations of the R language.
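R's lazy argument evaluation works through promises: an argument expression is only evaluated when it is first used inside the callee, and the result is then cached. A Ruby sketch of that behaviour using an explicit thunk (illustrative, not FastR code):

```ruby
# A promise in the R sense: evaluate at most once, on first use.
class Promise
  def initialize(&expr)
    @expr = expr
    @forced = false
  end

  def value
    unless @forced
      @value = @expr.call   # first use: run the expression
      @forced = true        # and cache the result
    end
    @value
  end
end

EVALUATED = []

def noisy(n)
  EVALUATED << n            # record side effects so we can observe laziness
  n
end

# Only the promise that is actually used gets evaluated.
def pick(use_first, first, second)
  use_first ? first.value : second.value
end

puts pick(true, Promise.new { noisy(1) }, Promise.new { noisy(2) })  # 1
puts EVALUATED.inspect  # [1] -- the second argument was never evaluated
```

Because promise bodies may have arbitrary side effects, an optimizer cannot simply evaluate arguments eagerly; it has to speculate that eager evaluation is unobservable and deoptimize when that speculation fails, which is the flavor of aggressive speculation the abstract describes.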

  • Cross-Language Compiler Benchmarking: Are We Fast Yet? (PDF)
    Stefan Marr, Benoit Daloze, Hanspeter Mössenböck
    Tue 1 Nov 2016 16:30-16:55 - DLS

    Comparing the performance of programming languages is difficult because they differ in many aspects including preferred programming abstractions, available frameworks, and their runtime systems. Nonetheless, the question about relative performance comes up repeatedly in the research community, industry, and wider audience of enthusiasts.

    This paper presents 14 benchmarks and a novel methodology to assess the compiler effectiveness across language implementations. Using a set of common language abstractions, the benchmarks are implemented in Java, JavaScript, Ruby, Crystal, Newspeak, and Smalltalk. We show that the benchmarks exhibit a wide range of characteristics using language-agnostic metrics. Using four different languages on top of the same compiler, we show that the benchmarks perform similarly and therefore allow for a comparison of compiler effectiveness across languages. Based on anecdotes, we argue that these benchmarks help language implementers to identify performance bugs and optimization potential by comparing to other language implementations.
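Comparing compiler effectiveness means measuring peak performance, after the dynamic compiler has finished its work. A minimal sketch of such a harness (illustrative; not the paper's actual methodology or benchmark set):

```ruby
# Run a workload repeatedly, discard the first iterations as warm-up,
# and report the remaining per-iteration times.
def measure(warmup:, iterations:)
  warmup.times { yield }          # let the dynamic compiler warm up
  iterations.times.map do
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    yield
    Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
  end
end

# Example workload: sum the first 10_000 integers.
timings = measure(warmup: 5, iterations: 10) { (1..10_000).reduce(:+) }
puts timings.length               # 10
puts timings.all? { |t| t >= 0 }  # true
```

The other half of the methodology is the workload itself: by restricting the benchmarks to a common core of language abstractions, the same program can be ported faithfully across Java, JavaScript, Ruby, and the other languages studied.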


  • Truffle and Graal: Fast Programming Languages With Modest Effort
    Chris Seaton, Oracle Labs
    Thu 3 Nov 2016 14:20-15:10 at M3 - SPLASH-I, Session 11

    Not all programming languages can be supported by huge expert engineering teams to make them as fast as major languages such as Java and JavaScript. Two technologies from Oracle Labs are making it easy to achieve similar results with much less work. Truffle is a framework for writing language interpreters on top of the JVM, and Graal is a new JVM dynamic compiler that makes them fast with very modest effort.

    We’ll use Ruby to give a concrete example of how, with a modest team and limited time, we have taken a large existing language with much accidental and historical complexity and given it performance to rival Java and JavaScript.


  • GEMs: Shared-memory Parallel Programming for Node.js (DOI)
    Daniele Bonetta, Luca Salucci, Stefan Marr, Walter Binder
    Thu 3 Nov 2016 11:20-11:45 at Z2 - OOPSLA, Language Design and Programming Models II

    JavaScript is the most popular programming language for client-side Web applications, and Node.js has popularized the language for server-side computing, too. In this domain, however, minimal support for parallel programming remains a major limitation. In this paper we introduce a novel parallel programming abstraction called Generic Messages (GEMs). GEMs allow one to combine message passing and shared-memory parallelism, extending the classes of parallel applications that can be built with Node.js. GEMs have customizable semantics and enable several forms of thread safety, isolation, and concurrency control. GEMs are designed as convenient JavaScript abstractions that expose high-level and safe parallelism models to the developer. Experiments show that GEMs outperform equivalent Node.js applications thanks to their use of shared memory.
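The core GEMs idea, a message whose sharing semantics are chosen per message, can be sketched in Ruby (hypothetical API, invented for illustration; the real system targets Node.js/JavaScript):

```ruby
# A message that either deep-copies its payload (share-nothing message
# passing) or deliberately exposes the same object to the receiver.
class GenericMessage
  attr_reader :payload

  def initialize(payload, mode: :copy)
    @payload = if mode == :copy
                 Marshal.load(Marshal.dump(payload))  # isolated copy
               else
                 payload                              # shared memory
               end
  end
end

inbox = Queue.new
data = { hits: 0 }

inbox << GenericMessage.new(data, mode: :copy)
inbox << GenericMessage.new(data, mode: :share)

copied = inbox.pop.payload
shared = inbox.pop.payload
data[:hits] = 1

puts copied[:hits]  # 0 -- receiver sees an isolated copy
puts shared[:hits]  # 1 -- receiver sees the sender's update
```

The `:share` mode is what extends Node.js-style message passing: it trades isolation for the performance of operating on a single shared structure, which is where the abstract's speedups come from.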

  • Efficient and Thread-Safe Objects for Dynamically-Typed Languages (PDF)
    Benoit Daloze, Stefan Marr, Daniele Bonetta, Hanspeter Mössenböck
    Thu 3 Nov 2016 13:30-13:55 at Z2 - OOPSLA, Runtime Support

    We are in the multi-core era. Dynamically-typed languages are in widespread use, but their support for multithreading still lags behind. One of the reasons is that the sophisticated techniques they use to efficiently represent their dynamic object models are often unsafe in multithreaded environments.

    This paper defines safety requirements for dynamic object models in multithreaded environments. Based on these requirements, a language-agnostic and thread-safe object model is designed that maintains the efficiency of sequential approaches. This is achieved by ensuring that field reads do not require synchronization and field updates only need to synchronize when they are applied to objects shared between threads.

    Basing our work on JRuby+Truffle, we show that our safe object model has zero overhead on peak performance for thread-local objects and only 3% average overhead on parallel benchmarks where field updates require synchronization. Thus, it can be a foundation for safe and efficient multithreaded VMs for a wide range of dynamic languages.
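The synchronization policy the abstract describes can be sketched in Ruby (illustrative, not JRuby+Truffle's implementation): reads never lock, and writes lock only once an object has been marked as shared between threads.

```ruby
# An object whose write path synchronizes only after it becomes shared.
class SafeObject
  def initialize
    @fields = {}
    @shared = false
    @lock = Mutex.new
  end

  def share!              # called when the object becomes reachable
    @shared = true        # from more than one thread
  end

  def read(name)
    @fields[name]         # field reads need no synchronization
  end

  def write(name, value)
    if @shared
      @lock.synchronize { @fields[name] = value }  # shared: synchronize
    else
      @fields[name] = value                        # thread-local: fast path
    end
  end
end

obj = SafeObject.new
obj.write(:x, 1)     # thread-local fast path
obj.share!
obj.write(:x, 2)     # now goes through the lock
puts obj.read(:x)    # 2
```

The practical payoff reported in the abstract follows from this split: most objects never become shared, so they never pay for synchronization at all, and only the writes to shared objects bear the measured 3% average overhead.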
