Skip to content

Instantly share code, notes, and snippets.

@TerrorJack
Created March 25, 2015 16:43
Show Gist options
  • Star 0 You must be signed in to star a gist
  • Fork 0 You must be signed in to fork a gist
  • Save TerrorJack/ef1f8b231660ea3523bb to your computer and use it in GitHub Desktop.
Save TerrorJack/ef1f8b231660ea3523bb to your computer and use it in GitHub Desktop.
some acm data for fun.
This file has been truncated, but you can view the full file.
{
"event_title": "International Conference on Functional Programming",
"event_contents": [
{
"conference_title": "Commercial Users of Functional Programming",
"conference_contents": [
{
"proceeding_title": "CUFP '10:ACM SIGPLAN Commercial Users of Functional Programming",
"proceeding_contents": [
{
"paper_title": "Clojure Tutorial",
"paper_authors": [
"Aaron Bedra"
],
"paper_abstract": "In this tutorial you will spend some time learning about what clojure has to offer and build an application that solves a real world problem. In this tutorial pair programming is welcome and encouraged! Bring a friend to help you work through the exercises and to bounce ideas off of. •Introduction •Clojure's Elevator Pitch Diving into the Syntax In the introduction you will learn key features of the clojure language and learn how to read and understand the syntax at the basic levels. Starting the Application •Diving into the problem •Dealing with the data •An introduction to Incanter After getting the initial problem, attendees will be introduced to a real world scenario that involves data processing. They will be given a data stream that cannot be modified and will have to write some transformation code to handle and process the data. Attendees will also have to produce some simple statistics on the data. This section will deal with experimentation on the clojure repl and trying out new ideas that address the problem. Building the Application •Making the ideas into concrete functions •Handling dependencies •Packaging and building •Polish After walking through experiments on the repl, attendees will wrangle the ideas into a real application. They will learn how to structure a clojure application and learn how to write clojure in an idomatic way. At this time bonus challenges will be offered to those who have completed the exercise, and assistance to those who are still working through them."
},
{
"paper_title": "Building robust servers with Erlang",
"paper_authors": [
"Martin Logan"
],
"paper_abstract": "The Building Robust Servers with Erlang tutorial will cover the fundamental concepts needs to write highly fault tolerant servers with Erlang/OTP. In this tutorial you will learn some OTP basics focusing specifically on the gen_server and supervision behaviours. With these behaviours you will build server that is capable of processing requests rapidly and concurrently and recovering in the face of failures large and small. You will ultimately build a fault tolerant server, package it, and deploy it as a running service."
},
{
"paper_title": "High-performance Haskell",
"paper_authors": [
"Johan Tibell"
],
"paper_abstract": "Haskell makes it possible to write elegant, high-level code that rivals the performance of low-level, imperative languages. In this tutorial, I will introduce the tools Haskell provides for reasoning about the performance of your code and techniques that you can use to make your code faster. I will cover important topics in Haskell performance optimization, including: •accurate benchmarking, •CPU and memory profiling, •laziness and strictness, •making sense of compiler output, •performance idioms, and •data types and their impact on performance. By the end of the tutorial you should have an understanding of how to accurately measure the performance of your Haskell program, determine which parts of the program needs improvement, and finally, improve the performance of the program."
},
{
"paper_title": "F# 2.0: a day at the beach",
"paper_authors": [
"Rick Minerich"
],
"paper_abstract": "This tutorial will be organised around learning the building blocks of F#, real world functional programming principles, and platform compatibility. •\"Hello Seagull\" - A first glimpse of F# •Beyond the REPL - A relaxing Visual Studio 2010 rubdown •The Power of Types - A distinct lack of unpleasant surprises •Asynchrony and Concurrency - Be lazy, it works so you don't have to •Monadic Magic - A frosty gulp of language oriented programming •Tour de F# - Enjoy the fruits of others' hard work What can attendees expect? A thorough introduction to F#, one of the most powerful languages available for real world development. Come dive into the building blocks of functional programming, swim around with the safety and convenience of type inferred statically checked code, and glide through F#'s impressive array of libraries."
},
{
"paper_title": "Implementing web sites with Scala and Lift",
"paper_authors": [
"David Pollak"
],
"paper_abstract": "Lift is a Web framework in the vein of Seaside and WebObjects. Lift is built on Scala, a functional/OO hybrid lanugage that runs on the Java Virtual Machine. In contrast to frameworks oriented around the model-view-controller (MVC) pattern, Lift abstracts the HTTP request~response cycle rather than wrapping HTTP concepts in APIs. Lift makes use of Scala's functional abstractions in a way that allows composition of applications. In this tutorial, we will build a multiuser, real-time chat application in Lift and discuss Scala's language features that make Lift possible."
},
{
"paper_title": "Camlp4 and Template Haskell",
"paper_authors": [
"Jake Donham",
"Nicolas Pouillard"
],
"paper_abstract": "\"Static metaprogramming\" is compile-time code analysis and synthesis. It has many applications, such as (from simple to complex): defining abbreviations, generating boilerplate from type definitions, extending the language syntax, and embedding DSLs. Static metaprogramming is supported for Haskell with Template Haskell, and for OCaml with Camlp4; the two systems have a lot in common. In this tutorial we will work through examples in both languages, sticking mostly to their commonalities. We will also say a little about features which are unique to each. The main features we will cover are: •host language syntax trees •quotations / antiquotations for host language syntax trees •defining new quotations •pragmatics of each tool (Quotations / antiquotations are a mechanism for working with host language abstract syntax trees using the concrete syntax of the host language.) We will motivate these features through examples of increasing complexity, including a simple DSL for working with JSON data. Participants will leave with a basic understanding of how to build syntax extensions with the two systems, including how to run the tool, how to work with the host language syntax trees using quotations and antiquotations, and how to define new quotations to extend the host language."
},
{
"paper_title": "F#: embracing functional programming in Visual Studio 2010",
"paper_authors": [
"Luke HOban"
],
"paper_abstract": "Earlier this year, Microsoft released Visual Studio 2010 with full support for the F# functional programming language. In this talk, we will look at what this meant, both technically and non-technically, and some of the interesting aspects of this \"productization\" of F#. From ways that functional programming languages can be positioned in the broader developer space, to issues like backwards compatibility, localization and tools scalability, we'll look at some of the practical concerns facing functional programming languages in the commercial software development space."
},
{
"paper_title": "Scaling Scala at Twitter",
"paper_authors": [
"Marius Eriksen"
],
"paper_abstract": "Rockdove is the backend service that powers the geospatial features on Twitter.com and the Twitter API (\"Twitter Places\"). It provides a datastore for places and a geospatial search engine to find them. To throw out some buzzwords, it is: •a distributed system •realtime (immediately indexes updates and changes) •horizontally scalable •fault tolerant Rockdove is written entirely in Scala and was developed by 2 engineers with no prior Scala experience (nor with Java or the JVM). We think the geospatial search engine provides an interesting case study as it presents a mix of algorithm problems and \"classic\" scaling and optimization issues. We will report on our experience using Scala, focusing especially on: •\"functional\" systems design •concurrency and parallelism •using a \"research language\" in practice •when, where and why we turned the \"functional dial\" •avoiding mutable state"
},
{
"paper_title": "Cryptol, a DSL for cryptographic algorithms",
"paper_authors": [
"Sally Browning"
],
"paper_abstract": "Cryptol is a domain-specific functional language designed by Galois, Inc in collaboration with the the NSA for specifying cryptographic algorithms. The Cryptol language includes native support for arbitrary sized words, a strong type-system based on Hindley-Milner style polymorphism extended with arithmetic size constraints, and the ability to generate proof-objects throughout the compiler toolchain to provide correctness evidence that can be independently verified. In addition, high-level specification is fully executable. The accompanying toolset provides a rich set of translators that can produce both hardware and software implementations for a variety of target platforms. In addition, the toolset can generate formal models representing the specification and an implementation, whether automatically generated from the Cryptol specification or written independently, and show that the two models are functionally equivalent. A team of developers from Rockwell Collins, Inc. and Galois, Inc. has successfully produced high-speed embedded Cryptographic Equipment Applications (CEAs), automatically generated from high-level specifications. These high-speed CEA implementations comprise a mixture of software code and VHDL, and target a compact new embedded platform designed by Rockwell Collins. Automated formal methods prove that algorithm implementations faithfully implement their high-level specifications. Cryptol's high-level approach to hardware implementation does not come at the expense of performance. For instance, an algorithm core generated from a Cryptol specification for AES-256 and running in Electronic Codebook mode demonstrated throughput in excess of 16 Gbps. When feedback from the output stage to the input was introduced, thereby defeating the advantage gained by \"unrolling\" AES rounds, encryption performance for AES-256 still exceeded 1 Gbps, while consuming less than 2% of the available programmable logic for the algorithm core. Significantly, the Rockwell Collins/Galois team was able to design, implement, simulate, integrate, analyze, and test a complex CEA on the new hardware, including AES-256 and Galois Counter Mode (GCM), in less than 3 months, significantly reducing the usual time to produce a new design on a new platform."
},
{
"paper_title": "Naïveté vs. experience: or, how we thought we could use Scala and Clojure, and how we actually did",
"paper_authors": [
"Michael Fogus"
],
"paper_abstract": "This talk will discuss the use of Scala and Clojure in an ongoing software project, now 2-years old. The talk will start with an anecdote about how each language was pitched to the software team composed of programmers with very little to no prior experience with functional programming. The actual Scala pitch will be dissected and criticized and then distilled into a few general thoughts on how to make an effective functional language pitch. With these tangential matters out of the way, a description of the difficulties ramping up to the full use of Scala will be discussed, followed by a simplified description of the system developed. The talk will briefly cover the conceptual hurdles that the team experienced when changing from a purely object-oriented mindset to a more functional approach. Additionally, the talk will focus on how Clojure was used for ancillary functionality and tooling. The difficulties experienced, both technical and cultural, in introducing Clojure into the mainline source will then be covered. To wind down, the advantages and disadvantages, both real and perceived, between the use of both languages in a commercial setting will be explored. Finally, the lessons learned after 2 years will be contrasted with the initial pitch to see where experience and naïveté intersected."
},
{
"paper_title": "Reactive extensions (Rx): curing your asynchronous programming blues",
"paper_authors": [
"Erik Meijer"
],
"paper_abstract": "Asynchronous, event-driven \"reactive\" programming is way too hard in today's world of development tools and frameworks. The huge amount of manual and error-prone plumbing leads to incomprehensible and hard to maintain code. As we reach out to services in the cloud, the desire for asynchronous computation is ever increasing, requiring a fresh look on the problems imposed by reactive programming. Centered around the concept of observable data sources, Rx provides a framework that takes care of the hard parts of reactive programming. Instead of focusing on the hard parts, you now can start dreaming about the endless possibilities of composing queries over asynchronous data sources, piggybacking on convenient LINQ syntax. In this session, we'll cover the design philosophy of Rx, rooted on the deep duality between the interactive IEnumerable interface and the new reactive IObservable interface in .NET 4. From this core understanding, we'll start looking at various combinators and operators defined over observable collections, as provided by Rx, driving concepts home by a bunch of samples. Finally, if time permits, we'll look at the Reactive Extensions for JavaScript which allows us to take the concepts we already know from Rx and apply them to JavaScript and have deep integration with libraries such as jQuery. Democratizing asynchronous programming starts today. Don't miss out on it!"
},
{
"paper_title": "Eden: an F#/WPF framework for building GUI tools",
"paper_authors": [
"Howard Mansell"
],
"paper_abstract": "Our group within Credit Suisse is responsible for developing quantitative models used to value financial products within the Securities Division of the bank. One aspect of this role is to deliver tools based on those models to trading and sales staff, which they can use to quickly price proposed transactions and perform other analysis of market conditions. Historically these tools have been delivered as Excel spreadsheets. WPF (Windows Presentation Foundation) is a GUI framework which encourages architectural separation between the layout of the user interface itself (the \"View\") and the underlying interactions and calculations (the \"ViewModel\" and \"Model\"). We have built a framework for developing tools in WPF that makes use of a graph-based calculation engine for implementing ViewModels and Models in F#. The engine is built on F# asynchronous workflows and provides a standard set of features to our tools. In this talk I'll discuss the implementation of this calculation engine, including various steps in its evolution that led up to our use of asynchronous workflows. I'll also talk about how well F# and asynchronous workflows have worked for us, and briefly discuss some of the challenges of integrating F# and WPF."
},
{
"paper_title": "Functional language compiler experiences at Intel",
"paper_authors": [
"Leaf Petersen",
"Neal Glew"
],
"paper_abstract": "For five years Intel's Programming Systems Lab (PSL) has been collaborating with an external partner on a new functional programming language designed for productivity on many-core processors. While the language is not yet public, this talk outlines motivations behind the language and describes our experiences in implementing it using a variety of functional languages. The reference interpreter is written in Haskell and compiled with GHC while PSL's performance implementation is written in SML and compiled with Mlton. We have also generated Scheme code compiled with PLT Scheme as part of a prototyping effort. At several points, the project has had several contributors that did not have a background in functional languages working on the compiler and on writing benchmarks. We describe their experiences working in SML and with functional languages in general. Specifically: •what they liked and disliked about using functional languages •what was easy and hard about learning and using functional languages •what worked/didn't work for helping them learn to program in functional languages •which functional features they used and didn't use •general observations about the code that they wrote •what (if anything) they took away from the experience A design principle of the implementation effort was to by default avoid the use of imperative features. Previous experience and review of the literature suggested that many parts of a compiler could be written as well or better using primarily functional code, but that restricting ourselves entirely to the functional fragment of SML was probably not reasonable. We describe our experiences with this, and the tradeoffs that we encountered. Specifically: •During the project we experimented with both immutable and mutable intermediate representations (IRs). We describe and motivate some of the scenarios where we used one or the other, explain our experiences with this, and describe cases where we feel that we made an inappropriate initial choice. •We chose to avoid all global mutable state in the compiler. Most notably, symbol tables and global configuration information are always passed explicitly as parameters to parts of the compiler that require them. This choice had benefits and costs, and we discuss our experience with this. We also describe some of the features of SML that we found useful, and discuss some of the lack of features and quirks of SML that annoyed us. Specifically: •We discuss places where we encountered limitations in the module system. These include the lack of uniformity between the signature and structure language, and the lack of recursive modules. •We describe some of the ways in which we believe that better syntactic support could have lessened the programming burden, especially with respect to attempting to be purely functional. Examples of this include a type class mechanism and/or monadic syntax to lessen the burden of passing around explicit state, better syntactic support for using the imperative features of the language, and better support for custom control operators."
},
{
"paper_title": "Riak Core: building distributed applications without shared state",
"paper_authors": [
"Rusty Klophaus"
],
"paper_abstract": "Storing big data reliably is hard. Searching that data is just as hard. Basho Technologies, the company behind Riak KV and Riak Search, focuses on solving these two problems. Both Riak KV (a key-value datastore and map/reduce platform) and Riak Search (a Solr-compatible full-text search and indexing engine) are built around a library called Riak Core that manages the mechanics of running a distributed application in a cluster without requiring a central coordinator or shared state. Using Riak Core, these applications can scale to hundreds of servers, handle enterprise-sized amounts of data, and remain operational in the face of server failure.* This talk will describe the implementation, responsibilities, and boundaries of Riak Core, including how Riak Core: •Divides the data- or computing-domain of your application into separate virtual nodes located on different physical servers. •Distributes data and operations to the correct virtual nodes within the cluster. •Dynamically re-shapes the cluster without requiring a central coordinator when nodes enter, leave, crash, slow down, or go dark. •Provides the Erlang community with a solid platform for creating other distributed applications. Special attention will be paid to how Riak Core adopts common functional programming patterns and leverages OTP/Erlang libraries and behaviours."
},
{
"paper_title": "Functional programming at Freebase",
"paper_authors": [
"Warren Harris"
],
"paper_abstract": "Freebase is a community-built, online database of facts and information, developed by Metaweb Technologies in San Francisco, California [1]. Freebase uses a proprietary graph database technology to store and query a network of over 12 million interrelated topics involving several hundred million individual graph relations. Third-party applications are free to query and update the Freebase database, and do so using the Metaweb Query Language, MQL [2]. MQL queries are expressed with a JSON-based template language which makes them easy to integrate into web applications, particularly the client-side portion which is processed with JavaScript?. Metaweb's first-generation MQL implementation was written as a Python-based middle-tier application that dynamically translates JSON-based queries into a low-level proprietary graph query language and networking protocol [3]. The low-level query language was designed for efficiency rather than for direct usability, and significant effort is required to translate between the two languages. Analysis of the entire Freebase application stack has revealed that as much time was being spent in the MQL query and result translation process as was being spent actually resolving the low-level graph queries. Much of this was attributed to the memory-intensive architecture of the translator, but a large portion was attributed to overhead inherent in the Python 2.6 runtime. We have undertaken developing a second-generation MQL translator written in Ocaml and drawing on a number of pure functional techniques. The core language translation process is expressed in terms of embedded language that implements the graph query protocol. This embedded language is used for both for static queries, e.g. for schema lookups, and for expressing the dynamic translation of MQL queries. The translator operates as a server and uses Lwt (Lightweight Threads library [4]) to interleave both client and graph database requests. A web services API and monitoring console have been developed using the Ocsigen web server and associated Eliom infrastructure [5]. The performance of our reimplemented MQL translator service is very encouraging. One process can sustain over an order of magnitude more simultaneous MQL requests, and service each request in a small fraction of the time consumed by the Python implementation. Moreover, due to the asynchronous nature of the underlying Lwt I/O subsystem, a single processor core can handle several times the capacity of an entire multi-core server machine running the former Apache/WSGI/Python [6] infrastructure. In addition to describing the MQL translator system, I would like to discuss the underlying mechanism by which it batches fragments of I/O requests together into single larger protocol messages, thereby minimizing communication overhead with the underlying graph database. This technique closely resembles monads typically used in functional programming, but also provides some of the benefits of 'arrows' [7]."
},
{
"paper_title": "ACL2: eating one's own dogfood",
"paper_authors": [
"Warren A. Hunt, Jr."
],
"paper_abstract": "We are using the ACL2 theorem-proving system for formally verifying properties of the X86-compatible, 64-bit VIA Nano microprocessor. To validate Nano circuit models, we translate its Verilog into our formally defined HDL. We write specifications in the ACL2 logic, and mechanically verify HDL descriptions using the ACL2 theorem prover to orchestrate the use of BDDs, AIGs, SAT, symbolic simulations techniques, and the theorem prover itself. Our system has been integrated into the Centaur design toolflow; this includes rapid and regular translation of the Nano design into our framework and daily regression runs. Our tools are written in ACL2, which is itself a functional language. For instance, our BDD package is written in ACL2 and has been proven correct using the ACL2 theorem prover -- likewise so is our AIG package and many other tools. Our symbolic simulation system for the entire ACL2 logic is also written in ACL2, and it has been verified by the ACL2 theorem prover. In fact, the entire ACL2 system is written in the ACL2 language. ACL2 is in commercial use by a number of companies, including AMD, Centaur, IBM, and Microsoft. We believe the FP community should consider the same operational paradigm. In fact, we challenge the FP community to write analysis tools for their functional programs in their own programing languages. This kind of \"eating one's own dog food\" tends to make one's system better. Our combined ACL2/CAD system may be the world's largest functional program as the source code exceeds five megabytes. Without our associated mechanical verification system, we couldn't begin to manage the complexity we have created. We have wondered if we could apply our tools to other functional languages."
}
]
},
{
"proceeding_title": "CUFP '09:Proceedings of the 2009 Video Workshop on Commercial Users of Functional Programming: Functional Programming As a Means, Not an End",
"proceeding_contents": [
{
"paper_title": "Welcome",
"paper_authors": [
"Francesco Cesarini",
"Jim Grundy"
]
},
{
"paper_title": "Real world Haskell: keynote",
"paper_authors": [
"Bryan O'Sullivan"
],
"paper_abstract": "Bryan will talk about how the book \"Real World Haskell\" came to be, and the response that it has received since publication. He will also discuss the opportunities presented, and the challenges faced, by functional languages in open source and in industry."
},
{
"paper_title": "Scala at EDF Trading: implementing a domain-specific language for derivative pricing with Scala",
"paper_authors": [
"Lee Momtahan"
],
"paper_abstract": "In this talk I shall explain how Scala has been used at EDF Trading. We have used Scala to implement Diesel: a domain-specific language used within our in-house system for pricing/risk-managing/hedging commodity derivatives. Diesel can represent commodity derivatives naturally and can then be executed to perform a Monte Carlo simulation. Compared to writing derivative-specific Monte Carlo simulations in a general-purpose computer language, the per-derivative development time has been reduced from weeks to hours. We have been using Scala within a large code base of 300k lines of Java on a on a business-critical, production system, where performance requirements are key. Based on this experience, the second part of the talk will discuss the suitability of Scala for use within this context, comparing the risks of using a niche language against its productivity benefits."
},
{
"paper_title": "Erlang at hover.in",
"paper_authors": [
"Bhasker Kode"
],
"paper_abstract": "I'm Co-Founder & CTO at 'hover.in', the in-text content & ad delivery platform that let's blogs and website publishers to push client-side events to the cloud. The start-up predominantly runs off the LYME stack ( Linux / Yaws / Mnesia / Erlang ) . From our experiences at 'hover.in' I'd like to discuss why we chose using Erlang and got about using it as our bridge across our multi-node cluster. In particular the architectural decisions that went into making our distributed python crawler back-end running off Mnesia with its sharing & fragmentation strategies for tables that span several millions of rows, load-balancing to our 3-node Yaws web servers, tweaks to solve file descriptor & efficiency bottlenecks, experiments in DHT's, our cache worker implementations, our messaging queues, cron's & trade offs in dispatching jobs also throw light on design choices that can fit in distributed and heterogeneous environments. We have also recently built our own in-memory cache workers, persistent stats & logging system, and in the process of now building our own A/B testing framework, that we'd love to talk about. After an initial quite 1 1/2 years of Erlang in production, we've just launched our developer blog which might give a closer insight of our work: •notes on our multi-core computing & back-end experiences •notes on using a Erlang based load-testing tool (Tsung) •my talk on 'Erlang at hover.in' at devCamp Bangalore (with one slide listing all the modules written as well) •a more product-oriented interview"
},
{
"paper_title": "The Big Board: teleconferencing over high-res maps with Haskell",
"paper_authors": [
"Jefferson Heard"
],
"paper_abstract": "A public emergency is the ultimate example of multitasking and collaboration over a wide area. Crowd control, first aid, fire, police, and search and rescue teams all have to provide a timely and coordinated response to save lives and property. This requires situational awareness, meaning that emergency managers know what and where their resources are on the ground, and what the ground's features are. Managers who direct emergency operations centers have few technological tools to help them with these complicated, critical tasks. They are, in fact, still largely paper based. The Big Board aids in filling this technological gap by providing aerial photography, facility for dynamic, geo-referenced content creation, and a mechanism for simulcasting the content to everyone participating in the management of an emergency. The goal is that it should give emergency managers a reliable and flexible way to coordinate on their most natural medium (maps), be as easy to use as pen and paper, quick to train a new user on, and able to run in many environments, some quite technologically out of date. In The Big Board, users open a map, join a virtual conference room, and then can collaboratively mark up the map with paths, polygons, and placemarks and add content behind these using a wiki that is generated as part of every conference room. As they mark up the map, their annotations are simulcasted to everyone else in the same conference room. Additionally, web-services serving geo-referenced content can be integrated onto the map to overlay things such as real-time vehicle location and sensor data. This application has been written from the ground up in Haskell, with a PostGIS backend. Haskell was chosen for its laziness, reliability of applications written in it, rapid development, multicore scalability, and native compilation. In this presentation I will describe: •The Big Board, who's using it, and where we're going with it. •The requirements of it, and how Haskell fulfills these requirements better than the alternatives. •A high level overview of the application's structure. •The challenges and advantages of functional design in such a large application. •How the design led naturally to two reusable, publicly available libraries now in Hackage: Buster (FRP for application orchestration) and Hieroglyph (for functional 2D vector-graphics). I will also give a demonstration of the application running, showing how it can be used to coordinate a disaster response."
},
{
"paper_title": "The first substantial line of business application in F#",
"paper_authors": [
"Alex Peake",
"Adam Granicz"
],
"paper_abstract": "We have developed MarketingPlatform™ a marketing automation solution delivered as Software as a Service with F# as the primary language. MarketingPlatform™ is a solution for marketers in direct marketing and in channel marketing who would like to gain a timely and deep understanding of what is working and what is not working in their marketing campaigns. Marketers are than facilitated in the execution and delivery of campaigns, using this insight to create relevant communications to each individual. It is divided into four tightly integrated campaign management steps of Measure, Analyze, Design and Execute. Measure: How well are my campaigns working? Are they meeting the goals? Analyze: Why are the campaigns exceeding goals, or falling short of goals? What sub-segments are worthy of further communications with? Design: Create communications that are relevant to each individual, driven by the data and insight gained from measurement and analysis. Execute: Deliver the communications through the most appropriate channels, including email, print and mail, texting and purls. Why did TFC management choose F#? F# is a functional language with first class functions and composability, a pattern matching language and strongly typed. At the same time it as a .NET CLR language, fully interoperable with the vast libraries, and fully capable of OOP. So we believed that we would get the best of both worlds --- a modern productive language and a full set of libraries to build on. The product is in the market and selling well, so we were rewarded by our decisions. Where was F# applied within the application, and where not? Most of the business logic was implemented in F#, as was most of the data handling. C# was retained for building the ASP.NET user interface, just because the F# tool support is not there yet. What were the development benefits? The application was developed in less time with significantly less code --- probably a quarter. The code is more readable and easier to adapt as new market requirements come in. What were the business benefits? We were able to get to market much sooner with more feature, and the overall cost of the development was significantly lower. Conclusion: F# is an excellent language for line of business applications. It somewhat shortens the development cycle and greatly shortens the enhancement cycle."
},
{
"paper_title": "Functional programming at Facebook",
"paper_authors": [
"Christopher Piro",
"Eugene Letuchy"
],
"paper_abstract": "We use Erlang, ML, and Haskell for a few small projects but (as I'm sure you guessed) we'd like to focus on the lessons we've learned using Erlang for building Facebook Chat. The channel servers are at the heart of our Chat infrastructure; they receive messages from our web servers and queue them for HTTP delivery. Presence information, the knowledge of who is currently using Chat, is aggregated and shipped to a separate service. The service has been operational since our launch about a year ago and we continue to develop and improve it frequently. We're also developing an XMPP interface to Facebook Chat based on the ejabberd project. A lot of functionality has been added to interact with the many moving parts of our infrastructure. Erlang has been the right tool for the job. It's well-known that Erlang is a good choice for communications applications and real-time chat in particular, and we'd like to mention that Erlang's strong support for concurrency, distribution, hot code loading, and online debugging have been indispensable, and that it's those same weaknesses that lead us away from C++ when first designing the service. However, we'd like to focus on the factors inside and outside Facebook that enabled us to choose Erlang in the first place. Most importantly, we needed good support for language-independent RPC. Fortunately we rely heavily on Thrift for all our services. Both the channel and Jabber servers need to speak to programs written in PHP and C++, and having infrastructure and engineers comfortable debugging it already in place when we began was invaluable. Our service management tools as well are language-independent; all our services regardless of language are controlled using the same interface. Facebook and the Thrift project are also fortunate to have an active community. The original Thrift binding for Erlang was implemented in-house but substantial portions were reimplemented by outside contributors. As well we've leveraged existing open source products including ejabberd and Bob Ippolito's Mochiweb framework. The other half of our story consists of the barriers presented and risks taken in choosing Erlang. I mentioned above that much of Facebook's infrastructure is language-agnostic, but our build and deploy systems unfortunately are not. Many of our internal tools weren't written with Erlang or (in particular) hot code reloading in mind, so we needed to write many one-offs to support all the runtime goodness that Erlang affords us. Facebook runs many services, most of which are in C++, Python, or Java, and only two in Erlang, so in some respects we've needed to work harder to integrate. We at Facebook also tend to work in large codebases and share the responsibility of bug fixing. Unfortunately, our Erlang projects live outside of our main repository, and the extra learning necessary to develop or just fix bugs in Erlang tends to keep us isolated from the pool of talent in our department. Unsurprisingly, the job of fixing Chat bugs falls to one of a very small group, whereas bugs in our PHP code have a small army of PHP generalists waiting to squash them. However Erlang has such a steep learning curve only because most undergraduate programs don't stress functional programming, and Facebook in particular doesn't expose most engineers to functional programming, or even information about the strengths and weaknesses of FP versus PHP, C++, and Python. In particular we'd like to touch on the prevalence of object-oriented programming in education and in practice, and the tendency of engineers (even engineers with FP experience) to guide their design by OOP principles even when they don't fit. The recurring theme is \"the right tool for the job\" --- actor-based, functional, imperative, and object-oriented programming are all valuable tools for modeling particular problems, but the most important lessons in our experience are how to lower the barriers to choosing the right tool, and how to arm engineers with the knowledge they need to make an appropriate choice."
},
{
"paper_title": "FMD: functional development in Excel",
"paper_authors": [
"Lee Benfield"
],
"paper_abstract": "Barclays Capital, like many other investment banks, uses Microsoft Excel as a rapid application development environment for pricing models and other tools. While this may sound bizarre, Excel's flexibility and convenience renders it an immensely useful tool for the Front Office, and our Traders are extremely Excel literate. Excel combines two programming models a zeroth order functional language (the spreadsheet) an imperative programming language (Visual Basic for Applications) The functional model allows very rapid development and great transparency, but the limitations of Excel's built-in functions drives developers to use VBA. Soon the spreadsheet dependency graph is lost and the spreadsheet layer is relegated to a GUI on top of tens/hundreds of thousands of lines of VBA. The logic is tightly tied to Excel, and a server-side implementation involves a complete rewrite in another language, which carries both operational risk and developmental cost. FMD ('Functional Model Deployment') prevents these problems by embedding a functional language cleanly into Excel, effectively extending Excel to be a higher order functional environment. Now complex models can be developed without leaving the pure spreadsheet domain: Before 1.Limited built-in functions need to be extended with add-ins or VBA. 2.Boilerplate code needs to be written to import libraries. 3.Systems need to be rewritten to run outside Excel. (typically ported to C++ / C# back end) After 1.Functions can be defined on-the-fly without leaving the pure spreadsheet side. 2.Dynamic and data-driven wrappers make external libraries directly visible. 3.Spreadsheet 'programs' can be exported and run outside of Excel. The business have fully supported this approach, and are enthusiastic about using FMD - as Simon Peyton Jones identified elsewhere, \"Excel is the world's most popular functional language\". From their point of view, functional programming in Excel is just an extension of what they've been doing for years!"
},
{
"paper_title": "Building the user programmable internet with Erlang",
"paper_authors": [
"Gordon Guthrie"
],
"paper_abstract": "For many people, the programming language of choice is a spreadsheet. This is especially true of people who are not employed as programmers, but write programs for their own use --- often defined as \"end-user\" programmers---A User-Centred Approach to Functions in Excel (Simon Peyton Jones, Alan Blackwell, Margaret Burnett) The most popular programming language in the world is a functional one --- the humble spreadsheet. The spreadsheet programming paradigm remains tied to the 'document' model due to its firm desktop roots. Hypernumbers are developing a 'spreadsheet-like' programming language and platform for the web that will enable non-technical end-users to build dynamic integrated web-applications. The hypernumbers platform is itself implemented in a functional programming language --- Erlang. The platform is currently in private beta testing with selected users and potential customers."
},
{
"paper_title": "Clear & simple: composing a marketplace",
"paper_authors": [
"Marc Wong-VanHaren"
],
"paper_abstract": "Glyde is a web-based marketplace, with the goal of making buying and selling as easy as possible. We also strive to provide a responsive, error-free experience. These goals represent challenges which functional programming helps mitigate. First, easy-to-use often means complex-to-build, but FP's high-level abstractions and referential transparency help us to manage this complexity. Second, as we handle people's money, mistakes could be damaging, but the expressiveness of FP give us confidence in our code, and a lack of side effects facilitates testing. Third, a responsive site is a pleasure to use, so we write parallelizable logic, which FP's avoidance of side effects makes feasible. This talk will describe Glyde's use of JoCaml and Scala and discuss our experiences with these tools."
},
{
"paper_title": "Birth of the industrial Haskell group",
"paper_authors": [
"Duncan Coutts"
],
"paper_abstract": "It has long been thought that commercial users of Haskell could benefit from an organisation to support their needs, and that as a side-effect the wider Haskell community could benefit from the actions of such an organisation. The stronger community would in turn benefit the commercial users, in a positive feedback cycle. At last year's CUFP, users of several FP languages raised the issue that there was no organisation that they could pay to do the important but boring work of maintaining and improving common infrastructure. Shortly after CUFP, in partnership with major commercial users of Haskell such as Galois and Amgen, we started to set wheels in motion, and in March 2009 we announced the birth of the Industrial Haskell Group (IHG). The IHG is starting off with a limited set of activities, but already it is having an impact on the state of the Haskell development platform. We expect that as it expands, it will become a significant force driving Haskell forwards. In this presentation we will talk about the motivation leading to the formation of the IHG, how it has worked thus far and what lessons we can learn that might benefit other FP communities. We will also look at how we can encourage the positive feedback cycle between commercial users and the wider community."
},
{
"paper_title": "Discussion",
"paper_authors": [
"John Launchbury"
]
}
]
},
{
"proceeding_title": "CUFP '07:Proceedings of the 4th ACM SIGPLAN workshop on Commercial users of functional programming",
"proceeding_contents": [
{
"paper_title": "Fourth ACM SIGPLAN Workshop on Commercial Users of Functional Programming",
"paper_authors": [
"Jeremy Gibbons"
],
"paper_abstract": "The goal of the Commercial Users of Functional Programming series of workshops is to build a community for users of functional programming languages and technology\". The fourth workshop in the series took place in Freiburg, Germany on 4th October 2007, colocated as usual with the International Conference on Functional Programming. The workshop is flourishing, having grown from an intimate gathering of 25 people in Snowbird in 2004, through 40 in Tallinn in 2005 and 57 in Portland in 2006, to 104 registered participants (and more than a handful of casual observers) this time. For the first time this year, the organisers had the encouraging dilemma of receiving more offers of presentations than would fit in the available time. The eventual schedule included an invited talk by Xavier Leroy, eleven contributed presentations, and an energetic concluding discussion. Brief summaries of each appear below."
},
{
"paper_title": "Industrial uses of Caml: examples and lessons learned from the smart card industry",
"paper_authors": [
"Xavier Leroy"
],
"paper_abstract": "The first part of this talk will show some examples of uses of Caml in industrial contexts, especially at companies that are part of the Caml consortium. The second part discusses my personal experience at the Trusted Logic start-up company, developing high-security software components for smart cards. While technological limitations prevent running functional languages on such low-resource systems, the development and certification of smart card software present a number of challenges where functional programming can help."
},
{
"paper_title": "The way it ought to work... and sometimes does",
"paper_authors": [
"Ulf Wiger"
],
"paper_abstract": "The telecommunications world is now moving rapidly towards SIP-based telephony and multimedia. The vision is to merge mobile and fixed networks into one coherent multimedia network. Ericsson has been a pioneer in SIP technology, and the first Erlang-based SIP stack was presented in 1999. A fortunate turn of events allowed us to revive the early SIP experiments, rewrite the software and experiment to find an optimal architecture, and later verify our implementation with great success in international interop events. We believe this to be a superb example of how a small team of experts, armed with advanced programming tools, can see their ideas through, with prototypes, field trials, and later large-scale industrial development."
},
{
"paper_title": "The default case in Haskell: counterparty credit risk calculation at ABN AMRO",
"paper_authors": [
"Cyril Schmidt",
"Anne-Elisabeth Tran Qui"
],
"paper_abstract": "ABN AMRO is an international bank headquartered in Amsterdam. For its investment banking activities it needs to measure the counterparty risk on portfolios of financial derivatives. We will describe the building of a Monte-Carlo simulation engine for calculating the bank's exposure to risks of losses if the counterparty defaults (e.g., in case of bankruptcy). The engine can be used both as an interactive tool for quantitative analysts and as a batch processor for calculating exposures of the bank's financial portfolios. We will review Haskell's strong and weak points for this task, both from a technical and a business point of view, and discuss some of the lessons we learned."
},
{
"paper_title": "Ct: channelling NeSL and SISAL in C++",
"paper_authors": [
"Anwar Ghuloum"
],
"paper_abstract": "I will discuss the design of Ct, an API for nested data parallel programming in C++. Ct uses meta-programming and functional language ideas to essentially embed a pure functional programming language in impure and unsafe languages, like C++. I will discuss the evolution of the design into functional programming ideas, how this was received in the corporate world, and how we plan to proliferate the technology in the next year. Ct is a deterministic parallel programming model integrating the nested data parallelism ideas of Blelloch and bulk synchronous processing ideas of Valiant. That is, data races are not possible in Ct. Moreover, performance in Ct is relatively predictable. At its inception, Ct was conceived as a simple library implementation behind C++ template magic. However, performance issues quickly forced us to consider some form of compilation. Using template programming was highly undesirable for this purpose as it would have been difficult and overly specific to C++ idiosyncrasies. Moreover, once compilation for performance was considered, we began to consider a language semantics that would enable powerful optimizations like calculational fusion, synchronization barrier elimination, and so on. The end result of this deliberation is an API that exposes a value-oriented, purely functional vector processing language. Additional benefits of this approach are numerous, including the important ability to co-exist within legacy threading programming models (because of the data isolation inherent in the model). We will show how the model applies to a wide range of important (at least by cycle count) applications. Ct targets both shipping multi-core architectures from Intel as well as future announced architectures. The corporate reception to this approach has (pleasantly) surprised us. In the desktop and high-performance computing space, where C, C++, Java, and Fortran are the only programming models people talk about, we have made serious inroads into advocating advanced programming language technologies. The desperate need for productive, scalable, and safe programming languages for multi-core architectures has provided an opening for functional, type-safe languages. We will discuss the struggles of multi-core manufacturers (i.e. Intel) and their software vendors that have created this opening. For Intel, Ct heralds its first serious effort to champion a technology that borrows functional programming technologies from the research community. Though it is a compromise that accommodates the pure in the impure and safe in the unsafe, this is an important opportunity to demonstrate the power of functional programming to the unconverted. We plan to share the technology selectively with partners and collaborators, and will have a fully functional and parallelizing implementation by year's end. At CUFP, we will be prepared to discuss our long term plans in detail."
},
{
"paper_title": "Terrorism response training in scheme",
"paper_authors": [
"Eric Kidd"
],
"paper_abstract": "The Interactive Media Lab (IML) builds shrink-wrapped educational software for medical professionals and first responders. We have teams focusing on media production, script-level authoring, and low-level engine development. Our most recent project is Virtual Terrorism Response Academy. VTRA uses 3D simulations to teach students about radiological, chemical and biological weapons. Our software is now undergoing trials at government training centers and metropolitan police departments. VTRA consists of approximately 60,000 lines of Scheme, and a similar amount of C++. All of our product-specific code is in Scheme, and we make extensive use of macros and domain-specific languages. From 1987 to 2002, we used a C++ multimedia engine scripted in 5L, the \"Lisp-Like Learning Lab Language\". This was Lisp-like in name only; it used a prefix syntax, but didn't even support looping, recursion, or data structures. We needed something better for our next project! We ultimately chose to use Scheme, because (1) it was a well-known, general-purpose programming language, and (2) we could customize it extensively using macros. Migrating to Scheme proved tricky, because we needed to keep releasing products while we were building the new Scheme environment. We began by carefully refactoring our legacy codebase, allowing us to maintain our old and new interpreters in parallel. We then rewrote the front-end in a single, eight-day hacking session. But even once the Scheme environment was ready, few of our employees wanted to use it. In an effort to make Scheme programming more accessible, we invested significant effort in building an IDE. Today, our environment is much more popular---a third of our employees use it on a regular basis, including several professional artists. After migrating to Scheme, we added support for 3D simulations. And Scheme proved its worth almost immediately: we faced several hard technical problems, which we solved by building domain-specific languages using Scheme macros. First, we needed to simulate radiation meters. For this, we used a reactive programming language to implement a Model-View-Controller system. Second, we needed to guide students through the simulation and make teaching points. For this, we relied on a \"goal system\", which tracks what students need to accomplish and provides hints along the way. In both these cases, Scheme proved to be a significant competitive advantage. Not all problems have clean imperative solutions. A language which supports functional programming, macros, and combinator libraries allows us to do things our competitors can't. This summer, we'll be releasing our engine as open source, and starting work on a GUI editor. We welcome users and developers!"
},
{
"paper_title": "Learning with F#",
"paper_authors": [
"Phil Trelford"
],
"paper_abstract": "In this talk, I will describe how the Applied Games Group at Microsoft Research Cambridge uses F#. This group consists of seven people, and specializes in the application of statistical machine learning, especially ranking problems. The ranking systems they have developed are used by the XBox Live team to do server-side analysis of game logs, and they recently entered an internal competition to improve \"click-through\" prediction rates on Microsoft adCenter, a multi-million dollar industry for the company. The amount of data analysed by the tools is astounding: e.g. 3TB in one case, with programs running continuously over four weeks of training data and occupying all the physical memory on the 64-bit 16GB machines we use. F# plays a crucial role in helping the group process this data efficiently and develop smart algorithms that extract essential features from the data and represent the information using the latest statistical technique called \"factor graphs\". Our use of F# in conjunction with SQL Server 2005 is especially interesting: we use novel compilation techniques to express the primary schema in F# and then use SQL Server as a data slave."
},
{
"paper_title": "Productivity gains with Erlang",
"paper_authors": [
"Jan Henry Nyström"
],
"paper_abstract": "Currently most distributed telecoms software is engineered using low- and mid-level distributed technologies, but there is a drive to use high-level distribution. This talk reports the first systematic comparison of a high-level distributed programming language in the context of substantial commercial products. The research clearly demonstrates that Erlang is not only a viable, but also a compelling choice when dealing with high availability systems. This is due to the fact that it comparatively easy to construct systems that are: • resilient: sustaining throughput at extreme loads and automatically recovering when load drops; • fault tolerant: remaining available despite repeated and multiple failures; • dynamically reconfigurable: with throughput scaling, near-linearly, when resources are added or removed. But most importantly these systems can be delivered at a much higher productivity and with more maintainable deployment than current technology. This is attributed to language features such as automatic memory and process management and high-level communication. Furthermore, Erlang interoperates at low cost with conventional technologies, allowing incremental reengineering of large distributed systems."
},
{
"paper_title": "An OCaml-based network services platform",
"paper_authors": [
"Chris Waterson"
],
"paper_abstract": "At Liveops, we've developed a robust network services platform that combines the scalability of event-based I/O with the simplicity of thread-based programming. We've done this using functional programming techniques; namely, by using continuation-passing monads to encapsulate computation state and hide the complexity of the non-blocking I/O layer from the application programmer. Application code is written in a \"naive threading\" style using primitives that simulate blocking I/O operations. This network platform serves as the basis for one of the most critical applications in our business --- agent scheduling --- and has proven to be easy to maintain and extremely scalable. Using commodity server hardware, we are able to support thousands of persistent SSL connections on a single dual-core Pentium-class server, and handle tens of thousands of transactions per minute. The application and platform are implemented in OCaml. This talk will briefly describe the application domain, discuss the specifics of the monadic I/O library we've built, and describe some of the issues involved. Our hope is that by the time that the conference arrives, the library will be released as open-source software. Although developed independently, this work is the same vein as (and, in some ways, validates) Peng Li and Steve Zdancewic's A Language-based Approach to Unifying Events and Threads, which appears in the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI) in June 2007."
},
{
"paper_title": "Using functional techniques to program a network processor",
"paper_authors": [
"Lal George"
],
"paper_abstract": "I will describe technology we built at Network Speed Technologies to program the Intel IXP network processor --- a multi-core, multi-threaded, high performance network device. Ideas from functional programming and language design were key to programming this device. For example, 650 lines of Intel C for the IXP together with embedded assembly are required to just update the TTL and checksum fields of an IPv4 header; in our functional language, it is less than 40 lines! The functional semantics and novel compilation technology enables us to demonstrably out-perform hand-coded assembly written by experts --- a remarkable accomplishment by itself, and even more so in the embedded space. The critical components of the compilation technology are a big part of the puzzle, but are not directly FP related. The language and technology have been a phenomenal success, and easily surpass conventional approaches. The ease of learning, the dramatically lower cost, and superior performance make this the 'right' choice for deploying these devices. However, there are hard lessons learned from using FPL in the real world..."
},
{
"paper_title": "Impediments to wide-spread adoption of functional languages",
"paper_authors": [
"Noel Welsh"
],
"paper_abstract": "If functional languages are so great, why does virtually no-one use them? More to the point, why have relatively new languages like PHP, Python, and Ruby prospered while functional languages have failed to make inroads in their long and glorious history? I believe the answers are largely cultural, and indeed the academic culture of functional languages is both their greatest strength and biggest barrier to adoption. I'll present a simple model of language adoption, and show specific instances where functional languages fail to support it. I'll also make concrete suggestions for how functional language communities can improve, while still retaining their distinctive strengths."
},
{
"paper_title": "Functional programming in communications security",
"paper_authors": [
"Timo Lilja"
],
"paper_abstract": "At SSH Communications Security, we've employed functional programming for a long time in some of our projects. Over the years, we've shipped a number of products written mostly in Scheme, and are about to ship some software which is in part written in Standard ML. We have also written several pieces of software for internal use in Haskell, Standard ML, Scheme, and probably others as well. In this talk, I will describe some useful insights into how these languages have worked for us in developing security software. We had some successes: we've been able to build and ship fairly large software systems rather quickly and with good confidence in certain aspects of their security. We've also experienced some failures. Using functional programming doesn't protect against bad design. Implementations of functional languages are sometimes slow. The user base of many languages is small, and there aren't a whole lot of programmers on the market who can program well in, for example, Scheme. Over the past few years, we've also seen some of the social phenomena related to functional programming: how people feel about it, why they believe it works/doesn't work, and why they are (not) interested in doing it."
},
{
"paper_title": "Cross-domain WebDAV server",
"paper_authors": [
"John Launchbury"
],
"paper_abstract": "At Galois, we use Haskell extensively in a carefully architected DAV server. Our clients need very strong separation between separate network access points. Haskell gave us critical flexibility to provide major pieces of functionality that enable the separation, including the implementation of a completely new file system. Interestingly, however, we implemented the single most critical component in C! In this talk we will discuss our experiences, and draw lessons regarding the appropriateness---or otherwise---of functional languages for certain security critical tasks."
},
{
"paper_title": "Discussion",
"paper_authors": [
"Don Syme"
],
"paper_abstract": "Syme ended the day by chairing a discussion on hiring functional programmers. What skills should we look for? Where should we look? Is retraining feasible? What forums and teaching qualifications do we need? Can we help schools 'sell' functional programming? He sensed a sea change over the last few years: it used to be the case that just one or two companies were hiring functional programmers, but now he could list four or five such companies off the top of his head, and a quick poll indicated about ten companies represented at the meeting were looking to hire functional programmers."
},
{
"paper_title": "CUFP in the future",
"paper_authors": [
"Kathleen Fisher"
]
}
]
}
]
},
{
"conference_title": "Dependently-Typed Programming",
"conference_contents": [
{
"proceeding_title": "DTP '13:Proceedings of the 2013 ACM SIGPLAN workshop on Dependently-typed programming",
"proceeding_contents": [
{
"paper_title": "Correct-by-construction pretty-printing",
"paper_authors": [
"Nils Anders Danielsson"
],
"paper_abstract": "A new approach to correct-by-construction pretty-printing is presented. The basic methodology is the one of classical (not necessarily correct) pretty-printing: users convert values to pretty-printer documents, and a general rendering algorithm turns documents into strings. The main novelty is that dependent types are used to ensure that, for each value, the constructed document is correct with respect to the value and a given grammar. Other parts of the development use well-established technology: the pretty-printer document interface is basically that of Wadler (2003), but with more precise types, and a single additional primitive combinator; and Wadler's rendering algorithm is used. It is proved that if a given value is pretty-printed, and the resulting string parsed (with respect to the same, unambiguous grammar), then the original value is obtained. No guarantees are made about \"prettiness\"."
},
{
"paper_title": "New equations for neutral terms: a sound and complete decision procedure, formalized",
"paper_authors": [
"Guillaume Allais",
"Conor McBride",
"Pierre Boutillier"
],
"paper_abstract": "The definitional equality of an intensional type theory is its test of type compatibility. Today's systems rely on ordinary evaluation semantics to compare expressions in types, frustrating users with type errors arising when evaluation fails to identify two `obviously' equal terms. If only the machine could decide a richer theory! We propose a way to decide theories which supplement evaluation with `ν-rules', rearranging the neutral parts of normal forms, and report a successful initial experiment. We study a simple λ-calculus with primitive fold, map and append operations on lists and develop in Agda a sound and complete decision procedure for an equational theory enriched with monoid, functor and fusion laws."
},
{
"paper_title": "A multivalued language with a dependent type system",
"paper_authors": [
"Neal Glew",
"Tim Sweeney",
"Leaf Petersen"
],
"paper_abstract": "Type systems are used to eliminate certain classes of errors at compile time. One of the goals of type system research is to allow more classes of errors (such as array subscript errors) to be eliminated. Dependent type systems have played a key role in this effort, and much research has been done on them. In this paper, we describe a new dependently-typed functional programming language based on two key ideas. First, it makes no distinction between expressions, types, kinds, and sorts-everything is a term. The same integer values are used to compute with and to index types, such as specifying the length of an array. Second, the term language has a multivalued semantics-a term can evaluate to zero, one, multiple, even an infinite number of values. Since types are characterised by their members, they are equivalent to terms whose possible values are the members of the type, and we exploit this to express type information in our language. In order to type check such terms, we give up on decidability. We consider this a good tradeoff to get an expressive language without the pain of some dependent type systems. This paper describes the core ideas of the language, gives an intuitive description of the semantics in terms of set-theory, explains how to implement the language by restricting what programs are considered valid, and sketches the core of the type system."
},
{
"paper_title": "Relational algebraic ornaments",
"paper_authors": [
"Hsiang-Shang Ko",
"Jeremy Gibbons"
],
"paper_abstract": "Dependently typed programming is hard, because ideally dependently typed programs should share structure with their correctness proofs, but there are very few guidelines on how one can arrive at such integrated programs. McBride's algebraic ornamentation provides a methodological advancement, by which the programmer can derive a datatype from a specification involving a fold, such that a program that constructs elements of that datatype would be correct by construction. It is thus an effective method that leads the programmer from a specification to a dependently typed program. We enhance the applicability of this method by generalising algebraic ornamentation to a relational setting and bringing in relational algebraic methods, resulting in a hybrid approach that makes essential use of both dependently typed programming and relational program derivation. A dependently typed solution to the minimum coin change problem is presented as a demonstration of this hybrid approach. We also give a theoretically interesting \"completeness theorem\" of relational algebraic ornaments, which sheds some light on the expressive power of ornaments and inductive families."
},
{
"paper_title": "Leveling up dependent types: generic programming over a predicative hierarchy of universes",
"paper_authors": [
"Larry Diehl",
"Tim Sheard"
],
"paper_abstract": "Generic programming is about writing a single function that does something different for each type. In most languages one cannot case over the structure of types. So in such languages generic programming is accomplished by defining a universe, a data structure isomorphic to some subset of the types supported by the language, and performing a case analysis over this datatype instead. Such functions support a limitied level of genericity, limited to the subset of types that the universe encodes. The key to full genericity is defining a rich enough universe to encode all types in the language. In this paper we show how to define a universe with a predicative hierarchy of types, encoding a finite set of base types (including dependent products and sums), and an infinite set of user defined datatypes. We demonstrate that such a system supports a much broader notion of generic programming, along with a serendipitous extension to the usefulness of user defined datatypes with existential arguments."
}
]
}
]
},
{
"conference_title": "ERLANG",
"conference_contents": [
{
"proceeding_title": "Erlang '14:Proceedings of the Thirteenth ACM SIGPLAN workshop on Erlang",
"proceeding_contents": [
{
"paper_title": "Functional programming and the \"megacore\" era",
"paper_authors": [
"Kevin Hammond"
],
"paper_abstract": "Kevin Hammond is a full professor of Computer Science at the University of St Andrews, where he leads the Functional Programming research group. His research interests lie in programming language design and implementation, with a focus on parallelism and real-time properties of functional languages, including modelling and reasoning about extra-functional properties. Prof. Hammond has has published around 100 research papers, books and articles, and held over 20 national and international research grants, totaling around £11M of research funding. He was a member of the Haskell design committee, co-designed the Hume real-time functional language, and is co-editor of the main reference text on parallel functional programming. Currently, he coordinates the ParaPhrase FP7 project, a 3-year EU research project that aims to develop new refactoring technology for Erlang and C++ programs, targeting heterogeneous parallel architectures."
},
{
"paper_title": "More scalable ordered set for ETS using adaptation",
"paper_authors": [
"Konstantinos Sagonas",
"Kjell Winblad"
],
"paper_abstract": "The Erlang Term Storage (ETS) is a key component of the runtime system and standard library of Erlang/OTP. In particular, on big multicores, the performance of many applications that use ETS as a shared key-value store heavily depends on the scalability of ETS. In this work, we investigate an alternative implementation for the ETS table type ordered_set based on a contention adapting search tree. The new implementation performs many times better than the current one in contended scenarios and scales better than the ETS table types implemented using hashing and fine-grained locking when several processor chips are used. We evaluate the new implementation with a set of experiments that show its scalability in relation to the current ETS implementation as well as its low sequential overhead."
},
{
"paper_title": "Discovering parallel pattern candidates in Erlang",
"paper_authors": [
"István Bozó",
"Viktoria Fordós",
"Zoltán Horvath",
"Melinda Tóth",
"Dániel Horpácsi",
"Tamás Kozsik",
"Judit Köszegi",
"Adam Barwell",
"Christopher Brown",
"Kevin Hammond"
],
"paper_abstract": "The ParaPhrase Refactoring Tool for Erlang PaRTE provides automatic, comprehensive and reliable pattern candidate discovery to locate parallelisable components in Erlang programs. It uses semi-automatic and semantics-preserving program transformations to reshape source code and to introduce high level parallel patterns that can be mapped adaptively to the available hardware resources. This paper describes the main PaRTE tools and demonstrates that significant parallel speedups can be obtained."
},
{
"paper_title": "On shrinking randomly generated load tests",
"paper_authors": [
"Thomas Arts"
],
"paper_abstract": "Running a load test is a time consuming undertaking for which normally a complete system should be configured. In contrast to running tests to find deviations from functional requirements, the goal of load testing is to find defects that only appear when the system has a lot of load to handle during a longer time. Load testing is typically performed by increasing the load and observing the effect of doing so. It is plausible that more defects can be detected by a wider variety of scenarios to create load. This makes it an attractive idea to use QuickCheck for the generation of user scenarios to perform load testing with randomly behaving users. In this paper we show that QuickCheck can be used as a framework for doing so by introducing and discussing load generators."
},
{
"paper_title": "Jsongen: a quickcheck based library for testing JSON web services",
"paper_authors": [
"Clara Benac Earle",
"Lars-Åke Fredlund",
"Ángel Herranz",
"Julio Mariño"
],
"paper_abstract": "This article describes a systematic approach to testing behavioural aspects of Web Services that communicate using the JSON data format. As a key component, the Quviq QuickCheck property-based testing tool is used to automatically generate a large number of test cases from an abstract description of the service behaviour in the form of a finite state machine. The same behavioural description is also used to decide whether the execution of a test case is successful or not. To generate random JSON data for populating tests we have developed a new library, jsongen, which given a characterisation of the JSON data as a JSON schema, (i) automatically derives a QuickCheck generator which can generate an infinite number of JSON values that validate against the schema, and (ii) provides a generic QuickCheck state machine which is capable of following the (hyper)links documented in the JSON schema, to automatically explore the web service. The default behaviour of the state machine can be easily customized to include web service specific checks. The article illustrates the approach by developing a finite state machine model for the testing of a JSON-based web service."
},
{
"paper_title": "Investigating the scalability limits of distributed Erlang",
"paper_authors": [
"Amir Ghaffari"
],
"paper_abstract": "With the advent of many-core architectures, scalability is a key property for programming languages. Actor-based frameworks like Erlang are fundamentally scalable, but in practice they have some scalability limitations. The RELEASE project aims to improve the scalability of Erlang on emergent commodity architectures with 100,000 cores. This paper investigates the scalability limits of distributed Erlang on up to 150 nodes by using DE-Bench. We discuss the design and implementation of DE-Bench, a scalable peer-to-peer benchmarking tool for distributed Erlang. Our benchmarking results demonstrate that the frequency of global commands limits the scalability of distributed Erlang. There is also a common belief that distributed Erlang does not scale on large architectures with hundreds of nodes and thousands of cores. We provide evidence against this belief and show that distributed Erlang scales linearly up to 150 nodes and 1200 cores with relatively heavy data and computation loads when no global command is made. Measuring the latency of commonly-used distributed Erlang commands reveals that the latency of rpc calls rises as cluster size grows. Our results also show that server processes like gen_server and gen_fsm have low latency and good scalability."
},
{
"paper_title": "Derflow: distributed deterministic dataflow programming for erlang",
"paper_authors": [
"Manuel Bravo",
"Zhongmiao Li",
"Peter Van Roy",
"Christopher Meiklejohn"
],
"paper_abstract": "Erlang implements a message-passing execution model in which concurrent processes send each other messages asynchronously. This model is inherently non-deterministic: a process can receive messages sent by any process which knows its process identifier, leading to an exponential number of possible executions based on the number messages received. Concurrent programs in non-deterministic languages are notoriously hard to prove correct and have led to well-known disasters. Furthermore, Erlang natively provides distribution and process clustering. This enables processes to asynchronously communicate between different virtual machines across the network, which increases the potential non-determinism. We propose a new execution model for Erlang, ''Deterministic Dataflow Programming'', based on a highly available, scalable single-assignment data store implemented on top of the riak_core distributed systems framework. This execution model provides concurrent communication between Erlang processes, yet has no observable non-determinism. Given the same input values, a deterministic dataflow program will always return the same output values, or never return; liveness under failures is sacrificed to ensure safety. Our proposal provides a distributed deterministic dataflow solution that operates transparently over distributed Erlang, providing the ability to have highly-available, fault-tolerant, deterministic computations."
},
{
"paper_title": "BEAMJIT: a just-in-time compiling runtime for Erlang",
"paper_authors": [
"Frej Drejhammar",
"Lars Rasmusson"
],
"paper_abstract": "BEAMJIT is a tracing just-in-time compiling runtime for the Erlang programming language. The core parts of BEAMJIT are synthesized from the C source code of BEAM, the reference Erlang abstract machine. The source code for BEAM's instructions is extracted automatically from BEAM's emulator loop. A tracing version of the abstract machine, as well as a code generator are synthesized. BEAMJIT uses the LLVM toolkit for optimization and native code emission. The automatic synthesis process greatly reduces the amount of manual work required to maintain a just-in-time compiler as it automatically tracks the BEAM system. The performance is evaluated with HiPE's, the Erlang ahead-of-time native compiler, benchmark suite. For most benchmarks BEAMJIT delivers a performance improvement compared to BEAM, although in some cases, with known causes, it fails to deliver a performance boost. BEAMJIT does not yet match the performance of HiPE mainly because it does not yet implement Erlang specific optimizations such as boxing/unboxing elimination and a deep understanding of BIFs. Despite this BEAMJIT, for some benchmarks, reduces the runtime with up to 40%."
},
{
"paper_title": "Synapse: automatic behaviour inference and implementation comparison for Erlang",
"paper_authors": [
"Pablo Lamela Seijas",
"Simon Thompson",
"Ramsay Taylor",
"Kirill Bogdanov",
"John Derrick"
]
},
{
"paper_title": "Faulterl: precise fault injection for the erlang VM, NIFs and linked-in drivers",
"paper_authors": [
"Scott Lystig Fritchie"
]
}
]
},
{
"proceeding_title": "Erlang '13:Proceedings of the twelfth ACM SIGPLAN workshop on Erlang",
"proceeding_contents": [
{
"paper_title": "ACM SIGPLAN Erlang workshop 2013: keynote",
"paper_authors": [
"Justin Sheehy"
]
},
{
"paper_title": "Using many-core coprocessor to boost up Erlang VM",
"paper_authors": [
"Siyao Zheng",
"Xiang Long",
"Jingwei Yang"
],
"paper_abstract": "The trend in processor design is to build more cores on a single chip. Commercial many-core processor is emerging these years. Intel Xeon Phi coprocessor , which is equipped with at least 60 relatively slow cores, is the first commercial many-core product released by Intel. Xeon Phi coprocessor is encapsulated in a PCIe card and communicates with host operating system through a specialized interface library called SCIF. In this paper, we propose a solution that integrates the computing power of multi-core host machine and Xeon Phi many-core coprocessor. The solution is based on the distribution facilities of Erlang. This paper proposes a highly optimized distribution carrier driver based on SCIF. This driver can both exploit the advantage of SCIF to the full and shield the performance gap between host and coprocessor. User applications can make use of the extra computing resources provided by many-core coprocessor without even knowing the existence of it."
},
{
"paper_title": "On the scalability of the Erlang term storage",
"paper_authors": [
"David Klaftenegger",
"Konstantinos Sagonas",
"Kjell Winblad"
],
"paper_abstract": "The Erlang Term Storage (ETS) is an important component of the Erlang runtime system, especially when parallelism enters the picture, as it provides an area where processes can share data. It is therefore important that ETS's implementation is efficient, flexible, but also as scalable as possible. In this paper we document and describe the current implementation of ETS in detail, discuss the main data structures that support it, and present the main points of its evolution across Erlang/OTP releases. More importantly, we measure the scalability of its implementations, the effects of its tuning options, identify bottlenecks, and suggest changes and alternative designs that can improve both its performance and its scalability."
},
{
"paper_title": "Riak PG: distributed process groups on dynamo-style distributed storage",
"paper_authors": [
"Christopher Meiklejohn"
],
"paper_abstract": "We present Riak PG, a new Erlang process group registry for highly available applications. The Riak PG system is a Dynamo-based, distributed, fault-tolerant, named process group registry for use as an alternative to the built-in Erlang process group facility, pg2, and the globally distributed extended process registry, gproc. Riak PG aims to provide a highly-available, fault-tolerant, distributed registry by sacrificing strong consistency for eventual consistency in applications where availability of the registry is paramount to application function and performance."
},
{
"paper_title": "Multicore profiling for Erlang programs using percept2",
"paper_authors": [
"Huiqing Li",
"Simon Thompson"
],
"paper_abstract": "Erlang is a functional programming language with built-in support for concurrency based on share-nothing processes and asynchronous message passing. The design of Erlang makes it suitable for writing concurrent and parallel applications, taking full advantage of the computing power of modern multicore machines. However many existing Erlang applications are sequential, in need of parallelisation. In this paper, we present the Erlang concurrency profiling tool Percept2,and demonstrate how the information provided by it can help the user to explore the potential parallelism in an Erlang application and how the system performs on the Erlang multicore system. Percept2 thus allows users improve the performance and scalability of their applications."
},
{
"paper_title": "Software agents mobility using process migration mechanism in distributed Erlang",
"paper_authors": [
"Michał Piotrowski",
"Wojciech Turek"
],
"paper_abstract": "Basic features of Erlang technology are very similar to the theoretical assumptions of the software agent paradigm. This fact encourages development of Erlang-based software agent systems and platforms. One of the lacking features of the Erlang technology, which is required in agent systems, is the ability to migrate agents between nodes of the agent platform. The feature of moving working processes can also be useful for solving particular problems in Erlang-based systems. In this paper the novel Erlang process migration mechanism is proposed. It is fully transparent, and it provides code migration and messages forwarding. The solution has been used for implementing an agent migration mechanism in an Erlang-based software agent platform (eXAT). It has been intensively tested and compared in terms of durability and performance with the most popular Java based agent platform."
},
{
"paper_title": "Actor scheduling for multicore hierarchical memory platforms",
"paper_authors": [
"Emilio Francesquini",
"Alfredo Goldman",
"Jean-François Méhaut"
],
"paper_abstract": "Erlang applications are present in several mission-critical systems. These systems demand substantial computing resources that are usually provided by multiprocessor and multi-core platforms. Hierarchical memory platforms, or Non-Uniform Memory Access (NUMA) architectures, account for an important share of these platforms. Yet, the research on the suitability of the current virtual machine (VM) for these platforms is quite limited. The current VM assumes a flat memory space, thus not performing as well as it could on these architectures. The NUMA environment presents challenges to the runtime environment in fields varying from memory management to scheduling and load-balancing. In this article we summarize some of the characteristics of an actor based application to, in light of the above, introduce some NUMA-aware improvements to the Erlang VM. This modified VM uses the NUMA characteristics and the application knowledge to take better memory management, scheduling and load-balancing decisions. We show that, when we consider the default Erlang VM as the baseline, the modified VM can achieve performance improvements up to a factor of 2.50 while limiting the slowdown on the worst case by a factor of 1.15."
},
{
"paper_title": "Extending Erlang by utilising RefactorErl",
"paper_authors": [
"Dániel Horpácsi"
],
"paper_abstract": "In this paper, we present the idea of utilising a refactoring tool for implementing extensions to a programming language. We elaborate the correspondence between the main components of the compiler and the refactoring tool, and examine how analysis and transformation features of the tool can be exploited for turning its refactoring framework into a translation framework. The presented method allows one, for instance, to make the Erlang language supportive for embedding domain specific languages as well as to make its functions portable."
},
{
"paper_title": "Scalable persistent storage for Erlang: theory and practice",
"paper_authors": [
"Amir Ghaffari",
"Natalia Chechina",
"Phil Trinder",
"Jon Meredith"
],
"paper_abstract": "The many core revolution makes scalability a key property. The RELEASE project aims to improve the scalability of Erlang on emergent commodity architectures with 100,000 cores. Such architectures require scalable and available persistent storage on up to 100 hosts. We enumerate the requirements for scalable and available persistent storage, and evaluate four popular Erlang DBMSs against these requirements. This analysis shows that Mnesia and CouchDB are not suitable persistent storage at our target scale, but Dynamo-like NoSQL DataBase Management Systems (DBMSs) such as Cassandra and Riak potentially are. We investigate the current scalability limits of the Riak 1.1.1 NoSQL DBMS in practice on a 100-node cluster. We establish for the first time scientifically the scalability limit of Riak as 60 nodes on the Kalkyl cluster, thereby confirming developer folklore. We show that resources like memory, disk, and network do not limit the scalability of Riak. By instrumenting Erlang/OTP and Riak libraries we identify a specific Riak functionality that limits scalability. We outline how later releases of Riak are refactored to eliminate the scalability bottlenecks. We conclude that Dynamo-style NoSQL DBMSs provide scalable and available persistent storage for Erlang in general, and for our RELEASE target architecture in particular."
},
{
"paper_title": "Towards an abstraction for remote evaluation in Erlang",
"paper_authors": [
"Adrian Francalanza",
"Tyron Zerafa"
],
"paper_abstract": "Erlang is an industry-standard cross-platform functional programming language and runtime system (ERTS) intended for the development of scalable enterprise projects that are inherently concurrent and distributed systems [1]. In essence, an Erlang system consists of a number of actors [3] (processes) executing concurrently across a number of nodes. These actors interact with one another (mainly) through asynchronous messaging and are also capable of spawning further actors, either locally or at a remote node."
},
{
"paper_title": "Towards property-based testing of RESTful web services",
"paper_authors": [
"Pablo Lamela Seijas",
"Huiqing Li",
"Simon Thompson"
]
},
{
"paper_title": "Turning web services descriptions into quickcheck models for automatic testing",
"paper_authors": [
"Miguel A. Francisco",
"Macías López",
"Henrique Ferreiro",
"Laura M. Castro"
],
"paper_abstract": "In this work, we face the problem of generating good quality test suites and test cases for web services. We present a framework to test web services based on their formal description, following a black-box approach and using Property-Based Testing. Web services are a popular solution to integrate components when building a software system, or to allow communication between a system and third-party users, providing a flexible, reusable mechanism to access its functionalities. Testing of web services is a key activity: we need to verify their behaviour and ensure their quality as much as possible, as efficiently as possible. By automatically deriving QuickCheck models from its WSDL description and its OCL semantic constraints, we enable generation and execution of great amounts of automatically generated test cases. Thus, we avoid the usual compromise between effort and cost, which too often leads to smaller and less exhaustive test suites than desirable. To illustrate the advantages of our framework, we present an industrial case study: a distributed system which serves media contents customers' TV screens."
},
{
"paper_title": "Testing blocking operations with QuickCheck's component library",
"paper_authors": [
"Ulf Norell",
"Hans Svensson",
"Thomas Arts"
],
"paper_abstract": "It is a challenge to write test cases for software with blocking operations, i.e. operations that do not return until data become available. One should prevent the test case itself to block, whereas at the same time, one wants to test the blocking behaviour. Therefore, the standard solution is to write concurrent test cases, each call in the test case executed by a newly spawned process, together with a lot of boilerplate code. Manually crafted test cases can check that blocking calls are indeed blocked and unblocked as expected. Writing such test cases is error-prone and covering all interesting cases requires a lot of manually written tests. By using QuickCheck?s state machines one can automatically generate the test cases from a specification, but also here the boilerplate code is needed. We demonstrate that by using the component library in QuickCheck, an extension of the state machine formalism, the boilerplate code is no longer needed. Using the component library results in clear and concise specifications that are effectively used for testing. By using this new library, software with blocking operations can be tested much more thoroughly than by using a manual test approach."
}
]
},
{
"proceeding_title": "Erlang '12:Proceedings of the eleventh ACM SIGPLAN workshop on Erlang workshop",
"proceeding_contents": [
{
"paper_title": "Erlang as an implementation platform for BDI languages",
"paper_authors": [
"Álvaro Fernández Díaz",
"Clara Benac Earle",
"Lars-Åke Fredlund"
],
"paper_abstract": "In this paper we report on our experiences using Erlang to implement a subset of the agent-oriented programming language Jason. The principal existing implementation of Jason is written in Java, but suffers from a number of drawbacks, i.e., has severe limitations concerning the number of agents that can execute in parallel. Basing a Jason implementation on Erlang itself has the potential of improving such aspects of the resulting multi-agent platform. To evaluate Erlang as a programming language implementation platform the paper describes our experiences in mapping Jason to Erlang, highlighting the positive and negative aspects of Erlang for this task. Moreover, the paper contains a number of benchmarks to evaluate the quantitative aspects of the resulting Jason implementation, especially with respect to support large multi-agent systems."
},
{
"paper_title": "On preserving term sharing in the Erlang virtual machine",
"paper_authors": [
"Nikolaos Papaspyrou",
"Konstantinos Sagonas"
],
"paper_abstract": "In programming language implementations, one of the most important design decisions concerns the underlying representation of terms. In functional languages with immutable terms, the runtime system can choose to preserve sharing of subterms or destroy sharing and expand terms to their flattened representation during certain key operations. Both options have pros and cons. The implementation of Erlang in the Erlang/OTP system from Ericsson has so far opted for an implementation where sharing of subterms is not preserved when terms are copied (e.g., when sent from one process to another or when used as arguments in spawns). In this paper we describe our experiences and argue through examples why flattening terms during copying is not a good idea for a language like Erlang. More importantly, we propose a sharing-preserving copying mechanism for Erlang/OTP and describe a publicly available complete implementation of this mechanism. Performance results show that, even in extreme cases where no subterms are shared, this implementation has a reasonable overhead which is negligible in practice. In cases where shared subterms do exist, perhaps accidentally, the performance savings can be substantial."
},
{
"paper_title": "ErLLVM: an LLVM backend for Erlang",
"paper_authors": [
"Konstantinos Sagonas",
"Chris Stavrakakis",
"Yiannis Tsiouris"
],
"paper_abstract": "This paper describes ErLLVM, a new backend for the HiPE compiler, the native code compiler of Erlang/OTP, that targets the LLVM compiler infrastructure. Besides presenting the overall architecture of ErLLVM and its integration in Erlang/OTP, we describe the changes to LLVM that ErLLVM required and discuss technical challenges and decisions we took. Finally, we provide a detailed performance evaluation of ErLLVM compared to BEAM, the existing backends of the HiPE compiler, and Erjang."
},
{
"paper_title": "A scalability benchmark suite for Erlang/OTP",
"paper_authors": [
"Stavros Aronis",
"Nikolaos Papaspyrou",
"Katerina Roukounaki",
"Konstantinos Sagonas",
"Yiannis Tsiouris",
"Ioannis E. Venetis"
],
"paper_abstract": "Programming language implementers rely heavily on benchmarking for measuring and understanding performance of algorithms, architectural designs, and trade-offs between alternative implementations of compilers, runtime systems, and virtual machine components. Given this fact, it seems a bit ironic that it is often more difficult to come up with a good benchmark suite than a good implementation of a programming language. This paper presents the main aspects of the design and the current status of bencherl, a publicly available scalability benchmark suite for applications written in Erlang. In contrast to other benchmark suites, which are usually designed to report a particular performance point, our benchmark suite aims to assess scalability, i.e., help developers to study a set of performance points that show how an application's performance changes when additional resources (e.g., CPU cores, schedulers, etc.) are added. We describe the scalability dimensions that the suite aims to examine and present its infrastructure and current set of benchmarks. We also report some limited set of performance results in order to show the capabilities of our suite."
},
{
"paper_title": "Distributed computation on dynamo-style distributed storage: riak pipe",
"paper_authors": [
"Bryan Fink"
],
"paper_abstract": "The Dynamo model, as described by Amazon in 2007, has become a popular concept in the development of distributed storage systems. The model accounts for only CRUD operations, however. This paper describes a system called Riak Pipe that enables the use of generic functions in place of CRUD operations. This allows Dynamo-model users to exploit other resources, such as CPU time, available in their cluster, as well as to gain the efficiencies offered by data-local processing."
},
{
"paper_title": "Failover and takeover contingency mechanisms for network partition and node failure",
"paper_authors": [
"Macías López",
"Laura M. Castro",
"David Cabrero"
],
"paper_abstract": "Proper definition of suitable mechanisms to cope with network partition and to recover from node failure are among the most common problems when designing and implementing a fault-tolerant distributed system. The concern is even more serious when the different scenarios could not be predicted beforehand and are detected once the system is at deployment stage. There are a number of decisions that can be made when choosing the right contingency mechanisms to deal with these distribution-bounded problems. The factors that must be taken into account include not only the technology in use, the node layout, the message protocol and the properties of the messages to be exchanged, certain desired/demanded features such as latency, bandwidth,... but also the communications network reliability, and even the hardware where the system is running on. In this paper we present ADVERTISE, a distributed system for advertisement transmission to on-customer-home set-top boxes (STBs) over a Digital TV network (iDTV) of a cable operator. We use this system as a case study to explain how we addressed the aforementioned problems, and present a set of good practices that can be extrapolated to comparable systems."
},
{
"paper_title": "Co-ops: concurrent algorithmic skeletons for Erlang",
"paper_authors": [
"Jay Nelson"
],
"paper_abstract": "Erlang offers a programmer 3-4 orders of magnitude more processes than conventional languages. This difference in approach to concurrency leads to architectures and attitudes embracing processes as key elements of a software system, providing fault isolation, distributed algorithms, and code modularity. Coming hardware improvements promise 3-4 orders of magnitude more CPUs than conventional hardware. How will this additional power be used by Erlang programmers and how might it impact Erlang system architecture? This poster introduces a new library of cooperating processes or \"co-ops\" which implement an algorithmic skeleton as a directed acyclic graph (DAG) spanning a large number of processes, trading program code for dataflow scaffolding to gain a more principled architecture and explicitly defined concurrent data pathways."
},
{
"paper_title": "Towards automatic actor pinning on multi-core architectures",
"paper_authors": [
"Emilio Francesquini",
"Alfredo Goldman",
"Jean-François Méhaut"
],
"paper_abstract": "The actor model is a high-level programming abstraction that attempts to ease the development of parallel applications, among others, by shielding the developer from the underlying platform. In this model the execution relies on a runtime environment (RE) to be able to efficiently use the underlying machine. Modern processors possess a hierarchical architecture of memory, thus making the performance of an application dependent on the placement of the application threads. This makes the choices of the RE much more important since the chosen actor placement will have a considerable impact on the application performance. In this paper we describe a work in progress that aims to create an on-line automatic actor placement engine. Using application profiling in association with hardware counters, it will pin and migrate actors to processing units aiming to optimize performance."
},
{
"paper_title": "ooErlang: another object oriented extension to Erlang",
"paper_authors": [
"Jucimar Maia Silva, Jr.",
"Rafael Dueire Lins"
],
"paper_abstract": "This paper presents ooErlang, an object oriented extension to the Erlang programming language. Its simple syntax, closer to other widely used object-oriented languages such as Java, makes easier its adoption."
},
{
"paper_title": "TinyMT pseudo random number generator for Erlang",
"paper_authors": [
"Kenji Rikitake"
],
"paper_abstract": "This paper is a case study of implementing Tiny Mersenne Twister (TinyMT) pseudo random number generator (PRNG) for Erlang. TinyMT has a longer generation period (2127-1) than the stock implementation of Erlang/OTP random module. TinyMT can generate multiple independent number streams by choosing different generation parameters, which is suitable for parallel generation. Our test results of the pure Erlang implementation show the execution time of RNG generating integers with TinyMT is approximately two to six times slower of that with the stock random module. Additional implementation with Native Interface Functions (NIFs) improved the execution speed to approximately three times as faster than that of the random module. The results suggest TinyMT will be a good candidate as an alternative PRNG for Erlang, regarding the increased period of the RNG and the benefit of generating parallel independent random number streams."
},
{
"paper_title": "Hansei: property-based development of concurrent systems",
"paper_authors": [
"Joseph Blomstedt"
],
"paper_abstract": "Avoiding concurrency errors remains one of the main challenges in day-to-day Erlang development. Errors can arise both from unexpected process and message interleaving, as well as concurrent clients or users interacting with the system in an unanticipated manner. To address this issue, many testing tools have been developed over the years. One approach is to pair a property-based testing tool with a tool that overrides the Erlang scheduler. This allows developers to focus on defining the properties of their system, and let the testing tools worry about testing these properties across different interleavings. This paper builds upon this approach and presents the Hansei testing framework. Hansei provides an approach to writing property-based tests that build on standard OTP abstractions, as well as a testing infrastructure designed to support test-driven development of concurrent systems. The Hansei infrastructure builds on the existing QuickCheck property-based testing tool, and supports both built-in message interleaving as well as a testing mode that works with existing process interleaving tools. Existing tools are more mature and feature-complete, while the built-in Hansei support has the unique property of being able to operate on code that spans multiple Erlang VMs."
}
]
},
{
"proceeding_title": "Erlang '11:Proceedings of the 10th ACM SIGPLAN workshop on Erlang",
"proceeding_contents": [
{
"paper_title": "A decade of Yaws",
"paper_authors": [
"Steve Vinoski"
],
"paper_abstract": "Yaws -- \"Yet Another Web Server\" -- is a popular open source HTTP 1.1 Erlang web server known for its reliability and stability. Conceived by legendary Erlang programmer Claes \"Klacke\" Wikström in 2001, Yaws has evolved and matured over the past decade thanks to Klacke's sustained leadership and a dedicated community of contributors and users. The relatively small core of Yaws supports a broad set of features including basic file serving, dynamic web page generation, cookies, URL rewriting, Common Gateway Interface (CGI), data streaming, JavaScript Object Notation (JSON), and WebSockets. Yaws takes full advantage of Erlang's lightweight processes, packet decoders, interprocess messaging, applications, and other language features that facilitate reliability and stability. This talk explores the design and features of Yaws, its community of contributors and users, how it takes advantage of Erlang, and possible future directions for Yaws."
},
{
"paper_title": "Erlang ETS tables and software transactional memory: how transactions make ETS tables more like ordinary actors",
"paper_authors": [
"Patrik Nyblom"
],
"paper_abstract": "This article describes a way to make the ETS tables of Erlang fit better into the actor programming model. Enhancing the interface of the ETS tables with a concept of transactions is suggested as a means both to achieve increased parallelism, better performance and less error-prone code, while still keeping no less true to the actor model."
},
{
"paper_title": "Accelerating race condition detection through procrastination",
"paper_authors": [
"Thomas Arts",
"John Hughes",
"Ulf Norell",
"Nicholas Smallbone",
"Hans Svensson"
],
"paper_abstract": "Race conditions are notoriously frustrating to find, and good tools can help. The main difficulty is reliably provoking the race condition. In previous work we presented a randomising scheduler for Erlang that helps with this task. In a language without pervasive shared mutable state, such as Erlang, performing scheduling decisions at random uncovers race conditions surprisingly well. However, it is not always enough. We describe a technique, procrastination, that aims to provoke race conditions more often than by random scheduling alone. It works by running the program and looking for pairs of events that might interfere, such as two message sends to the same process. Having found such a pair of events, we re-run the program but try to provoke a race condition by reversing the order of the two events. We apply our technique to a piece of industrial Erlang code. Compared to random scheduling alone, procrastination allows us to find minimal failing test cases more reliably and more quickly."
},
{
"paper_title": "Typed callbacks for more robust behaviours",
"paper_authors": [
"Stavros Aronis",
"Konstantinos Sagonas"
],
"paper_abstract": "Behaviours are one of the most widely used features of Erlang/OTP. They offer a convenient and well-tested abstraction layer for frequently employed design patterns in concurrent Erlang programming. In effect, they allow programmers to focus on the functional characteristics of their applications without having to resort to Erlang's concurrency-supporting primitives. However, when it comes to ensuring that behaviours are properly used and callbacks are as expected, the current Erlang/OTP compiler performs only minimal checks. This is no fault of the compiler though, because most/all of the callbacks' API exists only in the documentation or the comments accompanying the code; as such, it cannot always be trusted and it is almost impossible to have it mechanically processed. In this paper, we propose a small extension to the language of function specifications of Erlang to allow the formal definition of the behaviours' callback API. We have implemented this extension on the development branch of Erlang/OTP and provide evidence of how it can be leveraged by static analysis tools such as Dialyzer to detect behaviour misuses."
},
{
"paper_title": "Model-based testing of data types with side effects",
"paper_authors": [
"Thomas Arts",
"Laura M. Castro"
],
"paper_abstract": "Data types are the core of many applications, and libraries offering implementations of data types should better be solid and well tested. Testing purely functional data types with QuickCheck provides a complete test method for data types, but establishing a complete test method for data types with side-effects is still an open issue. In this paper we show how we can use a stateful QuickCheck model to establish a complete test method for any data type. Considering side effects allows us to move from the purely functional world to the imperative world, as needed to face the testing of data types implementations in languages such as C. We therefore applied our method to some of the data types provided by the well-known GNOME Glib library."
},
{
"paper_title": "A PropEr integration of types and function specifications with property-based testing",
"paper_authors": [
"Manolis Papadakis",
"Konstantinos Sagonas"
],
"paper_abstract": "We present a tight integration of the language of types and function specifications of Erlang with property-based testing. To achieve this integration we have developed from scratch PropEr, an open-source QuickCheck-inspired property-based testing tool. We present technical details of this integration, most notably how the conversion of recursive types into appropriate generators takes place and how function specifications can be turned automatically into simple properties in order to exercise the code of these functions. Finally, we present experiences and advice for the proper use of PropEr."
},
{
"paper_title": "Test-driven development of concurrent programs using concuerror",
"paper_authors": [
"Alkis Gotovos",
"Maria Christakis",
"Konstantinos Sagonas"
],
"paper_abstract": "This paper advocates the test-driven development of concurrent Erlang programs in order to detect early and eliminate the vast majority of concurrency-related errors that may occur in their execution. To facilitate this task we have developed a tool, called Concuerror, that exhaustively explores process interleaving (possibly up to some preemption bound) and presents detailed interleaving information of any errors that occur. We describe in detail the use of Concuerror on a non-trivial concurrent Erlang program that we develop step by step in a test-driven fashion."
},
{
"paper_title": "Extracting QuickCheck specifications from EUnit test cases",
"paper_authors": [
"Thomas Arts",
"Pablo Lamela Seijas",
"Simon Thompson"
],
"paper_abstract": "Writing EUnit tests is more common than writing QuickCheck specifications, although QuickCheck specifications potentially explore far more scenarios than manually written unit tests. In particular for implementations that have side-effects, writing a good set of EUnit tests is often difficult and labour intensive. In this paper we report on mechanisms to extract QuickCheck specifications from EUnit test suites. We use the QSM algorithm to infer state machines from sets of positive and negative traces derived from the test suite. These traces can be derived either statically or dynamically and we describe both approaches here. Finally we show how to move from the inferred state machine to a QuickCheck state machine. This QuickCheck state machine can then be used to generate tests, which include the EUnit tests, but also include many new and different combinations that can augment the test suite. In this way, one can achieve substantially better testing with little extra work."
},
{
"paper_title": "Testing a database for race conditions with QuickCheck: none",
"paper_authors": [
"John M. Hughes",
"Hans Bolinder"
],
"paper_abstract": "In 2009, Claessen et al. presented a way of testing for race conditions in Erlang programs, using QuickCheck to generate parallel tests, a randomizing scheduler to provoke races, and a sequential consistency condition to detect failures of atomicity [1]. That work used a small industrial prototype as the main example, showing how two race conditions could be detected and diagnosed. In this paper, we apply the same methods to dets, a vital component of the mnesia database system, and more than an order of magnitude larger. dets is known to fail occasionally in production, making it a promising candidate for a race condition hunt. We found five race conditions with relatively little effort, two of which may account for the observed failures in production. We explain how the testing was done, present most of the QuickCheck specification used, and describe the problems we discovered and their causes."
},
{
"paper_title": "SFMT pseudo random number generator for Erlang",
"paper_authors": [
"Kenji Rikitake"
],
"paper_abstract": "The stock implementation of Erlang/OTP pseudo random number generator (PRNG), random module, is based on an algorithm developed in 1980s called AS183, and has known statistic deficiencies for large-scale applications. Using modern PRNG algorithms with longer generation periods reduces the deficiencies. This paper is a case study of sfmt-erlang module, an implementation of SIMD-oriented Fast Mersenne Twister (SFMT) PRNG with the native interface functions (NIFs) of Erlang. The test results show the execution speed of the implementation is approximately three times faster than the random module on the x86 and x86_64 architecture computers, and the execution own time for generating single random number sequences is proportional to the internal state table length."
},
{
"paper_title": "Disco: a computing platform for large-scale data analytics",
"paper_authors": [
"Prashanth Mundkur",
"Ville Tuulos",
"Jared Flatow"
],
"paper_abstract": "We describe the design and implementation of Disco, a distributed computing platform for MapReduce style computations on large-scale data. Disco is designed for operation in clusters of commodity server machines, and provides both a fault-tolerant scheduling and execution layer as well as a distributed and replicated storage layer. Disco is implemented in Erlang and Python; Erlang is used for the implementation of the core aspects of cluster monitoring, job management, task scheduling and distributed filesystem, while Python is used to implement the standard Disco library. Disco has been used in production for several years at Nokia, to analyze tens of terabytes of data daily on a cluster of over 100 nodes. With a small but very functional codebase, it provides a free, proven, and effective component of a full-fledged data analytics stack."
},
{
"paper_title": "Implementation of sequence BDDs in Erlang",
"paper_authors": [
"Shuhei Denzumi",
"Hiroki Arimura",
"Shin-ichi Minato"
],
"paper_abstract": "In this paper, we present an implementation of Erlang of an efficient index structure, called Sequence Binary Decision Diagrams (SeqBDDs), for knowledge discovery in large sequence data. Recently, Loekito, Bailey, and Pei (KAIS, 2009) proposed SeqBDD. SeqBDDs are a compact indices for efficiently representing the set of sequences. Furthermore, SeqBDDs provide a rich collection of operations for sets of sequences, which are useful for implementing sequence mining algorithms. We propose SeqBDDs as powerful framework for string processing and Erlang is appropriate language for SeqBDD. SeqBDD system heavily uses hash tables to avoid redundant memory and computation. We implemented SeqBDD package with ETS for hash tables."
},
{
"paper_title": "Interfacing dynamically typed languages and the why tool: reasoning about lists and tuples",
"paper_authors": [
"Cláudio Amaral",
"Mário Florido",
"Patrik Jansson"
],
"paper_abstract": "Formal software verification is currently contributing to new generations of software systems that are proved to follow a given specification. Unfortunately, most dynamically typed languages lack the tools for such reasoning. We present a tool used to help verify some user specified properties on a small language. The process is based on functional contracts with annotations on the source code that later are transformed into logic goals that need to be proved in order to conclude that the program meets its specification. As part of the tool we also present a term model for dynamically typed data structures."
},
{
"paper_title": "Modeling growth and dynamics of neural networks via message passing in Erlang: neural models have a natural home in message passing functional programming languages",
"paper_authors": [
"Trevor Bain",
"Patrick Campbell",
"Jonas Karlsson"
],
"paper_abstract": "Erlang is well suited as a platform for modeling neural dynamics and development. We overview similarities between neural architecture and language paradigms in Erlang, specifically functional programming, message passing, distributed computing and concurrency. We present examples of using Erlang to model neural dynamics and development respectively. Finally, we synthesize these two examples into an artificial retina and then we conclude with an overview of ongoing work."
}
]
},
{
"proceeding_title": "Erlang '10:Proceedings of the 9th ACM SIGPLAN workshop on Erlang",
"proceeding_contents": [
{
"paper_title": "From test cases to FSMs: augmented test-driven development and property inference",
"paper_authors": [
"Thomas Arts",
"Simon Thompson"
],
"paper_abstract": "This paper uses the inference of finite state machines from EUnit test suites for Erlang programs to make two contributions. First, we show that the inferred FSMs provide feedback on the adequacy of the test suite that is developed incrementally during the test-driven development of a system. This is novel because the feedback we give is independent of the implementation of the system. Secondly, we use FSM inference to develop QuickCheck properties for testing state-based systems. This has the effect of transforming a fixed set of tests into a property which can be tested using randomly generated data, substantially widening the coverage and scope of the tests."
},
{
"paper_title": "Coordinating and visualizing independent behaviors in erlang",
"paper_authors": [
"Guy Wiener",
"Gera Weiss",
"Assaf Marron"
],
"paper_abstract": "Behavioral programming, introduced by the LSC language and extended by the BPJ Java library, enables development of behaviors as independent modules that are relatively oblivious of each other, yet are integrated at run-time yielding cohesive system behavior. In this paper we present a proof-of-concept for infrastructure and a design pattern that enable development of such behavioral programs in Erlang. Each behavior scenario, called a behavior thread, or b-thread, runs in its own Erlang process. Runs of programs are sequences of events that result from three kinds of b-thread actions: requesting that events be considered for triggering, waiting for triggered events, and blocking events that may be requested by other b-threads. A central mechanism handles these requests, and coordinates b-thread execution, yielding composite, integrated system behavior. We also introduce a visualization tool for Erlang programs written in the proposed design pattern. We believe that enabling the modular incremental development of behavioral programming in Erlang could further simplify the development and maintenance of applications consisting of concurrent independent behaviors."
},
{
"paper_title": "A unified semantics for future Erlang",
"paper_authors": [
"Hans Svensson",
"Lars-Åke Fredlund",
"Clara Benac Earle"
],
"paper_abstract": "The formal semantics of Erlang is a bit too complicated to be easily understandable. Much of this complication stems from the desire to accurately model the current implementations (Erlang/OTP R11-R14), which include features (and optimizations) developed during more than two decades. The result is a two-tier semantics where systems, and in particular messages, behave differently in a local and a distributed setting. With the introduction of multi-core hardware, multiple run-queues and efficient SMP support, the boundary between local and distributed is diffuse and should ultimately be removed. In this paper we develop a new, much cleaner semantics, for such future implementations of Erlang. We hope that this paper can stimulate some much needed debate regarding a number of poorly understood features of current and future implementations of Erlang."
},
{
"paper_title": "Chain replication in theory and in practice",
"paper_authors": [
"Scott Lystig Fritchie"
],
"paper_abstract": "When implementing a distributed storage system, using an algorithm with a formal definition and proof is a wise idea. However, translating any algorithm into effective code can be difficult because the implementation must be both correct and fast. This paper is a case study of the implementation of the chain replication protocol in a distributed key-value store called Hibari. In theory, the chain replication algorithm is quite simple and should be straightforward to implement correctly. In practice, however, there were many implementation details that had effects both profound and subtle. The Erlang community, as well as distributed systems implementors in general, can use the lessons learned with Hibari (specifically in areas of performance enhancements and failure detection) to avoid many dangers that lurk at the interface between theory and real-world computing."
},
{
"paper_title": "Analysis of preprocessor constructs in Erlang",
"paper_authors": [
"Róbert Kitlei",
"István Bozó",
"Tamás Kozsik",
"Máté Tejfel",
"Melinda Tóth"
],
"paper_abstract": "Program analysis and transformation tools work on source code, which - as in the case of Erlang - may contain macros and other preprocessor directives. Such preprocessor constructs have to be treated in an utterly different way than lexical and syntactical constructs. This paper presents an approach to treat preprocessor constructs in a non-invasive way that is reasonably efficient and supports code transformations and analyses in an Erlang specific framework."
},
{
"paper_title": "Generic load regulation framework for Erlang",
"paper_authors": [
"Ulf T. Wiger"
],
"paper_abstract": "Although Telecoms, the domain for which Erlang was conceived, has strong and ubiquitous requirements on overload protection, the Erlang/OTP platform offers no unified approach to addressing the problem. The Erlang community mailing list frequently sports discussions on how to avoid overload situations in individual components and processes, indicating that such an approach would be welcome. As Telecoms migrated from carefully regulated single-service networks towards multimedia services on top of best-effort multi-service packet data backbones, much was learned about providing end-to-end quality of service with a network of loosely coupled components, with only basic means of prioritization and flow control. This paper explores the similarity of such networks with typical Erlang-based message-passing architectures, and argues that a robust way of managing high-load conditions is to regulate at the input edges of the system, and sampling known internal choke points in order to dynamically maintain optimum throughput. A selection of typical overload conditions are discussed, and a new load regulation framework - JOBS - is presented, together with examples of how such overload conditions can be mitigated."
},
{
"paper_title": "Implementing a multiagent negotiation protocol in Erlang",
"paper_authors": [
"Álvaro Fernández Díaz",
"Clara Benac Earle",
"Lars-Åke Fredlund"
],
"paper_abstract": "In this paper we present an implementation in Erlang of a multi-agent negotiation protocol. The protocol is an extension of the well-known Contract Net Protocol, where concurrency and fault-tolerance have been addressed. We present some evidence that show Erlang is a very good choice for implementing this kind of protocol by identifying a quite high mapping between protocol specification and Erlang constructs. Moreover, we also elaborate on the added advantage that it can handle a larger number of agents than other implementations, with substantially better performance."
},
{
"paper_title": "Quickchecking refactoring tools",
"paper_authors": [
"Dániel Drienyovszky",
"Dániel Horpácsi",
"Simon Thompson"
],
"paper_abstract": "Refactoring is the transformation of program source code in a way that preserves the behaviour of the program. Many tools exist for automating a number of refactoring steps, but these tools are often poorly tested. We present an automated testing framework based on QuickCheck for testing refactoring tools written for the Erlang programming language."
},
{
"paper_title": "Using Erlang to implement a autonomous build and distribution system for software projects",
"paper_authors": [
"Tino Breddin"
],
"paper_abstract": "The uptake of Open-Source Software (OSS) has led to new business models as well as software development practices. OSS projects are constrained by their limited resources both in time and manpower. In order to be successful such projects have to leverage tools to automate as many tasks as possible while providing usable results. One such set of tools used in software development are continuous build systems, which help teams to build and test their software whenever a change is published without manual interaction. The available systems have proven to be essential for any kind software project but are lacking real innovation. This paper presents how Erlang, especially its distributed operation, fault-tolerance and lightweight processes, has been utilized to develop a next-generation continuous build system. This system executes many long-running tasks in parallel for any given change of the monitored software project, providing developers not only with the latest state of the project but also offers customizable software packaging and patch distribution."
}
]
},
{
"proceeding_title": "ERLANG '09:Proceedings of the 8th ACM SIGPLAN workshop on ERLANG",
"proceeding_contents": [
{
"paper_title": "Erlang 2009 Invited Talk",
"paper_authors": [
"Jan Lehnardt"
]
},
{
"paper_title": "Latest news from the Erlang OTP team at Ericsson",
"paper_authors": [
"Kenneth Lundin"
]
},
{
"paper_title": "Cleaning up Erlang code is a dirty job but somebody's gotta do it",
"paper_authors": [
"Thanassis Avgerinos",
"Konstantinos Sagonas"
],
"paper_abstract": "This paper describes opportunities for automatically modernizing Erlang applications, cleaning them up, eliminating certain bad smells from their code and occasionally also improving their performance. In addition, we present concrete examples of code improvements and our experiences from using a software tool with these capabilities, tidier, on Erlang code bases of significant size."
},
{
"paper_title": "Automated module interface upgrade",
"paper_authors": [
"László Lövei"
],
"paper_abstract": "During the lifetime of a software product the interface of some used library modules might change in such a way that the new interface is no longer compatible with the old one. This paper proposes a generic interface migration schema to automatically transform the software in the case of such an incompatible change. The solution is based on refactoring techniques and data flow analysis, and makes use of a formal description of the differences between the old and the new interfaces. The approach is illustrated with a real-life example."
},
{
"paper_title": "Automatic assessment of failure recovery in Erlang applications",
"paper_authors": [
"Jan Henry Nyström"
],
"paper_abstract": "Erlang is a concurrent functional language, especially tailored for distributed, highly concurrent and fault-tolerant software. An important part of Erlang is its support for failure recovery. A designer implements failure recovery by organising the processes of an Erlang application into tree structures, in which parent processes monitor failures of their children and are responsible for their restart. Libraries support the creation of such structures during system initialisation. We present a technique to automatically analyse that the process structure of an Erlang application is constructed in a way that guarantees recovery from process failures. First, we extract (part of) the process structure by static analysis of the initialisation code of the application. Thereafter, analysis of the process structure checks that it will recover from any process failure. We have implemented the technique in a tool, and applied it to several OTP library applications and to a subsystem of the AXD 301 ATM switch."
},
{
"paper_title": "Teaching Erlang using robotics and player/stage",
"paper_authors": [
"Sten Grüner",
"Thomas Lorentsen"
],
"paper_abstract": "Computer science is often associated with dull code debugging instead of solving interesting problems. This fact causes a decrease in the number of computer science students which can be stopped by giving lectures on an interesting context like robotics. In this paper we introduce an easily deployable and extensible library which allows programming a popular robot simulator in Erlang. New possibilities for visual, simple and attractive teaching of functional languages are open."
},
{
"paper_title": "Development of a distributed system applied to teaching and learning",
"paper_authors": [
"Hugo Cortés",
"Mónica García",
"Jorge Hernández",
"Manuel Hernández",
"Esperanza Pérez-Cordoba",
"Erik Ramos"
],
"paper_abstract": "The emergence of networked computers has originated new technologies for teaching and learning, particularly, the technology of learning management systems. We have applied Erlang to deal with the concurrent part of a distributed system to support teaching and learning tasks. We have also employed declarative programming together with some formal tools to elaborate the specification and the conceptual model of the system and some extreme programming techniques to deal with some issues of software development. We show how Erlang supports the transition from the specification to the implementation, and the whole concurrent and computational process of our distributed system."
},
{
"paper_title": "ECT: an object-oriented extension to Erlang",
"paper_authors": [
"Gábor Fehér",
"András G. Békés"
],
"paper_abstract": "To structure the code of an Erlang program, one can split it into modules. There are also available some basic data structures: lists, records and tuples. However, there are no fully functional classes that encapsulate data and functionality, and can be extended with inheritance. We think these features could promote code reuse in certain cases, therefore we decided to extend the language with object-oriented capabilities. A strong evidence of the usability of this is the fact that part of the program itself was rewritten using our newly created language elements, and the new version was simpler and cleaner than the original Erlang one. Our main goals were to preserve the single-assignment nature of Erlang and to keep method-call and value-access times constant. It was also a priority to make the extension easily installable, to reach as much developers as possible. For this, we avoided changes in the Erlang compiler itself. Instead, we created the extension as a parse transformation1 . In our implementation a class is a module that contain the methods, and a record type whose instances are the object instances. Both methods and fields can be inherited. We also examined the currently available other object-oriented extensions for Erlang, and compared them with ours. Our implementation has strong advantages, but it also lacks some features. Compatibility with records and speed are the main advantages. In this paper - among describing and comparing our extension - we also show the possible ways of adding the missing features."
},
{
"paper_title": "Implementing an LTL-to-Büchi translator in Erlang: a protest experience report",
"paper_authors": [
"Hans Svensson"
],
"paper_abstract": "In order to provide a nice user experience in McErlang, a model checker for Erlang programs, we needed an LTL-to-Büchi translator. This paper reports on our experiences implementing a translator in Erlang using well known algorithms described in literature. We followed a property driven development schema, where QuickCheck properties were formulated before writing the implementation. We successfully implement an LTL-to-Büchi translator, where the end result performs on par (or better) than two well known reference implementations."
},
{
"paper_title": "Model based testing of data constraints: testing the business logic of a Mnesia application with Quviq QuickCheck",
"paper_authors": [
"Nicolae Paladi",
"Thomas Arts"
],
"paper_abstract": "Correct implementation of data constraints, such as referential integrity constraints and business rules is an essential precondition for data consistency. Though most modern commercial DBMSs support data constraints, the latter are often implemented in the business logic of the applications. This is especially true for non relational DBMS like Mnesia, which do not provide constraints enforcement mechanisms. This case study examines a database application which uses Mnesia as data storage in order to determine, express and test data constraints with Quviq QuickCheck, adopting a model-based testing approach. Some of the important stages of the study described in the article are: reverse engineering of the database, analysis of the obtained database structure diagrams and extraction of data constraint, validation of constraints, formulating the test specifications and finally running the generated test suits. As a result of running the test suits randomly generated by QuickCheck, we have detected several violations of the identified and validated business rules. We have found that the applied methodology is suitable for applications using non relational, unnormalized databases. It is important to note the methodology applied within the case study is not bound to a specific application or DBMS, and can be applied to other database applications."
},
{
"paper_title": "Automatic testing of TCP/IP implementations using QuickCheck",
"paper_authors": [
"Javier Paris",
"Thomas Arts"
],
"paper_abstract": "We describe how to use model based testing for testing a network stack. We present a framework that together with the property based testing tool QuickCheck can be used to test the TCP layer of the Internet protocol stack. TCP is a rather difficult protocol to test, since it hides a lot of operations for the user that communicates to the stack via a socket interface. Internally, a lot happens and by only controlling the interface, full testing is not possible. This is typical for more complex protocols and we therefore claim that the presented method can easily be extended to other cases. We present an automatic test case generator for TCP using Quickcheck. This tester generates packet flows to test specific features of a TCP stack. It then controls the stack under test to run the test by using the interface provided by it (for example, the socket interface), and by sending replies to the packets created by the stack under test. We validated the test framework on the standard Linux TCP/IP implementation."
},
{
"paper_title": "Recent improvements to the McErlang model checker",
"paper_authors": [
"Clara Benac Earle",
"Lars Ake Fredlund"
],
"paper_abstract": "In this paper we describe a number of recent improvements to the McErlang model checker, including a new source to source translation to enable more Erlang programs to work under McErlang, a methodology for writing properties that can be verified by McErlang, and a combination of simulation and model checking. The latter two features are illustrated by means of the messenger example found in the documentation of the Erlang/OTP distribution."
}
]
},
{
"proceeding_title": "ERLANG '08:Proceedings of the 7th ACM SIGPLAN workshop on ERLANG",
"proceeding_contents": [
{
"paper_title": "Testing Erlang data types with quviq quickcheck",
"paper_authors": [
"Thomas Arts",
"Laura M. Castro",
"John Hughes"
],
"paper_abstract": "When creating software, data types are the basic bricks. Most of the time a programmer will use data types defined in library modules, therefore being tested by many users over many years. But sometimes, the appropriate data type is unavailable in the libraries and has to be constructed from scratch. In this way, new basic bricks are created, and potentially used in many products in the future. It pays off to test such data types thoroughly. This paper presents a structured methodology to follow when testing data types using Quviq QuickCheck, a tool for random testing against specifications. The validation process will be explained carefully, from the convenience of defining a model for the datatype to be tested, to a strategy for better shrinking of failing test cases, and including the benefits of working with symbolic representations. The leading example in this paper is a data type implemented for a risk management information system, a commercial product developed in Erlang, that has been used on a daily basis for several years."
},
{
"paper_title": "Early fault detection with model-based testing",
"paper_authors": [
"Jonas Boberg"
],
"paper_abstract": "Current and future trends for software include increasingly complex requirements on interaction between systems. As a result, the difficulty of system testing increases. Model-based testing is a test technique where test cases are generated from a model of the system. In this study we explore model-based testing on the system level, starting from early development. We apply model-based testing to a subsystem of a message gateway product in order to improve early fault detection. The results are compared to another subsystem that is tested with hand-crafted test cases. Based on our experiences, we present a set of challenges and recommendations for system-level, model-based testing. Our results indicate that model-based testing, starting from early development, significantly increases the number of faults detected during system testing."
},
{
"paper_title": "Erlang testing and tools survey",
"paper_authors": [
"Tamás Nagy",
"Anikó Nagyné Víg"
],
"paper_abstract": "As the commercial usage of Erlang increases, so does the need for mature development and testing tools. This paper aims to evaluate the available tools with their shortcomings, strengths and commercial usability compared to common practices in other languages. To identify the needs of Erlang developers in this area we published an online survey advertising it in various media. The results of this survey and additional research in this field is presented. Through the comparison of tools and the requirements of the developers the paper identifies paths for future development."
},
{
"paper_title": "A comparative evaluation of imperative and functional implementations of the imap protocol",
"paper_authors": [
"Francesco Cesarini",
"Viviana Pappalardo",
"Corrado Santoro"
],
"paper_abstract": "This paper describes a comparative analysis of several implementations of the IMAP4 client-side protocol, written in Erlang, C#, Java, Python and Ruby. The aim is basically to understand whether Erlang is able to fit the requirements of such a kind of applications, and also to study some parameters to evaluate the suitability of a language for the development of certain type of programs. We analysed five different libraries, comparing their characteristics through some software metrics: number of source lines of code, memory consumption, performances (execution time) and functionality of primitives. We describe pros and cons of each library and we conclude on the suitability of Erlang as a language for the implementation of protocol- and string-intensive TCP/IP-based applications."
},
{
"paper_title": "Scalaris: reliable transactional p2p key/value store",
"paper_authors": [
"Thorsten Schütt",
"Florian Schintke",
"Alexander Reinefeld"
],
"paper_abstract": "We present Scalaris, an Erlang implementation of a distributed key/value store. It uses, on top of a structured overlay network, replication for data availability and majority based distributed transactions for data consistency. In combination, this implements the ACID properties on a scalable structured overlay. By directly mapping the keys to the overlay without hashing, arbitrary key-ranges can be assigned to nodes, thereby allowing a better load-balancing than would be possible with traditional DHTs. Consequently, Scalaris can be tuned for fast data access by taking, e.g. the nodes' geographic location or the regional popularity of certain keys into account. This improves Scalaris' lookup speed in datacenter or cloud computing environments. Scalaris is implemented in Erlang. We describe the Erlang software architecture, including the transactional Java interface to access Scalaris. Additionally, we present a generic design pattern to implement a responsive server in Erlang that serializes update operations on a common state, while concurrently performing fast asynchronous read requests on the same state. As a proof-of-concept we implemented a simplified Wikipedia frontend and attached it to the Scalaris data store backend. Wikipedia is a challenging application. It requires - besides thousands of concurrent read requests per seconds - serialized, consistent write operations. For Wikipedia's category and backlink pages, keys must be consistently changed within transactions. We discuss how these features are implemented in Scalaris and show its performance."
},
{
"paper_title": "High-performance technical computing with erlang",
"paper_authors": [
"Alceste Scalas",
"Giovanni Casu",
"Piero Pili"
],
"paper_abstract": "High-performance Technical Computing (HPTC) is a branch of HPC (High-performance Computing) that deals with scientific applications, such as physics simulations. Due to its numerical nature, it has been traditionally based on low-level or mathematically-oriented languages (C, C++, Fortran), extended with libraries that implement remote execution and inter-process communication (like MPI and PVM). But those libraries just provide what Erlang does out-of-the-box: networking, process distribution, concurrency, interprocess communication and fault tolerance. So, is it possible to use Erlang as a foundation for developing HPTC applications? This paper shows our experiences in using Erlang for distributed number-crunching systems. We introduce two extensions: a simple and efficient foreign function interface (FFI), and an Erlang binding for numerical libraries. We use them as a basis for developing a simple mathematically-oriented programming language (in the style of Matlab™) compiled into Core Erlang. These tools are later used for creating a HPTC framework (based on message-passing) and an IDE for distributed applications. The results of this research and development show that Erlang/OTP can be used as a platform for developing large and scalable numerical applications."
},
{
"paper_title": "Refactoring with wrangler, updated: data and process refactorings, and integration with eclipse",
"paper_authors": [
"Huiqing Li",
"Simon Thompson",
"György Orosz",
"Melinda Tóth"
],
"paper_abstract": "Wrangler is a refactoring tool for Erlang, implemented in Erlang. This paper reports the latest developments in Wrangler, which include improved user experience, the introduction of a number of data- and process-related refactorings, and also the implementation of an Eclipse plug-in which, together with Erlide, provides refactoring support for Erlang in Eclipse."
},
{
"paper_title": "Gradual typing of erlang programs: a wrangler experience",
"paper_authors": [
"Konstantinos Sagonas",
"Daniel Luna"
],
"paper_abstract": "Currently most Erlang programs contain no or very little type information. This sometimes makes them unreliable, hard to use, and difficult to understand and maintain. In this paper we describe our experiences from using static analysis tools to gradually add type information to a medium sized Erlang application that we did not write ourselves: the code base of Wrangler. We carefully document the approach we followed, the exact steps we took, and discuss possible difficulties that one is expected to deal with and the effort which is required in the process. We also show the type of software defects that are typically brought forward, the opportunities for code refactoring and improvement, and the expected benefits from embarking in such a project. We have chosen Wrangler for our experiment because the process is better explained on a code base which is small enough so that the interested reader can retrace its steps, yet large enough to make the experiment quite challenging and the experiences worth writing about. However, we have also done something similar on large parts of Erlang/OTP. The result can partly be seen in the source code of Erlang/OTP R12B-3."
},
{
"paper_title": "Refactoring module structure",
"paper_authors": [
"László Lövei",
"Csaba Hoch",
"Hanna Köllö",
"Tamás Nagy",
"Anikó Nagyné Víg",
"Dániel Horpácsi",
"Róbert Kitlei",
"Roland Király"
],
"paper_abstract": "This paper focuses on restructuring software written in Erlang. In large software projects, it is a common problem that internal structural complexity can grow to an extent where maintenance becomes impossible. This situation can be avoided by careful design, building loosely coupled components with strictly defined interfaces. However, when these design decisions are not made in the right time, it becomes necessary to split an already working software into such components, without breaking its functionality. There is strong industrial demand for such transformations in refactoring legacy code. A refactoring tool is very useful in the execution of such a restructuring. This paper shows that the semantical analysis required for refactoring is also useful for making suggestions on clustering. Existing analysis results are used to cover the whole process of module restructuring, starting with planning the new structure, and finishing by making the necessary source code transformations."
}
]
},
{
"proceeding_title": "ERLANG '07:Proceedings of the 2007 SIGPLAN workshop on ERLANG Workshop",
"proceeding_contents": [
{
"paper_title": "Extended process registry for erlang",
"paper_authors": [
"Ulf T. Wiger"
],
"paper_abstract": "The built-in process registry has proven to be an extremely useful feature of the Erlang language. It makes it easy to provide named services, which can be reached without knowing the process identifier of the serving process. However, the current registry also has limitations: names can only be atoms (unstructured), processes can register under at most one name, and it offers no means of efficient search and iteration. In Ericsson's IMS Gateway products, a recurring task was to maintain mapping tables in order to locate call handling processes based on different properties. A common pattern, a form of index table, was identified, and resulted in the development of an extended process registry. It was not immediately obvious that this would be worthwhile, or even efficient enough to be useful. But as implementation progressed, designers found more and more uses for the extended process registry, which resulted in significant reduction of code volume and a more homogeneous implementation. It also provided a powerful means of debugging systems with many tens of thousand processes. This paper describes the extended process registry, critiques it, and proposes a new implementation that offers more symmetry, better performance and support for a global namespace."
},
{
"paper_title": "A language for specifying type contracts in erlang and its interaction with success typings",
"paper_authors": [
"Miguel Jimenez",
"Tobias Lindahl",
"Konstantinos Sagonas"
],
"paper_abstract": "We propose a small extension of the Erlang language that allows programmers to specify contracts with type information at the level of individual functions. Such contracts are optional and they document the intended uses of functions. Contracts allow automatic documentation tools such as Edoc to generate better documentation and defect detection tools such as Dialyzer to detect more type clashes. Since the Erlang/OTP system already contains components which perform automatic type inference of success typings, we also describe how contracts interact with success typings and can often provide some key information to the inference process."
},
{
"paper_title": "Introducing records by refactoring",
"paper_authors": [
"László Lövei",
"Zoltán Horváth",
"Tamás Kozsik",
"Roland Király"
],
"paper_abstract": "This paper focuses on introducing a new transformation to our existing model for refactoring Erlang programs. The goal of the transformation is to introduce a new abstraction level in data representation by substituting a group ofrelated data with a record. Using records enhances the legibility of the source code, makes further development easier, and makes programming less error-prone by providing better possibilities for both compilation time and runtime checks. There is a strong industrial demand for such a transformation in refactoring legacy code. Erlang is a dynamically typed language, and many of its semantical rules are also dynamic. Therefore the main challenge in this research is to ensure the safety of statically performed refactoring steps."
},
{
"paper_title": "Towards hard real-time erlang",
"paper_authors": [
"Vincenzo Nicosia"
],
"paper_abstract": "In the last decades faster and more powerful computers made possible to seriously take into account high-level and functional programming languages also for non-academic projects. Haskell, Erlang, O'CAML have been effectively exploited in many application fields, demonstrating how high-level languages can help in writing efficient, readable and almost bug-free code, rapidly stealing the prominent position gained in many fields by OO languages such as Java and C++. One of the fields where low-level imperative languages are still preferred to functional programming is that of hard real-time applications, since usually programmers (and managers) think that high-level languages are really not able to cope with the complex and critical requirements of real-time. In this paper we propose an implementation of a hard real-time scheduler entirely written in Erlang, and perfectly integrated with the Erlang BEAM emulator. Performance analysis show that the proposed solution is effective, precise and efficient, while remaining really simple to use as expected by Erlang programmers."
},
{
"paper_title": "Programming distributed erlang applications: pitfalls and recipes",
"paper_authors": [
"Hans Svensson",
"Lars-Åke Fredlund"
],
"paper_abstract": "We investigate the distributed part of the Erlang programminglanguage, with an aim to develop robust distributed systems andalgorithms running on top of Erlang runtime systems. Although the stepto convert an application running on a single node to a fullydistributed (multi-node) application is deceptively simple (changingcalls to spawn so that processes are spawned on differentnodes), there are some corner cases in the Erlang language and APIwhere the introduction of distribution can cause problems. In thispaper we discuss a number of such pitfalls, where the semantics ofcommunicating processes differs significantly depending if theprocesses reside on the same node or not, we also provide someguidelines for safe programming of distributed systems."
},
{
"paper_title": "A more accurate semantics for distributed erlang",
"paper_authors": [
"Hans Svensson",
"Lars-Åke Fredlund"
],
"paper_abstract": "In order to formally reason about distributed Erlang systems, it is necessary to have a formal semantics. In a previous paper we have proposed such a semantics for distributed Erlang. However, recent work with a model checker for Erlang revealed that the previous attempt wasnot good enough. In this paper we present a more accurate semantics for distributed Erlang. The more accurate semantics includes several modifications and additions to the semantics for distributed Erlang proposed by Claessen and Svensson in 2005, which in turn is an extension to Fredlund's formal single-node semantics for Erlang. Themost distinct addition to the previous semantics is the possibility to correctly model disconnected nodes."
},
{
"paper_title": "Verification of timed erlang/OTP components using the process algebra μcrl",
"paper_authors": [
"Qiang Guo",
"John Derrick"
],
"paper_abstract": "Recent work has looked at how Erlang programs could be model-checked via translation into the process algebra μCRL. Rules for translating Erlang programs and OTP components into μCRL have been defined and investigated. However, in the existing work, no rule is defined for the translation of timeout events into μCRL. This could degrade the usability of the existing work as in some real applications, timeout events play a significant role in the system development. In this paper, by extending the existing work, we investigate the verification of timed Erlang/OTP components in μCRL. By using an explicit tick action in the μCRL specification, a discrete-time timing model is defined to support the translation of timed Erlang functions into μCRL. Two small examples are presented, which demonstrates the applications of the proposed approach."
},
{
"paper_title": "Priority messaging made easy",
"paper_authors": [
"Jan Henry Nystrom"
],
"paper_abstract": "This paper provides an introduction to the finer points of priority based message reception. A new behaviour is devised to provide a generic server that does priority based message reception. The combination of generic finite state machines and prioritised is also discussed. Finally it is demonstrated how behaviours could be significantly strengthened in usefulness by allowing the behaviour info to specify that a parse transform. An Erlang Extension Proposal suggesting the change is included in the appendices."
},
{
"paper_title": "Optimising TCP/IP connectivity",
"paper_authors": [
"Oscar Hellström"
],
"paper_abstract": "With the increased use of network enabled applications and server hosted software systems, scalability with respect to network connectivity is becoming an increasingly important subject. The programming language Erlang has previously been shown to be a suitable choice for creating highly available, scalable and robust telecoms systems. In this exploratory study we want to investigate how to optimise an Erlang system for maximum TCP/IP connectivity in terms of operating system, tuning of the operating system TCP stack and tuning of the Erlang Runtime System. The study shows how a series of benchmarks are used to evaluate the impact of these factors and how to evaluate the best settings for deploying and configuring an Erlang application. We conclude that the choice of operating system and the use of kernel poll both have a major impact on the scalability of the benchmarked systems."
},
{
"paper_title": "An erlang framework for autonomous mobile robots",
"paper_authors": [
"Corrado Santoro"
],
"paper_abstract": "This paper presents an Erlang-based framework, developed by the authors, for the realisation of software systems for autonomous mobile robots. On the basis of the analysis of the main problems arising in the design and implementation of such a kind of robots, a software infrastructure has been derived, organised in a set of layers, each one composed by some modules. This layered architecture is employed to separate hardware interfacing problems from control loops and high-level strategy tasks; such an organisation allows also a developer to concentrate only on the specific problem to be dealt with, and to immediately identify the module or layer to be involved for the realisation of a specific functionality. The proposed structure also favours reuse and rapid refactoring, since the replacement of a piece of hardware or a change in a functionality requires only few changes on a specific module, leaving the remaining system untouched. The overall software architecture is described in the paper, as well as the functionality of the various modules composing it; the advantages introduced by the framework are also highlighted, while some examples and a case study are used to show its effectiveness."
},
{
"paper_title": "Learning programming with erlang",
"paper_authors": [
"Frank Huch"
],
"paper_abstract": "This paper presents an interactive framework for pupils to learn the basic concepts of programming by means of the functional programming language Erlang. Beside the idea of the framework we also sketch the different learning targets and exercises to deepen programming skills. The framework was successfully utilized in a programming course for pupils in their last three school years."
}
]
},
{
"proceeding_title": "ERLANG '06:Proceedings of the 2006 ACM SIGPLAN workshop on Erlang",
"proceeding_contents": [
{
"paper_title": "EUnit: a lightweight unit testing framework for Erlang",
"paper_authors": [
"Richard Carlsson",
"Mickaël Rémond"
],
"paper_abstract": "In recent years, agile development methods have become increasingly popular, and although few people seem willing to go all the way with Extreme Programming, it seems that there is a general consensus today that test-driven development with unit testing is a Good Thing. However, this requires that tests are easy to write, since programmers are generally lazy, and that running the tests is easy and quick and the test results are presented in a concise manner, to make the feedback loop as short as possible. Failing this, it is likely that testing will not be extensively used during development.The concept of a lightweight unit testing framework that fulfils these goals, tailored to a particular programming language, was popularized by the JUnit framework for Java, written by Kent Beck and Erich Gamma. This was based on an earlier framework for Smalltalk called SUnit, by Kent Beck. The ideas in JUnit are easily transferred to any other object-oriented language, and today variants have been written for many different programming languages.We will here present our adaptation of these ideas to Erlang. The EUnit framework provides the usual features of such frameworks: writing tests is very easy indeed, and so is running them. Like most other similar frameworks, we rely on program introspection and naming conventions to reduce the amount of coding necessary to write tests. However, there are also some unusual aspects. Since Erlang is functional, rather than object oriented, it is not possible to use inheritance to provide basic test functionality, nor to use object instantiation to handle things like setup/teardown of contexts for tests. Instead, we base our system on a \"language\" for describing sets of tests, using mainly lists, tuples, and lambda expressions. We also make rather heavy use of preprocessor macros to allow more compact and readable notation. Because test descriptions are data, they can be easily combined, abstracted over, or even be generated on the fly. Lambda expressions allow subtests to be instantiated with setup/teardown contexts.Furthermore, the parallel and distributed nature of the Erlang language on one hand provides a challenge, but on the other hand gives us enormous power and flexibility. For instance, it is trivial to express in our test description language that a set of tests should be executed by a separate process, or on a specific machine, or all in parallel, or even as a number of subsets of parallel tests, with each subset running on a separate machine. Apart from providing easy distributed job control, this allows us to write unit tests that test the behaviour of parallel and distributed programs, something which otherwise tends to require so much coding for each test as to be impractical.Although EUnit is still under development, we feel that it has great potential, and that it uses some novel ideas that could be used to implement similar frameworks in other functional languages. EUnit is free software under the GNU Lesser General Public License. It has not yet been publicly released as of this writing."
},
{
"paper_title": "Testing telecoms software with quviq QuickCheck",
"paper_authors": [
"Thomas Arts",
"John Hughes",
"Joakim Johansson",
"Ulf Wiger"
],
"paper_abstract": "We present a case study in which a novel testing tool, Quviq QuickCheck, is used to test an industrial implementation of the Megaco protocol. We considered positive and negative testing and we used our developed specification to test an old version in order to estimate how useful QuickCheck could potentially be when used early in development.The results of the case study indicate that, by using Quviq QuickCheck, we would have been able to detect faults early in the development.We detected faults that had not been detected by other testing techniques. We found unclarities in the specifications and potential faults when the software is used in a different setting. The results are considered promising enough to Ericsson that they are investing in an even larger case study, this time from the beginning of the development of a new product."
},
{
"paper_title": "Model checking erlang programs: the functional approach",
"paper_authors": [
"Lars-Åke Fredlund",
"Clara Benac Earle"
],
"paper_abstract": "We present the new model checker McErlang for verifying Erlang programs. In comparison with the etomcrl tool set, McErlang differs mainly in that it is implemented in Erlang. The implementation language offers several advantages: checkable programs use \"almost\" normal Erlang, correctness properties are formulated in Erlang itself instead of a temporal logic, and it is easier to properly diagnose program bugs discovered by the model checker. In addition the model checker can easily be modified, thanks largely to the use of Erlang. The drawback of writing the model checker in Erlang is, potentially, severely reduced performance compared with model checking tools programmed in programming languages which permit destructive updates of data structures."
},
{
"paper_title": "Concurrency oriented programming in termite scheme",
"paper_authors": [
"Guillaume Germain"
]
},
{
"paper_title": "Dryverl: a flexible Erlang/C binding compiler",
"paper_authors": [
"Romain Lenglet",
"Shigeru Chiba"
],
"paper_abstract": "This article introduces Dryverl, an Erlang/C binding code generator. Dryverl aims at becoming the most abstract, open and efficient tool for implementing any Erlang/C bindings, as either C port drivers, C port programs, or C nodes. The most original feature of Dryverl is to provide users with open Erlang/C bindings, similar to distributed bindings in open distributed processing systems, to allow specifying programmatically the data transformations that must often be performed in Erlang/C bindings. Implementation details are hidden to developers, and implementation differences between port drivers, port programs, and nodes are abstracted by Dryverl, and Dryverl aims at generating the most efficient implementations possible for every target mechanism."
},
{
"paper_title": "Concurrent caching",
"paper_authors": [
"Jay Nelson"
],
"paper_abstract": "A concurrent cache design is presented which allows cached data to be spread across a cluster of computers. The implementation separates persistent storage from cache storage and abstracts the cache behaviour so that the user can experiment with cache size and replacement policy to optimize performance for a given system, even if the production data store is not available. Using processes to implement cached objects allows for runtime configurability and adaptive use policies as well as parallelization to optimize resource access efficiency."
},
{
"paper_title": "Towards automatic verification of Erlang programs by π-calculus translation",
"paper_authors": [
"Chanchal Kumar Roy",
"Thomas Noll",
"Banani Roy",
"James R. Cordy"
],
"paper_abstract": "ERLANG is a concurrent, dynamically typed, distributed, purely functional programming language with non-purely functional libraries that is mainly employed in telecommunication systems. This paper provides a contribution to the formal modeling and verificationn of programs written in Erlang. It presents a mapping of Erlang programs to the π-calculus, a process algebra whose name-passing feature allows representation of the mobile aspects of software written in Erlang in a natural way."
},
{
"paper_title": "Comparing C++ and ERLANG for motorola telecoms software",
"paper_authors": [
"Phil Trinder"
],
"paper_abstract": "There is considerable folklore suggesting that ERLANG aids the rapid production of robust distributed systems, but only a few rather general studies published. This talk reports the first systematic comparative evaluation of ERLANG in the context of substantial commercial products.Our research strategy is to re-engineer two C++/CORBA telecoms applications in ERLANG and make comparative measurements of both implementations. The first component is a medium-scale (15K line) Dispatch Call Controller (DCC), and the second a smaller (3K line) Data Mobility (DM) component that is closely integrated with five other components of a radio communications subsystem (RCS). To investigate interoperation costs we have constructed two DMs: a pure ERLANG implementation and an ER-LANG/C implementation that reuses some C DM libraries.We investigate the following six research questions, first considering the potential benefits of a high-level distributed language technology like ERLANG."
},
{
"paper_title": "From HTTP to HTML: Erlang/OTP experiences in web based service applications",
"paper_authors": [
"Francesco Cesarini",
"Lukas Larsson",
"Michal Ślaski"
],
"paper_abstract": "This paper describes the lessons learnt when internally developing web applications in Erlang. On the basis of these experiences, a framework called the Web Platform has been implemented. The Web Platform follows a design pattern separating data processing and formatting, allowing the construction of flexible and maintainable software architectures. It also delivers mechanisms for building dynamic pages and components. On top of the platform and components, web interfaces to commercial Erlang systems have been built."
},
{
"paper_title": "Evaluation of database management systems for Erlang",
"paper_authors": [
"Emil Hellman"
],
"paper_abstract": "Erlang/OTP's DBMS Mnesia is lacking in several important areas to consider when implementing very large databases with massive scalability requirements. This article reveals the result from a study examining what Erlang developers consider important aspects of DBMSs and an analytical hierarchy process (AHP) evaluation on four mature open source DBMSs based on those criteria. AHP is suggested as good method to evaluate DBMSs for Erlang projects. The criteria used in this evaluation were derived from a survey sent to the Erlang community. It should therefore be noted that which DBMS to use in Erlang projects should also be determined by the project's and the software's own specific criteria."
}
]
},
{
"proceeding_title": "ERLANG '05:Proceedings of the 2005 ACM SIGPLAN workshop on Erlang",
"proceeding_contents": [
{
"paper_title": "Bit-level binaries and generalized comprehensions in Erlang",
"paper_authors": [
"Per Gustafsson",
"Konstantinos Sagonas"
],
"paper_abstract": "Binary (i.e., bit stream) data are omnipresent in computer and network applications but most functional programming languages currently do not provide sufficient support for them. Erlang is an exception since it does support direct manipulation of binary data, albeit currently restricted to byte streams, not bit streams. To ameliorate the situation, we extend Erlang's built-in binary datatype so that it becomes flexible enough to handle bit streams properly. To further simplify programming on bit streams we then show how binary comprehensions can be introduced in the language and how binary and list comprehensions can be extended to allow both binary and list generators."
},
{
"paper_title": "A stream library using Erlang binaries",
"paper_authors": [
"Jay Nelson"
],
"paper_abstract": "An implementation of a Stream Library for erlang is described which uses Built-In Functions (BIFs) for fast access. The approach uses binaries to represent and process stream data in high volume, high performance applications. The library is intended to assist developers dealing with communication protocols, purely textual content, formatted data records and the routing of streamed data. The new BIFs are shown to improve performance as much as 250 times over native erlang functions. The reduction in memory usage caused by the BIFs also allows successful processing in situations that crashed the runtime as application functions."
},
{
"paper_title": "None",
"paper_authors": [
"Tobias Lindahl",
"Konstantinos Sagonas"
],
"paper_abstract": "We describe and document the techniques used in TOOL, a fully automatic type annotator for Erlang programs based on constraint-based type inference of success typings (a notion closely related to principal typings). The inferred typings are fine-grained and the type system currently includes subtyping and subtype polymorphism but not parametric polymorphism. In particular, we describe and illustrate through examples a type inference algorithm tailored to Erlang's characteristics which is modular, reasonably fast, and appears to scale well in practice."
},
{
"paper_title": "Verifying fault-tolerant Erlang programs",
"paper_authors": [
"Clara Benac Earle",
"Lars-Åke Fredlund",
"John Derrick"
],
"paper_abstract": "In this paper we target the verification of fault tolerant aspects of distributed applications written in Erlang. Erlang is unusual in several respects. First, it is one of a few functional languages that is used in industry. Secondly the programming language contains support for concurrency and distribution as well as including constructs for handling fault-tolerance.Erlang programmers, of course, mostly work with ready-made language components. Our approach to verification of fault tolerance is to verify systems built using two central components of most Erlang software, a generic server component with fault tolerance handling, and a supervisor component that restarts failed processes.To verify Erlang programs built using these components we automatically translate them into processes of the μCRL process algebra, generate their state spaces, and use a model checker to determine whether they satisfy correctness properties specified in the μ-calculus.The key observation of this paper is that, due to the usage of these higher-level design patterns (supervisors and generic servers) that structure process communication and fault recovery, the state space generated from a Erlang program, even with failures occurring, is relatively small, and can be generated automatically. Moreover the method is independent from the actual Erlang program studied, and is thus reusable.We demonstrate the approach in a case study where a server, built using the generic server component, implements a locking service for a number of client processes, and show that the server tolerates client failures."
},
{
"paper_title": "A new leader election implementation",
"paper_authors": [
"Hans Svensson",
"Thomas Arts"
],
"paper_abstract": "In this article we introduce a new implementation of a leader election algorithm used in the generic leader behavior known as gen_leader.erl. The first open source release of the generic leader [6] contains a few errors. The new implementation is based on a different algorithm, which has been adopted to fulfill the existing requirements. The testing techniques used to identify the errors in the first implementation have also been used to check the implementation we propose here. We even extended the amount of testing and used an additional new testing technique to increase our confidence in the implementation of this very tricky algorithm. The new implementation passed all tests successfully. In this paper we describe the algorithm and we discuss the testing techniques used during the implementation."
},
{
"paper_title": "Atom garbage collection",
"paper_authors": [
"Thomas Lindgren"
],
"paper_abstract": "Atoms are a central data type in Erlang, yet there is currently no method for memory management of atoms in the most popular implementations. For long-lived systems, this leads to obscure bugs or defensive programming.In this paper, we discuss the relevant issues for atom garbage collection and propose some principles. We then devise an incremental copying collector with some optimizations.While a full implementation of the collector still remains to be done, an examination of pause times and space overheads, based on existing systems and realistic data, indicates that the algorithm is practical."
},
{
"paper_title": "Remote controlling devices using instant messaging: building an intelligent gateway in Erlang/OTP",
"paper_authors": [
"Simon Aurell"
],
"paper_abstract": "This paper shows how instant messaging technology can be used for remote controlling of devices, and outlines some of the issues involved, of which the most important is security. The concept of controlling and monitoring devices using instant messaging dialogue, presence and buddy list features is applied to a home automation context, and the idea and implementation of a prototype system is described. The paper describes how the excellent robustness and prototyping qualities of Erlang/OTP were exploited to quickly build a prototype system. It also shows how a gateway capable of speaking multiple device protocols can provide a single access point to different kinds of devices and services, and how the concept of agents can be used to add a layer of intelligence to the set of devices being controlled or monitored."
},
{
"paper_title": "A high performance Erlang Tcp/Ip stack",
"paper_authors": [
"Javier Paris",
"Victor Gulias",
"Alberto Valderruten"
],
"paper_abstract": "Functional languages are not often associated with the development of network stacks, mainly due to the lower performance and lack of support for system programming than more conventional languages such as C. However, there are functional languages that offer features which make it easier to develop network protocols than using a more conventional approach based on an imperative language. Erlang, for instance, offers support for distribution, concurrency and soft real time built-in into the language. All those features are desirable to ease the development of network protocol implementations. It is also possible to implement a reasonably efficient network stack in this language, provided some precautions are taken in the design. By using Erlang distribution it is possible to support fault tolerant distributed Tcp connections that take advantage of the distributed nature of the applications implemented in Erlang to provide low cost synchronization, with just some support from the application itself."
},
{
"paper_title": "ERESYE: artificial intelligence in Erlang programs",
"paper_authors": [
"Antonella Di Stefano",
"Francesca Gangemi",
"Corrado Santoro"
],
"paper_abstract": "This paper describes ERESYE, a tool for the realization of intelligent systems expert systems) using the Erlang language. ERESYE is a rule production system that allows rules to be written as Erlang function clauses, providing support for their execution. ERESYE is also able to support object-oriented concepts and ontologies thanks to a suitable ontology handling tool, providing means to translate object-based concepts into an Erlang form. The architecture of ERESYE and its basic working scheme are described in the paper. A comparison with CLIPS, one of the most known tools for expert system programming, is also made. The description of some examples of ERESYE usage are provided to show the effectiveness and the validity of the proposed solution, which opens new and interesting application scenario for Erlang."
},
{
"paper_title": "Modeling Erlang in the pi-calculus",
"paper_authors": [
"Thomas Noll",
"Chanchal Kumar Roy"
],
"paper_abstract": "This paper provides a contribution to the formal modeling and verification of programs written in the concurrent functional programming language Erlang, which is designed for telecommunication applications. It presents a mapping of Core Erlang programs into the π--calculus, a process algebra whose name--passing feature allows to represent the mobile aspects of Erlang software in a natural way."
},
{
"paper_title": "A semantics for distributed Erlang",
"paper_authors": [
"Koen Claessen",
"Hans Svensson"
],
"paper_abstract": "We propose an extension to Fredlund's formal semantics for Erlang that models the concept of nodes. The motivation is that there exist sequences of events that can occur in practice, but are impossible to describe using a single-node semantics, such as Fredlund's. The consequence is that some errors in distributed systems might not be detected by model checkers based on Fredlund's original semantics, or by other single-node verification techniques such as testing. Our extension is modest; it re-uses most of Fredlund's work but adds an extra layer at the top-level."
}
]
},
{
"proceeding_title": "ERLANG '04:Proceedings of the 2004 ACM SIGPLAN workshop on Erlang",
"proceeding_contents": [
{
"paper_title": "EX11: a GUI in a concurrent functional language",
"paper_authors": [
"Joe Armstrong"
],
"paper_abstract": "In this paper, I describe how GUIs can be made from collections of communicating parallel processes. The paper describes EX11 which is an Erlang binding to the X protocol. I describe the X windows programming model and show how X protocol messages can be naturally mapped onto Erlang messages. The code to perfom this mapping makes extensive use of the Erlang bit syntax and as such provides a good example of the use of the bit syntax to implement a reasonably complex protocol. I give code examples which make use of the EX11 widget library and show how the widget library itself is implemented."
},
{
"paper_title": "Monitoring and state transparency of distributed systems",
"paper_authors": [
"Martin J. Logan"
],
"paper_abstract": "This paper presents the System Status suite of applications. These applications are used to provide a simple, uniform, and low developer cost system for exporting and tracking the state of OTP applications and services over a distributed server farm network architecture. The terms, simple, and low developer cost, will be elaborated on later in the paper. The system is intended to provide no formalized management framework it is specifically a state/status export and monitoring infrastructure."
},
{
"paper_title": "Troubleshooting a large erlang system",
"paper_authors": [
"Mats Cronqvist"
],
"paper_abstract": "In this paper, we discuss some experiences from a large, industrial software project using a functional programming language. In particular, we will focus on programming errors.The software studied is the AXD 301 (a multi-service switch from Ericsson AB [1]) control system. It is implemented in a functional language Erlang [2 ]. We will discuss ho this affects programmer productivity.There are now well over 1,000 AXD 301's deployed. Even though a properly handled AXD 301 is quite reliable, there exists a great deal of knowledge about problems that do occur in production code. We will analyze what kinds of programming errors cause these problems, and suggest some methods for preventing and, when that fails, finding the errors. We will also describe some tools that has been specifically developed to aid in debugging.One (perceived) problem with using a interpreted, functional language is execution speed. In practice, we have found that the overhead of running in an emulator is not dramatic, and that it is often more than compensated for by the advantages. The expressiveness of the language and the absence of low-level bugs means that programmers have more time to spend on tuning the code. And since the emulator has good support for tracing, one can perform very advanced profiling, thus making the code intrinsically more effective. We will discuss a profiling tool developed for that purpose."
},
{
"paper_title": "Erlang's exception handling revisited",
"paper_authors": [
"Richard Carlsson",
"Björn Gustavsson",
"Patrik Nyblom"
],
"paper_abstract": "This paper describes the new exception handling in the ERLANG programming language, to be introduced in the forthcoming Release 10 of the Erlang/OTP system. We give a comprehensive description of the behaviour of exceptions in modern-day ERLANG, present a theoretical model of the semantics of exceptions, and use this to derive the new try-construct."
},
{
"paper_title": "An external short message entity for gambling services",
"paper_authors": [
"Enrique Marcote",
"Daniel I. Iglesia",
"Carlos J. Escudero"
],
"paper_abstract": "This paper introduces a new platform designed for mobile gambling services. The special characteristics of these services lead us to developed a multiservice platform that easily brought mobility to applications and services not designed for that.This system was designed to work over SMS, allowing automation of the tedious work of programming reliable SMS-based interfaces. The new system automatically generates user front-ends for a wide variety of services based on forms. As we will see in the paper, the interfaces are defined by means of the novel W3C XForms standard.The resulting platform was designed to be highly efficient, reliable and fault tolerant. Choosing Erlang/OTP as the development environment was a key factor to acquire these goals.As part of the project an open source Erlang SMPP implementation was developed, key features of this library, named OSERL, are also described in this paper."
},
{
"paper_title": "An implementation of the SMB protocol in erlang",
"paper_authors": [
"Torbjrn Trnkvist"
],
"paper_abstract": "This paper describes the implementation of a subset of the SMB protocol in Erlang. We discuss the motivation for this work and its outcome, and compare the performance and memory consumption of our implementation with Samba."
},
{
"paper_title": "HiPE on AMD64",
"paper_authors": [
"Daniel Luna",
"Mikael Pettersson",
"Konstantinos Sagonas"
],
"paper_abstract": "Erlang is a concurrent functional language designed for developing large-scale, distributed, fault-tolerant systems. The primary implementation of the language is the Erlang/OTP system from Ericsson. Even though Erlang/OTP is by default based on a virtual machine interpreter, it nowadays also includes the HiPE (High Performance Erlang) native code compiler as a fully integrated component. This paper describes the recently developed port of HiPE to the AMD64 architecture. We discuss technical issues that had to be addressed when developing the port, decisions we took and why, and report on the speedups (compared with BEAM) which HiPE/AMD64 achieves across a range of Erlang programs and how these compare with speedups for the more mature SPARC and x86 back-ends."
},
{
"paper_title": "Flow graphs for testing sequential erlang programs",
"paper_authors": [
"Manfred Widera"
],
"paper_abstract": "Testing of software components during development is a heavily used approach to detect programming errors and to evaluate the quality of software. Systematic approaches to software testing get a more and more increasing impact on software development processes. For imperative programs there are several approaches to measure the appropriateness of a set of test cases for a program part under testing. Some of them are source code directed and are given as coverage criteria on flow graphs.This paper gives a definition of flow graphs for Erlang programs and describes a tool for generating such flow graphs. It provides a first step towards the transfer of advanced source code directed testing methods to functional programming."
},
{
"paper_title": "Structured programming using processes",
"paper_authors": [
"Jay Nelson"
],
"paper_abstract": "Structured Programming techniques are applied to a personal accounting software application implemented in erlang as a demonstration of the utility of processes as design constructs. Techniques for enforcing strong encapsulation, partitioning for fault isolation and data flow instrumentation, reusing code, abstracting and adapting interfaces, simulating inputs, managing and distributing resources and creating complex application behavior are described. The concept of inductive decomposition is introduced as a method for modularizing code based on the dynamic behavior of the system over time, rather than the static structure of a program. This approach leads to code subsumption, behavior abstraction, automated testing, dynamic data versioning and dynamic code revision all of which contribute to more reliable, fault-tolerant software."
},
{
"paper_title": "On modelling agent systems with erlang",
"paper_authors": [
"Carlos Varela",
"Carlos Abalde",
"Laura Castro",
"Jose Gulías"
],
"paper_abstract": "Multi-agent systems are a kind of concurrent distributed systems. In this work, some guidelines on how to create multi-agent systems using Erlang are presented. The modelled system supports cooperation among agents by plan exchange, reconfiguration and has a certain fault-tolerance. The distributed and concurrent functional programming Erlang, together with OTP platform, allows the creation of high-availability and fault-tolerant concurrent and distributed systems, and it seems to be an interesting framework for implementing multi-agent systems."
}
]
},
{
"proceeding_title": "ERLANG '03:Proceedings of the 2003 ACM SIGPLAN workshop on Erlang",
"proceeding_contents": [
{
"paper_title": "Evaluating distributed functional languages for telecommunications software",
"paper_authors": [
"J. H. Nyström",
"P. W. Trinder",
"D. J. King"
],
"paper_abstract": "The distributed telecommunications sector not only requires minimal time to market, but also software that is reliable, available, maintainable and scalable. High level programming languages have the potential to reduce development time and improve maintainability due to their compact code size. Moreover reliability is improved by safe type systems and relatively easy verification.This paper outlines plans and initial results from a joint project between Motorola and Heriot-Watt University that aims to evaluate the suitability of distributed functional languages for constructing telecommunications software. The evaluation will use the ERLANG and Glasgow distributed Haskell(GdH) languages, and be based on the construction of several typical applications. The evaluation will focus on reliability issues like ease of verification, availability issues like fault-tolerance or resilience, as well as whether the languages deliver the required functionalities, like real-time capabilities. The impact of specific languages techniques will also be assessed, including type system, strictness, validation and distributed coordination. The ERLANG and GdH implementations of the applications will be compared with existing C++/CORBA and Java/JINI implementations.The first application, a Dispatch Call Controller(DCC), has been constructed in ERLANG and measured on a Beowulf cluster. We find that the DCC scales, achieving a relative speedup of 14.5 on 16 processors. The DCC is resilient, achieving 105% throughput at 200% load and 56% throughput at 9000% load on 16 processors. The DCC is fault-tolerant, remaining available despite any one process or processor failure. The DCC has dynamic adaptability, remaining available as processors are added or removed."
},
{
"paper_title": "Automated test generation for industrial Erlang applications",
"paper_authors": [
"Johan Blom",
"Bengt Jonsson"
],
"paper_abstract": "We present an implemented technique for generating test cases from state machine specifications. The work is motivated by a need for testing of protocols and services developed by the company Mobile Arts. We have developed a syntax for description of state machines extended with data variables. From such state machines, test cases are generated by symbolic execution. The test cases are symbolically represented; concrete test cases are generated by instantiation of data parameters."
},
{
"paper_title": "Extending the VoDKA architecture to improve resource modelling",
"paper_authors": [
"Juan José Sánchez Penas",
"Carlos Abalde Ramiro"
],
"paper_abstract": "VoDKA is a Video-on-Demand server developed using Erlang/OTP. In this paper, the evolution of the core architecture of the system, designed for improving resource modelling, is described. After explaining the main goals of the project, the steps taken towards an optimal architecture are explained. Finally, a new architecture is proposed, solving all the problems and limitations in the previous ones. Special attention is paid to the use of design patterns, implementation behaviours, and reusable software components."
},
{
"paper_title": "ARMISTICE: an experience developing management software with Erlang",
"paper_authors": [
"David Cabrero",
"Carlos Abalde",
"Carlos Varela",
"Laura Castro"
],
"paper_abstract": "In this paper, some experiences of using the concurrent functional language Erlang to implement a classical vertical application, a risk management information system, are presented. Due to the complex nature of the business logic and the interactions involved in the client/server architecture deployed, traditional development techniques are unsatisfactory. First, the nature of the problem suggests an iterative design approach. The use of abstractions (functional patterns) and compositionality (both functional and concurrent composition) have been key factors to reduce the amount of time spent adapting the system to changes in requirements. Despite our initial concerns, the gap between classical software engineering and the functional programming paradigm has been successfully fullfiled."
},
{
"paper_title": "Parameterized modules in Erlang",
"paper_authors": [
"Richard Carlsson"
],
"paper_abstract": "This paper describes how the Erlang programming language could be extended with parameterized modules, in a way that is compatible with existing code. This provides a powerful way of creating callbacks, that avoids the limitations involved with function closures, and extends current programming practices in a systematic way that also eliminates a common source of errors. The usage of parameterized modules is similar to Object-Oriented programming, and is naturally complemented by the currently underused feature of behaviours (interface declarations), which are also explained in detail."
},
{
"paper_title": "All you wanted to know about the HiPE compiler: (but might have been afraid to ask)",
"paper_authors": [
"K. Sagonas",
"M. Pettersson",
"R. Carlsson",
"P. Gustafsson",
"T. Lindahl"
],
"paper_abstract": "We present a user-oriented description of features and characteristics of the High Performance ERLANG (HiPE) native code compiler, which nowadays is part of Erlang/OTP. In particular, we describe components and recent additions to the compiler that improve its performance and extend its functionality. In addition, we attempt to give some recommendations on how users can get the best out of HiPE's performance."
},
{
"paper_title": "A study of Erlang ETS table implementations and performance",
"paper_authors": [
"Scott Lystig Fritchie"
],
"paper_abstract": "The viability of implementing an in-memory database, Erlang ETS, using a relatively-new data structure, called a Judy array, was studied by comparing the performance of ETS tables based on four data structures: AVL balanced binary trees, B-trees, resizable linear hash tables, and Judy arrays. The benchmarks used workloads of sequentially- and randomly-ordered keys at table populations from 700 keys to 54 million keys.Benchmark results show that ETS table insertion, lookup, and update operations on Judy-based tables are significantly faster than all other table types for tables that exceed CPU data cache size (70,000 keys or more). The relative speed of Judy-based tables improves as table populations grow to 54 million keys and memory usage approaches 3GB. Term deletion and table traversal operations by Judy-based tables are slower than the linear hash table-based type, but the additional cost of the deletion operation is smaller than the combined savings of the other operations.Resizing a hash table to 232 buckets, managed by a Judy array, creates the most consistent performance improvements and uses only about 6% more memory than a regular hash table. Other applications could benefit substantially by this application of Judy arrays."
},
{
"paper_title": "A soft-typing system for Erlang",
"paper_authors": [
"Sven-Olof Nyström"
],
"paper_abstract": "This paper presents a soft-typing system for the programming language Erlang. The system is based on two concepts; a (forward) data flow analysis that determines upper approximations of the possible values of expressions and other constructs, and a specification language that allows the programmer to specify the interface of a module. We examine the programming language Erlang and point to various aspects of the language that make it hard to type. We present experimental result of applying the soft-typing system to some previously written programs."
}
]
},
{
"proceeding_title": "ERLANG '02:Proceedings of the 2002 ACM SIGPLAN workshop on Erlang",
"proceeding_contents": [
{
"paper_title": "Hierarchical module namespaces in Erlang",
"paper_authors": [
"Richard Carlsson"
],
"paper_abstract": "This paper describes how the Erlang language has been extended with a hierarchical namespace for modules, generally known as \"packages\", similar to that used for classes in Java."
},
{
"paper_title": "Native code compilation of Erlang's bit syntax",
"paper_authors": [
"Per Gustafsson",
"Konstantinos Sagonas"
],
"paper_abstract": "Erlang's bit syntax caters for flexible pattern matching on bit streams (objects known as binaries). Binaries are nowadays heavily used in typical Erlang applications such as protocol programming, which in turn has created a need for efficient support of the basic operations on binaries.To this effect, we describe a scheme for efficient native code compilation of Erlang's bit syntax. The scheme relies on partial translation for avoiding code explosion, and improves the performance of programs manipulating binaries by translating frequently occurring instances of BEAM instructions into native code via an intermediate translation to instructions of a register transfer language. Our performance evaluation shows that in a HiPE-enabled Erlang/OTP system, the obtained speedups are often significant."
},
{
"paper_title": "Trace analysis of Erlang programs",
"paper_authors": [
"Thomas Arts",
"Lars-Åke Fredlund"
],
"paper_abstract": "The paper reports on an experiment to provide the Erlang programming language with a tool package for convenient trace generation, collection and to support analysis of traces using a set of techniques. Due to the frequent use of state-based software design patterns in Erlang programming we can in many cases recover not only the events from a trace log, but also the program states causing these events. This makes it possible to obtain program models from execution traces. In our work we make use of these program models for program visualization and model checking."
},
{
"paper_title": "World-class product certification using Erlang",
"paper_authors": [
"Ulf Wiger",
"Gösta Ask",
"Kent Boortz"
],
"paper_abstract": "It is now ten years ago since the decision was made to apply the functional programming language Erlang to real production projects at Ericsson. In late 1995, development on the Open Telecom Platform (OTP) started, and in mid 1996 the AXD 301 project became the first user of OTP. The AXD 301 Multi-service Switch was released in October 1998, and later became \"the heart of ENGINE\", Ericsson's leading Voice over Packet solution.In those early days of Erlang programming, high-level tools for development and testing were not really available, and programmers used mainly the Emacs editor and the Erlang shell. Still, anecdotal evidence suggested a 4-10x productivity increase compared to mainstream programming techniques.Through the years, significant progress has been made, especially in the area of automated testing of Erlang programs. The OTP team designed an Erlang-based test suite execution environment, were developers can easily write their own automated test suites, and now performs nightly builds where more than one thousand test cases are executed on ten different platforms. OTP designers can view the outcome in web-based test reports as they come to work the next day. Each corrected bug results in a new test case that is incorporated into the ever-growing test suite. Thus, this world-class middleware is certified to telecom-class quality without a dedicated test team!The AXD 301 project uses OTP's test environment, and executes more than 10,000 automated test cases before each major release. Designers and testers compose their own test suites, and the designers carry out function tests with little or no help from the Integration and Certification team. Each test case can be run both in a simulated environment on the designer's workstation and in the test lab on real hardware. In order to provide stimuli to the system, the testers often design their own traffic generators in Erlang.To analyze the faults that occur, Erlang offers an increasing wealth of debugging options. Beyond the symbolic error messages, which are often sufficient to locate the fault, Erlang developers are able to dynamically turn on tracing on message passing, scheduling events, garbage collections, selected function calls, etc.This paper demonstrates how Erlang's declarative syntax and pattern matching provide an outstanding environment for test suite development."
},
{
"paper_title": "The evolution of Erlang drivers and the Erlang driver toolkit",
"paper_authors": [
"Scott Lystig Fritchie"
],
"paper_abstract": "Erlang is gaining a reputation as a good language for rapid prototyping, but one area where its reputation is weaker than those of traditional scripting languages is extensibility. Erlang is actually fairly easy to extend, but the learning curve is steep. To reduce the time necessary to create Erlang extensions, called \"drivers,\" for existing code libraries written in C, the Erlang Driver Toolkit (EDTK) was developed. Its code generator can produce all or nearly all of the Erlang and C code required to implement both major types of Erlang drivers. Although it is still under active development, EDTK has already proven to be a time- and effort-saving tool for creating robust, full-featured driver extensions for three well-known Open Source C libraries."
},
{
"paper_title": "OTP in server farms",
"paper_authors": [
"Michael Bruening",
"Hal Snyder",
"Martin Logan"
],
"paper_abstract": "Ericsson's OTP (Open Telecom Platform) offers a number of attractive features if you want to provide a variety of information services on a network with high availability, scalability, and extensibility. However, the major uses of OTP have been in closed, relay-rack systems, rather than in clusters of general-purpose servers. We describe issues we encountered while bringing up applications on OTP in the latter environment at a computer telephony company."
},
{
"paper_title": "Global scheduler properties derived from local restrictions",
"paper_authors": [
"Thomas Arts",
"Juan José Sánchez Penas"
],
"paper_abstract": "The VoDka server is a video-on-demand system for a Spanish cable company. We look at the distributed scheduler of this system. This scheduler enables that whenever a user agent is asking for a certain movie, this request is transferred through the system and a set of possible play-back qualities is returned to the agent. In case of a non-empty set, the agent selects one and the movie is streamed to the user.The storage subsystem of the server is composed by a hierarchy of different storage systems, i.e. disks, CD players or tapes. These devices all have restrictions of which the process controlling the device is aware of. A second layer of processes controls a set of devices in one machine and has restrictions, for example, the bandwidth of its connection. A third layer may be further out in the network and serve as a cache to store more popular movies.Every process in the scheduler of the system has a function determining local restrictions, given the configuration and present state of the system. We have built a tool to construct complete models of several configurations. With techniques from the area of formal methods (in particular model checking) these models are used to determine global properties of the system, such as the maximum number of a certain class of movies that can be served in parallel."
},
{
"paper_title": "On reducing interprocess communication overhead in concurrent programs",
"paper_authors": [
"Erik Stenman",
"Konstantinos Sagonas"
],
"paper_abstract": "We present several different ideas for increasing the performance of highly concurrent programs in general and Erlang programs in particular. These ideas range from simple implementation tricks that reduce communication latency to more thorough code rewrites guided by inlining across process boundaries. We also briefly discuss the impact of different heap architectures on interprocess communication in general and on our proposed optimizations in particular."
},
{
"paper_title": "Getting Erlang to talk to the outside world",
"paper_authors": [
"Joe Armstrong"
],
"paper_abstract": "How should Erlang talk to the outside world? --- this question becomes interesting if we want to build distributed applications where Erlang is one of a number of communicating components.We assume these components interact by exchanging messages --- at this level of abstraction, details of programming language, operating system and host architecture are irrelevant. What is important is the ease with which we can construct such systems, and the precision with which we can isolate faulty components in such a system. Also of importance is the efficiency (both in terms of CPU and bandwidth requirements) with which we can send and receive messages in the system.One widely adopted solution to this problem involves the XML family of standards (XML, XML-schemas, SOAP and WSDL) --- we argue that this is inefficient and overly complex and propose basing our system on a simpler binary scheme called UBF (Universal Binary Format). The UBF scheme has the expressive power of the XML set of standards --- but is considerably simpler.UBF has been prototyped in Erlang --- the entire scheme (equivalent in semantic power to the XML series of standards) was implemented in a mere 1100 lines of Erlang. UBF encoding of terms is also shown to be more space efficient than the existing \"Erlang term format\". For example, UBF encoded parse trees of Erlang programs are on average about 60% of the size of the equivalent ETS format encoding which is used in the open source Erlang distribution."
}
]
}
]
},
{
"conference_title": "Functional Art, Music, Modeling & Design",
"conference_contents": [
{
"proceeding_title": "FARM '14:Proceedings of the 2nd ACM SIGPLAN international workshop on Functional art, music, modeling & design",
"proceeding_contents": [
{
"paper_title": "LiveCodeLab 2.0 and its language LiveCodeLang",
"paper_authors": [
"Davide Della Casa",
"Guy John"
],
"paper_abstract": "We present LiveCodeLab 2.0, a web-based livecoding framework, and its language LiveCodeLang. We describe its operation, its connection with other livecoding frameworks, and its aspects related to functional programming."
},
{
"paper_title": "[Demo abstract] Scripthica: a web environment for collective algorithmic composition",
"paper_authors": [
"Gabriel Alejandro Sanchez Fernandez"
],
"paper_abstract": "This paper presents the initial design and development work done on a new open source computer-aided music composition web environment for collective algorithmic music composition. Scripthica is envisioned as a web environment where users can compose and share algorithmic music compositions created with the JavaScript and Scheme programming languages."
},
{
"paper_title": "Functional generation of harmony and melody",
"paper_authors": [
"José Pedro Magalhães",
"Hendrik Vincent Koops"
],
"paper_abstract": "We present FComp, a system for automatic generation of harmony and accompanying melody. Building on previous work on functional modelling of musical harmony, FComp first creates a foundational harmony by generating random (but user-guided) values of a datatype that encodes the rules of tonal harmony. Then, a melody that fits to the harmony is generated in a compositional sequence: generate all \"possible\" melodies, filter them to remove obvious bad choices, pick one candidate note per chord, and then embellish the resulting melodic line. At this very early stage, we aim to define a solid system as a foundation that can be used to further improve upon. We care especially about modularity, so that each individual part of the pipeline can be easily improved, and ease of adaptation, so that users can quickly adapt the generated music to their liking. The resulting system generates simple but harmonious music, and serves as a good case study on how functional programming enables quick and clean prototyping of new ideas, even in the realm of automatic music composition."
},
{
"paper_title": "[Demo abstract] using Haskell as DSL for controlling immersive media experiences: Ludic support and deep immersion in nordic technology-supported LARP",
"paper_authors": [
"Mikael Vejdemo-Johansson",
"Henrik Bäärnhielm",
"Daniel Sundström"
],
"paper_abstract": "For the technology supported Nordic LARP The Monitor Celestra in March 2013, we built a sound system to support deep immersion into the game world using a Haskell-embedded Domain Specific Language to specify soundscapes. The soundscapes helped simulating an operating spaceship using a WW2-era destroyer as the stage. The system consisted of 14 loudspeaker stations built on Raspberry Pi, a central game server and local clients running on laptops or desktop computers owned by game directors and game masters. In the demo, we describe the design choices and requirements we faced while building the system, how Haskell and Hackage libraries guided and supported the project, describe and show how game designers and game directors used Haskell to influence and steer the game as well as showcase a smaller installation of the complete system."
},
{
"paper_title": "Exploring melody space in a live context using declarative functional programming",
"paper_authors": [
"Thomas Greve Kristensen"
],
"paper_abstract": "This paper introduces Composer, a system offering composition capabilities for live performance, requiring no prior experience with composition and programming. Current research in computer assisted composition is focused on offline composition. A composer is seen as a person that composes pieces of music which are then performed at a later date, either by the composer or an artist. There has been work done in computer assisted live performance, but the focus in that field has mainly been on the live generation of synthesizers and novel, virtual instruments and musical interfaces. Unlike existing systems, Composer is intended to be used in a live context for the composition of novel melodies. The system makes no assumptions about the user's existing experience as a composer or a programmer. Instead of giving the user unbounded freedom, the system only allows the user to manipulate key properties of the desired melodies. The constraints the user can put on the melodies are the scale or mode in which the melody is set; the tonic note of the scale or mode; the cadence of the melody; the tempo of the melody; and the relative gap-size between notes in the melody. These rules are modelled using a declarative programming model that also supports automatic enumeration of the space of valid melodies. As complete enumeration of this search space is infeasible in a live context, experiments have been performed and their results are presented, to limit the size of the enumerated space while still yielding sufficient variation in the composed pieces. Furthermore, the general system design is presented and it is discussed how choices concerning the inter-communication between components in the system helps the system to be responsive and usable in a live composition context."
},
{
"paper_title": "[Demo abstract] Music suite: a family of musical representations",
"paper_authors": [
"Hans Hoglund"
],
"paper_abstract": "The Music Suite is a collection of Haskell libraries for composition, analysis and manipulation of music. It aims to be open and comprehensive rather than complete. Representations can be combined in many ways to form a suitable representation for almost any kind of music. In a sense, the Music Suite describes a family of domain-specific languages. Common Music Notation (CMN) is included as a special case. The Music Suite differs from many other music representation systems in that it allows continuous representation of pitch, dynamics and other musical aspects, in addition to the common discrete representations."
},
{
"paper_title": "[Demo abstract] Sound and soundness: practical total functional data-flow programming",
"paper_authors": [
"Baltasar Trancóny Widemann",
"Markus Lepper"
],
"paper_abstract": "The field of declarative data-stream programming (discrete time, clocked synchronous, compositional, data-centric) is divided between the visual data-flow graph paradigm favored by domain experts, the functional reactive paradigm favored by academics, and the synchronous paradigm favored by developers of low-level systems. Each approach has its particular theoretical and practical merits and target audience. The programming language Sig has been designed to unify the underlying paradigms in a novel way. The natural expressivity of visual approaches is combined with the support for concise pattern-based symbolic computation of functional programming, and the rigorous, elementary semantical foundation of synchronous approaches. Here we demonstrate the current state of implementation of the Sig system by means of example programs that realize typical components of digital sound synthesis."
},
{
"paper_title": "Temporal semantics for a live coding language",
"paper_authors": [
"Samuel Aaron",
"Dominic Orchard",
"Alan F. Blackwell"
],
"paper_abstract": "Sonic Pi is a music live coding language that has been designed for educational use as a first programming language. However, it is not straightforward to achieve the necessary simplicity of a first language in a music live coding setting, for reasons largely related to the manipulation of time. The original version of Sonic Pi used a `sleep' function for managing time, blocking computation for a specified time period. However, while this approach was conceptually simple, it resulted in badly timed music, especially when multiple musical threads were executing concurrently. This paper describes an alternative programming approach for timing (implemented in Sonic Pi v2.0) which maintains syntactic compatibility with v1.0, yet provides accurate timing via interaction between real time and a \"virtual time''. We provide a formal specification of the temporal behaviour of Sonic Pi, motivated in relation to other recent approaches to the semantics of time in live coding and general computation. We then define a monadic model of the Sonic Pi temporal semantics which is sound with respect to this specification, using Haskell as a metalanguage."
},
{
"paper_title": "Tiled polymorphic temporal media",
"paper_authors": [
"Paul Hudak",
"David Janin"
],
"paper_abstract": "Tiled Polymorphic Temporal Media (tiled PTM) is an algebraic approach to specifying the composition of multimedia values having an inherent temporal quality -- for example sound clips, musical scores, computer animations, and video clips. Mathematically, one can think of a tiled PTM as a tiling in the one dimension of time. A tiled PTM value has two synchronization marks that specify, via an effective notion of tiled product, how the tiled PTM values are positioned in time relative to one another, possibly with overlaps. Together with a pseudo inverse operation, and the related reset and co-reset projection operators, the tiled product is shown to encompass both sequential and parallel products over temporal media. Up to observational equivalence, the resulting algebra of tiled PTM is shown to be an inverse monoid: the pseudo inverse being a semigroup inverse. These and other algebraic properties are explored in detail. In addition, recursively-defined infinite tiles are considered. Ultimately, in order for a tiled PTM to be renderable, we must know its beginning, and how to compute its evolving value over time. Though undecidable in the general case, we define decidable special cases that still permit infinite tilings. Finally, we describe an elegant specification, implementation, and proof of key properties in Haskell, whose lazy evaluation is crucial for assuring the soundness of recursive tiles. Illustrative examples, within the Euterpea framework for musical temporal media, are provided throughout."
},
{
"paper_title": "[Demo abstract] LittleBits synth kit as a physically-embodied, domain specific functional programming language",
"paper_authors": [
"James Noble",
"Timothy Jones"
],
"paper_abstract": "littleBits (littleBits.cc) is an open-source hardware library of pre assembled analogue components that can be easily assembled into circuits, disassembled, reassembled, and re-used. In this demo, we consider littleBits --- and the littleBits synth kit in particular ---as a physically-embodied domain specific functional programming language, and how littleBits circuits can be considered as monadic programs."
},
{
"paper_title": "Making programming languages to dance to: live coding with tidal",
"paper_authors": [
"Alex McLean"
],
"paper_abstract": "Live coding of music has grown into a vibrant international community of research and practice over the past decade, providing a new research domain where computer science blends with the performing arts. In this paper the domain of live coding is described, with focus on the programming language design challenges involved, and the ways in which a functional approach can meet those challenges. This leads to the introduction of Tidal 0.4, a Domain Specific Language embedded in Haskell. This is a substantial restructuring of Tidal, which now represents musical pattern as functions from time to events, inspired by Functional Reactive Programming."
},
{
"paper_title": "[Demo abstract] Patterning: repetitive and recursive pattern generation using clojure and quil",
"paper_authors": [
"Phil Jones"
],
"paper_abstract": "We describe work-in-progress on \"Patterning\", a Clojure library designed to work with Quil, that generates repetitive visual patterns inspired by traditional textile and wallpaper designs. Patterning uses function composition, recursion and lazily evaluated lists to allow short, elegant descriptions of complex, recursive and repetitive patterns. It is being developed to support a number of the author's ongoing art projects."
}
]
},
{
"proceeding_title": "FARM '13:Proceedings of the first ACM SIGPLAN workshop on Functional art, music, modeling & design",
"proceeding_contents": [
{
"paper_title": "Reduction as a transition controller for sound synthesis events",
"paper_authors": [
"Jean Bresson",
"Raphaël Foulon",
"Marco Stroppa"
],
"paper_abstract": "We present an application of reduction and higher-order functions in a recent computer-aided composition project. Our objective is the generation of control data for the Chant sound synthesizer using OpenMusic (OM), a domain-specific visual programming environment based on Common Lisp. The system we present allows to compose sounds by combining synthesis events in sequences. After the definition of the compositional primitives determining these events, we handle their sequencing, transitions and possible overlapping/fusion using a special fold operator. The artistic context of this project is the production of the opera Re Orso, premiered in 2012 at the Opera Comique, Paris."
},
{
"paper_title": "Programming mixed music in ReactiveML",
"paper_authors": [
"Guillaume Baudart",
"Louis Mandel",
"Marc Pouzet"
],
"paper_abstract": "Mixed music is about live musicians interacting with electronic parts which are controlled by a computer during the performance. It allows composers to use and combine traditional instruments with complex synthesized sounds and other electronic devices. There are several languages dedicated to the writing of mixed music scores. Among them, the Antescofo language coupled with an advanced score follower allows a composer to manage the reactive aspects of musical performances: how electronic parts interact with a musician. However these domain specific languages do not offer the expressiveness of functional programming. We embed the Antescofo language in a reactive functional programming language, ReactiveML. This approach offers to the composer recursion, higher order, inductive types, as well as a simple way to program complex reactive behaviors thanks to the synchronous model of concurrency on which ReactiveML is built. This article presents how to program mixed music in ReactiveML through several examples."
},
{
"paper_title": "The T-calculus: towards a structured programing of (musical) time and space",
"paper_authors": [
"David Janin",
"Florent Berthaut",
"Myriam Desainte-Catherine",
"Yann Orlarey",
"Sylvain Salvati"
],
"paper_abstract": "In the field of music system programming, the T-calculus is a proposal for combining space modeling and time programming into a single programming feature: spatiotemporal tiled programming. Based on a solid algebraic model, it aims at decomposing every operation on musical objects into the sequence of a synchronization operation that describes how objects are positioned one with respect the other, and a fusion operation that describes how their values are then combined. A first simple version of such a tiled calculus is presented and studied in this paper."
},
{
"paper_title": "From sonic Pi to overtone: creative musical experiences with domain-specific and functional languages",
"paper_authors": [
"Samuel Aaron",
"Alan F. Blackwell"
],
"paper_abstract": "Domain Specific and Functional languages provide an excellent linguistic context for exploring new forms of music notation -- not just for formalising compositions but also for live interaction workflows. This experience report describes two novel live coding systems that employ code execution to modify live sounds and music. The first of these systems, Sonic Pi, aims at teaching core computing notions to school students using live-coded music as a means of stimulating and maintaining student engagement. We describe how an emphasis on a functional style improves the ease in which core computer science concepts can be communicated to students. Secondly we describe Overtone, a functional language and live coding environment aimed towards mprofessional electronic musicians. We describe how Overtone's abstractions and architecture strongly benefit from a functional-oriented implementation. Both Sonic Pi and Overtone are freely available open-source platforms."
},
{
"paper_title": "A functional approach to automatic melody harmonisation",
"paper_authors": [
"Hendrik Vincent Koops",
"José Pedro Magalhães",
"W. Bas de Haas"
],
"paper_abstract": "Melody harmonisation is a centuries-old problem of long tradition, and a core aspect of composition in Western tonal music. In this work we describe FHarm, an automated system for melody harmonisation based on a functional model of harmony. Our system first generates multiple harmonically well-formed chord sequences for a given melody. From the generated sequences, the best one is chosen, by picking the one with the smallest deviation from the harmony model. Unlike all existing systems, FHarm guarantees that the generated chord sequences follow the basic rules of tonal harmony. We carry out two experiments to evaluate the quality of our harmonisations. In one experiment, a panel of harmony experts is asked to give its professional opinion and rate the generated chord sequences for selected melodies. In another experiment, we generate a chord sequence for a selected melody, and compare the result to the original harmonisation given by a harmony scholar. Our experiments confirm that FHarm generates realistic chords for each melody note. However, we also conclude that harmonising a melody with individually well-formed chord sequences from a harmony model does not guarantee a well-sounding coherence between the chords and the melody. We reflect on the experience gained with our experiment, and propose future improvements to refine the quality of the harmonisation."
},
{
"paper_title": "Grammar-based automated music composition in Haskell",
"paper_authors": [
"Donya Quick",
"Paul Hudak"
],
"paper_abstract": "Few algorithms for automated music composition are able to address the combination of harmonic structure, metrical structure, and repetition in a generalized way. Markov chains and neural nets struggle to address repetition of a musical phrase, and generative grammars generally do not handle temporal aspects of music in a way that retains a coherent metrical structure (nor do they handle repetition). To address these limitations, we present a new class of generative grammars called Probabilistic Temporal Graph Grammars, or PTGG's, that handle all of these features in music while allowing an elegant and concise implementation in Haskell. Being probabilistic allows one to express desired outcomes in a probabilistic manner; being temporal allows one to express metrical structure; and being a graph grammar allows one to express repetition of phrases through the sharing of nodes in the graph. A key aspect of our approach that enables handling of harmonic and metrical structure in addition to repetition is the use of rules that are parameterized by duration, and thus are actually functions. As part of our implementation, we also make use of a music-theoretic concept called chord spaces."
},
{
"paper_title": "Visualizing the turing tarpit",
"paper_authors": [
"Jason Hemann",
"Eric Holk"
],
"paper_abstract": "Minimal programming languages like Jot generate limited interest outside of the community of languages enthusiasts. This is unfortunate, because the simplicity of these languages endows them with an inherent beauty and provides deep insight into the nature of computation. We present a way of visualizing the behavior of many Jot programs at once, providing interesting images and also hinting at somewhat non-obvious relationships between programs. In the same way that fractals research has yielded new mathematical insights, research into visualization such as that presented here could produce new perspectives on the structure and nature of computation. A gallery containing the visualizations presented herein can be found at http://tarpit.github.io/TarpitGazer."
}
]
}
]
},
{
"conference_title": "Functional and Declarative Progamming in Education",
"conference_contents": [
{
"proceeding_title": "FDPE '08:Proceedings of the 2008 international workshop on Functional and declarative programming in education",
"proceeding_contents": [
{
"paper_title": "Htdp and dmda in the battlefield: a case study in first-year programming instruction",
"paper_authors": [
"Annette Bieniusa",
"Markus Degen",
"Phillip Heidegger",
"Peter Thiemann",
"Stefan Wehr",
"Martin Gasbichler",
"Michael Sperber",
"Marcus Crestani",
"Herbert Klaeren",
"Eric Knauel"
],
"paper_abstract": "Teaching the introductory course on programming is hard, even with well-proven didactic methods and material. This is a report on the first-year programming course taught at Tübingen and Freiburg universities. The course builds on the well-developed systematic approaches using functional programming, pioneered by the PLT group. In recent years, we have introduced novel approaches to the teaching process itself. In particular, assisted programming sessions gave the students a solid basis for developing their programming skills. In this paper we trace the development of our approach. Furthermore, we have collected information on how well our course had worked, and how the results together with our experience gained over years have lead to substantial, measurable improvements."
},
{
"paper_title": "The chilling descent: making the transition to a conventional curriculum",
"paper_authors": [
"Prabhakar Ragde"
],
"paper_abstract": "The transitional course following an introduction to computer science using functional programming must prepare students to handle a traditional, imperative-based curriculum while ensuring that the lessons of the introductory course are not lost. This paper describes the design of a second course using both Scheme and C, and examines the rationales behind the major design decisions."
},
{
"paper_title": "Functional programming and theorem proving for undergraduates: a progress report",
"paper_authors": [
"Rex Page",
"Carl Eastlund",
"Matthias Felleisen"
],
"paper_abstract": "For the past five years, the University of Oklahoma has used the ACL2 theorem prover for a year-long sequence on software engineering. The goal of the course is to introduce students to functional programming with \"Applicative Common Lisp\" (ACL) and to expose them to defect recognition at all levels, including unit testing, randomized testing of conjectures, and formal theorem proving in \"a Computational Logic\" (ACL2). Following Page's example, Northeastern University has experimented with the introduction of ACL2 into the freshman curriculum for the past two years. Northeastern's goal is to supplement an introductory course on functional program design with a course on logic and theorem proving that integrates the topic with programming projects. This paper reports on our joint project's progress. On the technical side, the paper presents the Scheme-based integrated development environment, its run-time environment for functional GUI programming, and its support for different forms of testing. On the experience side, the paper summarizes the introduction of these tools into the courses, the reaction of industrial observers of Oklahoma's software engineering course, and the feedback from a first outreach workshop."
},
{
"paper_title": "SASyLF: an educational proof assistant for language theory",
"paper_authors": [
"Jonathan Aldrich",
"Robert J. Simmons",
"Key Shin"
],
"paper_abstract": "Teaching and learning formal programming language theory is hard, in part because it's easy to make mistakes and hard to find them. Proof assistants can help check proofs, but their learning curve is too steep to use in most classes, and is a barrier to researchers too. In this paper we present SASyLF, an LF-based proof assistant specialized to checking theorems about programming languages and logics. SASyLF has a simple design philosophy: language and logic syntax, semantics, and meta-theory should be written as closely as possible to the way it is done on paper. We describe how we designed the SASyLF syntax to be accessible to students learning type theory, and how students can understand its semantics directly in terms of the theory they are taught in class. SASyLF can express proofs typical of an introductory graduate type theory course. SASyLF proofs are generally very explicit, but its built-in support for variable binding provides substitution properties for free and avoids awkward variable encodings. We describe preliminary experience teaching with SASyLF."
},
{
"paper_title": "Experimenting with formal languages using forlan",
"paper_authors": [
"Alley Stoughton"
],
"paper_abstract": "We give an introduction to the Forlan formal language theory toolset, which was designed to facilitate sophisticated experimentation with formal languages. Forlan is embedded in the functional programming language Standard ML, a language whose notation and concepts are similar to those of mathematics. It is strongly typed and interactive, properties that help make experimentation robust, simple and enjoyable. We give an extended example of the kind of experimentation that Forlan makes possible. It involves the use of closure properties/algorithms for regular languages/finite automata and a \"difference\" function on strings of zeros and ones."
},
{
"paper_title": "A robot in every classroom: robots and functional programming across the curriculum",
"paper_authors": [
"David Wakeling"
],
"paper_abstract": "It has been suggested that the fall in the number of young people wishing to study computer science might be arrested by repackaging the current material into new modules which set it in a context that appeals and motivates. In this paper, we try this idea out by repackaging some introductory material into a \"Robotics\" module using a functional programming language. The advantages of our module are that its problem-based learning bridges the gap between the classroom and the laboratory, and that it allows everyone to concentrate on \"computer science\" rather than \"machine\" and \"language\" details. The disadvantages of our module are that its skills are not obviously those expected elsewhere, and that it has high setup and support costs."
},
{
"paper_title": "Teaching functional programming with soccer-fun",
"paper_authors": [
"Peter Achten"
],
"paper_abstract": "In this paper we report on our experience with the functional framework Soccer-Fun, which is a domain specific language for simulating football. It has been developed for an introductory course in functional programming at the Radboud University Nijmegen, The Netherlands. We have used Soccer-Fun in teaching during the past four years. We have also experience in using Soccer-Fun for pupils in secondary education. Soccer-Fun is stimulating because it is about a well known problem domain. It engages students to problem solving with functional programming because it allows them to compete at several disciplines: the best performing football team can become champion of a tournament; the best written code can be awarded with a prize; students can be judged on the algorithms used. This enables every student to participate and perform at her favorite skill. Soccer-Fun is implemented in Clean and uses its GUI toolkit Object I/O for rendering. It can be implemented in any functional programming language that supports some kind of windowing toolkit."
},
{
"paper_title": "Declarative language extensions for prolog courses",
"paper_authors": [
"Ulrich Neumerkel",
"Markus Triska",
"Jan Wielemaker"
],
"paper_abstract": "In this paper we present several extensions to support a more declarative view of programming in Prolog. These extensions enable introductory Prolog courses to concentrate on the pure parts of Prolog for longer periods than without. Even quite complex programs can now be written free of any reference to the more problematic constructs. Our extensions include an alternate way to handle the occurs-check, efficient side-effect free I/O with DCGs, and a uniform approach to integer arithmetic that overcomes the disadvantages of arithmetical evaluation and finite domain constraints, but combines and amplifies their strengths. All extensions have been included recently into the SWI-Prolog distribution."
},
{
"paper_title": "Tips on teaching types and functions",
"paper_authors": [
"Fritz Ruehr"
],
"paper_abstract": "Many beginning students of functional programming have difficulty understanding higher-order functions and their types. Experienced functional programmers have such a close familiarity and intuitive grasp of these crucial concepts that they may find it hard to \"bridge the gap,\" so as to provide their students with a firm understanding of these ideas. I describe a loosely-related cluster of tips and techniques which address the pedagogy of higher-order functions and types, for students with varying degrees of mathematical background and different learning styles - these techniques include tabular presentations, code tools, visual metaphors and an abstract algebra. Although the underlying ideas will be familiar to experts, I believe these presentations can help educators bring important ideas in functional programming to a broader range of students, with less pain and with a deeper understanding."
}
]
},
{
"proceeding_title": "FDPE '05:Proceedings of the 2005 workshop on Functional and declarative programming in education",
"proceeding_contents": [
{
"paper_title": "How to design class hierarchies",
"paper_authors": [
"Matthias Felleisen"
],
"paper_abstract": "Colleges and universities must expose their students of computer science to object-oriented programming (OOP) even if the majority of the faculty believes that OOP is not the proper programming paradigm for novices. OOP is an important paradigm of thought, and OOP languages are widely used in commercial settings. Ignoring these facts means to ignore the students' needs.In the past, institutions that introduce functional programming first have explained object-oriented programming via closures and method-oriented dispatch. Put differently, in such courses, students learn to implement objects, message passing, and delegation. They do not learn to design object-oriented programs. Although I firmly believe that our students benefit from such knowledge, I will argue that it is inappropriate as an introduction of object-oriented programming.My talk will instead present a novel approach to the first-year programming curriculum. Specifically, I will explain how a functional semester ideally prepares students for the true essence of object-oriented programming according to Alan Kay: the systematic construction of small modules of code and the construction of programs without assignment statements. Experience shows that these courses prepare students better for upper-level courses than a year of plain object-oriented programming. Initial reports from our students' co-op employers appear to confirm the experiences of our upper-level instructors."
},
{
"paper_title": "From functional to object-oriented programming: a smooth transition for beginners",
"paper_authors": [
"Rudolf Berghammer",
"Frank Huch"
],
"paper_abstract": "Many Computer Science curricula at universities start programming with a functional programming language (for instance, SML, Haskell, Scheme) and later change to the imperative programming paradigm. For the latter usually the object-oriented programming language Java is used. However, this puts a burden on the students, since even the smallest Java program cannot be formulated without the notion of class and static and public method. In this paper we present an approach for changing from functional to object-oriented programming. Using (Standard) ML for the functional programming paradigm, it still prepares the decisive notions of object-orientation by specific constructs of this language. As experience at the University of Kiel has shown, this smoothes the transition and helps the students getting started with programming in the Java language."
},
{
"paper_title": "Laziness without all the hard work: combining lazy and strict languages for teaching",
"paper_authors": [
"Eli Barzilay",
"John Clements"
],
"paper_abstract": "Students have trouble understanding the difference between lazy and strict programming. It is difficult to compare the two directly, because popular strict languages and popular lazy languages differ in their syntax, in their type systems, and in other ways unrelated to the lazy/strict evaluation discipline.While teaching programming languages courses, we have discovered that an extension to PLT Scheme allows the system to accommodate both lazy and strict evaluation in the same system. Moreover, the extension is simple and transparent. Finally, the simple nature of the extension means that the resulting system provides a rich environment for both lazy and strict programs without modification."
},
{
"paper_title": "Word puzzles in Haskell: interactive games for functional programming exercises",
"paper_authors": [
"S. A. Curtis"
],
"paper_abstract": "This paper describes some functional programming exercises in the form of implementing some interactive word puzzle games. The games share a common framework and provide good opportunities for practising higher-order functions, recursion, and other list processing functions. Experience suggests that these games are motivating and enjoyable for students."
},
{
"paper_title": "Teaching of image synthesis in functional style",
"paper_authors": [
"Jerzy Karczmarczuk"
],
"paper_abstract": "We have taught the 3D modelling and image synthesis for computer science students (Master level), exploiting very intensely the functional style of programming/scene description. Although no pure functional language was used, since we wanted to use popular programmable packages, such as POV-Ray, or the interactive modeller/renderer Blender, scriptable in Python, we succeded in showing that typical functional tools, such as higher-order functional objects, compositions and recursive combinations are useful, easy to grasp and to implement. We constructed implicit and parametric surfaces in a generic way, we have shown how to transform (deform) and blend surfaces using functional methods, and we have even found a case where the laziness, implemented through Python generators, turned to be useful.We exploited also some functional methods for the image processing: creation of procedural textures and their transformation."
},
{
"paper_title": "MinCaml: a simple and efficient compiler for a minimal functional language",
"paper_authors": [
"Eijiro Sumii"
],
"paper_abstract": "We present a simple compiler, consisting of only 2000 lines of ML, for a strict, impure, monomorphic, and higher-order functional language. Although this language is minimal, our compiler generates as fast code as standard compilers like Objective Caml and GCC for several applications including ray tracing, written in the opti-mal style of each language implementation. Our primary purpose is education at undergraduate level to convince students--as well as average programmers--that functional languages are simple and efficient."
},
{
"paper_title": "Engineering software correctness",
"paper_authors": [
"Rex Page"
],
"paper_abstract": "Software engineering courses offer one of many opportunities for providing students with a significant experience in declarative programming. This report discusses some results from taking advantage of this opportunity in a two-semester sequence of software engineering courses for students in their final year of baccalaureate studies in computer science. The sequence is based on functional programming using ACL2, a purely functional subset of Common Lisp with a built-in, computational logic developed by J Strother Moore and his colleagues over the past three decades. The course sequence has been offered twice, so far, in two consecutive academic years. Certain improvements evolved in the second offering, and while this report focuses on that offering, it also offers reasons for the changes. The discussion outlines the topical coverage and required projects, suggests further improvements, and observes educational effects based on conversations with students and evaluations of their course projects. In general, it appears that most students enjoyed the approach and learned concepts and practices of interest to them. Seventy-six students have completed the two-course sequence, half of them in the first offering and half in the second. All of the students gained enough competence in functional programming to apply it in future projects in industry or graduate school. In the second offering, about forty percent of the students gained enough competence with the ACL2 mechanized logic to make significant use of it in verifying properties of software. About ten percent acquired more competence than might reasonably be expected, enough to see new opportunities for applications and lead future software development efforts in the direction of declarative software with proven correctness properties."
}
]
}
]
},
{
"conference_title": "Functional High-Performance Computing",
"conference_contents": [
{
"proceeding_title": "FHPC '14:Proceedings of the 3rd ACM SIGPLAN workshop on Functional high-performance computing",
"proceeding_contents": [
{
"paper_title": "Ziria: wireless programming for hardware dummies",
"paper_authors": [
"Dimitrios Vytiniotis"
],
"paper_abstract": "Software-defined radio (SDR) brings the flexibility of software to the domain of wireless protocol design, promising both an ideal platform for research and innovation and the rapid deployment of new protocols on existing hardware. Most existing SDR platforms require careful hand-tuning of low-level code to be useful in the real world. In this talk I will describe Ziria, an SDR platform that is both easily programmable and performant. Ziria introduces a programming model that builds on ideas from functional programming and that is tailored to wireless physical layer tasks. The model captures the inherent and important distinction between data and control paths in this domain. I will describe the programming model, give an overview of the execution model, compiler optimizations, and current work. We have used Ziria to produce an implementation of 802.11a/g and a partial implementation of LTE."
},
{
"paper_title": "Pension reserve computations on GPUs",
"paper_authors": [
"Christian Harrington",
"Nicolai Dahl",
"Peter Sestoft",
"David Raymond Christiansen"
],
"paper_abstract": "New regulations from the European Union, called Solvency II, require that life insurance and pension providers perform more complicated calculations to demonstrate their solvency. At the same time, exploiting alternative computational paradigms such as GPGPU requires a high degree of expertise about the hardware and ties the computational infrastructure to one particular platform. In an industry where contracts literally last a lifetime, this is far from optimal. We demonstrate the feasibility of an alternative approach in which life insurance and pension products are represented in a high-level, declarative, domain-specific language from which platform-specific high-performance code can be generated. Specifically, we generate CUDA C code after applying both domain- and platform-specific optimizations. This code significantly outperforms equivalent code running on conventional CPUs."
},
{
"paper_title": "Parallel computation of multifield topology: experience of Haskell in a computational science application",
"paper_authors": [
"David J. Duke",
"Fouzhan Hosseini",
"Hamish Carr"
],
"paper_abstract": "Codes for computational science and downstream analysis (visualization and/or statistical modelling) have historically been dominated by imperative thinking, but this situation is evolving, both through adoption of higher-level tools such as Matlab, and through some adoption of functional ideas in the next generation of toolkits being driven by the vision of extreme-scale computing. However, this is still a long way from seeing a functional language like Haskell used in a live application. This paper makes three contributions to functional programming in computational science. First, we describe how use of Haskell was interleaved in the development of the first practical approach to multifield topology, and its application to the analysis of data from nuclear simulations that has led to new insight into fission. Second, we report subsequent developments of the functional code (i) improving sequential performance to approach that of an imperative implementation, and (ii) the introduction of parallelism through four skeletons exhibiting good scaling and different time/space trade-offs. Finally we consider the broader question of how, where, and why functional programming may - or may not - find further use in computational science."
},
{
"paper_title": "An efficient representation for lazy constructors using 64-bit pointers",
"paper_authors": [
"Georgios Fourtounis",
"Nikolaos Papaspyrou"
],
"paper_abstract": "Pointers in the AMD64 architecture contain unused space, a feature often exploited by modern programming language implementations. We use this property in a defunctionalizing compiler for a subset of Haskell, generating fast programs having a compact memory representation of their runtime structures. We demonstrate that, in most cases, the compact representation is faster, uses less memory and has better cache characteristics. Our prototype shows competitive performance when compared to GHC with full optimizations on."
},
{
"paper_title": "Size slicing: a hybrid approach to size inference in futhark",
"paper_authors": [
"Troels Henriksen",
"Martin Elsman",
"Cosmin E. Oancea"
],
"paper_abstract": "We present a shape inference analysis for a purely-functional language, named Futhark, that supports nested parallelism via array combinators such as map, reduce, filter}, and scan}. Our approach is to infer code for computing precise shape information at run-time, which in the most common cases can be effectively optimized by standard compiler optimizations. Instead of restricting the language or sacrificing ease of use, the language allows the occasional shape-dynamic, and even shape-misbehaving, constructs. Inherently shape-dynamic code is treated with a fall-back technique that preserves, asymptotically, the number of operations of the program and that computes and returns the array's shape alongside with its value. This approach leads to a shape-dependent system with existentially-quantified types, where static shape inference corresponds to eliminating existential quantifications from the types of program expressions. We optimize the common case to negligible overhead via size slicing: a technique that separates the computation of the array's shape from its values. This allows the shape to be calculated in advance and to be used to instantiate the previously existentially-quantified shapes of the value slice. We report negligible overhead, on several mini-benchmarks and three real-world applications."
},
{
"paper_title": "Defunctionalizing push arrays",
"paper_authors": [
"Bo Joel Svensson",
"Josef Svenningsson"
],
"paper_abstract": "Recent work on embedded domain specific languages (EDSLs) for high performance array programming has given rise to a number of array representations. In Feldspar and Obsidian there are two different kinds of arrays, called Pull and Push arrays. Both Pull and Push arrays are deferred; they are methods of computing arrays, rather than elements stored in memory. The reason for having multiple array types is to obtain code that performs better. Pull and Push arrays provide this by guaranteeing that operations fuse automatically. It is also the case that some operations are easily implemented and perform well on Pull arrays, while for some operations, Push arrays provide better implementations. But do we really need to have more than one array representation? In this paper we derive a new array representation from Push arrays that have all the good qualities of Pull and Push arrays combined. This new array representation is obtained via defunctionalization of a Push array API."
},
{
"paper_title": "Fusing filters with integer linear programming",
"paper_authors": [
"Amos Robinson",
"Ben Lippmeier",
"Gabriele Keller"
],
"paper_abstract": "The key to compiling functional, collection oriented array programs into efficient code is to minimise memory traffic. Simply fusing subsequent array operations into a single computation is not sufficient; we also need to cluster separate traversals of the same array into a single traversal. Previous work demonstrated how Integer Linear Programming (ILP) can be used to cluster the operators in a general data-flow graph into subgraphs, which can be individually fused. However, these approaches can only handle operations which preserve the size of the array, thereby missing out on some optimisation opportunities. This paper addresses this shortcoming by extending the ILP approach with support for size-changing operations, using an external ILP solver to find good clusterings."
},
{
"paper_title": "Lazy data-oriented evaluation strategies",
"paper_authors": [
"Prabhat Totoo",
"Hans-Wolfgang Loidl"
],
"paper_abstract": "This paper presents a number of flexible parallelism control mechanisms in the form of evaluation strategies for tree-like data structures implemented in Glasgow parallel Haskell. We achieve additional flexibility by using laziness and circular programs in the coordination code. Heuristics-based parameter selection is employed to auto-tune these strategies for improved performance on a shared-memory machine without programmer-specified parameters. In particular for unbalanced trees we demonstrate improved performance on a state-of-the-art multi-core server: giving a speedup of up to 37.5 on 48 cores for a constructed test program, and up to 15 for two other non-trivial applications using these strategies, a Barnes-Hut implementation of the n-body problem and a sparse matrix multiplication implementation."
},
{
"paper_title": "Group communication patterns for high performance computing in scala",
"paper_authors": [
"Felix P. Hargreaves",
"Daniel Merkle",
"Peter Schneider-Kamp"
],
"paper_abstract": "We developed a Functional Object-Oriented Parallel framework (FooPar) for high-level high-performance computing in Scala. Central to this framework are Distributed Memory Parallel Data structures (DPDs), i.e., collections of data distributed in a shared nothing system together with parallel operations on these data. In this paper, we first present FooPar's architecture and the idea of DPDs and group communications. Then, we show how DPDs can be implemented elegantly and efficiently in Scala based on the Traversable/Builder pattern, unifying Functional and Object-Oriented Programming. We prove the correctness and safety of one communication algorithm and show how specification testing (via ScalaCheck) can be used to bridge the gap between proof and implementation. Furthermore, we show that the group communication operations of FooPar outperform those of the MPJ Express open source MPI-bindings for Java, both asymptotically and empirically. FooPar has already been shown to be capable of achieving close-to-optimal performance for dense matrix-matrix multiplication via JNI. In this article, we present results on a parallel implementation of the Floyd-Warshall algorithm in FooPar, achieving more than 94% efficiency compared to the serial version on a cluster using 100 cores for matrices of dimension 38000 x 38000."
},
{
"paper_title": "Native offload of Haskell repa programs to integrated GPUs",
"paper_authors": [
"Hai Liu",
"Laurence E. Day",
"Neal Glew",
"Todd A. Anderson",
"Rajkishore Barik"
],
"paper_abstract": "In light of recent hardware advances, general-purpose computing on graphics processing units (GPGPU) is becoming increasingly commonplace, and needs novel programming models due to GPUs' radically different architecture. For the most part, existing approaches to programming GPUs within a high-level programming language choose to embed a domain-specific language (DSL) within a host metalanguage and then implement a compiler that maps programs written within that DSL to code in low-level languages such as OpenCL or CUDA. An alternative, underexplored, approach is to compile a restricted subset of the host language itself directly down to OpenCL/CUDA. We believe more research should be done to compare these two approaches and their relative merits. As a step in this direction, we implemented a quick proof of concept of the alternative approach. Specifically, we extend the Repa library with a computeG function to offload a computation to the GPU. As long as the requested computation meets certain restrictions, we compile it to OpenCL 2.0 using the recently added feature for shared virtual memory. We can successfully run nine benchmarks on an Intel integrated GPU. We obtain the expected performance from the GPU on six of those benchmarks, and are close to the expected performance on two more. In this paper, we describe an offload primitive for Haskell, how to extend Repa to use it, how to implement that primitive in the Intel Labs Haskell Research Compiler, and evaluate the approach on nine benchmarks, comparing to two different CPUs, and for one benchmark to hand-written OpenCL code."
},
{
"paper_title": "LambdaJIT: a dynamic compiler for heterogeneous optimizations of STL algorithms",
"paper_authors": [
"Thibaut Lutz",
"Vinod Grover"
],
"paper_abstract": "C++11 introduced a set of new features to extend the core language and the standard library. Amongst the new features are basic blocks for concurrency management like threads and atomic operation support, and a new syntax to declare single purpose, one off functions, called lambda functions, which integrate nicely to the Standard Template Library (STL). The STL provides a set of high level algorithms operating on data ranges, often applying a user defined function, which can now be expressed as a lambda function. Together, an STL algorithm and a lambda function provides a concise and efficient way to express a data traversal pattern and localized computation. This paper presents LambdaJIT; a C++11 compiler and a runtime system which enable lambda functions used alongside STL algorithms to be optimized or even re-targeted at runtime. We use compiler integration of the new C++ features to analyze and automatically parallelize the code whenever possible. The compiler also injects part of a program's internal representation into the compiled program binary, which can be used by the runtime to re-compile and optimize the code. We take advantage of the features of lambda functions to create runtime optimizations exceeding those of traditional offline or online compilers. Finally, the runtime can use the embedded intermediate representation with a different backend target to safely offload computation to an accelerator such as a GPU, matching and even outperforming CUDA by up to 10%."
}
]
},
{
"proceeding_title": "FHPC '13:Proceedings of the 2nd ACM SIGPLAN workshop on Functional high-performance computing",
"proceeding_contents": [
{
"paper_title": "The manticore project",
"paper_authors": [
"Matthew Fluet"
],
"paper_abstract": "The Manticore project is a research effort to design and implement a parallel functional programming language that targets commodity multicore and shared-memory multiprocessors. Our language is a dialect of Standard ML, called Parallel ML (PML), that starts with a strict, mutation-free functional core and extends with both implicitly-threaded constructs for fine-grain parallelism and CML-style explicit concurrency for coarse-grain parallelism. We have a prototype implementation that demonstrates both reasonable sequential performance and good scalability on both 32-core Intel machines and 48-core AMD machines. Our past research contributions include: a parallel implementation of CML; a novel infrastructure for nested schedulers; a collection of expressive implicitly-threaded parallel constructs with mostly sequential semantics; a Lazy Tree Splitting (LTS) strategy for performance-robust work-stealing of parallel computations over irregular tree-like data structures. In this talk, I will motivate and describe the high-points in both the design of the Parallel ML language and the implementation of the Manticore compiler and runtime system. After briefly discussing some notable results among our past research contributions, I will highlight our most recent research efforts. In one line of work, we have demonstrated the importance of treating even commodity desktops and servers as non-uniform memory access (NUMA) machines. This is particularly important for the scalability of parallel garbage collection, where unbalanced work with lower memory traffic is often better than balanced work with high memory traffic. In another line of work, we have explored data-only flattening, a compilation strategy for nested data parallelism the eschews the traditional vectorization approach which transforms both control and data and was designed for wide-vector SIMD architectures. Instead, data-only flattening transforms nested data structures, but leaves control structures intact, a strategy that is better suited to multicore architectures. Finally, we are exploring language features that provide controlled forms of (deterministic and nondeterministic) mutable state within parallel computations. We begin with the observation that there are parallel stateful algorithms that exhibit significantly better performance than the corresponding parallel algorithm without mutable state. To support such algorithms, we extend Manticore two with memoziation of pure functions using a high-performance implementation of a dynamically sized, parallel hash table to provide scalable performance. We are also exploring various execution models for general mutable state, with the crucial design criteria that all executions should preserve the ability to reason locally about the behavior of code."
},
{
"paper_title": "ViperVM: a runtime system for parallel functional high-performance computing on heterogeneous architectures",
"paper_authors": [
"Sylvain Henry"
],
"paper_abstract": "The current trend in high-performance computing is to use heterogeneous architectures (i.e. multi-core with accelerators such as GPUs or Xeon Phi) because they offer very good performance over energy consumption ratios. Programming these architectures is notoriously hard, hence their use is still somewhat restricted to parallel programming experts. The situation is improving with frameworks using high-level programming models to generate efficient computation kernels for these new accelerator architectures. However, an orthogonal issue is to efficiently manage memory and kernel scheduling especially on architectures containing multiple accelerators. Task graph based runtime systems have been a first step toward efficiently automatizing these tasks. However they introduce new challenges of their own such as task granularity adaptation that cannot be easily automatized. In this paper, we present a programming model and a preliminary implementation of a runtime system called ViperVM that takes advantage of parallel functional programming to extend task graph based runtime systems. The main idea is to substitute dynamically created task graphs with pure functional programs that are evaluated in parallel by the runtime system. Programmers can associate kernels (written in OpenCL, CUDA, Fortran...) to identifiers that can then be used as pure functions in programs. During parallel evaluation, the runtime system automatically schedules kernels on available accelerators when it has to reduce one of these identifiers. An extension of this mechanism consists in associating both a kernel and a functional expression to the same identifier and to let the runtime system decide either to execute the kernel or to evaluate the expression. We show that this mechanism can be used to perform dynamic granularity adaptation."
},
{
"paper_title": "Towards a streaming model for nested data parallelism",
"paper_authors": [
"Frederik M. Madsen",
"Andrzej Filinski"
],
"paper_abstract": "The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability, obtained through a uniform, parallelism-flattening execution strategy, comes at the price of potentially prohibitive space usage in the common case of computations with an excess of available parallelism, such as dense-matrix multiplication. We present a simple nested data-parallel functional language and associated cost semantics that retains NESL's intuitive work--depth model for time complexity, but also allows highly parallel computations to be expressed in a space-efficient way, in the sense that memory usage on a single (or a few) processors is of the same order as for a sequential formulation of the algorithm, and in general scales smoothly with the actually realized degree of parallelism, not the potential parallelism. The refined semantics is based on distinguishing formally between fully materialized (i.e., explicitly allocated in memory all at once) \"vectors\" and potentially ephemeral \"sequences\" of values, with the latter being bulk-processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work in progress, but we do present some preliminary examples and timings, suggesting that the streaming model has practical potential."
},
{
"paper_title": "Towards systematic parallel programming of graph problems via tree decomposition and tree parallelism",
"paper_authors": [
"Qi Wang",
"Meixian Chen",
"Yu Liu",
"Zhenjiang Hu"
],
"paper_abstract": "Many graph optimization problems, such as the Maximum Weighted Independent Set problem, are NP-hard. For large scale graphs that have billions of edges or vertices, these problems are hard to be computed directly even using popular data-intensive frameworks like MapReduce or Pregel that are deployed on large computer-clusters, because of the extremely high computational complexity. On the other hand, many studies have shown the existence of polynomial time algorithms on graphs with bounded treewidth, which makes it possible to solve these problems on large graphs. However, the algorithms are usually difficult to be understood or parallelized. In this paper, we propose a novel programming framework which provides a user-friendly programming interface and automatic in-black-box parallelization. The programming interface, which is a simple and straightforward abstraction called Generate-Test-Aggregate (GTA for short), is used to describe a set of graph problems. We propose to derive bottom-up dynamic programming algorithms on tree decompositions from the user-specified GTA algorithms, and further transform the bottom-up algorithms to parallel ones which run in a divide-and-conquer manner on a list of subtrees. Besides, balanced tree partition strategies are discussed for efficient parallel computing. Our preliminary experimental results on the Maximum Weighted Independent Set problem demonstrate the practical viability of our approaches."
},
{
"paper_title": "Counting and occurrence sort for GPUs using an embedded language",
"paper_authors": [
"Josef David Svenningsson",
"Bo Joel Svensson",
"Mary Sheeran"
],
"paper_abstract": "This paper investigates two sorting algorithms: counting sort and a variation, occurrence sort, which also removes duplicate elements, and examines their suitability for running on the GPU. The duplicate removing variation turns out to have a natural functional, data-parallel implementation which makes it particularly interesting for GPUs. The algorithms are implemented in Obsidian, a high-level domain specific language for GPU programming. Measurements show that our implementations in many cases outperform the sorting algorithm provided by the library Thrust. Furthermore, occurrence sort is another factor of two faster than ordinary counting sort. We conclude that counting sort is an important contender when considering sorting algorithms for the GPU, and that occurrence sort is highly preferable when applicable. We also show that Obsidian can produce very competitive code."
},
{
"paper_title": "A T2 graph-reduction approach to fusion",
"paper_authors": [
"Troels Henriksen",
"Cosmin Eugen Oancea"
],
"paper_abstract": "Fusion is one of the most important code transformations as it has the potential to substantially optimize both the memory hierarchy time overhead and, sometimes asymptotically, the space requirement. In functional languages, fusion is naturally and relatively easily derived as a producer-consumer relation between program constructs that expose a richer, higher-order algebra of program invariants, such as the map-reduce list homomorphisms. In imperative languages, fusing producer-consumer loops requires dependency analysis on arrays applied at loop-nest level. Such analysis, however, has often been labeled as \"heroic effort\" and, if at all, is supported only in its simplest and most conservative form in industrial compilers. Related implementations in the functional context typically apply fusion only when the to-be-fused producer is used exactly once, i.e., in the consumer. This guarantees that the transformation is conservative: the resulting program does not duplicate computation. We show that the above restriction is more conservative than needed, and present a structural-analysis technique, inspired from the T1--T2 transformation for reducible data flow, that enables fusion even in some cases when the producer is used in different consumers and without duplicating computation. We report an implementation of the fusion algorithm for a functional-core language, named L0, which is intended to support nested parallelism across regular multi-dimensional arrays. We succinctly describe L0's semantics and the compiler infrastructure on which the fusion transformation relies, and present compiler-generated statistics related to fusion on a set of six benchmarks."
},
{
"paper_title": "Semantics-preserving data layout transformations for improved vectorisation",
"paper_authors": [
"Artjoms Sinkarovs",
"Sven-Bodo Scholz"
],
"paper_abstract": "Data-Layouts that are favourable from an algorithmic perspective often are less suitable for vectorisation, i.e., for an effective use of modern processor's vector instructions. This paper presents work on a compiler driven approach towards automatically transforming data layouts into a form that is suitable for vectorisation. In particular, we present a program transformation for a first-order functional array programming language that systematically modifies they layouts of all data structures. At the same time, the transformation also adjusts the code that operates on these structures so that the overall computation remains unchanged. We define a correctness criterion for layout modifying program transformations and we show that our transformation abides to this criterion."
},
{
"paper_title": "LVars: lattice-based data structures for deterministic parallelism",
"paper_authors": [
"Lindsey Kuper",
"Ryan R. Newton"
],
"paper_abstract": "Programs written using a deterministic-by-construction model of parallel computation are guaranteed to always produce the same observable results, offering programmers freedom from subtle, hard-to-reproduce nondeterministic bugs that are the scourge of parallel software. We present LVars, a new model for deterministic-by-construction parallel programming that generalizes existing single-assignment models to allow multiple assignments that are monotonically increasing with respect to a user-specified lattice. LVars ensure determinism by allowing only monotonic writes and \"threshold\" reads that block until a lower bound is reached. We give a proof of determinism and a prototype implementation for a language with LVars and describe how to extend the LVars model to support a limited form of nondeterminism that admits failures but never wrong answers."
},
{
"paper_title": "Towards a functional run-time for dense NLA domain",
"paper_authors": [
"Mauro Blanco",
"Pablo Perdomo",
"Pablo Ezzatti",
"Alberto Pardo",
"Marcos Viera"
],
"paper_abstract": "We investigate the use of functional programming to develop a numerical linear algebra run-time; i.e. a framework where the solvers can be adapted easily to different contexts and task parallelism can be attained (semi-) automatically. We follow a bottom up strategy, where the first step is the design and implementation of a framework layer, composed by a functional version of BLAS (Basic Linear Algebra Subprograms) routines. The framework allows the manipulation of arbitrary representations for matrices and vectors and it is also possible to write and combine multiple implementations of BLAS operations based on different algorithms and parallelism strategies. Using this framework, we implement a functional version of Cholesky factorization, which serves as a proof of concept to evaluate the flexibility and performance of our approach."
},
{
"paper_title": "Data parallelism in Haskell",
"paper_authors": [
"Manuel M.T. Chakravarty"
],
"paper_abstract": "The implicit data parallelism in collective operations on aggregate data structures constitutes an attractive parallel programming model for functional languages. Beginning with our work on integrating nested data parallelism into Haskell, we explored a variety of different approaches to array-centric data parallel programming in Haskell, experimented with a range of code generation and optimisation strategies, and targeted both multicore CPUs and GPUs. In addition to practical tools for parallel programming, the outcomes of this research programme include more widely applicable concepts, such as Haskell's type families and stream fusion. In this talk, I will contrast the different approaches to data parallel programming that we explored. I will discuss their strengths and weaknesses and review what we have learnt in the course of exploring the various options. This includes our experience of implementing these approaches in the Glasgow Haskell Compiler as well the experimental results that we have gathered so far. Finally, I will outline the remaining open challenges and our plans for the future. This talk is based on joint work with Gabriele Keller, Sean Lee, Roman Leshchinskiy, Ben Lippmeier, Trevor L. McDonell, and Simon Peyton Jones."
}
]
},
{
"proceeding_title": "FHPC '12:Proceedings of the 1st ACM SIGPLAN workshop on Functional high-performance computing",
"proceeding_contents": [
{
"paper_title": "Using domain-specific languages and access-execute descriptors to expand the parallel code synthesis design space: keynote talk",
"paper_authors": [
"Paul H.J. Kelly"
],
"paper_abstract": "This talk is about the following idea: can we simultaneously raise the level at which programmers can reason about code, and also provide the compiler with a model of the computation that enables it to generate faster code than you could reasonably write by hand? We have been working with three large computational fluid dynamics frameworks [3, 5, 8], and I will present some of our experience in building compiler tools at various levels of abstraction. Our primary goal is to build tools that automatically synthesise the best possible implementation. By getting the abstraction right, we can capture design choices far beyond what a conventional compiler can do. I will illustrate this with examples involving low-level parallel code generation (eg for GPUs) [1, 4], high-level cross-cutting almost-algorithmic choices (such as whether to actually build a global sparse system matrix) [6, 7], and semantic properties (enabling massive common subexpression elimination in finite-element assembly). I will also show some of the power of a generative approach in supporting free navigation of the alternatives, such as refining either the mesh or the polynomial order in a finite-element fluid dynamics application [2]. What is the right code to generate, for a given hardware platform? How does this change as problem parameters change? The key, we believe, is to start with the right representation of the problem, and to build tools that can automate the combination of code generation alternatives."
},
{
"paper_title": "Parallel programming in Haskell almost for free: an embedding of intel's array building blocks",
"paper_authors": [
"Bo Joel Svensson",
"Mary Sheeran"
],
"paper_abstract": "Nowadays, performance in processors is increased by adding more cores or wider vector units, or by combining accelerators like GPUs and traditional cores on a chip. Programming for these diverse architectures is a challenge. We would like to exploit all the resources at hand without putting too much burden on the programmer. Ideally, the programmer should be presented with a machine model abstracted from the specific number of cores, SIMD width or the existence of a GPU or not. Intel's Array Building Blocks (ArBB) is a system that takes on these challenges. ArBB is a language for data parallel and nested data parallel programming, embedded in C++. By offering a retargetable dynamic compilation framework, it provides vectorisation and threading to programmers without the need to write highly architecture specific code. We aim to bring the same benefits to the Haskell programmer by implementing a Haskell frontend (embedding) of the ArBB system. We call this embedding EmbArBB. We use standard Haskell embedded language procedures to provide an interface to the ArBB functionality in Haskell. EmbArBB is work in progress and does not currently support all of the ArBB functionality. Some small programming examples illustrate how the Haskell embedding is used to write programs. ArBB code is short and to the point in both C++ and Haskell. Matrix multiplication has been benchmarked in sequential C++, ArBB in C++, EmbArBB and the Repa library. The C++ and the Haskell embeddings have almost identical performance, showing that the Haskell embedding does not impose any large extra overheads. Two image processing algorithms have also been benchmarked against Repa. In these benchmarks at least, EmbArBB performance is much better than that of the Repa library, indicating that building on ArBB may be a cheap and easy approach to exploiting data parallelism in Haskell."
},
{
"paper_title": "Avalanche: a fine-grained flow graph model for irregular applications on distributed-memory systems",
"paper_authors": [
"Jeremiah J. Willcock",
"Ryan R. Newton",
"Andrew Lumsdaine"
],
"paper_abstract": "Flow graph models have recently become increasingly popular as a way to express parallel computations. However, most of these models either require specialized languages and compilers or are library-based solutions requiring coarse-grained applications to achieve acceptable performance. Yet, graph algorithms and other irregular applications are increasingly important to modern high-performance computing, and these applications are not amenable to coarsening without complicating algorithm structure. One effective existing approach for these applications relies on active messages;. However, the separation of control flow between the main program and active message handlers introduces programming difficulties. To ameliorate this problem, we present Avalanche, a flow graph model for fine-grained applications that automatically generates active-message handlers. Avalanche is built as a C++ library on top of our previously-developed Active Pebbles model; a set of combinators builds graphs at compile-time, allowing several optimizations to be applied by the library and a standard C++ compiler. In particular, consecutive flow graph nodes can be fused; experimental results show that flow graphs built from small components can still efficiently operate on fine-grained data."
},
{
"paper_title": "Harnessing parallelism in FPGAs using the hume language",
"paper_authors": [
"Jocelyn Sérot",
"Greg Michaelson"
],
"paper_abstract": "We propose to use Hume, a general purpose, functionally inspired, programming language, initially oriented to resource-aware embedded applications, to implement fine-grain parallel applications on FPGAs. We show that the Hume description of programs as a set of asynchronous boxes connected by wires has a very natural interpretation in terms of register-transfer level hardware description, hence leading to efficient implementations on FPGAs. The paper describes the basic compilation process from a subset of Hume to synthetisable RTL VHDL and show preliminary experimental results obtained with a very simple perceptron application."
},
{
"paper_title": "Usage of petri nets for high performance computing",
"paper_authors": [
"Stanislav Böhm",
"Marek Běhálek"
],
"paper_abstract": "Petri nets are a well established graphical and mathematical modelling language for a description of concurrent systems. The main scope of this paper is to present our approach how to use Petri nets for high-performance computing. They are rarely used in this area. As a proof of concept, we are developing a tool Kaira. The modelling language in the tool is based on our extension of Coloured Petri Nets. The basic concept is to use a visual language to model parallel behaviour and communication. Sequential parts of a program are written in C/C++. In contrast to other Petri Nets based tools, Kaira is not intended only for modelling and simulation, but it can also generate standalone parallel applications from models. Generated applications use MPI and threads. This paper also presents new Kaira's features including modules for computations on structured objects, more controllable semantics of mapping to MPI processes and a support for the hybrid computing."
},
{
"paper_title": "Haskell vs. f# vs. scala: a high-level language features and parallelism support comparison",
"paper_authors": [
"Prabhat Totoo",
"Pantazis Deligiannis",
"Hans-Wolfgang Loidl"
],
"paper_abstract": "This paper provides a performance and programmability comparison of high-level parallel programming support in Haskell, F# and Scala. Developing several parallel versions, we employ skeleton-based, semi-explicit and explicit approaches to parallelism. We focus on advanced language features for separating computational and coordination aspects of the code and tuning performance. We also assess the impact of functional purity and multi-paradigm design of the languages on program development and performance. Basis for these comparisons are several Barnes-Hut implementations of the n-body problem in all three languages, on both Linux and Windows. Our performance measurements on state-of-the-art multi-cores achieve a speedup up to 5.62 (on 8 cores) with a highly-tuned Haskell version. For comparable implementations in Scala and F# we achieve speedups of 4.51 (on 8 cores) and 2.28 (on 4 cores), respectively. We observe that near best speedups are achieved using the highest level abstraction in these languages."
},
{
"paper_title": "Financial software on GPUs: between Haskell and Fortran",
"paper_authors": [
"Cosmin E. Oancea",
"Christian Andreetta",
"Jost Berthold",
"Alain Frisch",
"Fritz Henglein"
],
"paper_abstract": "This paper presents a real-world pricing kernel for financial derivatives and evaluates the language and compiler tool chain that would allow expressive, hardware-neutral algorithm implementation and efficient execution on graphics-processing units (GPU). The language issues refer to preserving algorithmic invariants, e.g., inherent parallelism made explicit by map-reduce-scan functional combinators. Efficient execution is achieved by manually; applying a series of generally-applicable compiler transformations that allows the generated-OpenCL code to yield speedups as high as 70x and 540x on a commodity mobile and desktop GPU, respectively. Apart from the concrete speed-ups attained, our contributions are twofold: First, from a language perspective;, we illustrate that even state-of-the-art auto-parallelization techniques are incapable of discovering all the requisite data parallelism when rendering the functional code in Fortran-style imperative array processing form. Second, from a performance perspective;, we study which compiler transformations are necessary to map the high-level functional code to hand-optimized OpenCL code for GPU execution. We discover a rich optimization space with nontrivial trade-offs and cost models. Memory reuse in map-reduce patterns, strength reduction, branch divergence optimization, and memory access coalescing, exhibit significant impact individually. When combined, they enable essentially full utilization of all GPU cores. Functional programming has played a crucial double role in our case study: Capturing the naturally data-parallel structure of the pricing algorithm in a transparent, reusable and entirely hardware-independent fashion; and supporting the correctness of the subsequent compiler transformations to a hardware-oriented target language by a rich class of universally valid equational properties. Given the observed difficulty of automatically parallelizing imperative sequential code and the inherent labor of porting hardware-oriented and -optimized programs, our case study suggests that functional programming technology can facilitate high-level; expression of leading-edge performant portable; high-performance systems for massively parallel hardware architectures."
},
{
"paper_title": "Seeing the futures: profiling shared-memory parallel racket",
"paper_authors": [
"James Swaine",
"Burke Fetscher",
"Vincent St-Amour",
"Robert Bruce Findler",
"Matthew Flatt"
],
"paper_abstract": "This paper presents the latest chapter in our adventures coping with a large, sequentially-tuned, legacy runtime system in today's parallel world. Specifically, this paper introduces our new graphical visualizer that helps programmers understand how to program in parallel with Racket's futures and, to some extent, what performs well in sequential Racket. Overall, our experience with parallelism in Racket is that we can achieve reasonable parallel performance in Racket without sacrificing the most important property of functional programming language implementations, namely safety. That is, Racket programmers are guaranteed that every Racket primitive (and thus all functions built using Racket primitives) will either behave properly, or it will signal an error explaining what went wrong. That said, however, it is challenging to understand how to best use futures to achieve interesting speedups, and the visualizer is our attempt to more widely disseminate key performance details of the runtime system in order to help Racket programmers maximize performance."
},
{
"paper_title": "Parallel discrete event simulation with Erlang",
"paper_authors": [
"Luca Toscano",
"Gabriele D'Angelo",
"Moreno Marzolla"
],
"paper_abstract": "Discrete Event Simulation (DES) is a widely used technique in which the state of the simulator is updated by events happening at discrete points in time (hence the name). DES is used to model and analyze many kinds of systems, including computer architectures, communication networks, street traffic, and others. Parallel and Distributed Simulation (PADS) aims at improving the efficiency of DES by partitioning the simulation model across multiple processing elements, in order to enable larger and/or more detailed studies to be carried out. The interest on PADS is increasing since the widespread availability of multicore processors and affordable high performance computing clusters. However, designing parallel simulation models requires considerable expertise, the result being that PADS techniques are not as widespread as they could be. In this paper we describe ErlangTW, a parallel simulation middleware based on the Time Warp synchronization protocol. ErlangTW is entirely written in Erlang, a concurrent, functional programming language specifically targeted at building distributed systems. We argue that writing parallel simulation models in Erlang is considerably easier than using conventional programming languages. Moreover, ErlangTW allows simulation models to be executed either on single-core, multicore and distributed computing architectures. We describe the design and prototype implementation of ErlangTW, and report some preliminary performance results on multicore and distributed architectures using the well known PHOLD benchmark."
},
{
"paper_title": "An embedded DSL for stochastic processes: research article",
"paper_authors": [
"Michael Flænø Werk",
"Joakim Ahnfelt-Rønne",
"Ken Friis Larsen"
],
"paper_abstract": "We present a domain specific language embedded in Haskell for specifying stochastic processes, called SPL;. It is designed with the goal of matching the notation used in mathematical finance, where the price of a financial contract is specified using stochastic processes and distributions. SPL; is declarative in the sense that it is agnostic of the choice of discretization and of the computational model. We provide an implementation of SPL; that performs Monte Carlo simulation using GPGPU, and we present data indicating that this gives a 100x speedup compared to hand-written sequential C, and that the speedup scales linearly with the number of available cores."
}
]
}
]
},
{
"conference_title": "Functional Programming Concepts in Domain-Specific Languages",
"conference_contents": [
{
"proceeding_title": "FPCDSL '13:Proceedings of the 1st annual workshop on Functional programming concepts in domain-specific languages",
"proceeding_contents": [
{
"paper_title": "Bluespec and Haskell",
"paper_authors": [
" Arvind"
],
"paper_abstract": "Bluespec is a commercial hardware design language based on the abstraction of Guarded Atomic Actions (GAAs). GAAs provide a level of modularity and composition that the standard FSM-based representation does not. At it's heart Bluespec can be looked at as a relatively simple DSL (GAAs and modules) with a fully func-tioning Haskell-like meta programming layer on top. Bluespec allows the designer to use different compositional approaches which often results in flexible, robust and efficient designs. We have found that Hardware experts generally have substantially better intuition about which constructions provide better circuit level properties and are easily able to express relatively efficient designs, though, at least initially, in a rather verbose manner. In contrast, functional programmers are almost immediately able to leverage the advanced language features to get a concise and flexible design representation but have difficulty making the generated hardware efficient. As both sets of users gain experience they get better at the other aspect of design and climb towards achieving both microarchitectural and representational excellence simultaneously which appears unmatched by other higher-level general purpose hardware languages. We will present a historical development of Bluespec, and muse on topics such as C-based versus FP-based syntax, DSLs versus Embedded DSLs etc."
},
{
"paper_title": "Functional synthesis of genetic regulatory networks",
"paper_authors": [
"Jacob Beal",
"Aaron Adler"
],
"paper_abstract": "As synthetic biologists improve their ability to engineer complex computations in living organisms, there is increasing interest in using programming languages to assist in the design and composition of biological constructs. In this paper, we argue that there is a natural fit between functional programming and genetic regulatory networks, exploring this connection in depth through the example of BioProto, a piggyback DSL on the Proto general-purpose spatial language. In particular, we present the first formalization of BioProto syntax and semantics, and compare these to the formal syntax and semantics of the parent language Proto. Finally, we examine the pragmatics of implementing BioProto and challenges to proving correctness of BioProto programs."
},
{
"paper_title": "Encoding secure information flow with restricted delegation and revocation in Haskell",
"paper_authors": [
"Doaa Hassan",
"Amr Sabry"
],
"paper_abstract": "Distributed applications typically involve many components, each with unique security and privacy requirements. Such applications require fine-grained access control mechanisms that allow dynamic delegation and revocation of access rights. Embedding such domain-specific requirements in a functional language like Haskell puts all the expressiveness of the host language at the disposal of the domain user. Using a custom monad, we design and implement an embedded Haskell library that embraces the decentralized label model, allowing mutually-distrusting principals to express individual confidentiality and integrity policies. Our language includes first-class references, higher-order functions, declassification and endorsement of policies, and user authority in the presence of global unrestricted delegation. Then, building on previous work by the first author, we extend the language to enable fine-grained dynamic delegation and revocation of access rights. The resulting language generalizes, extends, and simplifies various libraries for expressing and reasoning about information flow."
},
{
"paper_title": "QuaFL: a typed DSL for quantum programming",
"paper_authors": [
"Andrei Lapets",
"Marcus P. da Silva",
"Mike Thome",
"Aaron Adler",
"Jacob Beal",
"Martin Roetteler"
],
"paper_abstract": "Quantum computers represent a novel kind of programmable hardware with properties and restrictions that are distinct from those of classical computers. We investigate how some existing abstractions and programming language features developed within the programming languages community can be adapted to expose the unique capabilities of quantum computers to programmers while at the same time allowing them to manage the new and unfamiliar constraints of programming a quantum device. We introduce QuaFL, a statically typed domain-specific programming language for writing high-level definitions of algorithms that can be compiled into logical quantum circuits. The primary purpose of QuaFL is to support programmers in defining high-level yet physically realizable quantum algorithms and in helping them make informed decisions about implementation trade-offs. QuaFL allows programmers to use high-level data structures including integers, fixed point reals, and arrays within quantum algorithms, and to explicitly define superpositions and unitary transformations on data. The QuaFL type system allows programmers to distinguish between classical and quantum portions of a program, uses a variant of linear types and an orthogonality checking algorithm to ensure the quantum portions are physically realizable, and provides type size annotations that can facilitate automated computation of the quantities of quantum resources that will be necessary to run the compiled program (i.e., a logical quantum circuit)."
},
{
"paper_title": "Embrace, defend, extend: a methodology for embedding preexisting DSLs",
"paper_authors": [
"Abhishek Kulkarni",
"Ryan R. Newton"
],
"paper_abstract": "Domain-specific languages offer programming abstractions that enable higher efficiency, productivity and portability specific to a given application domain. Domain-specific languages such as StreamIt have valuable auto-parallelizing code-generators, but they require learning a new language and tool-chain and may not integrate easily with a larger application. One solution is to transform such standalone DSLs into embedded languages within a general-purpose host language. This prospect comes with its own challenges, namely the compile-time and runtime integration of the two languages. In this paper, we address these challenges, presenting our solutions in the context of a prototype embedding of StreamIt in Haskell. By demonstrating this methodology, we hope to encourage more reuse of DSL technology, and fewer short-lived reimplementations of existing techniques."
},
{
"paper_title": "Abstract resource cost derivation for logical quantum circuit descriptions",
"paper_authors": [
"Andrei Lapets",
"Martin Roetteler"
],
"paper_abstract": "Resources that are necessary to operate a quantum computer (such as qubits) have significant costs. Thus, there is interest in finding ways to determine these costs for both existing and novel quantum algorithms. Information about these costs (and how they might vary under multiple parameters and circumstances) can then be used to navigate trade-offs and make optimizations within an algorithm implementation. We present a domain-specific language called QuIGL for describing logical quantum circuits; the QuIGL language has specialized features supporting the explicit annotation and automatic derivation of descriptions of the resource costs associated with each logical quantum circuit description (as well as any of its component procedures). We also present a formal framework for defining abstract transformations from QuIGL circuit descriptions into labelled, parameterized quantity expressions that can be used to compute exact counts or estimates of the cost of the circuit along chosen cost dimensions and for given input sizes. We demonstrate how this framework can be instantiated for calculating costs along specific dimensions (such as the number of qubits or the T-depth of a logical quantum circuit)."
},
{
"paper_title": "Sensitivity analysis using type-based constraints",
"paper_authors": [
"Loris D'Antoni",
"Marco Gaboardi",
"Emilio Jesús Gallego Arias",
"Andreas Haeberlen",
"Benjamin Pierce"
],
"paper_abstract": "Function sensitivity --- how much the result of a function can change with respect to linear changes in the input --- is a key concept in many research areas. For instance, in differential privacy, one of the most common mechanisms for turning a (possibly privacy-leaking) query into a differentially private one involves establishing a boundon its sensitivity. One approach to sensitivity analysis is to use a type-based approach, extending the Hindley-Milner type system with functional types capturing statically the sensitivity of a functional expression. This approach --- based on affine logic --- has been used in Fuzz, a language for differentially private queries. We describe an automatic typed-based analysis that infers and checks the sensitivity annotations for simple functional programs. We have implemented a prototype in Fuzz's compiler. The first component of the analysis extends the typechecker to generate nonlinear constraints over the positive real numbers extended with infinity, which are then checked by the Z3 SMT solver; a solution for them will provide an upper bound on the sensitivity annotations and ensure the correctness of the annotations. We also present a simple sensitivity minimization procedure and demonstrate the effectiveness of the approach by analyzing several examples."
}
]
}
]
},
{
"conference_title": "High-level parallel programming and applications",
"conference_contents": [
{
"proceeding_title": "HLPP '11:Proceedings of the fifth international workshop on High-level parallel programming and applications",
"proceeding_contents": [
{
"paper_title": "Towards auto-tuning description language to heterogeneous computing environment",
"paper_authors": [
"Takahiro Katagiri"
],
"paper_abstract": "Computer architectures are becoming more and more complex due to non-standardized memory accesses and hierarchical caches. It is very difficult for scientists and engineers to optimize their code to extract potential performance improvements on these architectures. Due to this, automatic performance tuning (AT) technology, hence, is a key technology to reduce cost of development for high performance numerical software. In this talk, the following two aims are folded. First, we introduce current AT studies. We focus on AT technology for numerical computations in viewpoint of numerical libraries, languages, code generators, and OS run-time software. Second, we explain ABCLibScript [1], which is an auto-tuning description language for C and Fortran90 for numerical computations to numerical software developers. ABCLibScript provides automatic code generation functions for dedicated code optimization, such as loop unrolling, algorithm selection, and varying of specified variables described by the user. We also explain HxABCLibScript[2], which is an AT language with extended function from original ABCLibScript to heterogeneous computer environment, which includes CPU and GPU (Graphics Processing Unit). The description of HxABCLibScript can free from selection of CPU and GPU switching to the arbitrary parts of program from users. The preliminary results show that the function of HxABCLibScript was highly efficient for simple kernels of typical numerical computations, such as a matrix-matrix multiplication, or a stencil computation from the Poisson's equation solver. The automatically generated codes from the description of HxABCLibScript can select the best computer resources between CPU and GPU according to problem size or the number of iterations on the program."
},
{
"paper_title": "Cache size in a cost model for heterogeneous skeletons",
"paper_authors": [
"Khari Armih",
"Greg Michaelson",
"Phil Trinder"
],
"paper_abstract": "High performance architectures are increasingly heterogeneous with shared and distributed memory components. Programming such architectures is complicated and performance portability is a major issue as the architectures evolve. This paper proposes a new architectural cost model that accounts for cache size and improves on heterogeneous architectures, and demonstrates a skeleton-based programming model that simplifies programming heterogeneous architectures. We further demonstrate that the cost model can be exploited by skeletons to improve load balancing on heterogeneous architectures. The heterogeneous skeleton model facilitates performance portability, using the architectural cost model to automatically balance load across heterogeneous components of the architecture. For both a data parallel benchmark, and realistic image processing program we obtain good performance for the heterogeneous skeleton on homogeneous shared and distributed memory architectures, and on three heterogeneous architectures. We also show that taking cache size into account in the model leads to improved balance and performance."
},
{
"paper_title": "An efficient skew-insensitive algorithm for join processing on grid architectures",
"paper_authors": [
"Mohamad Al Hajj Hassan",
"Mostafa Bamha",
"Frédéric Loulergue"
],
"paper_abstract": "Scientific experiments in many domains generate a huge amount of data whose size is in the range of hundreds of megabytes to petabytes. These data are stored on geographically distributed and heterogeneous resources. Researchers who need to analyze and have a fast access to such data are also located all over the globe. Queries executed by these researchers may require the transfer of huge amount of data over the wide area network in a reasonable time. Due to these emerging needs, the grid infrastructure was born. In this paper, we are interested in treating join queries on the grid. We propose a new parallel algorithm allowing to reduce communication and disk Input/Output costs to minimum. This algorithm guarantees a balanced load among all processing nodes in each cluster and then among all the clusters of a grid architecture."
},
{
"paper_title": "Formally specifying and analyzing a parallel virtual machine for lazy functional languages using Maude",
"paper_authors": [
"Georgios Fourtounis",
"Peter Csaba Ölveczky",
"Nikolaos Papaspyrou"
],
"paper_abstract": "Pure lazy functional languages are a promising programming paradigm for harvesting massive parallelism, as their abstraction features and lack of side effects support the development of modular programs without unneeded serialization. We give a new formal message passing semantics for implicitly parallel execution of a lazy functional programming language, based on the intensional transformation that converts programs in functional style to a form that can be executed in a dataflow paradigm. We use rewriting logic to define the semantics of our parallel virtual machine and we use the Maude tool to formally analyze our model. We also briefly discuss a prototype parallel implementation of our model in Erlang."
},
{
"paper_title": "Type system for a safe execution of parallel programs in BSML",
"paper_authors": [
"Frédéric Gava",
"Louis Gesbert",
"Frédéric Loulergue"
],
"paper_abstract": "BSML, or Bulk Synchronous Parallel ML, is a high-level language based on ML and dedicated to parallel computation. In this paper, an extended type system that guarantees the safety of parallel programs is presented. It prevents non-determinism and deadlocks by ensuring that the invariants needed to preserve the structured parallelism are verified. Imperative extensions (references, exceptions) are included, and the system is designed for compatibility with modules."
}
]
},
{
"proceeding_title": "HLPP '10:Proceedings of the fourth international workshop on High-level parallel programming and applications",
"proceeding_contents": [
{
"paper_title": "Calculational parallel programming: parallel programming with homomorphism and mapreduce",
"paper_authors": [
"Zhenjiang Hu"
],
"paper_abstract": "Parallel skeletons are designed to encourage programmers to build parallel programs from ready-made components for which efficient implementations are known to exist, making both parallel programming and parallelization process simpler. Homomorphism and mapReduce are two known parallel skeletons. Homomorphism, widely studied in the program calculation community for more than twenty years, ideally suits the divide-and-conquer parallel computation paradigm over lists, trees, and other general algebraic data types. In addition, it is also equipped with a set of useful theorems for manipulation of homomorphism. On the other hand, mapReduce is a relatively new skeleton but has emerged as one of the most widely used parallel programming platforms for processing data on terabyte and petabyte scales. It allows for easy parallelization of data intensive computations over many machines, and is used daily at companies such as Yahoo!, Google, Amazon, and Facebook. Despite simplicity of these two skeletons, it still remains as a challenge for a programmer to solve his nontrivial problems with these skeletons. Consider, as an example, the known maximum segment sum problem, whose task is to compute the largest possible sum of a consecutive sublists in a given list. It is actually far from being obvious how this problem can be efficiently solved with mapReduce. In this talk, I would like to show a calculational framework that can support systematic development of efficient parallel programs using homomorphism and mapReduce. Being more constructive, this calculational framework for parallel programming is not only helpful in design of efficient parallel programs, but also promising in construction of parallelizing compile.r"
},
{
"paper_title": "SymGrid-Par: a standard skeleton-based framework for computational algebra systems",
"paper_authors": [
"Phil Trinder"
],
"paper_abstract": "The SymGrid-Par framework is being developed as part of the European FP6 SCIEnce project (I3-026133) to provide a standard skeleton-based framework for parallelising large computational algebra problems in the Maple, GAP, Kant and Mupad systems. The computational algebra community uses a number of domain specific high level languages each with specific capabilities, for example GAP specialises in computations over groups. The community are keen to develop standards, to improve interoperability between computer algebra systems (CAS), and to avoid duplicating implementation effort. Algebraic computations are challenging to parallelise as they are symbolic rather than numeric, and hence require a relatively rich set of data structures. Parallel tasks are often generated dynamically, and are of highly irregular size, e.g. varying in size by 5 orders of magnitude. SymGrid-Par orchestrates sequential computational algebra (CA) components into a parallel, and possibly grid-enabled application. It provides a native skeleton-based interface to the CA programmer, so for example a GAP programmer might invoke a parallel GAP map function. There are both generic skeletons like map and reduce, and domain specific skeletons like orbit and transitive closure.. The skeletons are implemented by a coordination server that distributes the work to multiple instances of the sequential CAS on multiple processors; dynamically manages load distribution; and reassembles the results for return to the invoking CAS. The coordination server exploits the dynamic parallelism and load management capabilities of the Eden and GpH parallel Haskells. Invocations between SymGrid-Par components use our new standardised SCSCP protocol, currently supported by 7 CAS, and mathematical objects are represented in the standard XML-based OpenMath format. The generic SymGrid-Par framework delivers performance comparable with, and typically better than, a specialised parallel CAS implementation like ParGAP. Many CA problems have large task granularity, and hence SymGrid-Par gives good performance on both cluster and multicore architectures. Moreover, the standardised interface can be exploited to orchestrate multiple CAS to solve problems that cannot be solved in a single CAS. In the newly-started HPC-GAP project we are adapting the SymGrid-Par design for emerging large-scale HPC architectures. Key issues are scaling to 1 000 000 cores, controlling locality, and recovering from failures."
},
{
"paper_title": "SkePU: a multi-backend skeleton programming library for multi-GPU systems",
"paper_authors": [
"Johan Enmyren",
"Christoph W. Kessler"
],
"paper_abstract": "We present SkePU, a C++ template library which provides a simple and unified interface for specifying data-parallel computations with the help of skeletons on GPUs using CUDA and OpenCL. The interface is also general enough to support other architectures, and SkePU implements both a sequential CPU and a parallel OpenMP backend. It also supports multi-GPU systems. Copying data between the host and the GPU device memory can be a performance bottleneck. A key technique in SkePU is the implementation of lazy memory copying in the container type used to represent skeleton operands, which allows to avoid unnecessary memory transfers. We evaluate SkePU with small benchmarks and a larger application, a Runge-Kutta ODE solver. The results show that a skeleton approach to GPU programming is viable, especially when the computation burden is large compared to memory I/O (the lazy memory copying can help to achieve this). It also shows that utilizing several GPUs have a potential for performance gains. We see that SkePU offers good performance with a more complex and realistic task such as ODE solving, with up to 10 times faster run times when using SkePU with a GPU backend compared to a sequential solver running on a fast CPU."
},
{
"paper_title": "Lessons from implementing the biCGStab method with SkeTo library",
"paper_authors": [
"Kiminori Matsuzaki",
"Kento Emoto"
],
"paper_abstract": "Recent computing environments achieving high performance with parallelism call for methodology for easy parallel programming, and skeletal parallel programming is such a methodology. There have been many studies on the development of parallel skeleton libraries and optimization for skeletal programs, but not so many studies have been done about applying the skeletal parallel programming to real applications. We implemented a BiCGStab method, which is widely used for solving systems of linear equations, with parallel skeletons provided in the parallel skeleton library SkeTo. First we implemented two skeletal programs, then applied optimization techniques, and finally developed efficient skeletal programs compared with the original sequential program. Through the implementation, optimization, and experiments of the skeletal programs, we obtained several lessons for realizing efficient and easy-to-use skeleton libraries. In this paper, we report the development of skeletal programs for the BiCGStab method and summarize the lessons obtained through the process."
},
{
"paper_title": "Estimating parallel performance, a skeleton-based approach",
"paper_authors": [
"Oleg Lobachev",
"Rita Loogen"
],
"paper_abstract": "In this paper we estimate parallel execution times, based on identifying separate \"parts\" of the work done by parallel programs. We assume that programs are described using algorithmic skeletons. Therefore our runtime analysis works without any source code inspection. The time of parallel program execution is expressed in terms of the sequential work and the parallel penalty. We measure these values for different problem sizes and numbers of processors and estimate them for unknown values in both dimensions. This allows us to predict parallel execution time for unknown inputs and non-available processor numbers. Another useful application of our formalism is a measure of parallel program quality. We analyse the values for parallel penalty both for growing input size and for increasing numbers of processing elements. From these data, conclusions on parallel performance and scalability are drawn."
},
{
"paper_title": "BSP-WHY: an intermediate language for deductive verification of BSP programs",
"paper_authors": [
"Jean Fortin",
"Frédéric Gava"
],
"paper_abstract": "We present BSP-Why, a tool for verifying BSP programs. It is intended to be used as an intermediate core-language for verification tools (mainly condition generators) of BSP extensions of realistic programming languages such as C, JAVA, etc. BSP-Why is based on a sequential simulation of the BSP programs which allows to generate pure sequential codes for the back-end condition generator Why and thus benefit of its large range of existing provers - proof assistants or automatic decision procedures. In this manner, BSP-Why is able to generate proof obligations for BSP programs."
},
{
"paper_title": "Parallel greedy graph matching using an edge partitioning approach",
"paper_authors": [
"Md. Mostofa Ali Patwary",
"Rob H. Bisseling",
"Fredrik Manne"
],
"paper_abstract": "We present a parallel version of the Karp-Sipser graph matching heuristic for the maximum cardinality problem. It is bulk-synchronous, separating computation and communication, and uses an edge-based partitioning of the graph, translated from a two-dimensional partitioning of the corresponding adjacency matrix. It is shown that the communication volume of Karp-Sipser graph matching is proportional to that of parallel sparse matrix-vector multiplication (SpMV), so that efficient partitioners developed for SpMV can be used. The algorithm is presented using a small basic set of 7 message types, which are discussed in detail. Experimental results show that for most matrices, edge-based partitioning is superior to vertex-based partitioning, in terms of both parallel speedup and matching quality. Good speedups are obtained on up to 64 processors."
},
{
"paper_title": "Hybrid bulk synchronous parallelism library for clustered smp architectures",
"paper_authors": [
"Khaled Hamidouche",
"Joel Falcou",
"Daniel Etiemble"
],
"paper_abstract": "This paper presents the design and implementation of BSP++, a C++ parallel programming library based on the Bulk Synchronous Parallelism model to perform high performance computing on both SMP and SPMD architectures using OpenMPI and MPI. We show how C++ support for genericity provides a functional and intuitive user interface which still delivers a large fraction of performance compared to hand written code. We show how the library structure and programming models allow simple hybrid programming by composing BSP super-steps and letting BSP++ handling the middleware interface. The performance and scalability of this approach are then assessed by various benchmarks of classic HPC application kernels and distributed algorithms on various hybrid machines including a subset of the GRID5000 grid."
}
]
}
]
},
{
"conference_title": "Haskell",
"conference_contents": [
{
"proceeding_title": "Haskell '14:Proceedings of the 2014 ACM SIGPLAN symposium on Haskell",
"proceeding_contents": [
{
"paper_title": "Effect handlers in scope",
"paper_authors": [
"Nicolas Wu",
"Tom Schrijvers",
"Ralf Hinze"
],
"paper_abstract": "Algebraic effect handlers are a powerful means for describing effectful computations. They provide a lightweight and orthogonal technique to define and compose the syntax and semantics of different effects. The semantics is captured by handlers, which are functions that transform syntax trees. Unfortunately, the approach does not support syntax for scoping constructs, which arise in a number of scenarios. While handlers can be used to provide a limited form of scope, we demonstrate that this approach constrains the possible interactions of effects and rules out some desired semantics. This paper presents two different ways to capture scoped constructs in syntax, and shows how to achieve different semantics by reordering handlers. The first approach expresses scopes using the existing algebraic handlers framework, but has some limitations. The problem is fully solved in the second approach where we introduce higher-order syntax."
},
{
"paper_title": "Embedding effect systems in Haskell",
"paper_authors": [
"Dominic Orchard",
"Tomas Petricek"
],
"paper_abstract": "Monads are now an everyday tool in functional programming for abstracting and delimiting effects. The link between monads and effect systems is well-known, but in their typical use, monads provide a much more coarse-grained view of effects. Effect systems capture fine-grained information about the effects, but monads provide only a binary view: effectful or pure. Recent theoretical work has unified fine-grained effect systems with monads using a monad-like structure indexed by a monoid of effect annotations (called parametric effect monads). This aligns the power of monads with the power of effect systems. This paper leverages recent advances in Haskell's type system (as provided by GHC) to embed this approach in Haskell, providing user-programmable effect systems. We explore a number of practical examples that make Haskell even better and safer for effectful programming. Along the way, we relate the examples to other concepts, such as Haskell's implicit parameters and coeffects."
},
{
"paper_title": "Experience report: the next 1100 Haskell programmers",
"paper_authors": [
"Jasmin Christian Blanchette",
"Lars Hupel",
"Tobias Nipkow",
"Lars Noschinski",
"Dmitriy Traytel"
],
"paper_abstract": "We report on our experience teaching a Haskell-based functional programming course to over 1100 students for two winter terms. The syllabus was organized around selected material from various sources. Throughout the terms, we emphasized correctness through QuickCheck tests and proofs by induction. The submission architecture was coupled with automatic testing, giving students the possibility to correct mistakes before the deadline. To motivate the students, we complemented the weekly assignments with an informal competition and gave away trophies in a award ceremony."
},
{
"paper_title": "Experience report: type-checking polymorphic units for astrophysics research in Haskell",
"paper_authors": [
"Takayuki Muranushi",
"Richard A. Eisenberg"
],
"paper_abstract": "Many of the bugs in scientific programs have their roots in mistreatment of physical dimensions, via erroneous expressions in the quantity calculus. Now that the type system in the Glasgow Haskell Compiler is rich enough to support type-level integers and other promoted datatypes, we can type-check the quantity calculus in Haskell. In addition to basic dimension-aware arithmetic and unit conversions, our units library features an extensible system of dimensions and units, a notion of dimensions apart from that of units, and unit polymorphism designed to describe the laws of physics. We demonstrate the utility of units by writing an astrophysics research paper. This work is free of unit concerns because every quantity expression in the paper is rigorously type-checked."
},
{
"paper_title": "LiquidHaskell: experience with refinement types in the real world",
"paper_authors": [
"Niki Vazou",
"Eric L. Seidel",
"Ranjit Jhala"
],
"paper_abstract": "Haskell has many delightful features. Perhaps the one most beloved by its users is its type system that allows developers to specify and verify a variety of program properties at compile time. However, many properties, typically those that depend on relationships between program values are impossible, or at the very least, cumbersome to encode within the existing type system. Many such properties can be verified using a combination of Refinement Types and external SMT solvers. We describe the refinement type checker liquidHaskell, which we have used to specify and verify a variety of properties of over 10,000 lines of Haskell code from various popular libraries, including containers, hscolour, bytestring, text, vector-algorithms and xmonad. First, we present a high-level overview of liquidHaskell, through a tour of its features. Second, we present a qualitative discussion of the kinds of properties that can be checked -- ranging from generic application independent criteria like totality and termination, to application specific concerns like memory safety and data structure correctness invariants. Finally, we present a quantitative evaluation of the approach, with a view towards measuring the efficiency and programmer effort required for verification, and discuss the limitations of the approach."
},
{
"paper_title": "SmartCheck: automatic and efficient counterexample reduction and generalization",
"paper_authors": [
"Lee Pike"
],
"paper_abstract": "QuickCheck is a powerful library for automatic test-case generation. Because QuickCheck performs random testing, some of the counterexamples discovered are very large. QuickCheck provides an interface for the user to write shrink functions to attempt to reduce the size of counter examples. Hand-written implementations of shrink can be complex, inefficient, and consist of significant boilerplate code. Furthermore, shrinking is only one aspect in debugging: counterexample generalization is the process of extrapolating from individual counterexamples to a class of counterexamples, often requiring a flash of insight from the programmer. To improve counterexample reduction and generalization, we introduce SmartCheck. SmartCheck is a debugging tool that reduces algebraic data using generic search heuristics to efficiently find smaller counterexamples. In addition to shrinking, SmartCheck also automatically generalizes counterexamples to formulas representing classes of counterexamples. SmartCheck has been implemented for Haskell and is freely available."
},
{
"paper_title": "The HdpH DSLs for scalable reliable computation",
"paper_authors": [
"Patrick Maier",
"Robert Stewart",
"Phil Trinder"
],
"paper_abstract": "The statelessness of functional computations facilitates both parallelism and fault recovery. Faults and non-uniform communication topologies are key challenges for emergent large scale parallel architectures. We report on HdpH and HdpH-RS, a pair of Haskell DSLs designed to address these challenges for irregular task-parallel computations on large distributed-memory architectures. Both DSLs share an API combining explicit task placement with sophisticated work stealing. HdpH focuses on scalability by making placement and stealing topology aware whereas HdpH-RS delivers reliability by means of fault tolerant work stealing. We present operational semantics for both DSLs and investigate conditions for semantic equivalence of HdpH and HdpH-RS programs, that is, conditions under which topology awareness can be transparently traded for fault tolerance. We detail how the DSL implementations realise topology awareness and fault tolerance. We report an initial evaluation of scalability and fault tolerance on a 256-core cluster and on up to 32K cores of an HPC platform."
},
{
"paper_title": "Systems demonstration: writing NetBSD sound drivers in Haskell",
"paper_authors": [
"Kiwamu Okabe",
"Takayuki Muranushi"
],
"paper_abstract": "Most strongly typed, functional programming languages are not equipped with a reentrant garbage collector. Therefore such languages are not used for operating systems programming, where the virtues of types are most desired. We propose the use of Context-Local Heaps (CLHs) to achieve reentrancy, which also increasing the speed of garbage collection. We have implemented CLHs in Ajhc, a Haskell compiler derived from jhc, rewritten some NetBSD sound drivers using Ajhc, and benchmarked them. The reentrant, faster garbage collection that CLHs provide opens the path to type-assisted operating systems programming."
},
{
"paper_title": "A seamless, client-centric programming model for type safe web applications",
"paper_authors": [
"Anton Ekblad",
"Koen Claessen"
],
"paper_abstract": "We propose a new programming model for web applications which is (1) seamless; one program and one language is used to produce code for both client and server, (2) client-centric; the programmer takes the viewpoint of the client that runs code on the server rather than the other way around, (3) functional and type-safe, and (4) portable; everything is implemented as a Haskell library that implicitly takes care of all networking code. Our aim is to improve the painful and error-prone experience of today's standard development methods, in which clients and servers are coded in different languages and communicate with each other using ad-hoc protocols. We present the design of our library called Haste.App, an example web application that uses it, and discuss the implementation and the compiler technology on which it depends."
},
{
"paper_title": "Demo proposal: making web applications -XSafe",
"paper_authors": [
"Amit A. Levy",
"David Terei",
"Deian Stefan",
"David Maziéres"
],
"paper_abstract": "Simple is a web framework for Haskell. Simple came out of our work on Hails, a platform for secure web applications. For Hails, we needed a flexible web framework that uses no unsafe language features and can be used to build apps outside the IO monad. Unlike many mainstream web frameworks, Simple does not enforce a particular structure or paradigm. Instead, it simply provides a set of composable building blocks to help developers structure and organize their web applications. We've used Simple to build both traditional web applications as well as applications with explicit, strong safety and security guarantees. In the demonstration, we'll focus on the former -- introducing the framework and motivating it's utility for traditional web apps -- and show how we can leverage the LIO information flow control library to add mandatory security policies to apps."
},
{
"paper_title": "Building secure systems with LIO (demo)",
"paper_authors": [
"Deian Stefan",
"Amit Levy",
"Alejandro Russo",
"David Maziéres"
],
"paper_abstract": "LIO is a decentralized information flow control (DIFC) system, implemented in Haskell. In this demo proposal, we give an overview of the LIO library and show how LIO can be used to build secure systems. In particular, we show how to specify high-level security policies in the context of web applications, and describe how LIO automatically enforces these policies even in the presence of untrusted code."
},
{
"paper_title": "Promoting functions to type families in Haskell",
"paper_authors": [
"Richard A. Eisenberg",
"Jan Stolarek"
],
"paper_abstract": "Haskell, as implemented in the Glasgow Haskell Compiler (GHC), is enriched with many extensions that support type-level programming, such as promoted datatypes, kind polymorphism, and type families. Yet, the expressiveness of the type-level language remains limited. It is missing many features present at the term level, including case expressions, anonymous functions, partially-applied functions, and let expressions. In this paper, we present an algorithm - with a proof of correctness - to encode these term-level constructs at the type level. Our approach is automated and capable of promoting a wide array of functions to type families. We also highlight and discuss those term-level features that are not promotable. In so doing, we offer a critique on GHC's existing type system, showing what it is already capable of and where it may want improvement. We believe that delineating the mismatch between GHC's term level and its type level is a key step toward supporting dependently typed programming."
},
{
"paper_title": "A simple semantics for Haskell overloading",
"paper_authors": [
"J. Garrett Morris"
],
"paper_abstract": "As originally proposed, type classes provide overloading and ad-hoc definition, but can still be understood (and implemented) in terms of strictly parametric calculi. This is not true of subsequent extensions of type classes. Functional dependencies and equality constraints allow the satisfiability of predicates to refine typing; this means that the interpretations of equivalent qualified types may not be interconvertible. Overlapping instances and instance chains allow predicates to be satisfied without determining the implementations of their associated class methods, introducing truly non-parametric behavior. We propose a new approach to the semantics of type classes, interpreting polymorphic expressions by the behavior of each of their ground instances, but without requiring that those behaviors be parametrically determined. We argue that this approach both matches the intuitive meanings of qualified types and accurately models the behavior of programs."
},
{
"paper_title": "Foreign inline code: systems demonstration",
"paper_authors": [
"Manuel M.T. Chakravarty"
]
},
{
"paper_title": "Indentation-sensitive parsing for Parsec",
"paper_authors": [
"Michael D. Adams",
"Ömer S. Ağacan"
],
"paper_abstract": "Several popular languages including Haskell and Python use the indentation and layout of code as an essential part of their syntax. In the past, implementations of these languages used ad hoc techniques to implement layout. Recent work has shown that a simple extension to context-free grammars can replace these ad hoc techniques and provide both formal foundations and efficient parsing algorithms for indentation sensitivity. However, that previous work is limited to bottom-up, LR($k$) parsing, and many combinator-based parsing frameworks including Parsec use top-down algorithms that are outside its scope. This paper remedies this by showing how to add indentation sensitivity to parsing frameworks like Parsec. It explores both the formal semantics of and efficient algorithms for indentation sensitivity. It derives a Parsec-based library for indentation-sensitive parsing and presents benchmarks on a real-world language that show its efficiency and practicality."
},
{
"paper_title": "Reflection without remorse: revealing a hidden sequence to speed up monadic reflection",
"paper_authors": [
"Atze van der Ploeg",
"Oleg Kiselyov"
],
"paper_abstract": "A series of list appends or monadic binds for many monads performs algorithmically worse when left-associated. Continuation-passing style (CPS) is well-known to cure this severe dependence of performance on the association pattern. The advantage of CPS dwindles or disappears if we have to examine or modify the intermediate result of a series of appends or binds, before continuing the series. Such examination is frequently needed, for example, to control search in non-determinism monads. We present an alternative approach that is just as general as CPS but more robust: it makes series of binds and other such operations efficient regardless of the association pattern-- and also provides efficient access to intermediate results. The key is to represent such a conceptual sequence as an efficient sequence data structure. Efficient sequence data structures from the literature are homogeneous and cannot be applied as they are in a type-safe way to series of monadic binds. We generalize them to type aligned sequences and show how to construct their (assuredly order-preserving) implementations. We demonstrate that our solution solves previously undocumented, severe performance problems in iteratees, LogicT transformers, free monads and extensible effects."
}
]
},
{
"proceeding_title": "Haskell '13:Proceedings of the 2013 ACM SIGPLAN symposium on Haskell",
"proceeding_contents": [
{
"paper_title": "An EDSL approach to high performance Haskell programming",
"paper_authors": [
"Johan Ankner",
"Josef David Svenningsson"
],
"paper_abstract": "This paper argues for a new methodology for writing high performance Haskell programs by using Embedded Domain Specific Languages. We exemplify the methodology by describing a complete library, meta-repa, which is a reimplementation of parts of the repa library. The paper describes the implementation of meta-repa and contrasts it with the standard approach to writing high performance libraries. We conclude that even though the embedded language approach has an initial cost of defining the language and some syntactic overhead it gives a more tailored programming model, stronger performance guarantees, better control over optimizations, simpler implementation of fusion and inlining and allows for moving type level programming down to value level programming in some cases. We also provide benchmarks showing that meta-repa is as fast, or faster, than repa. Furthermore, meta-repa also includes push arrays and we demonstrate their usefulness for writing certain high performance kernels such as FFT."
},
{
"paper_title": "Names for free: polymorphic views of names and binders",
"paper_authors": [
"Jean-Philippe Bernardy",
"Nicolas Pouillard"
],
"paper_abstract": "We propose a novel technique to represent names and binders in Haskell. The dynamic (run-time) representation is based on de Bruijn indices, but it features an interface to write and manipulate variables conviently, using Haskell-level lambdas and variables. The key idea is to use rich types: a subterm with an additional free variable is viewed either as forallν.ν → Term(ɑ + ν) or ϶ν.ν x Term(ν.ν) depending on whether it is constructed or analysed. We demonstrate on a number of examples how this approach permits to express term construction and manipulation in a natural way, while retaining the good properties of representations based on de Bruijn indices."
},
{
"paper_title": "Understanding idiomatic traversals backwards and forwards",
"paper_authors": [
"Richard Bird",
"Jeremy Gibbons",
"Stefan Mehner",
"Janis Voigtländer",
"Tom Schrijvers"
],
"paper_abstract": "We present new ways of reasoning about a particular class of effectful Haskell programs, namely those expressed as idiomatic traversals. Starting out with a specific problem about labelling and unlabelling binary trees, we extract a general inversion law, applicable to any monad, relating a traversal over the elements of an arbitrary traversable type to a traversal that goes in the opposite direction. This law can be invoked to show that, in a suitable sense, unlabelling is the inverse of labelling. The inversion law, as well as a number of other properties of idiomatic traversals, is a corollary of a more general theorem characterising traversable functors as finitary containers: an arbitrary traversable object can be decomposed uniquely into shape and contents, and traversal be understood in terms of those. Proof of the theorem involves the properties of traversal in a special idiom related to the free applicative functor."
},
{
"paper_title": "Adding structure to monoids: thus hopefully ending Haskell's string type confusion",
"paper_authors": [
"Mario BlaĚević"
],
"paper_abstract": "This paper presents the rationale and design of monoid-subclasses. This Haskell library consists of a collection of type classes that generalize the interface of several common data types, most importantly those used to represent strings. We demonstrate that the mathematical theory behind monoid-subclasses can bring substantial practical benefits to the Haskell library ecosystem by generalizing attoparsec, one of the most popular Haskell parsing libraries."
},
{
"paper_title": "Splittable pseudorandom number generators using cryptographic hashing",
"paper_authors": [
"Koen Claessen",
"Michał H. Pałka"
],
"paper_abstract": "We propose a new splittable pseudorandom number generator (PRNG) based on a cryptographic hash function. Splittable PRNGs, in contrast to linear PRNGs, allow the creation of two (seemingly) independent generators from a given random number generator. Splittable PRNGs are very useful for structuring purely functional programs, as they avoid the need for threading around state. We show that the currently known and used splittable PRNGs are either not efficient enough, have inherent flaws, or lack formal arguments about their randomness. In contrast, our proposed generator can be implemented efficiently, and comes with a formal statements and proofs that quantify how 'random' the results are that are generated. The provided proofs give strong randomness guarantees under assumptions commonly made in cryptography."
},
{
"paper_title": "Extensible effects: an alternative to monad transformers",
"paper_authors": [
"Oleg Kiselyov",
"Amr Sabry",
"Cameron Swords"
],
"paper_abstract": "We design and implement a library that solves the long-standing problem of combining effects without imposing restrictions on their interactions (such as static ordering). Effects arise from interactions between a client and an effect handler (interpreter); interactions may vary throughout the program and dynamically adapt to execution conditions. Existing code that relies on monad transformers may be used with our library with minor changes, gaining efficiency over long monad stacks. In addition, our library has greater expressiveness, allowing for practical idioms that are inefficient, cumbersome, or outright impossible with monad transformers. Our alternative to a monad transformer stack is a single monad, for the coroutine-like communication of a client with its handler. Its type reflects possible requests, i.e., possible effects of a computation. To support arbitrary effects and their combinations, requests are values of an extensible union type, which allows adding and, notably, subtracting summands. Extending and, upon handling, shrinking of the union of possible requests is reflected in its type, yielding a type-and-effect system for Haskell. The library is lightweight, generalizing the extensible exception handling to other effects and accurately tracking them in types."
},
{
"paper_title": "Maintaining verified software",
"paper_authors": [
"Joe Leslie-Hurd"
],
"paper_abstract": "Maintaining software in the face of evolving dependencies is a challenging problem, and in addition to good release practices there is a need for automatic dependency analysis tools to avoid errors creeping in. Verified software reveals more semantic information in the form of mechanized proofs of functional specifications, and this can be used for dependency analysis. In this paper we present a scheme for automatic dependency analysis of verified software, which for each program checks that the collection of installed libraries is sufficient to guarantee its functional correctness. We illustrate the scheme with a case study of Haskell packages verified in higher order logic. The dependency analysis reduces the burden of maintaining verified Haskell packages by automatically computing version ranges for the packages they depend on, such that any combination provides the functionality required for correct operation."
},
{
"paper_title": "Hasochism: the pleasure and pain of dependently typed haskell programming",
"paper_authors": [
"Sam Lindley",
"Conor McBride"
],
"paper_abstract": "Haskell's type system has outgrown its Hindley-Milner roots to the extent that it now stretches to the basics of dependently typed programming. In this paper, we collate and classify techniques for programming with dependent types in Haskell, and contribute some new ones. In particular, through extended examples---merge-sort and rectangular tilings---we show how to exploit Haskell's constraint solver as a theorem prover, delivering code which, as Agda programmers, we envy. We explore the compromises involved in simulating variations on the theme of the dependent function space in an attempt to help programmers put dependent types to work, and to inform the evolving language design both of Haskell and of dependently typed languages more broadly."
},
{
"paper_title": "Data flow fusion with series expressions in Haskell",
"paper_authors": [
"Ben Lippmeier",
"Manuel M.T. Chakravarty",
"Gabriele Keller",
"Amos Robinson"
],
"paper_abstract": "Existing approaches to array fusion can deal with straight-line producer consumer pipelines, but cannot fuse branching data flows where a generated array is consumed by several different consumers. Branching data flows are common and natural to write, but a lack of fusion leads to the creation of an intermediate array at every branch point. We present a new array fusion system that handles branches, based on Waters's series expression framework, but extended to work in a functional setting. Our system also solves a related problem in stream fusion, namely the introduction of duplicate loop counters. We demonstrate speedup over existing fusion systems for several key examples."
},
{
"paper_title": "The Intel labs Haskell research compiler",
"paper_authors": [
"Hai Liu",
"Neal Glew",
"Leaf Petersen",
"Todd A. Anderson"
],
"paper_abstract": "The Glasgow Haskell Compiler (GHC) is a well supported optimizing compiler for the Haskell programming language, along with its own extensions to the language and libraries. Haskell's lazy semantics imposes a runtime model which is in general difficult to implement efficiently. GHC achieves good performance across a wide variety of programs via aggressive optimization taking advantage of the lack of side effects, and by targeting a carefully tuned virtual machine. The Intel Labs Haskell Research Compiler uses GHC as a frontend, but provides a new whole-program optimizing backend by compiling the GHC intermediate representation to a relatively generic functional language compilation platform. We found that GHC's external Core language was relatively easy to use, but reusing GHC's libraries and achieving full compatibility were harder. For certain classes of programs, our platform provides substantial performance benefits over GHC alone, performing 2x faster than GHC with the LLVM backend on selected modern performance-oriented benchmarks; for other classes of programs, the benefits of GHC's tuned virtual machine continue to outweigh the benefits of more aggressive whole program optimization. Overall we achieve parity with GHC with the LLVM backend. In this paper, we describe our Haskell compiler stack, its implementation and optimization approach, and present benchmark results comparing it to GHC."
},
{
"paper_title": "Monadic functional reactive programming",
"paper_authors": [
"Atze van der Ploeg"
],
"paper_abstract": "Functional Reactive Programming (FRP) is a way to program reactive systems in functional style, eliminating many of the problems that arise from imperative techniques. In this paper, we present an alternative FRP formulation that is based on the notion of a reactive computation: a monadic computation which may require the occurrence of external events to continue. A signal computation is a reactive computation that may also emit values. In contrast to signals in other FRP formulations, signal computations can end, leading to a monadic interface for sequencing signal phases. This interface has several advantages: routing is implicit, sequencing signal phases is easier and more intuitive than when using the switching combinators found in other FRP approaches, and dynamic lists require much less boilerplate code. In other FRP approaches, either the entire FRP expression is re-evaluated on each external stimulus, or impure techniques are used to prevent redundant re-computations. We show how Monadic FRP can be implemented straightforwardly in a purely functional way while preventing redundant re-computations."
},
{
"paper_title": "Mio: a high-performance multicore io manager for GHC",
"paper_authors": [
"Andreas Richard Voellmy",
"Junchang Wang",
"Paul Hudak",
"Kazuhiko Yamamoto"
],
"paper_abstract": "Haskell threads provide a key, lightweight concurrency abstraction to simplify the programming of important network applications such as web servers and software-defined network (SDN) controllers. The flagship Glasgow Haskell Compiler (GHC) introduces a run-time system (RTS) to achieve a high-performance multicore implementation of Haskell threads, by introducing effective components such as a multicore scheduler, a parallel garbage collector, an IO manager, and efficient multicore memory allocation. Evaluations of the GHC RTS, however, show that it does not scale well on multicore processors, leading to poor performance of many network applications that try to use lightweight Haskell threads. In this paper, we show that the GHC IO manager, which is a crucial component of the GHC RTS, is the scaling bottleneck. Through a series of experiments, we identify key data structure, scheduling, and dispatching bottlenecks of the GHC IO manager. We then design a new multicore IO manager named Mio that eliminates all these bottlenecks. Our evaluations show that the new Mio manager improves realistic web server throughput by 6.5x and reduces expected web server response time by 5.7x. We also show that with Mio, McNettle (an SDN controller written in Haskell) can scale effectively to 40+ cores, reach a throughput of over 20 million new requests per second on a single machine, and hence become the fastest of all existing SDN controllers."
},
{
"paper_title": "Causality of optimized Haskell: what is burning our cycles?",
"paper_authors": [
"Peter M. Wortmann",
"David Duke"
],
"paper_abstract": "Profiling real-world Haskell programs is hard, as compiler optimizations make it tricky to establish causality between the source code and program behavior. In this paper we attack the root issue by performing a causality analysis of functional programs under optimization. We apply our findings to build a novel profiling infrastructure on top of the Glasgow Haskell Compiler, allowing for performance analysis even of aggressively optimized programs."
}
]
},
{
"proceeding_title": "Haskell '12:Proceedings of the 2012 Haskell Symposium",
"proceeding_contents": [
{
"paper_title": "The HERMIT in the machine: a plugin for the interactive transformation of GHC core language programs",
"paper_authors": [
"Andrew Farmer",
"Andy Gill",
"Ed Komp",
"Neil Sculthorpe"
],
"paper_abstract": "The importance of reasoning about and refactoring programs is a central tenet of functional programming. Yet our compilers and development toolchains only provide rudimentary support for these tasks. This paper introduces a programmatic and compiler-centric interface that facilitates refactoring and equational reasoning. To develop our ideas, we have implemented HERMIT, a toolkit enabling informal but systematic transformation of Haskell programs from inside the Glasgow Haskell Compiler's optimization pipeline. With HERMIT, users can experiment with optimizations and equational reasoning, while the tedious heavy lifting of performing the actual transformations is done for them. HERMIT provides a transformation API that can be used to build higher-level rewrite tools. One use-case is prototyping new optimizations as clients of this API before being committed to the GHC toolchain. We describe a HERMIT application - a read-eval-print shell for performing transformations using HERMIT. We also demonstrate using this shell to prototype an optimization on a specific example, and report our initial experiences and remaining challenges."
},
{
"paper_title": "Template your boilerplate: using template haskell for efficient generic programming",
"paper_authors": [
"Michael D. Adams",
"Thomas M. DuBuisson"
],
"paper_abstract": "Generic programming allows the concise expression of algorithms that would otherwise require large amounts of handwritten code. A number of such systems have been developed over the years, but a common drawback of these systems is poor runtime performance relative to handwritten, non-generic code. Generic-programming systems vary significantly in this regard, but few consistently match the performance of handwritten code. This poses a dilemma for developers. Generic-programming systems offer concision but cost performance. Handwritten code offers performance but costs concision. This paper explores the use of Template Haskell to achieve the best of both worlds. It presents a generic-programming system for Haskell that provides both the concision of other generic-programming systems and the efficiency of handwritten code. Our system gives the programmer a high-level, generic-programming interface, but uses Template Haskell to generate efficient, non-generic code that outperforms existing generic-programming systems for Haskell. This paper presents the results of benchmarking our system against both handwritten code and several other generic-programming systems. In these benchmarks, our system matches the performance of handwritten code while other systems average anywhere from two to twenty times slower."
},
{
"paper_title": "Guiding parallel array fusion with indexed types",
"paper_authors": [
"Ben Lippmeier",
"Manuel Chakravarty",
"Gabriele Keller",
"Simon Peyton Jones"
],
"paper_abstract": "We present a refined approach to parallel array fusion that uses indexed types to specify the internal representation of each array. Our approach aids the client programmer in reasoning about the performance of their program in terms of the source code. It also makes the intermediate code easier to transform at compile-time, resulting in faster compilation and more reliable runtimes. We demonstrate how our new approach improves both the clarity and performance of several end-user written programs, including a fluid flow solver and an interpolator for volumetric data."
},
{
"paper_title": "Vectorisation avoidance",
"paper_authors": [
"Gabriele Keller",
"Manuel M.T. Chakravarty",
"Roman Leshchinskiy",
"Ben Lippmeier",
"Simon Peyton Jones"
],
"paper_abstract": "Flattening nested parallelism is a vectorising code transform that converts irregular nested parallelism into flat data parallelism. Although the result has good asymptotic performance, flattening thoroughly restructures the code. Many intermediate data structures and traversals are introduced, which may or may not be eliminated by subsequent optimisation. We present a novel program analysis to identify parts of the program where flattening would only introduce overhead, without appropriate gain. We present empirical evidence that avoiding vectorisation in these cases leads to more efficient programs than if we had applied vectorisation and then relied on array fusion to eliminate intermediates from the resulting code."
},
{
"paper_title": "Testing type class laws",
"paper_authors": [
"Johan Jeuring",
"Patrik Jansson",
"Cláudio Amaral"
],
"paper_abstract": "The specification of a class in Haskell often starts with stating, in comments, the laws that should be satisfied by methods defined in instances of the class, followed by the type of the methods of the class. This paper develops a framework that supports testing such class laws using QuickCheck. Our framework is a light-weight class law testing framework, which requires a limited amount of work per class law, and per datatype for which the class law is tested. We also show how to test class laws with partially-defined values. Using partially-defined values, we show that the standard lazy and strict implementations of the state monad do not satisfy the expected laws."
},
{
"paper_title": "Feat: functional enumeration of algebraic types",
"paper_authors": [
"Jonas Duregård",
"Patrik Jansson",
"Meng Wang"
],
"paper_abstract": "In mathematics, an enumeration of a set S is a bijective function from (an initial segment of) the natural numbers to S. We define \"functional enumerations\" as efficiently computable such bijections. This paper describes a theory of functional enumeration and provides an algebra of enumerations closed under sums, products, guarded recursion and bijections. We partition each enumerated set into numbered, finite subsets. We provide a generic enumeration such that the number of each part corresponds to the size of its values (measured in the number of constructors). We implement our ideas in a Haskell library called testing-feat, and make the source code freely available. Feat provides efficient \"random access\" to enumerated values. The primary application is property-based testing, where it is used to define both random sampling (for example QuickCheck generators) and exhaustive enumeration (in the style of SmallCheck). We claim that functional enumeration is the best option for automatically generating test cases from large groups of mutually recursive syntax tree types. As a case study we use Feat to test the pretty-printer of the Template Haskell library (uncovering several bugs)."
},
{
"paper_title": "None",
"paper_authors": [
"Koen Claessen"
],
"paper_abstract": "Although quantification over functions in QuickCheck properties has been supported from the beginning, displaying and shrinking them as counter examples has not. The reason is that in general, functions are infinite objects, which means that there is no sensible show function for them, and shrinking an infinite object within a finite number of steps seems impossible. This paper presents a general technique with which functions as counter examples can be shrunk to finite objects, which can then be displayed to the user. The approach turns out to be practically usable, which is shown by a number of examples. The two main limitations are that higher-order functions cannot be dealt with, and it is hard to deal with terms that contain functions as subterms."
},
{
"paper_title": "Surveyor: a DSEL for representing and analyzing strongly typed surveys",
"paper_authors": [
"Wyatt Allen",
"Martin Erwig"
],
"paper_abstract": "Polls and surveys are increasingly employed to gather information about attitudes and experiences of all kinds of populations and user groups. The ultimate purpose of a survey is to identify trends and relationships that can inform decision makers. To this end, the data gathered by a survey must be appropriately analyzed. Most of the currently existing tools focus on the user interface aspect of the data collection task, but pay little attention to the structure and type of the collected data, which are usually represented as potentially tag-annotated, but otherwise unstructured, plain text. This makes the task of writing data analysis programs often difficult and error-prone, whereas a typed data representation could support the writing of type-directed data analysis tools that would enjoy the many benefits of static typing. In this paper we present Surveyor, a DSEL that allows the compositional construction of typed surveys, where the types describe the structure of the data to be collected. A survey can be run to gather typed data, which can then be subjected to analysis tools that are built using Surveyor's typed combinators. Altogether the Surveyor DSEL realizes a strongly typed and type-directed approach to data gathering and analysis. The implementation of our DSEL is based on GADTs to allow a flexible, yet strongly typed representation of surveys. Moreover, the implementation employs the Scrap-Your-Boilerplate library to facilitate the type-dependent traversal, extraction, and combination of data gathered from surveys."
},
{
"paper_title": "Wormholes: introducing effects to FRP",
"paper_authors": [
"Daniel Winograd-Cort",
"Paul Hudak"
],
"paper_abstract": "Functional reactive programming (FRP) is a useful model for programming real-time and reactive systems in which one defines a signal function to process a stream of input values into a stream of output values. However, performing side effects (e.g. memory mutation or input/output) in this model is tricky and typically unsafe. In previous work, Winograd-Cort et al. [2012] introduced resource types and wormholes to address this problem. This paper better motivates, expands upon, and formalizes the notion of a wormhole to fully unlock its potential. We show, for example, that wormholes can be used to define the concept of causality. This in turn allows us to provide behaviors such as looping, a core component of most languages, without building it directly into the language. We also improve upon our previous design by making wormholes less verbose and easier to use. To formalize the notion of a wormhole, we define an extension to the simply typed lambda calculus, complete with typing rules and operational semantics. In addition, we present a new form of semantic transition that we call a temporal transition to specify how an FRP program behaves over time and to allow us to better reason about causality. As our model is designed for a Haskell implementation, the semantics are lazy. Finally, with the language defined, we prove that our wormholes indeed allow side effects to be performed safely in an FRP framework."
},
{
"paper_title": "None",
"paper_authors": [
"Brent A. Yorgey"
],
"paper_abstract": "The monoid is a humble algebraic structure, at first glance even downright boring. However, there's much more to monoids than meets the eye. Using examples taken from the diagrams vector graphics framework as a case study, I demonstrate the power and beauty of monoids for library design. The paper begins with an extremely simple model of diagrams and proceeds through a series of incremental variations, all related somehow to the central theme of monoids. Along the way, I illustrate the power of compositional semantics; why you should also pay attention to the monoid's even humbler cousin, the semigroup; monoid homomorphisms; and monoid actions."
},
{
"paper_title": "Dependently typed programming with singletons",
"paper_authors": [
"Richard A. Eisenberg",
"Stephanie Weirich"
],
"paper_abstract": "Haskell programmers have been experimenting with dependent types for at least a decade, using clever encodings that push the limits of the Haskell type system. However, the cleverness of these encodings is also their main drawback. Although the ideas are inspired by dependently typed programs, the code looks significantly different. As a result, GHC implementors have responded with extensions to Haskell's type system, such as GADTs, type families, and datatype promotion. However, there remains a significant difference between programming in Haskell and in full-spectrum dependently typed languages. Haskell enforces a phase separation between runtime values and compile-time types. Therefore, singleton types are necessary to express the dependency between values and types. These singleton types introduce overhead and redundancy for the programmer. This paper presents the singletons library, which generates the boilerplate code necessary for dependently typed programming using GHC. To compare with full-spectrum languages, we present an extended example based on an Agda interface for safe database access. The paper concludes with a detailed discussion on the current capabilities of GHC for dependently typed programming and suggestions for future extensions to better support this style of programming."
},
{
"paper_title": "None",
"paper_authors": [
"Wouter Swierstra"
],
"paper_abstract": "This report documents the insights gained from implementing the core functionality of xmonad, a popular window manager written in Haskell, in the Coq proof assistant. Rather than focus on verification, this report outlines the technical challenges involved with incorporating Coq code in a Haskell project."
},
{
"paper_title": "Safe haskell",
"paper_authors": [
"David Terei",
"Simon Marlow",
"Simon Peyton Jones",
"David Mazières"
],
"paper_abstract": "Though Haskell is predominantly type-safe, implementations contain a few loopholes through which code can bypass typing and module encapsulation. This paper presents Safe Haskell, a language extension that closes these loopholes. Safe Haskell makes it possible to confine and safely execute untrusted, possibly malicious code. By strictly enforcing types, Safe Haskell allows a variety of different policies from API sandboxing to information-flow control to be implemented easily as monads. Safe Haskell is aimed to be as unobtrusive as possible. It enforces properties that programmers tend to meet already by convention. We describe the design of Safe Haskell and an implementation (currently shipping with GHC) that infers safety for code that lies in a safe subset of the language. We use Safe Haskell to implement an online Haskell interpreter that can securely execute arbitrary untrusted code with no overhead. The use of Safe Haskell greatly simplifies this task and allows the use of a large body of existing code and tools."
},
{
"paper_title": "Layout-sensitive language extensibility with SugarHaskell",
"paper_authors": [
"Sebastian Erdweg",
"Felix Rieger",
"Tillmann Rendel",
"Klaus Ostermann"
],
"paper_abstract": "Programmers need convenient syntax to write elegant and concise programs. Consequently, the Haskell standard provides syntactic sugar for some scenarios (e.g., do notation for monadic code), authors of Haskell compilers provide syntactic sugar for more scenarios (e.g., arrow notation in GHC), and some Haskell programmers implement preprocessors for their individual needs (e.g., idiom brackets in SHE). But manually written preprocessors cannot scale: They are expensive, error-prone, and not composable. Most researchers and programmers therefore refrain from using the syntactic notations they need in actual Haskell programs, but only use them in documentation or papers. We present a syntactically extensible version of Haskell, SugarHaskell, that empowers ordinary programmers to implement and use custom syntactic sugar. Building on our previous work on syntactic extensibility for Java, SugarHaskell integrates syntactic extensions as sugar libraries into Haskell's module system. Syntax extensions in SugarHaskell can declare arbitrary context-free and layout-sensitive syntax. SugarHaskell modules are compiled into Haskell modules and further processed by a Haskell compiler. We provide an Eclipse-based IDE for SugarHaskell that is extensible, too, and automatically provides syntax coloring for all syntax extensions imported into a module. We have validated SugarHaskell with several case studies, including arrow notation (as implemented in GHC) and EBNF as a concise syntax for the declaration of algebraic data types with associated concrete syntax. EBNF declarations also show how to extend the extension mechanism itself: They introduce syntactic sugar for using the declared concrete syntax in other SugarHaskell modules."
}
]
},
{
"proceeding_title": "Haskell '11:Proceedings of the 4th ACM symposium on Haskell",
"proceeding_contents": [
{
"paper_title": "Extending monads with pattern matching",
"paper_authors": [
"Tomas Petricek",
"Alan Mycroft",
"Don Syme"
],
"paper_abstract": "Sequencing of effectful computations can be neatly captured using monads and elegantly written using do notation. In practice such monads often allow additional ways of composing computations, which have to be written explicitly using combinators. We identify joinads, an abstract notion of computation that is stronger than monads and captures many such ad-hoc extensions. In particular, joinads are monads with three additional operations: one of type m a -> m b -> m (a, b) captures various forms of parallel composition, one of type m a -> m a -> m a that is inspired by choice and one of type m a -> m (m a) that captures aliasing of computations. Algebraically, the first two operations form a near-semiring with commutative multiplication. We introduce docase notation that can be viewed as a monadic version of case. Joinad laws imply various syntactic equivalences of programs written using docase that are analogous to equivalences about case. Examples of joinads that benefit from the notation include speculative parallelism, waiting for a combination of user interface events, but also encoding of validation rules using the intersection of parsers."
},
{
"paper_title": "Bringing back monad comprehensions",
"paper_authors": [
"George Giorgidze",
"Torsten Grust",
"Nils Schweinsberg",
"Jeroen Weijers"
],
"paper_abstract": "This paper is about a Glasgow Haskell Compiler (GHC) extension that generalises Haskell's list comprehension notation to monads. The monad comprehension notation implemented by the extension supports generator and filter clauses, as was the case in the Haskell 1.4 standard. In addition, the extension generalises the recently proposed parallel and SQL-like list comprehension notations to monads. The aforementioned generalisations are formally defined in this paper. The extension will be available in GHC 7.2. This paper gives several instructive examples that we hope will facilitate wide adoption of the extension by the Haskell community. We also argue why the do notation is not always a good fit for monadic libraries and embedded domain-specific languages, especially for those that are based on collection monads. Should the question of how to integrate the extension into the Haskell standard arise, the paper proposes a solution to the problem that led to the removal of the monad comprehension notation from the language standard."
},
{
"paper_title": "Termination combinators forever",
"paper_authors": [
"Maximilian Bolingbroke",
"Simon Peyton Jones",
"Dimitrios Vytiniotis"
],
"paper_abstract": "We describe a library-based approach to constructing termination tests suitable for controlling termination of symbolic methods such as partial evaluation, supercompilation and theorem proving. With our combinators, all termination tests are correct by construction. We show how the library can be designed to embody various optimisations of the termination tests, which the user of the library takes advantage of entirely transparently."
},
{
"paper_title": "Hobbits for Haskell: a library for higher-order encodings in functional programming languages",
"paper_authors": [
"Edwin Westbrook",
"Nicolas Frisby",
"Paul Brauner"
],
"paper_abstract": "Adequate encodings are a powerful programming tool, which eliminate whole classes of program bugs: they ensure that a program cannot generate ill-formed data, because such data is not part of the representation; and they also ensure that a program is well-defined, meaning that it cannot have different behaviors on different representations of the same piece of data. Unfortunately, it has proven difficult to define adequate encodings of programming languages themselves. Such encodings would be very useful in language processing tools such as interpreters, compilers, model-checking tools, etc., as these systems are often difficult to get correct. The key problem in representing programming languages is in encoding binding constructs; previous approaches have serious limitations in either the operations they allow or the correcness guarantees they make. In this paper, we introduce a new library for Haskell that allows the user to define and use higher-order encodings, a powerful technique for representing bindings. Our library allows straightforward recursion on bindings using pattern-matching, which is not possible in previous approaches. We then demonstrate our library on a medium-sized example, lambda-lifting, showing how our library can be used to make strong correctness guarantees at compile time."
},
{
"paper_title": "A library writer's guide to shortcut fusion",
"paper_authors": [
"Thomas Harper"
],
"paper_abstract": "There are now a variety of shortcut fusion techniques in the wild for removing intermediate data structures in Haskell. They are often presented, however, specialised to a specific data structure and interface. This can make it difficult to transfer these techniques to other settings. In this paper, we give a roadmap for a library writer who would like to implement fusion for his own library. We explain shortcut fusion without reference to any specific implementation by treating it as an instance of data refinement. We also provide an example application of our framework using the features available in the Glasgow Haskell Compiler."
},
{
"paper_title": "Efficient parallel stencil convolution in Haskell",
"paper_authors": [
"Ben Lippmeier",
"Gabriele Keller"
],
"paper_abstract": "Stencil convolution is a fundamental building block of many scientific and image processing algorithms. We present a declarative approach to writing such convolutions in Haskell that is both efficient at runtime and implicitly parallel. To achieve this we extend our prior work on the Repa array library with two new features: partitioned and cursored arrays. Combined with careful management of the interaction between GHC and its back-end code generator LLVM, we achieve performance comparable to the standard OpenCV library."
},
{
"paper_title": "A monad for deterministic parallelism",
"paper_authors": [
"Simon Marlow",
"Ryan Newton",
"Simon Peyton Jones"
],
"paper_abstract": "We present a new programming model for deterministic parallel computation in a pure functional language. The model is monadic and has explicit granularity, but allows dynamic construction of dataflow networks that are scheduled at runtime, while remaining deterministic and pure. The implementation is based on monadic concurrency, which has until now only been used to simulate concurrency in functional languages, rather than to provide parallelism. We present the API with its semantics, and argue that parallel execution is deterministic. Furthermore, we present a complete work-stealing scheduler implemented as a Haskell library, and we show that it performs at least as well as the existing parallel programming models in Haskell."
},
{
"paper_title": "Prettier concurrency: purely functional concurrent revisions",
"paper_authors": [
"Daan Leijen",
"Manuel Fahndrich",
"Sebastian Burckhardt"
],
"paper_abstract": "This article presents an extension to the work of Launchbury and Peyton-Jones on the ST monad. Using a novel model for concurrency, called concurrent revisions [3,5], we show how we can use concurrency together with imperative mutable variables, while still being able to safely convert such computations (in the Rev monad) into pure values again. In contrast to many other transaction models, like software transactional memory (STM), concurrent revisions never use rollback and always deterministically resolve conflicts. As a consequence, concurrent revisions integrate well with side-effecting I/O operations. Using deterministic conflict resolution, concurrent revisions can deal well with situations where there are many conflicts between different threads that modify a shared data structure. We demonstrate this by describing a concurrent game with conflicting concurrent tasks."
},
{
"paper_title": "Flexible dynamic information flow control in Haskell",
"paper_authors": [
"Deian Stefan",
"Alejandro Russo",
"John C. Mitchell",
"David Mazières"
],
"paper_abstract": "We describe a new, dynamic, floating-label approach to language-based information flow control, and present an implementation in Haskell. A labeled IO monad, LIO, keeps track of a current label and permits restricted access to IO functionality, while ensuring that the current label exceeds the labels of all data observed and restricts what can be modified. Unlike other language-based work, LIO also bounds the current label with a current clearance that provides a form of discretionary access control. In addition, programs may encapsulate and pass around the results of computations with different labels. We give precise semantics and prove confidentiality and integrity properties of the system."
},
{
"paper_title": "Embedded parser generators",
"paper_authors": [
"Jonas Duregård",
"Patrik Jansson"
],
"paper_abstract": "We present a novel method of embedding context-free grammars in Haskell, and to automatically generate parsers and pretty-printers from them. We have implemented this method in a library called BNFC-meta (from the BNF Converter, which it is built on). The library builds compiler front ends using metaprogramming instead of conventional code generation. Parsers are built from labelled BNF grammars that are defined directly in Haskell modules. Our solution combines features of parser generators (static grammar checks, a highly specialised grammar DSL) and adds several features that are otherwise exclusive to combinatory libraries such as the ability to reuse, parameterise and generate grammars inside Haskell. To allow writing grammars in concrete syntax, BNFC-meta provides a quasi-quoter that can parse grammars (embedded in Haskell files) at compile time and use metaprogramming to replace them with their abstract syntax. We also generate quasi-quoters so that the languages we define with BNFC-meta can be embedded in the same way. With a minimal change to the grammar, we support adding anti-quotation to the generated quasi-quoters, which allows users of the defined language to mix concrete and abstract syntax almost seamlessly. Unlike previous methods of achieving anti-quotation, the method used by BNFC-meta is simple, efficient and avoids polluting the abstract syntax types."
},
{
"paper_title": "Towards Haskell in the cloud",
"paper_authors": [
"Jeff Epstein",
"Andrew P. Black",
"Simon Peyton-Jones"
],
"paper_abstract": "We present Cloud Haskell, a domain-specific language for developing programs for a distributed computing environment. Implemented as a shallow embedding in Haskell, it provides a message-passing communication model, inspired by Erlang, without introducing incompatibility with Haskell's established shared-memory concurrency. A key contribution is a method for serializing function closures for transmission across the network. Cloud Haskell has been implemented; we present example code and some preliminary performance measurements."
}
]
},
{
"proceeding_title": "Haskell '10:Proceedings of the third ACM Haskell symposium on Haskell",
"proceeding_contents": [
{
"paper_title": "Invertible syntax descriptions: unifying parsing and pretty printing",
"paper_authors": [
"Tillmann Rendel",
"Klaus Ostermann"
],
"paper_abstract": "Parsers and pretty-printers for a language are often quite similar, yet both are typically implemented separately, leading to redundancy and potential inconsistency. We propose a new interface of syntactic descriptions, with which both parser and pretty-printer can be described as a single program. Whether a syntactic description is used as a parser or as a pretty-printer is determined by the implementation of the interface. Syntactic descriptions enable programmers to describe the connection between concrete and abstract syntax once and for all, and use these descriptions for parsing or pretty-printing as needed. We also discuss the generalization of our programming technique towards an algebra of partial isomorphisms."
},
{
"paper_title": "The performance of the Haskell containers package",
"paper_authors": [
"Milan Straka"
],
"paper_abstract": "In this paper, we perform a thorough performance analysis of the containers package, the de facto standard Haskell containers library, comparing it to the most of existing alternatives on HackageDB. We then significantly improve its performance, making it comparable to the best implementations available. Additionally, we describe a new persistent data structure based on hashing, which offers the best performance out of available data structures containing Strings and ByteStrings."
},
{
"paper_title": "A systematic derivation of the STG machine verified in Coq",
"paper_authors": [
"Maciej Pirog",
"Dariusz Biernacki"
],
"paper_abstract": "Shared Term Graph (STG) is a lazy functional language used as an intermediate language in the Glasgow Haskell Compiler (GHC). In this article, we present a natural operational semantics for STG and we mechanically derive a lazy abstract machine from this semantics, which turns out to coincide with Peyton-Jones and Salkild's Spineless Tagless G-machine (STG machine) used in GHC. Unlike other constructions of STG-like machines present in the literature, ours is based on a systematic and scalable derivation method (inspired by Danvy et al.'s functional correspondence between evaluators and abstract machines) and it leads to an abstract machine that differs from the original STG machine only in inessential details. In particular, it handles non-trivial update scenarios and partial applications identically as the STG machine. The entire derivation has been formalized in the Coq proof assistant. Thus, in effect, we provide a machine checkable proof of the correctness of the STG machine with respect to the natural semantics."
},
{
"paper_title": "A generic deriving mechanism for Haskell",
"paper_authors": [
"José Pedro Magalhães",
"Atze Dijkstra",
"Johan Jeuring",
"Andres Löh"
],
"paper_abstract": "Haskell's deriving mechanism supports the automatic generation of instances for a number of functions. The Haskell 98 Report only specifies how to generate instances for the Eq, Ord, Enum, Bounded, Show, and Read classes. The description of how to generate instances is largely informal. The generation of instances imposes restrictions on the shape of datatypes, depending on the particular class to derive. As a consequence, the portability of instances across different compilers is not guaranteed. We propose a new approach to Haskell's deriving mechanism, which allows users to specify how to derive arbitrary class instances using standard datatype-generic programming techniques. Generic functions, including the methods from six standard Haskell 98 derivable classes, can be specified entirely within Haskell 98 plus multi-parameter type classes, making them lightweight and portable. We can also express Functor, Typeable, and many other derivable classes with our technique. We implemented our deriving mechanism together with many new derivable classes in the Utrecht Haskell Compiler."
},
{
"paper_title": "Exchanging sources between clean and Haskell: a double-edged front end for the clean compiler",
"paper_authors": [
"John van Groningen",
"Thomas van Noort",
"Peter Achten",
"Pieter Koopman",
"Rinus Plasmeijer"
],
"paper_abstract": "The functional programming languages Clean and Haskell have been around for over two decades. Over time, both languages have developed a large body of useful libraries and come with interesting language features. It is our primary goal to benefit from each other's evolutionary results by facilitating the exchange of sources between Clean and Haskell and study the forthcoming interactions between their distinct languages features. This is achieved by using the existing Clean compiler as starting point, and implementing a double-edged front end for this compiler: it supports both standard Clean 2.1 and (currently a large part of) standard Haskell 98. Moreover, it allows both languages to seamlessly use many of each other's language features that were alien to each other before. For instance, Haskell can now use uniqueness typing anywhere, and Clean can use newtypes efficiently. This has given birth to two new dialects of Clean and Haskell, dubbed Clean* and Haskell*. Additionally, measurements of the performance of the new compiler indicate that it is on par with the flagship Haskell compiler GHC."
},
{
"paper_title": "Experience report: using hackage to inform language design",
"paper_authors": [
"J. Garrett Morris"
],
"paper_abstract": "Hackage, an online repository of Haskell applications and libraries, provides a hub for programmers to both release code to and use code from the larger Haskell community. We suggest that Hackage can also serve as a valuable resource for language designers: by providing a large collection of code written by different programmers and in different styles, it allows language designers to see not just how features could be used theoretically, but how they are (and are not) used in practice. We were able to make such a use of Hackage during the design of the class system for a new Haskell-like programming language. In this paper, we sketch our language design problem, and how we used Hackage to help answer it. We describe our methodology in some detail, including both ways that it was and was not effective, and summarize our results."
},
{
"paper_title": "Nikola: embedding compiled GPU functions in Haskell",
"paper_authors": [
"Geoffrey Mainland",
"Greg Morrisett"
],
"paper_abstract": "We describe Nikola, a first-order language of array computations embedded in Haskell that compiles to GPUs via CUDA using a new set of type-directed techniques to support re-usable computations. Nikola automatically handles a range of low-level details for Haskell programmers, such as marshaling data to/from the GPU, size inference for buffers, memory management, and automatic loop parallelization. Additionally, Nikola supports both compile-time and run-time code generation, making it possible for programmers to choose when and where to specialize embedded programs."
},
{
"paper_title": "Concurrent orchestration in Haskell",
"paper_authors": [
"John Launchbury",
"Trevor Elliott"
],
"paper_abstract": "We present a concurrent scripting language embedded in Haskell, emulating the functionality of the Orc orchestration language by providing many-valued (real) non-determinism in the context of concurrent effects. We provide many examples of its use, as well as a brief description of how we use the embedded Orc DSL in practice. We describe the abstraction layers of the implementation, and use the fact that we have a layered approach to demonstrate algebraic properties satisfied by the combinators."
},
{
"paper_title": "Seq no more: better strategies for parallel Haskell",
"paper_authors": [
"Simon Marlow",
"Patrick Maier",
"Hans-Wolfgang Loidl",
"Mustafa K. Aswad",
"Phil Trinder"
],
"paper_abstract": "We present a complete redesign of evaluation strategies, a key abstraction for specifying pure, deterministic parallelism in Haskell. Our new formulation preserves the compositionality and modularity benefits of the original, while providing significant new benefits. First, we introduce an evaluation-order monad to provide clearer, more generic, and more efficient specification of parallel evaluation. Secondly, the new formulation resolves a subtle space management issue with the original strategies, allowing parallelism (sparks) to be preserved while reclaiming heap associated with superfluous parallelism. Related to this, the new formulation provides far better support for speculative parallelism as the garbage collector now prunes unneeded speculation. Finally, the new formulation provides improved compositionality: we can directly express parallelism embedded within lazy data structures, producing more compositional strategies, and our basic strategies are parametric in the coordination combinator, facilitating a richer set of parallelism combinators. We give measurements over a range of benchmarks demonstrating that the runtime overheads of the new formulation relative to the original are low, and the new strategies even yield slightly better speedups on average than the original strategies"
},
{
"paper_title": "Scalable i/o event handling for GHC",
"paper_authors": [
"Bryan O'Sullivan",
"Johan Tibell"
],
"paper_abstract": "We have developed a new, portable I/O event manager for the Glasgow Haskell Compiler (GHC) that scales to the needs of modern server applications. Our new code is transparently available to existing Haskell applications. Performance at lower concurrency levels is comparable with the existing implementation. We support millions of concurrent network connections, with millions of active timeouts, from a single multithreaded program, levels far beyond those achievable with the current I/O manager. In addition, we provide a public API to developers who need to create event-driven network applications."
},
{
"paper_title": "An llVM backend for GHC",
"paper_authors": [
"David A. Terei",
"Manuel M.T. Chakravarty"
],
"paper_abstract": "In the presence of ever-changing computer architectures, high-quality optimising compiler backends are moving targets that require specialist knowledge and sophisticated algorithms. In this paper, we explore a new backend for the Glasgow Haskell Compiler (GHC) that leverages the Low Level Virtual Machine (LLVM), a new breed of compiler written explicitly for use by other compiler writers, not high-level programmers, that promises to enable outsourcing of low-level and architecture-dependent aspects of code generation. We discuss the conceptual challenges and our backend design. We also provide an extensive quantitative evaluation of the performance of the backend and of the code it produces."
},
{
"paper_title": "Hoopl: a modular, reusable library for dataflow analysis and transformation",
"paper_authors": [
"Norman Ramsey",
"João Dias",
"Simon Peyton Jones"
],
"paper_abstract": "Dataflow analysis and transformation of control-flow graphs is pervasive in optimizing compilers, but it is typically entangled with the details of a particular compiler. We describe Hoopl, a reusable library that makes it unusually easy to define new analyses and transformations for any compiler written in Haskell. Hoopl's interface is modular and polymorphic, and it offers unusually strong static guarantees. The implementation encapsulates state-of-the-art algorithms (interleaved analysis and rewriting, dynamic error isolation), and it cleanly separates their tricky elements so that they can be understood independently."
},
{
"paper_title": "Supercompilation by evaluation",
"paper_authors": [
"Maximilian Bolingbroke",
"Simon Peyton Jones"
],
"paper_abstract": "This paper shows how call-by-need supercompilation can be recast to be based explicitly on an evaluator, contrasting with standard presentations which are specified as algorithms that mix evaluation rules with reductions that are unique to supercompilation. Building on standard operational-semantics technology for call-by-need languages, we show how to extend the supercompilation algorithm to deal with recursive let expressions."
},
{
"paper_title": "Species and functors and types, oh my!",
"paper_authors": [
"Brent A. Yorgey"
],
"paper_abstract": "The theory of combinatorial species, although invented as a purely mathematical formalism to unify much of combinatorics, can also serve as a powerful and expressive language for talking about data types. With potential applications to automatic test generation, generic programming, and language design, the theory deserves to be much better known in the functional programming community. This paper aims to teach the basic theory of combinatorial species using motivation and examples from the world of functional programming. It also introduces the species library, available on Hackage, which is used to illustrate the concepts introduced and can serve as a platform for continued study and research."
}
]
},
{
"proceeding_title": "Haskell '09:Proceedings of the 2nd ACM SIGPLAN symposium on Haskell",
"proceeding_contents": [
{
"paper_title": "Haskell Symposium Program Chair's Report",
"paper_authors": [
"Stephanie Weirich"
]
},
{
"paper_title": "The future of Haskell discussion"
},
{
"paper_title": "Tool Demonstration CLasH From Haskell to Hardware",
"paper_authors": [
"Christiaan Baaij",
"Matthijs Kooijman",
"Jan Kuper",
"Marco Gerards",
"Bert Molenkamp"
]
},
{
"paper_title": "Types are calling conventions",
"paper_authors": [
"Maximilian C. Bolingbroke",
"Simon L. Peyton Jones"
],
"paper_abstract": "It is common for compilers to derive the calling convention of a function from its type. Doing so is simple and modular but misses many optimisation opportunities, particularly in lazy, higher-order functional languages with extensive use of currying. We restore the lost opportunities by defining Strict Core, a new intermediate language whose type system makes the missing distinctions: laziness is explicit, and functions take multiple arguments and return multiple results."
},
{
"paper_title": "Losing functions without gaining data: another look at defunctionalisation",
"paper_authors": [
"Neil Mitchell",
"Colin Runciman"
],
"paper_abstract": "We describe a transformation which takes a higher-order program, and produces an equivalent first-order program. Unlike Reynolds-style defunctionalisation, it does not introduce any new data types, and the results are more amenable to subsequent analysis operations. We can use our method to improve the results of existing analysis operations, including strictness analysis, pattern-match safety and termination checking. Our transformation is implemented, and works on a Core language to which Haskell programs can be reduced. Our method cannot always succeed in removing all functional values, but in practice is remarkably successful."
},
{
"paper_title": "Push-pull functional reactive programming",
"paper_authors": [
"Conal M. Elliott"
],
"paper_abstract": "Functional reactive programming (FRP) has simple and powerful semantics, but has resisted efficient implementation. In particular, most past implementations have used demand-driven sampling, which accommodates FRP's continuous time semantics and fits well with the nature of functional programming. Consequently, values are wastefully recomputed even when inputs don't change, and reaction latency can be as high as the sampling period. This paper presents a way to implement FRP that combines data- and demand-driven evaluation, in which values are recomputed only when necessary, and reactions are nearly instantaneous. The implementation is rooted in a new simple formulation of FRP and its semantics and so is easy to understand and reason about. On the road to a new implementation, we'll meet some old friends (monoids, functors, applicative functors, monads, morphisms, and improving values) and make some new friends (functional future values, reactive normal form, and concurrent \"unambiguous choice\")."
},
{
"paper_title": "Unembedding domain-specific languages",
"paper_authors": [
"Robert Atkey",
"Sam Lindley",
"Jeremy Yallop"
],
"paper_abstract": "Higher-order abstract syntax provides a convenient way of embedding domain-specific languages, but is awkward to analyse and manipulate directly. We explore the boundaries of higher-order abstract syntax. Our key tool is the unembedding of embedded terms as de Bruijn terms, enabling intensional analysis. As part of our solution we present techniques for separating the definition of an embedded program from its interpretation, giving modular extensions of the embedded language, and different ways to encode the types of the embedded language."
},
{
"paper_title": "Lazy functional incremental parsing",
"paper_authors": [
"Jean-Philippe Bernardy"
],
"paper_abstract": "Structured documents are commonly edited using a free-form editor. Even though every string is an acceptable input, it makes sense to maintain a structured representation of the edited document. The structured representation has a number of uses: structural navigation (and optional structural editing), structure highlighting, etc. The construction of the structure must be done incrementally to be efficient: the time to process an edit operation should be proportional to the size of the change, and (ideally) independent of the total size of the document. We show that combining lazy evaluation and caching of intermediate (partial) results enables incremental parsing. We build a complete incremental parsing library for interactive systems with support for error-correction."
},
{
"paper_title": "Roll your own test bed for embedded real-time protocols: a haskell experience",
"paper_authors": [
"Lee Pike",
"Geoffrey Brown",
"Alwyn Goodloe"
],
"paper_abstract": "We present by example a new application domain for functional languages: emulators for embedded real-time protocols. As a case-study, we implement a simple emulator for the Biphase Mark Protocol, a physical-layer network protocol in Haskell. The surprising result is that a pure functional language with no built-in notion of time is extremely well-suited for constructing such emulators. Furthermore, we use Haskell's property-checker QuickCheck to automatically generate real-time parameters for simulation. We also describe a novel use of QuickCheck as a \"probability calculator\" for reliability analysis."
},
{
"paper_title": "A compositional theory for STM Haskell",
"paper_authors": [
"Johannes Borgström",
"Karthikeyan Bhargavan",
"Andrew D. Gordon"
],
"paper_abstract": "We address the problem of reasoning about Haskell programs that use Software Transactional Memory (STM). As a motivating example, we consider Haskell code for a concurrent non-deterministic tree rewriting algorithm implementing the operational semantics of the ambient calculus. The core of our theory is a uniform model, in the spirit of process calculi, of the run-time state of multi-threaded STM Haskell programs. The model was designed to simplify both local and compositional reasoning about STM programs. A single reduction relation captures both pure functional computations and also effectful computations in the STM and I/O monads. We state and prove liveness, soundness, completeness, safety, and termination properties relating source processes and their Haskell implementation. Our proof exploits various ideas from concurrency theory, such as the bisimulation technique, but in the setting of a widely used programming language rather than an abstract process calculus. Additionally, we develop an equational theory for reasoning about STM Haskell programs, and establish for the first time equations conjectured by the designers of STM Haskell. We conclude that using a pure functional language extended with STM facilitates reasoning about concurrent implementation code."
},
{
"paper_title": "Parallel performance tuning for Haskell",
"paper_authors": [
"Don Jones, Jr.",
"Simon Marlow",
"Satnam Singh"
],
"paper_abstract": "Parallel Haskell programming has entered the mainstream with support now included in GHC for multiple parallel programming models, along with multicore execution support in the runtime. However, tuning programs for parallelism is still something of a black art. Without much in the way of feedback provided by the runtime system, it is a matter of trial and error combined with experience to achieve good parallel speedups. This paper describes an early prototype of a parallel profiling system for multicore programming with GHC. The system comprises three parts: fast event tracing in the runtime, a Haskell library for reading the resulting trace files, and a number of tools built on this library for presenting the information to the programmer. We focus on one tool in particular, a graphical timeline browser called ThreadScope. The paper illustrates the use of ThreadScope through a number of case studies, and describes some useful methodologies for parallelizing Haskell programs."
},
{
"paper_title": "The architecture of the Utrecht Haskell compiler",
"paper_authors": [
"Atze Dijkstra",
"Jeroen Fokker",
"S. Doaitse Swierstra"
],
"paper_abstract": "In this paper we describe the architecture of the Utrecht Haskell Compiler (UHC). UHC is a new Haskell compiler, that supports most (but not all) Haskell 98 features, plus some experimental extensions. It targets multiple backends, including a bytecode interpreter backend and a whole-program analysis backend, both via C. The implementation is rigorously organized as stepwise transformations through some explicit intermediate languages. The tree walks of all transformations are expressed as an algebra, with the aid of an Attribute Grammar based preprocessor. The compiler is just one materialization of a framework that supports experimentation with language variants, thanks to an aspect-oriented internal organization."
},
{
"paper_title": "Alloy: fast generic transformations for Haskell",
"paper_authors": [
"Neil C.C. Brown",
"Adam T. Sampson"
],
"paper_abstract": "Data-type generic programming can be used to traverse and manipulate specific parts of large heterogeneously-typed tree structures, without the need for tedious boilerplate. Generic programming is often approached from a theoretical perspective, where the emphasis lies on the power of the representation rather than on efficiency. We describe use cases for a generic system derived from our work on a nanopass compiler, where efficiency is a real concern, and detail a new generics approach (Alloy) that we have developed in Haskell to allow our compiler passes to traverse the abstract syntax tree quickly. We benchmark our approach against several other Haskell generics approaches and statistically analyse the results, finding that Alloy is fastest on heterogeneously-typed trees."
},
{
"paper_title": "Type-safe observable sharing in Haskell",
"paper_authors": [
"Andy Gill"
],
"paper_abstract": "Haskell is a great language for writing and supporting embedded Domain Specific Languages (DSLs). Some form of observable sharing is often a critical capability for allowing so-called deep DSLs to be compiled and processed. In this paper, we describe and explore uses of an IO function for reification which allows direct observation of sharing."
},
{
"paper_title": "Finding the needle: stack traces for GHC",
"paper_authors": [
"Tristan O.R. Allwood",
"Simon Peyton Jones",
"Susan Eisenbach"
],
"paper_abstract": "Even Haskell programs can occasionally go wrong. Programs calling head on an empty list, and incomplete patterns in function definitions can cause program crashes, reporting little more than the precise location where error was ultimately called. Being told that one application of the head function in your program went wrong, without knowing which use of head went wrong can be infuriating. We present our work on adding the ability to get stack traces out of GHC, for example that our crashing head was used during the evaluation of foo, which was called during the evaluation of bar, during the evaluation of main. We provide a transformation that converts GHC Core programs into ones that pass a stack around, and a stack library that ensures bounded heap usage despite the highly recursive nature of Haskell. We call our extension to GHC StackTrace."
}
]
},
{
"proceeding_title": "Haskell '08:Proceedings of the first ACM SIGPLAN symposium on Haskell",
"proceeding_contents": [
{
"paper_title": "Lightweight monadic regions",
"paper_authors": [
"Oleg Kiselyov",
"Chung-chieh Shan"
],
"paper_abstract": "We present Haskell libraries that statically ensure the safe use of resources such as file handles. We statically prevent accessing an already closed handle or forgetting to close it. The libraries can be trivially extended to other resources such as database connections and graphic contexts. Because file handles and similar resources are scarce, we want to not just assure their safe use but further deallocate them soon after they are no longer needed. Relying on Fluet and Morrisett's [4] calculus of nested regions, we contribute a novel, improved, and extended implementation of the calculus in Haskell, with file handles as resources. Our library supports region polymorphism and implicit region subtyping, along with higher-order functions, mutable state, recursion, and run-time exceptions. A program may allocate arbitrarily many resources and dispose of them in any order, not necessarily LIFO. Region annotations are part of an expression's inferred type. Our new Haskell encoding of monadic regions as monad transformers needs no witness terms. It assures timely deallocation even when resources have markedly different lifetimes and the identity of the longest-living resource is determined only dynamically. For contrast, we also implement a Haskell library for manual resource management, where deallocation is explicit and safety is assured by a form of linear types. We implement the linear typing in Haskell with the help of phantom types and a parameterized monad to statically track the type-state of resources."
},
{
"paper_title": "A library for light-weight information-flow security in haskell",
"paper_authors": [
"Alejandro Russo",
"Koen Claessen",
"John Hughes"
],
"paper_abstract": "Protecting confidentiality of data has become increasingly important for computing systems. Information-flow techniques have been developed over the years to achieve that purpose, leading to special-purpose languages that guarantee information-flow security in programs. However, rather than producing a new language from scratch, information-flow security can also be provided as a library. This has been done previously in Haskell using the arrow framework. In this paper, we show that arrows are not necessary to design such libraries and that a less general notion, namely monads, is sufficient to achieve the same goals. We present a monadic library to provide information-flow security for Haskell programs. The library introduces mechanisms to protect confidentiality of data for pure computations, that we then easily, and modularly, extend to include dealing with side-effects. We also present combinators to dynamically enforce different declassification policies when release of information is required in a controlled manner. It is possible to enforce policies related to what, by whom, and when information is released or a combination of them. The well-known concept of monads together with the light-weight characteristic of our approach makes the library suitable to build applications where confidentiality of data is an issue."
},
{
"paper_title": "Haskell session types with (almost) no class",
"paper_authors": [
"Riccardo Pucella",
"Jesse A. Tov"
],
"paper_abstract": "We describe an implementation of session types in Haskell. Session types statically enforce that client-server communication proceeds according to protocols. They have been added to several concurrent calculi, but few implementations of session types are available. Our embedding takes advantage of Haskell where appropriate, but we rely on no exotic features. Thus our approach translates with minimal modification to other polymorphic, typed languages such as ML and Java. Our implementation works with existing Haskell concurrency mechanisms, handles multiple communication channels and recursive session types, and infers protocols automatically. While our implementation uses unsafe operations in Haskell, it does not violate Haskell's safety guarantees. We formalize this claim in a concurrent calculus with unsafe communication primitives over which we layer our implementation of session types, and we prove that the session types layer is safe. In particular, it enforces that channel-based communication follows consistent protocols."
},
{
"paper_title": "Smallcheck and lazy smallcheck: automatic exhaustive testing for small values",
"paper_authors": [
"Colin Runciman",
"Matthew Naylor",
"Fredrik Lindblad"
],
"paper_abstract": "This paper describes two Haskell libraries for property-based testing. Following the lead of QuickCheck, these testing libraries SmallCheck and Lazy SmallCheck also use type-based generators to obtain test-sets of finite values for which properties are checked, and report any counter-examples found. But instead of using a sample of randomly generated values they test properties for all values up to some limiting depth, progressively increasing this limit. The paper explains the design and implementation of both libraries and evaluates them in comparison with each other and with QuickCheck."
},
{
"paper_title": "Not all patterns, but enough: an automatic verifier for partial but sufficient pattern matching",
"paper_authors": [
"Neil Mitchell",
"Colin Runciman"
],
"paper_abstract": "We describe an automated analysis of Haskell 98 programs to check statically that, despite the possible use of partial (or non-exhaustive) pattern matching, no pattern-match failure can occur. Our method is an iterative backward analysis using a novel form of pattern-constraint to represent sets of data values. The analysis is defined for a core first-order language to which Haskell 98 programs are reduced. Our analysis tool has been successfully applied to a range of programs, and our techniques seem to scale well. Throughout the paper, methods are represented much as we have implemented them in practice, again in Haskell."
},
{
"paper_title": "Yi: an editor in haskell for haskell",
"paper_authors": [
"Jean-Philippe Bernardy"
],
"paper_abstract": "Yi is a text editor written in Haskell and extensible in Haskell. We take advantage of Haskell's expressive power to define embedded DSLs that form the foundation of the editor. In turn, these DSLs provide a flexible mechanism to create extended versions of the editor. Yi also provides some support for editing Haskell code."
},
{
"paper_title": "Haskell, do you read me?: constructing and composing efficient top-down parsers at runtime",
"paper_authors": [
"Marcos Viera",
"S. Doaitse Swierstra",
"Eelco Lempsink"
],
"paper_abstract": "The Haskell definition and implementation of read is far from perfect. In the first place read is not able to handle the associativities defined for infix operators. Furthermore, it puts constraints on the way show is defined, and especially forces it to generate far more parentheses than expected. Lastly, it may give rise to exponential parsing times. All this is due to the compositionality requirement for read functions, which imposes a top-down parsing strategy. We propose a different approach, based on typed abstract syntax, in which grammars describing the data types are composed dynamically. Using the transformation libraries described in a companion paper these syntax descriptions are combined and transformed into parsers at runtime, from which the required read function are constructed. In this way we obtain linear parsing times, achieve consistency with the defined associativities, and may use a version of show which generates far fewer parentheses, thus improving readability of printed values. The described transformation algorithms can be incorporated in a Haskell compiler, thus moving most of the work involved to compile time."
},
{
"paper_title": "Shared subtypes: subtyping recursive parametrized algebraic data types",
"paper_authors": [
"Ki Yung Ahn",
"Tim Sheard"
],
"paper_abstract": "A newtype declaration in Haskell introduces a new type renaming an existing type. The two types are viewed by the programmer as semantically different, but share the same runtime representation. When operations on the two semantic views coincide, the run-time cost of conversion between the two types is reduced to zero (in both directions) because of this common representation. We describe a new language feature called Shared Subtypes (SSubtypes), which generalizes these properties of the newtype declaration. SSubtypes allow programmers to specify subtype rules between types and sharing rules between data constructors. A value of a type T, where T is a subtype of U, can always be cast, at no cost, to value of type U. This free up-casting allows library functions that consume the supertype to be applied without cost to subtypes. Yet any semantic interpretations desired by the programmer can be enforced by the compiler. SSubtype declarations work particularly well with GADTs. GADTs use differing type indexes to make explicit semantic differences, by using a different index for each way of viewing the data. Shared subtypes allow GADTs to share the same runtime representation as a reference type, of which the GADT is a refinement."
},
{
"paper_title": "Language and program design for functional dependencies",
"paper_authors": [
"Mark P. Jones",
"Iavor S. Diatchki"
],
"paper_abstract": "Eight years ago, functional dependencies, a concept from the theory of relational databases, were proposed as a mechanism for avoiding common problems with multiple parameter type classes in Haskell. In this context, functional dependencies give programmers a means to specify the semantics of a type class more precisely, and to obtain more accurate inferred types as a result. As time passed, however, several issues were uncovered - both in the design of a language to support functional dependencies, and in the ways that programmers use them - that led some to search for new, better alternatives. This paper focusses on two related aspects of design for functional dependencies: (i) the design of language/type system extensions that implement them; and (ii) the design of programs that use them. Our goal is to clarify the issues of what functional dependencies are, how they should be used, and how the problems encountered with initial proposals and implementations can be addressed."
},
{
"paper_title": "Making monads first-class with template haskell",
"paper_authors": [
"Pericles S. Kariotis",
"Adam M. Procter",
"William L. Harrison"
],
"paper_abstract": "Monads as an organizing principle for programming and semantics are notoriously difficult to grasp, yet they are a central and powerful abstraction in Haskell. This paper introduces a domain-specific language, MonadLab, that simplifies the construction of monads, and describes its implementation in Template Haskell. MonadLab makes monad construction truly first class, meaning that arcane theoretical issues with respect to monad transformers are completely hidden from the programmer. The motivation behind the design of MonadLab is to make monadic programming in Haskell simpler while providing a tool for non-Haskell experts that will assist them in understanding this powerful abstraction."
},
{
"paper_title": "Comparing libraries for generic programming in haskell",
"paper_authors": [
"Alexey Rodriguez",
"Johan Jeuring",
"Patrik Jansson",
"Alex Gerdes",
"Oleg Kiselyov",
"Bruno C. d. S. Oliveira"
],
"paper_abstract": "Datatype-generic programming is defining functions that depend on the structure, or \"shape\", of datatypes. It has been around for more than 10 years, and a lot of progress has been made, in particular in the lazy functional programming language Haskell. There are morethan 10 proposals for generic programming libraries orlanguage extensions for Haskell. To compare and characterise the many generic programming libraries in atyped functional language, we introduce a set of criteria and develop a generic programming benchmark: a set of characteristic examples testing various facets of datatype-generic programming. We have implemented the benchmark for nine existing Haskell generic programming libraries and present the evaluation of the libraries. The comparison is useful for reaching a common standard for generic programming, but also for a programmer who has to choose a particular approach for datatype-generic programming."
},
{
"paper_title": "Clase: cursor library for a structured editor",
"paper_authors": [
"Tristan O.R. Allwood",
"Susan Eisenbach"
],
"paper_abstract": "The zipper is a well known design pattern for providing a cursor-like interface to a data structure. However, the classic treatise by Huet (1) only scratches the surface of some of the potential applications of the zipper. In this work we have taken inspiration from Huet, and built a library suitable as an underpinning for a structured editor for programming languages. We consider a zipper structure that is suitable for traversing heterogeneous data types, encoding routes to other places in the tree (for bookmark or quick-jump functionality), expressing lexically bound information using contexts, and traversals for rendering a program indicating where the cursor is currently focused in the whole."
},
{
"paper_title": "Haskell: batteries included",
"paper_authors": [
"Duncan Coutts",
"Isaac Potoczny-Jones",
"Don Stewart"
],
"paper_abstract": "The quality of a programming language itself is only one component in the ability of application writers to get the job done. Programming languages can succeed or fail based on the breadth and quality of their library collection. Over the last few years, the Haskell community has risen to the task of building the library infrastructure necessary for Haskell to succeed as a programming language suitable for writing real-world applications. This on-going work, the Cabal and Hackage effort, is built on the open source model of distributed development, and have resulted in a flowering of development in the language with more code produced and reused now than at any point in the community's history. It is easier to obtain and use Haskell code, in a wider range of environments, than ever before. This demonstration describes the infrastructure and process of Haskell development inside the Cabal/Hackage framework, including the build system, library dependency resolution, centralised publication, documentation and distribution, and how the code escapes outward into the wider software community. We survey the benefits and trade-offs in a distributed, collaborative development ecosystem and look at a proposed Haskell Platform that envisages a complete Haskell development environment, batteries included."
}
]
},
{
"proceeding_title": "Haskell '07:Proceedings of the ACM SIGPLAN workshop on Haskell workshop",
"proceeding_contents": [
{
"paper_title": "Haskell program coverage",
"paper_authors": [
"Andy Gill",
"Colin Runciman"
],
"paper_abstract": "We describe the design, implementation and use of HPC, a tool-kit to record and display Haskell Program Coverage. HPC includes tools that instrument Haskell programs to record program coverage, run instrumented programs, and display information derived from coverage data in various ways."
},
{
"paper_title": "A lightweight interactive debugger for haskell",
"paper_authors": [
"Simon Marlow",
"José Iborra",
"Bernard Pope",
"Andy Gill"
],
"paper_abstract": "This paper describes the design and construction of a Haskell source-level debugger built into the GHCi interactive environment. We have taken a pragmatic approach: the debugger is based on the traditional stop-examine-continue model of online debugging, which is simple and intuitive, but has traditionally been shunned in the context of Haskell because it exposes the lazy evaluation order. We argue that this drawback is not as severe as it may seem, and in some cases is an advantage. The design focuses on availability: our debugger is intended to work on all programs that can be compiled with GHC, and without requiring the programmer to jump through additional hoops to debug their program. The debugger has a novel approach for reconstructing the type of runtime values in a polymorphic context. Our implementation is light on complexity, and was integrated into GHC without significant upheaval."
},
{
"paper_title": "Beauty in the beast",
"paper_authors": [
"Wouter Swierstra",
"Thorsten Altenkirch"
],
"paper_abstract": "It can be very difficult to debug impure code, let alone prove its correctness. To address these problems, we provide a functional specification of three central components of Peyton Jones's awkward squad: teletype IO, mutable state, and concurrency. By constructing an internal model of such concepts within our programming language, we can test, debug, and reason about programs that perform IO as if they were pure. In particular, we demonstrate how our specifications may be used in tandem with QuickCheck to automatically test complex pointer algorithms and concurrent programs."
},
{
"paper_title": "A functional-logic library for wired",
"paper_authors": [
"Matthew Naylor",
"Emil Axelsson",
"Colin Runciman"
],
"paper_abstract": "We develop a Haskell library for functional-logic programming, motivated by the implementation of Wired, a relational embedded domain-specific language for describing and analysing digital circuits at the VLSI-layout level. Compared to a previous library for logic programming by Claessen and Ljunglöf, we support residuation, easier creation of logical data types, and pattern matching. We discuss other applications of our library, including test-data generation, and various extensions, including lazy narrowing."
},
{
"paper_title": "Uniform boilerplate and list processing",
"paper_authors": [
"Neil Mitchell",
"Colin Runciman"
],
"paper_abstract": "Generic traversals over recursive data structures are often referred to as boilerplate code. The definitions of functions involving such traversals may repeat very similar patterns, but with variations for different data types and different functionality. Libraries of operations abstracting away boilerplate code typically rely on elaborate types to make operations generic. The motivating observation for this paper is that most traversals have value-specific behaviour for just one type. We present the design of a new library exploiting this assumption. Our library allows concise expression of traversals with competitive performance."
},
{
"paper_title": "Comprehensive comprehensions",
"paper_authors": [
"Simon Peyton Jones",
"Philip Wadler"
],
"paper_abstract": "We propose an extension to list comprehensions that makes it easy to express the kind of queries one would write in SQL using ORDER BY, GROUP BY, and LIMIT. Our extension adds expressive power to comprehensions, and generalises the SQL constructs that inspired it. It is easy to implement, using simple desugaring rules."
},
{
"paper_title": "Why it's nice to be quoted: quasiquoting for haskell",
"paper_authors": [
"Geoffrey Mainland"
],
"paper_abstract": "Quasiquoting allows programmers to use domain specific syntax to construct program fragments. By providing concrete syntax for complex data types, programs become easier to read, easier to write, and easier to reason about and maintain. Haskell is an excellent host language for embedded domain specific languages, and quasiquoting ideally complements the language features that make Haskell perform so well in this area. Unfortunately, until now no Haskell compiler has provided support for quasiquoting. We present an implementation in GHC and demonstrate that by leveraging existing compiler capabilities, building a full quasiquoter requires little more work than writing a parser. Furthermore, we provide a compile-time guarantee that all quasiquoted data is type-correct."
},
{
"paper_title": "A type-preserving closure conversion in haskell",
"paper_authors": [
"Louis-Julien Guillemette",
"Stefan Monnier"
],
"paper_abstract": "The use of typed intermediate languages can significantly increase the reliability of a compiler. By type-checking the code produced at each transformation stage, one can identify bugs in the compiler that would otherwise be much harder to find. Also it guarantees that any property that was enforced by the source-level type-system is holds also or the generated code. Recently, several people have tried to push this effort a bit further by verifying formally that the compiler indeed preserves typing. This is usually done with proof assistants or experimental languages. Instead, we decided to use Haskell (with GHC's extensions), to see how far we can go with a more mainstream system, supported by robust compilers and plentiful libraries. This article presents one part of our type preserving compiler, namely the closure conversion and its associated hoisting phase, where we use GADTs to let Haskell's type checker verify the we obey the object language's typing rules and that we correctly preserve types from one phase to the other. This should be both a good showcase as well as a good stress test for GADTs, so we also discuss our experience, as well as some trade-offs in the choice of representation, namely between higher-order abstract syntax (HOAS) and a first order representation (i.e. de Bruijn indices) and justify our choice of a de Bruijn representation. We incidentally present a type preserving conversion from HOAS (used in earlier phases of the compiler[6]) to a de Bruijn representation."
},
{
"paper_title": "Demo outline: switched-on yampa",
"paper_authors": [
"George Giorgidze",
"Henrik Nilsson"
],
"paper_abstract": "In this demonstration, we present an implementation of a modular synthesizer in Haskell using Yampa. A synthesizer, be it a hardware instrument or a pure software implementation, as here, is said to be modular if it provides sound-generating and sound-shaping components that can be interconnected in arbitrary ways. Yampa, a Haskell-embedded implementation of Functional Reactive Programming, supports flexible construction of hybrid systems. Since music is a hybrid continuous-time and discrete-time phenomenon, Yampa and is a good fit for such applications, offering some unique possibilities compared to most languages targeting music or audio applications. The demonstration illustrates this point by showing how simple audio blocks can be described and then interconnected in a network with dynamically changing structure, reflecting the changing demands of a musical performance."
},
{
"paper_title": "Harpy: run-time code generation in haskell",
"paper_authors": [
"Martin Grabmüeller",
"Dirk Kleeblatt"
],
"paper_abstract": "We present Harpy, a Haskell library for run-time code generation of x86 machine code. Harpy provides efficient generation of machine code, a convenient domain specific language for generating code and a collection of code generation combinators."
},
{
"paper_title": "A shortcut fusion rule for circular program calculation",
"paper_authors": [
"João Paulo Fernandes",
"Alberto Pardo",
"João Saraiva"
],
"paper_abstract": "Circular programs are a powerful technique to express multiple traversal algorithms as a single traversal function in a lazy setting. In this paper, we present a shortcut deforestation technique to calculate circular programs. The technique we propose takes as input the composition of two functions, such that the first builds an intermediate structure and some additional context information which are then processed by the second one, to produce the final result. Our transformation into circular programs achieves intermediate structure deforestation and multiple traversal elimination. Furthermore, the calculated programs preserve the termination properties of the original ones."
},
{
"paper_title": "Lightweight concurrency primitives for GHC",
"paper_authors": [
"Peng Li",
"Simon Marlow",
"Simon Peyton Jones",
"Andrew Tolmach"
],
"paper_abstract": "The Glasgow Haskell Compiler (GHC) has quite sophisticated support for concurrency in its runtime system, which is written in low-level C code. As GHC evolves, the runtime system becomes increasingly complex, error-prone, difficult to maintain and difficult to add new concurrency features. This paper presents an alternative approach to implement concurrency in GHC. Rather than hard-wiring all kinds of concurrency features, the runtime system is a thin substrate providing only a small set of concurrency primitives, and the remaining concurrency features are implemented in software libraries written in Haskell. This design improves the safety of concurrency support; it also provides more customizability of concurrency features, which can be developed as Haskell library packages and deployed modularly."
},
{
"paper_title": "Xmonad",
"paper_authors": [
"Don Stewart",
"Spencer Sjanssen"
],
"paper_abstract": "xmonad is a tiling window manager for the X Window system, implemented, configured and dynamically extensible in Haskell. This demonstration presents the case that software dominated by side effects can be developed with the precision and efficiency we expect from Haskell by utilising purely functional data structures, an expressive type system, extended static checking and property-based testing. In addition, we describe the use of Haskell as an application configuration and extension language."
}
]
},
{
"proceeding_title": "Haskell '06:Proceedings of the 2006 ACM SIGPLAN workshop on Haskell",
"proceeding_contents": [
{
"paper_title": "RepLib: a library for derivable type classes",
"paper_authors": [
"Stephanie Weirich"
],
"paper_abstract": "Some type class instances can be automatically derived from the structure of types. As a result, the Haskell language includes the \"deriving\" mechanism to automatic generates such instances for a small number of built-in type classes. In this paper, we present RepLib, a GHC library that enables a similar mechanism for arbitrary type classes. Users of RepLib can define the relationship between the structure of a datatype and the associated instance declaration by a normal Haskell functions that pattern-matches a representation type. Furthermore, operations defined in this manner are extensible-instances for specific types not defined by type structure may also be incorporated. Finally, this library also supports the definition of operations defined by parameterized types."
},
{
"paper_title": "A generic recursion toolbox for Haskell or: scrap your boilerplate systematically",
"paper_authors": [
"Deling Ren",
"Martin Erwig"
],
"paper_abstract": "Haskell programmers who deal with complex data types often need to apply functions to specific nodes deeply nested inside of terms. Typically, implementations for those applications require so-called boilerplate code, which recursively visits the nodes and carries the functions to the places where they need to be applied. The scrap-your-boilerplate approach proposed by Lämmel and Peyton Jones tries to solve this problem by defining a general traversal design pattern that performs the traversal automatically so that the programmers can focus on the code that performs the actual transformation.In practice we often encounter applications that require variations of the recursion schema and call for more sophisticated generic traversals. Defining such traversals from scratch requires a profound understanding of the underlying mechanism and is everything but trivial.In this paper we analyze the problem domain of recursive traversal strategies, by integrating and extending previous approaches. We then extend the scrap-your-boilerplate approach by rich traversal strategies and by a combination of transformations and accumulations, which leads to a comprehensive recursive traversal library Reclib in a statically typed framework.We define a two-layer library targeted at general programmers and programmers with knowledge in traversal strategies. The highlevel interface defines a universal combinator that can be customized to different one-pass traversal strategies with different coverage and different traversal order. The lower-layer interface provides a set of primitives that can be used for defining more sophisticated traversal strategies such as fixpoint traversals. The interfaceis simple and succinct. Like the original scrap-your-boilerplate approach, it makes use of rank-2 polymorphism and functional dependencies, implemented in GHC."
},
{
"paper_title": "Strong types for relational databases",
"paper_authors": [
"Alexandra Silva",
"Joost Visser"
],
"paper_abstract": "Haskell's type system with multi-parameter constructor classes and functional dependencies allows static (compile-time) computations to be expressed by logic programming on the level of types. This emergent capability has been exploited for instance to model arbitrary-length tuples (heterogeneous lists), extensible records, functions with variable length argument lists, and (homogenous) lists of statically fixed length (vectors).We explain how type-level programming can be exploited to define a strongly-typed model of relational databases and operations on them. In particular, we present a strongly typed embedding of a significant subset of SQL in Haskell. In this model, meta-data is represented by type-level entities that guard the semantic correctness of database operations at compile time.Apart from the standard relational database operations, such as selection and join, we model functional dependencies (among table attributes), normal forms, and operations for database transformation. We show how functional dependency information can be represented at the type level, and can be transported through operations. This means that type inference statically computes functional dependencies on the result from those on the arguments.Our model shows that Haskell can be used to design and prototype typed languages for designing, programming, and transforming relational databases."
},
{
"paper_title": "Polymorphic variants in Haskell",
"paper_authors": [
"Koji Kagawa"
],
"paper_abstract": "In languages that support polymorphic variants, a single variant value can be passed to many contexts that accept different sets of constructors. Polymorphic variants can be used in order to introduce extensible algebraic datatypes into functional programming languages and are potentially useful for application domains such as interpreters, graphical user interface (GUI) libraries and database interfaces, where the number of necessary constructors cannot be determined in advance. Very few functional languages, however, have a mechanism to extend existing datatypes by adding new constructors. In general, for polymorphic variants to be useful, we would need some mechanisms to reuse existing functions and extend them for new constructors.Actually, the type system of Haskell, when extended with parametric type classes (or multi-parameter type classes with functional dependencies), has enough power not only to mimic polymorphic variants but also to extend existing functions for new constructors.This paper, first, explains how to do this in Haskell's type system (Haskell 98 with popular extensions). However, this encoding of polymorphic variants is difficult to use in practice. This is because it is quite tedious for programmers to write mimic codes by hand and because the problem of ambiguous overloading resolution would embarrass programmers. Therefore, the paper proposes an extension of Haskell's type classes that supports polymorphic variants directly. It has a novel form of instance declarations where records and variants are handled symmetrically.This type system can produce vanilla Haskell codes as a result of type inference. Therefore it behaves as a preprocessor which translates the extended language into plain Haskell. Programmers would be able to use polymorphic variants without worrying nasty problems such as ambiguities."
},
{
"paper_title": "Extended static checking for haskell",
"paper_authors": [
"Dana N. Xu"
],
"paper_abstract": "Program errors are hard to detect and are costly both to programmers who spend significant efforts in debugging, and to systems that are guarded by runtime checks. Extended static checking can reduce these costs by helping to detect bugs at compile-time, where possible. Extended static checking has been applied to objectoriented languages, like Java and C#, but it has not been applied to a lazy functional language, like Haskell. In this paper, we describe an extended static checking tool for Haskell, named ESC/Haskell, that is based on symbolic computation and assisted by a few novel strategies. One novelty is our use of Haskell as the specification language itself for pre/post conditions. Any Haskell function (including recursive and higher order functions) can be used in our specification which allows sophisticated properties to be expressed. To perform automatic verification, we rely on a novel technique based on symbolic computation that is augmented by counter-example guided unrolling. This technique can automate our verification process and be efficiently implemented."
},
{
"paper_title": "Running the manual: an approach to high-assurance microkernel development",
"paper_authors": [
"Philip Derrin",
"Kevin Elphinstone",
"Gerwin Klein",
"David Cock",
"Manuel M. T. Chakravarty"
],
"paper_abstract": "We propose a development methodology for designing and prototyping high assurance microkernels, and describe our application of it. The methodology is based on rapid prototyping and iterative refinement of the microkernel in a functional programming language. The prototype provides a precise semi-formal model, which is also combined with a machine simulator to form a reference implementation capable of executing real user-level software, to obtain accurate feedback on the suitability of the kernel API during development phases. We extract from the prototype a machine-checkable formal specification in higher-order logic, which may be used to verify properties of the design, and also results in corrections to the design without the need for full verification. We found the approach leads to productive, highly iterative development where formal modelling, semi-formal design and prototyping, and end use all contribute to a more mature final design in a shorter period of time."
},
{
"paper_title": "Strongly typed memory areas programming systems-level data structures in a functional language",
"paper_authors": [
"Iavor S. Diatchki",
"Mark P. Jones"
],
"paper_abstract": "Modern functional languages offer several attractive features to support development of reliable and secure software. However, in our efforts to use Haskell for systems programming tasks-including device driver and operating system construction-we have also encountered some significant gaps in functionality. As a result, we have been forced, either to code some non-trivial components in more traditional but unsafe languages like C or assembler, or else to adopt aspects of the foreign function interface that compromise on strong typing and type safety.In this paper, we describe how we have filled one of these gaps by extending a Haskell-like language with facilities for working directly with low-level, memory-based data structures. Using this extension, we are able to program a wide range of examples, including hardware interfaces, kernel data structures, and operating system APIs. Our design allows us to address concerns about representation, alignment, and placement (in virtual or physical address spaces) that are critical in some systems applications, but clearly beyond the scope of most existing functional languages.Our approach leverages type system features that are wellknown and widely supported in existing Haskell implementations, including kinds, multiple parameter type classes, functional dependencies, and improvement. One interesting feature is the use of a syntactic abbreviation that makes it easy to define and work with functions at the type level."
},
{
"paper_title": "User-level transactional programming in Haskell",
"paper_authors": [
"Peter Thiemann"
],
"paper_abstract": "Correct handling of concurrently accessed external resources is a demanding problem in programming. The standard approaches rely on database transactions or concurrency mechanisms like locks. The paper considers two such resources, global variables and databases, and defines transactional APIs for them in Haskell. The APIs provide a novel flavor of user-level transactions which are particularly suitable in the context of web-based systems. This suitability is demonstrated by providing a second implementation in the context of WASH, a Haskell-based Web programming system. The underlying implementation framework works for both kinds of resources and can serve as a blueprint for further implementations of user-level transactions. The Haskell type system provides an encapsulation of the transactional scope that avoids unintended breakage of the transactional guarantees."
},
{
"paper_title": "An extensible dynamically-typed hierarchy of exceptions",
"paper_authors": [
"Simon Marlow"
],
"paper_abstract": "In this paper we address the lack of extensibility of the exception type in Haskell. We propose a lightweight solution involving the use of existential types and the Typeable class only, and show how our solution allows a fully extensible hierarchy of exception types to be declared, in which a single overloaded catch operator can be used to catch either specific exception types, or exceptions belonging to any subclass in the hierarchy. We also show how to combine the existing object-oriented framework OOHaskell with our design, such that OOHaskell objects can be thrown and caught as exceptions, with full support for implicit OOHaskell subtyping in the catch operator."
},
{
"paper_title": "Interactive debugging with GHCi",
"paper_authors": [
"David Himmelstrup"
],
"paper_abstract": "With my presentation I intend to demonstrate an implementation of breakpoint combinators in GHCi. These combinators are designed to aid the debugging process of Haskell programs by halting the execution and letting the user observe variables of their choice. In contrast to the existing tools (such as Hat, Hood, Buddha and Debug. Trace), which in effect allow something similar, the combinators I will be demonstrating give the user the ability to observe the properties, not just the stringification, of variables. The combinators are a more low-level approach to the problem of debugging and do not provide as advanced features as Hat or Buddha. However, no sophisticated debugging system for Haskell has been really widely adopted by the Haskell community, primarily because they lack support for a variety of commonly used Glasgow Haskell extensions. The breakpoint combinators, on the other hand, are integrated in GHCi and work out-of-the-box with all Glasgow Haskell programs."
},
{
"paper_title": "Introducing the Haskell equational reasoning assistant",
"paper_authors": [
"Andy Gill"
],
"paper_abstract": "We introduce the new, improved version of the Haskell Equational Reasoning Assistant, which consists of an Ajax application for rewriting Haskell fragments in their context, and an API for scripting non-trivial rewrites."
},
{
"paper_title": "GenI: natural language generation in Haskell",
"paper_authors": [
"Eric Kow"
],
"paper_abstract": "In this article we present GenI, a chart based surface realisation tool implemented in Haskell. GenI takes as input a set of first order terms (the input semantics) and a grammar for a given target language (e.g., English, French, Spanish, etc.) and generates sentences in the target language, whose semantic meaning corresponds to the input semantics.The aim of the article is not so much to present GenI or to describe how it is implemented. Rather, we will focus on the aspects of functional programming (higher order functions, monads) and Haskell (typeclasses) that we found important to its design."
},
{
"paper_title": "Statically typed linear algebra in Haskell",
"paper_authors": [
"Frederik Eaton"
],
"paper_abstract": "Numerical computations are often specified in terms of operations on vectors and matrices. This is partly because it is often natural to do so; but it is also partly because, being otherwise useful, such operations have been provided very fast implementations in linear algebra libraries such as ATLAS (implementing BLAS), LAPACK, fftw, etc. Due to their better cache awareness and use of specialized processor instructions, a high-level invocation of an operation such as matrix multiplication using these libraries may execute orders of magnitude more quickly than a straightforward low-level implementation written in, say, C. The combination of efficiency and expressivity has made the framework of linear algebra an exceedingly popular one for scientists. For example, Matlab[1], an interpreter of a simple linear algebra language, has become a standard research tool in many fields.The process of expressing an algorithm in terms of matrices can be error-prone. Matlab and other popular matrix languages are dynamically-typed, which means that type errors are only detected at run-time. Even statically typed languages rarely keep track of more information in an object type than its tensor rank (e.g., to discriminate between matrices and vectors) and element type.If we could additionally expose object dimensions to the type system then we would ideally be able to detect a much wider variety of common errors at compile time than is currently possible. This would in turn make it easier to build and maintain larger numericsintensive software projects.We call our idea of exposing dimensions to the type system \"strongly typed linear algebra\". We have written a prototype implementation in Haskell, which is based on Alberto Ruiz's GSLHaskell[2] and which uses techniques from Kiselyov and Shan's \"Implicit Configurations\" paper[3].The presentation will cover the key aspects of our design such as the use of GADTs to combine matrix and vector types, and the use of higher-rank types and staging (namely, Template Haskell) to closely approximate a dependent-type system. Then, we will give a demonstration of interactive use, and compare our system to existing systems in the areas of speed and usability."
},
{
"paper_title": "Haskell' status report",
"paper_authors": [
"Isaac Jones"
],
"paper_abstract": "The Haskell programming language is more-or-less divided into two \"branches\". The Haskell 98 standard is the \"stable\" branch of the language, and that has been a big success. A lot of progress has been made over the last few years in the \"research\" branch of the Haskell language. It is constantly advancing, and we feel that it is time for a new standard which reflects those advancements. This talk is a status report from the Haskell' committee to the Haskell community."
}
]
},
{
"proceeding_title": "Haskell '05:Proceedings of the 2005 ACM SIGPLAN workshop on Haskell",
"proceeding_contents": [
{
"paper_title": "Darcs: distributed version management in haskell",
"paper_authors": [
"David Roundy"
],
"paper_abstract": "A common reaction from people who hear about darcs, the source control system I created, is that it sounds like a great tool, but it is a shame that it is written in Haskell. People think that because darcs is written in Haskell it will be a slow memory hog with very few contributors to the project. I will give a somewhat historical overview of my experiences with the Haskell language, libraries and tools.I will begin with a brief overview of the darcs advanced revision control system, how it works and how it differs from other version control systems. Then I will go through various problems and successes I have had in using the Haskell language and libraries in darcs, roughly in the order I encountered them. In the process I will give a bit of a tour through the darcs source code. In each case, I will tell about the problem I wanted to solve, what I tried, how it worked, and how it might have worked better (if that is possible)."
},
{
"paper_title": "Visual haskell: a full-featured haskell development environment",
"paper_authors": [
"Krasimir Angelov",
"Simon Marlow"
],
"paper_abstract": "We describe the design and implementation of a full-featured Haskell development environment, based on Microsoft's extensible Visual Studio environment.Visual Haskell provides a number of features not found in existing Haskell development environments: interactive error-checking, displaying of inferred types in the editor, and other features based on static properties of the source code. Visual Haskell also provides full support for developing and building multi-module Haskell projects, based on the Cabal architecture. Visual Haskell supports the full GHC language, and can be used to develop real Haskell applications (including the code of the plugin itself).Visual Haskell has driven developments in other Haskell-related projects: Cabal, the Concurrent FFI extension, and an API to allow programmatic access to GHC itself. Furthermore, development of the Visual Haskell plugin required industrial-strength foreign language interoperability; we describe all our experiences in detail."
},
{
"paper_title": "Haskell ready to dazzle the real world",
"paper_authors": [
"Martijn M. Schrage",
"Arjan van IJzendoorn",
"Linda C. van der Gaag"
],
"paper_abstract": "Haskell has proved itself to be a suitable implementation language for large software projects. Nevertheless, surprisingly few graphical end-user applications have been written in Haskell. Based on our experience with the development of the Bayesian network toolbox Dazzle, we argue that the language is indeed very well suited for writing such applications. Popular language features, such as higher-order functions, laziness, and light syntax for data structures, turn out to hold their ground in a large interactive end-user application. Haskell, combined with the truly platform-independent GUI library wxHaskell, is ready for building real-world applications."
},
{
"paper_title": "Dynamic applications from the ground up",
"paper_authors": [
"Don Stewart",
"Manuel M. T. Chakravarty"
],
"paper_abstract": "Some Lisp programs such as Emacs, but also the Linux kernel (when fully modularised) are mostly dynamic; i.e., apart from a small static core, the significant functionality is dynamically loaded. In this paper, we explore fully dynamic applications in Haskell where the static core is minimal and code is hot swappable. We demonstrate the feasibility of this architecture by two applications: Yi, an extensible editor, and Lambdabot, a plugin-based IRC robot. Benefits of the approach include hot swappable code and sophisticated application configuration and extension via embedded DSLs. We illustrate both benefits in detail at the example of a novel embedded DSL for editor interfaces."
},
{
"paper_title": "Haskell server pages through dynamic loading",
"paper_authors": [
"Niklas Broberg"
],
"paper_abstract": "Haskell Server Pages (HSP) is a domain specific language, based on Haskell, for writing dynamic web pages. Its main features are concrete XML expressions as first class values, pattern-matching on XML, and a runtime system for evaluating dynamic web pages.The first design of HSP was made by Erik Meijer and Danny van Velzen in 2000, but it was never fully designed nor implemented. In this paper we refine, extend and improve their design of the language and describe how to implement HSP using dynamic loading of pages."
},
{
"paper_title": "Haskell on a shared-memory multiprocessor",
"paper_authors": [
"Tim Harris",
"Simon Marlow",
"Simon Peyton Jones"
],
"paper_abstract": "Multi-core processors are coming, and we need ways to program them. The combination of purely-functional programming and explicit, monadic threads, communicating using transactional memory, looks like a particularly promising way to do so. This paper describes a full-scale implementation of shared-memory parallel Haskell, based on the Glasgow Haskell Compiler. Our main technical contribution is a lock-free mechanism for evaluating shared thunks that eliminates the major performance bottleneck in parallel evaluation of a lazy language. Our results are preliminary but promising: we can demonstrate wall-clock speedups of a serious application (GHC itself), even with only two processors, compared to the same application compiled for a uni-processor."
},
{
"paper_title": "Verifying haskell programs using constructive type theory",
"paper_authors": [
"Andreas Abel",
"Marcin Benke",
"Ana Bove",
"John Hughes",
"Ulf Norell"
],
"paper_abstract": "Proof assistants based on dependent type theory are closely related to functional programming languages, and so it is tempting to use them to prove the correctness of functional programs. In this paper, we show how Agda, such a proof assistant, can be used to prove theorems about Haskell programs. Haskell programs are translated into an Agda model of their semantics, by translating via GHC's Core language into a monadic form specially adapted to represent Haskell's polymorphism in Agda's predicative type system. The translation can support reasoning about either total values only, or total and partial values, by instantiating the monad appropriately. We claim that, although these Agda models are generated by a relatively complex translation process, proofs about them are simple and natural, and we offer a number of examples to support this claim."
},
{
"paper_title": "Putting curry-howard to work",
"paper_authors": [
"Tim Sheard"
],
"paper_abstract": "The Curry-Howard isomorphism states that types are propositions and that programs are proofs. This allows programmers to state and enforce invariants of programs by using types. Unfortunately, the type systems of today's functional languages cannot directly express interesting properties of programs. To alleviate this problem, we propose the addition of three new features to functional programming languages such as Haskell: Generalized Algebraic Datatypes, Extensible Kind Systems, and the generation, propagation, and discharging of Static Propositions. These three new features are backward compatible with existing features, and combine to enable a new programming paradigm for functional programmers. This paradigm makes it possible to state and enforce interesting properties of programs using the type system, and it does this in manner that leaves intact the functional programming style, known and loved by functional programmers everywhere."
},
{
"paper_title": "There and back again: arrows for invertible programming",
"paper_authors": [
"Artem Alimarine",
"Sjaak Smetsers",
"Arjen van Weelden",
"Marko van Eekelen",
"Rinus Plasmeijer"
],
"paper_abstract": "Invertible programming occurs in the area of data conversion where it is required that the conversion in one direction is the inverse of the other. For that purpose, we introduce bidirectional arrows (bi-arrows). The bi-arrow class is an extension of Haskell's arrow class with an extra combinator that changes the direction of computation.The advantage of the use of bi-arrows for invertible programming is the preservation of invertibility properties using the bi-arrow combinators. Programming with bi-arrows in a polytypic or generic way exploits this the most. Besides bidirectional polytypic examples, including invertible serialization, we give the definition of a monadic bi-arrow transformer, which we use to construct a bidirectional parser/pretty printer."
},
{
"paper_title": "TypeCase: a design pattern for type-indexed functions",
"paper_authors": [
"Bruno C. d. S. Oliveira",
"Jeremy Gibbons"
],
"paper_abstract": "A type-indexed function is a function that is defined for each member of some family of types. Haskell's type class mechanism provides collections of open type-indexed functions, in which the indexing family can be extended by defining a new type class instance but the collection of functions is fixed. The purpose of this paper is to present TypeCase: a design pattern that allows the definition of closed type-indexed functions, in which the index family is fixed but the collection of functions is extensible. It is inspired by Cheney and Hinze's work on lightweight approaches to generic programming. We generalise their techniques as a design pattern. Furthermore, we show that type-indexed functions with type-indexed types, and consequently generic functions with generic types, can also be encoded in a lightweight manner, thereby overcoming one of the main limitations of the lightweight approaches."
},
{
"paper_title": "Polymorphic string matching",
"paper_authors": [
"Richard S. Bird"
],
"paper_abstract": "Calculational developments of functional programs have been likened to conjuring tricks: enjoyable to watch but often a mystery as to how they are done. This pearl explains the trick. The aim is to give new calculations of two famous algorithms in string matching, the Knuth-Morris-Pratt algorithm and the Boyer-Moore algorithm. The string matching problem is formulated polymorphically, so the only available property of elements of the alphabet is an equality test."
},
{
"paper_title": "Halfs: a Haskell filesystem",
"paper_authors": [
"Isaac Jones"
],
"paper_abstract": "In the course of developing a web server for an embedded operating system, we had need of a filesystem which was small enough to alter to our needs and written in a high-level language so that we could show certain high assurance properties about its behavior. Since we had already ported the Haskell runtime to this operating system, Haskell was the obvious language of choice.This presentation will give a brief overview of the design principals of Halfs, present the Halfs library interface and its integration with the Linux FUSE module, and show basic tasks such as reading and writing to the filesystem."
}
]
},
{
"proceeding_title": "Haskell '04:Proceedings of the 2004 ACM SIGPLAN workshop on Haskell",
"proceeding_contents": [
{
"paper_title": "Functional pearl: i am not a number--i am a free variable",
"paper_authors": [
"Conor McBride",
"James McKinna"
],
"paper_abstract": "In this paper, we show how to manipulate syntax with binding using a mixed representation of names for free variables (with respect to the task in hand) and de Bruijn indices [5] for bound variables. By doing so, we retain the advantages of both representations: naming supports easy, arithmetic-free manipulation of terms; de Bruijn indices eliminate the need for α-conversion. Further, we have ensured that not only the user but also the implementation need never deal with de Bruijn indices, except within key basic operations.Moreover, we give a hierarchical representation for names which naturally reflects the structure of the operations we implement. Name choice is safe and straightforward. Our technology combines easily with an approach to syntax manipulation inspired by Huet's 'zippers'[10].Without the ideas in this paper, we would have struggled to implement EPIGRAM [19]. Our example-constructing inductive elimination operators for datatype families-is but one of many where it proves invaluable."
},
{
"paper_title": "Plugging Haskell in",
"paper_authors": [
"André Pang",
"Don Stewart",
"Sean Seefried",
"Manuel M. T. Chakravarty"
],
"paper_abstract": "Extension languages enable users to expand the functionality of an application without touching its source code. Commonly, these languages are dynamically typed languages, such as Lisp, Python, or domain-specific languages, which support runtime plugins via dynamic loading of components. We show that Haskell can be comfortably used as a statically typed extension language for both Haskell and foreign-language applications supported by the Haskell FFI, and that it can perform type-safe dynamic loading of plugins using dynamic types. Moreover, we discuss how plugin support is especially useful to applications where Haskell is used as an embedded domain-specific language (EDSL). We explain how to realise type-safe plugins using dynamic types, runtime compilation, and dynamic linking, exploiting infrastructure provided by the Glasgow Haskell Compiler. We demonstrate the practicability of our approach with several applications that serve as running examples."
},
{
"paper_title": "Extending the Haskell foreign function interface with concurrency",
"paper_authors": [
"Simon Marlow",
"Simon Peyton Jones",
"Wolfgang Thaller"
],
"paper_abstract": "A Haskell system that includes both the Foreign Function Interface and the Concurrent Haskell extension must consider how Concurrent Haskell threads map to external Operating System threads for the purposes of specifying in which thread a foreign call is made.Many concurrent languages take the easy route and specify a one-to-one correspondence between the language's own threads and external OS threads. However, OS threads tend to be expensive, so this choice can limit the performance and scalability of the concurrent language.The main contribution of this paper is a language design that provides a neat solution to this problem, allowing the implementor of the language enough flexibility to provide cheap lightweight threads, while still providing the programmer with control over the mapping between internal threads and external threads where necessary."
},
{
"paper_title": "Functional pearl: implicit configurations--or, type classes reflect the values of types",
"paper_authors": [
"Oleg Kiselyov",
"Chung-chieh Shan"
],
"paper_abstract": "The configurations problem is to propagate run-time preferences throughout a program, allowing multiple concurrent configuration sets to coexist safely under statically guaranteed separation. This problem is common in all software systems, but particularly acute in Haskell, where currently the most popular solution relies on unsafe operations and compiler pragmas.We solve the configurations problem in Haskell using only stable and widely implemented language features like the type-class system. In our approach, a term expression can refer to run-time configuration parameters as if they were compile-time constants in global scope. Besides supporting such intuitive term notation and statically guaranteeing separation, our solution also helps improve the program's performance by transparently dispatching to specialized code at run-time. We can propagate any type of configuration data-numbers, strings, IO actions, polymorphic functions, closures, and abstract data types. No previous approach to propagating configurations implicitly in any language provides the same static separation guarantees.The enabling technique behind our solution is to propagate values via types, with the help of polymorphic recursion and higher-rank polymorphism. The technique essentially emulates local type-class instance declarations while preserving coherence. Configuration parameters are propagated throughout the code implicitly as part of type inference rather than explicitly by the programmer. Our technique can be regarded as a portable, coherent, and intuitive alternative to implicit parameters. It motivates adding local instances to Haskell, with a restriction that salvages principal types."
},
{
"paper_title": "Programming graphics processors functionally",
"paper_authors": [
"Conal Elliott"
],
"paper_abstract": "Graphics cards for personal computers have recently undergone a radical transformation from fixed-function graphics pipelines to multi-processor, programmable architectures. Multi-processor architectures are clearly advantageous for graphics for the simple reason that graphics computations are naturally concurrent, mapping well to stateless stream processing. They therefore parallelize easily and need no random access to memory with its problematic latencies.This paper presents Vertigo, a purely functional, Haskell-embedded language for 3D graphics and an optimizing compiler that generates graphics processor code. The language integrates procedural surface modeling, shading, and texture generation, and the compiler exploits the unusual processor architecture. The shading sub-language is based on a simple and precise semantic model, in contrast to previous shading languages. Geometry and textures are also defined via a very simple denotational semantics. The formal semantics yields not only programs that are easy to understand and reason about, but also very efficient implementation, thanks to a compiler based on partial evaluation and symbolic optimization, much in the style of Pan [2].Haskell's overloading facility is extremely useful throughout Vertigo. For instance, math operators are used not just for floating point numbers, but also expressions (for differentiation and compilation), tuples, and functions. Typically, these overloadings cascade, as in the case of surfaces, which may be combined via math operators, though they are really functions over tuples of expressions on floating point numbers. Shaders may be composed with the same notational convenience. Functional dependencies are exploited for vector spaces, cross products, and derivatives."
},
{
"paper_title": "wxHaskell: a portable and concise GUI library for haskell",
"paper_authors": [
"Daan Leijen"
],
"paper_abstract": "wxHaskell is a graphical user interface (GUI) library for Haskell that is built on wxWidgets: a free industrial strength GUI library for C++ that has been ported to all major platforms, including Windows, Gtk, and MacOS X. In contrast with many other libraries, wxWidgets retains the native look-and-feel of each particular platform. We show how distinctive features of Haskell, like parametric polymorphism, higher-order functions, and first-class computations, can be used to present a concise and elegant monadic interface for portable GUI programs."
},
{
"paper_title": "Type-safe, self inspecting code",
"paper_authors": [
"Arthur I. Baars",
"S. Doaitse Swierstra"
],
"paper_abstract": "We present techniques for representing typed abstract syntax trees in the presence of observable recursive structures. The need for this arose from the desire to cope with left-recursion in combinator based parsers. The techniques employed can be used in a much wider setting however, since it enables the inspection and transformation of any program structure, which contains internal references. The hard part of the work is to perform such analyses and transformations in a setting in which the Haskell type checker is still able to statically check the correctness of the program representations, and hence the type correctness of the transformed program."
},
{
"paper_title": "Improving type error diagnosis",
"paper_authors": [
"Peter J. Stuckey",
"Martin Sulzmann",
"Jeremy Wazny"
],
"paper_abstract": "We present a number of methods for providing improved type error reports in the Haskell and Chameleon programming languages. We build upon our previous work [19] where we first introduced the idea of discovering type errors by translating the typing problem into a constraint problem and looking for minimal unsatisfiable subsets of constraints. This allowed us to find precise sets of program locations which are in conflict with each other. Here we extend this approach by extracting additional useful information from these minimal unsatisfiable sets. This allows us to report errors as conflicts amongst a number of possible, candidate types. The advantage of our approach is that it offers implementors the flexibility to employ heuristics to select where, amongst all the locations involved, an error should be reported. In addition, we present methods for providing improved subsumption and ambiguity error reporting."
},
{
"paper_title": "Haskell type browser",
"paper_authors": [
"Matthias Neubauer",
"Peter Thiemann"
]
},
{
"paper_title": "BNF converter",
"paper_authors": [
"Markus Forsberg",
"Aarne Ranta"
],
"paper_abstract": "We will demonstrate BNFC (the BNF Converter) [7, 6], a multi-lingual compiler tool. BNFC takes as its input a grammar written in LBNF (Labelled BNF) notation, and generates a compiler front-end (an abstract syntax, a lexer, and a parser). Furthermore, it generates a case skeleton usable as the starting point of back-end construction, a pretty printer, a test bench, and a LaTeX document usable as language specification. The program components can be generated in Haskell, Java, C and C++ and their standard parser and lexer tools. BNFC itself was written in Haskell.The methodology used for the generated front-end is based on Appel's books on compiler construction [3, 1, 2]. BNFC has been used as a teaching tool in compiler construction courses at Chalmers. It has also been applied to research-related programming language development, and in an industrial application producing a compiler for a telecommunications protocol description language [4].BNFC is freely available under the GPL license at its website and in the testing distribution of Debian Linux."
},
{
"paper_title": "Strongly typed heterogeneous collections",
"paper_authors": [
"Oleg Kiselyov",
"Ralf Lämmel",
"Keean Schupke"
],
"paper_abstract": "A heterogeneous collection is a datatype that is capable of storing data of different types, while providing operations for look-up, update, iteration, and others. There are various kinds of heterogeneous collections, differing in representation, invariants, and access operations. We describe HLIST - a Haskell library for strongly typed heterogeneous collections including extensible records. We illustrate HLIST's benefits in the context of type-safe database access in Haskell. The HLIST library relies on common extensions of Haskell 98. Our exploration raises interesting issues regarding Haskell's type system, in particular, avoidance of overlapping instances, and reification of type equality and type unification."
},
{
"paper_title": "Student paper: HaskellDB improved",
"paper_authors": [
"Björn Bringert",
"Anders Höckersten",
"Conny Andersson",
"Martin Andersson",
"Mary Bergman",
"Victor Blomqvist",
"Torbjörn Martin"
],
"paper_abstract": "We present an improved version of the HaskellDB database library. The original version relied on TRex, a Haskell extension supported only by the Hugs interpreter. We have replaced the use of TRex by a record implementation which uses more commonly implemented Haskell extensions.Additionally, HaskellDB now supports two different cross-platform database backends. Other changes include database creation functionality, bounded string support, performance enhancements, fixes to the optimisation logic, transaction support and more fine grained expression types."
}
]
},
{
"proceeding_title": "Haskell '03:Proceedings of the 2003 ACM SIGPLAN workshop on Haskell",
"proceeding_contents": [
{
"paper_title": "Functional Pearl trouble shared is trouble halved",
"paper_authors": [
"Richard Bird",
"Ralf Hinze"
],
"paper_abstract": "A nexus is a tree that contains shared nodes, nodes that have more than one incoming arc. Shared nodes are created in almost every functional program---for instance, when updating a purely functional data structure---though programmers are seldom aware of this. In fact, there are only a few algorithms that exploit sharing of nodes consciously. One example is constructing a tree in sublinear time. In this pearl we discuss an intriguing application of nexuses; we show that they serve admirably as memo structures featuring constant time access to memoized function calls. Along the way we encounter Boolean lattices and binomial trees."
},
{
"paper_title": "The Yampa arcade",
"paper_authors": [
"Antony Courtney",
"Henrik Nilsson",
"John Peterson"
],
"paper_abstract": "Simulated worlds are a common (and highly lucrative) application domain that stretches from detailed simulation of physical systems to elaborate video game fantasies. We believe that Functional Reactive Programming (FRP) provides just the right level of functionality to develop simulated worlds in a concise, clear and modular way. We demonstrate the use of FRP in this domain by presenting an implementation of the classic \"Space Invaders\" game in Yampa, our most recent Haskell-embedded incarnation of FRP."
},
{
"paper_title": "XML templates and caching in WASH",
"paper_authors": [
"Peter Thiemann"
],
"paper_abstract": "Caching of documents is an important concern on the Web. It is a major win in all situations where bandwidth is limited. Unfortunately, the increasing spread of dynamically generated documents seriously hampers traditional caching techniques in browsers and on proxy servers.WASH/CGI is a Haskell-based domain specific language for creating interactive Web applications. The Web pages generated by a WASH/CGI application are highly dynamic and cannot be cached with traditional means.We show how to implement the dynamic caching scheme of the BigWig language [2] in the context of WASH/CGI. The main issue in BigWig's caching scheme is the distinction between fixed parts (that should be cached) and variable parts (that need not be cached) of a document. Since BigWig is a standalone domain-specific language, its compiler can perform the distinction as part of its static analysis. Hence, the challenge in our implementation is to obtain the same information without involving the compiler. To this end, we extend WASH/CGI's document language by mode annotations and define the translation of the resulting annotated document language into JavaScript.To alleviate the awkwardness of programming directly in annotated language, we have defined a surface syntax in the style of HSP (Haskell Server Pages) [11]."
},
{
"paper_title": "Tool support for refactoring functional programs",
"paper_authors": [
"Huiqing Li",
"Claus Reinke",
"Simon Thompson"
],
"paper_abstract": "Refactorings are source-to-source program transformations which change program structure and organisation, but not program functionality. Documented in catalogues and supported by tools, refactoring provides the means to adapt and improve the design of existing code, and has thus enabled the trend towards modern agile software development processes. Refactoring has taken a prominent place in software development and maintenance, but most of this recent success has taken place in the OO and XP communities.In our project, we explore the prospects for 'Refactoring Functional Programs', taking Haskell as a concrete case-study. This paper discusses the variety of pragmatic and implementation issues raised by our work on the Haskell Refactorer. We briefly introduce the ideas behind refactoring, and a set of elementary functional refactorings. The core of the paper then outlines the main challenges that arise from our aim to produce practical tools for a decidedly non-toy language, summarizes our experience in trying to establish the necessary meta-programming infrastructure and gives an implementation overview of our current prototype refactoring tool. Using Haskell as our implementation language, we also offer some preliminary comments on Haskell programming-in-the-large."
},
{
"paper_title": "Modeling quantum computing in Haskell",
"paper_authors": [
"Amr Sabry"
],
"paper_abstract": "The paper develops a model of quantum computing from the perspective of functional programming. The model explains the fundamental ideas of quantum computing at a level of abstraction that is familiar to functional programmers. The model also illustrates some of the inherent difficulties in interpreting quantum mechanics and highlights the differences between quantum computing and traditional (functional or otherwise) computing models."
},
{
"paper_title": "Structure and interpretation of quantum mechanics: a functional framework",
"paper_authors": [
"Jerzy Karczmarczuk"
],
"paper_abstract": "We present a framework for representing quantum entities in Haskell. States and operators are functional objects, and their semantics is defined --- as far as possible --- independently of the base in the Hilbert space. We construct effectively the tensor states for composed systems, and we present a toy model of quantum circuit toolbox. We conclude that functional languages are right tools for formal computations in quantum physics. The paper focuses mainly on the representation, not on computational problems."
},
{
"paper_title": "Helium, for learning Haskell",
"paper_authors": [
"Bastiaan Heeren",
"Daan Leijen",
"Arjan van IJzendoorn"
],
"paper_abstract": "Helium is a user-friendly compiler designed especially for learning the functional programming language Haskell. The quality of the error messages has been the main concern both in the choice of the language features and in the implementation of the compiler. Helium implements almost full Haskell, where the most notable difference is the absence of type classes. Our goal is to let students learn functional programming more quickly and with more fun. The compiler has been successfully employed in two introductory programming courses at Utrecht University."
},
{
"paper_title": "Interactive type debugging in Haskell",
"paper_authors": [
"Peter J. Stuckey",
"Martin Sulzmann",
"Jeremy Wazny"
],
"paper_abstract": "In this paper we illustrate the facilities for type debugging of Haskell programs in the Chameleon programming environment. Chameleon provides an extension to Haskell supporting advanced and programmable type extensions. Chameleon maps the typing problem for a program to a system of constraints each attached to program code that generates the constraints. We use reasoning about constraint satisfiability and implication to find minimal justifications of type errors, and to explain unexpected types that arise. Through an interactive process akin to declarative debugging, a user can track down exactly where a type error occurs. The approach handles Hindley/Milner types with Haskell-style overloading. The Chameleon system provides a full implementation of our flexible type debugging scheme which can be used as a front-end to any existing Haskell system."
},
{
"paper_title": "HsDebug: debugging lazy programs by not being lazy",
"paper_authors": [
"Robert Ennals",
"Simon Peyton Jones"
]
},
{
"paper_title": "Haskell and principal types",
"paper_authors": [
"Karl-Filip Faxén"
],
"paper_abstract": "This paper points out two problems which prevent Haskell from having principal types. For each problem, we discuss a program which exhibits it. The first problem has to do with type signatures and class constraints containing both generic and nongeneric type variables. The second problem is caused by the monomorphism restriction. In both cases there is an interaction between generalization and class constraints where substituting a nonvariable type for a constrained type variable makes the constraint tautological and opens up for more aggressive generalization.We also discuss how these problems can be solved by introducing quantified class constraints and strengthening the monomorphism restriction from prohibiting only the generalization of constrained type variables to prohibiting any generalization at all. We also give an inference algorithm producing principal types for the new system."
},
{
"paper_title": "Simulating quantified class constraints",
"paper_authors": [
"Valery Trifonov"
],
"paper_abstract": "Defining nontrivial class instances for irregular and exponential datatypes in Haskell is challenging, and as a solution it has been proposed to extend the language with quantified class constraints of the form ∀a. C a ⇒ C' (f a) in the contexts of instance declarations. We show how to express the equivalent of such constraints in vanilla Haskell 98, but their utility in this language is limited. We also present a more flexible solution, which relies on a widely-supported language extension."
},
{
"paper_title": "Haskell tools from the programatica project",
"paper_authors": [
"Thomas Hallgren"
]
}
]
},
{
"proceeding_title": "Haskell '02:Proceedings of the 2002 ACM SIGPLAN workshop on Haskell",
"proceeding_contents": [
{
"paper_title": "Template meta-programming for Haskell",
"paper_authors": [
"Tim Sheard",
"Simon Peyton Jones"
],
"paper_abstract": "We propose a new extension to the purely functional programming language Haskell that supports compile-time meta-programming. The purpose of the system is to support the algorithmic construction of programs at compile-time.The ability to generate code at compile time allows the programmer to implement such features as polytypic programs, macro-like expansion, user directed optimization (such as inlining), and the generation of supporting data structures and functions from existing data structures and functions.Our design is being implemented in the Glasgow Haskell Compiler, ghc."
},
{
"paper_title": "A formal specification of the Haskell 98 module system",
"paper_authors": [
"Iavor S. Diatchki",
"Mark P. Jones",
"Thomas Hallgren"
],
"paper_abstract": "Many programming languages provide means to split large programs into smaller modules. The module system of a language specifies what constitutes a module and how modules interact.This paper presents a formal specification of the module system for the functional programming language Haskell. Although many aspects of Haskell have been subjected to formal analysis, the module system has, to date, been described only informally as part of the Haskell language report. As a result, some aspects of it are not well understood or are under-specified; this causes difficulties in reasoning about Haskell programs, and leads to practical problems such as inconsistencies between different implementations. One significant aspect of our work is that the specification is written in Haskell, which means that it can also be used as an executable test-bed, and as a starting point for Haskell implementers."
},
{
"paper_title": "A recursive do for Haskell",
"paper_authors": [
"Levent Erkök",
"John Launchbury"
],
"paper_abstract": "Certain programs making use of monads need to perform recursion over the values of monadic actions. Although the do-notation of Haskell provides a convenient framework for monadic programming, it lacks the generality to support such recursive bindings. In this paper, we describe an enhanced translation schema for the donotation and its integration into Haskell. The new translation allows variables to be bound recursively, provided the underlying monad comes equipped with an appropriate fixed-point operator."
},
{
"paper_title": "Eager Haskell: resource-bounded execution yields efficient iteration",
"paper_authors": [
"Jan-Willem Maessen"
],
"paper_abstract": "The advantages of the Haskell programming language are rooted in its clean equational semantics. Those advantages evaporate as soon as programmers try to write simple iterative computations and discover that their code must be annotated with calls to seq in order to overcome space leaks introduced by lazy evaluation. The Eager Haskell compiler executes Haskell programs eagerly by default, i.e., bindings and function arguments are evaluated before bodies. When resource bounds are exceeded, computation falls back and is restarted lazily. By using a hybrid of eager and lazy evaluation, we preserve the semantics of Haskell and yet permit efficient iteration."
},
{
"paper_title": "Functional reactive programming, continued",
"paper_authors": [
"Henrik Nilsson",
"Antony Courtney",
"John Peterson"
],
"paper_abstract": "Functional Reactive Programming (FRP) extends a host programming language with a notion of time flow. Arrowized FRP (AFRP) is a version of FRP embedded in Haskell based on the arrow combinators. AFRP is a powerful synchronous dataflow programming language with hybrid modeling capabilities, combining advanced synchronous dataflow features with the higher-order lazy functional abstractions of Haskell. In this paper, we describe the AFRP programming style and our Haskell-based implementation. Of particular interest are the AFRP combinators that support dynamic collections and continuation-based switching. We show how these combinators can be used to express systems with an evolving structure that are difficult to model in more traditional dataflow languages."
},
{
"paper_title": "Testing monadic code with QuickCheck",
"paper_authors": [
"Koen Claessen",
"John Hughes"
],
"paper_abstract": "QuickCheck is a previously published random testing tool for Haskell programs. In this paper we show how to use it for testing monadic code, and in particular imperative code written using the ST monad. QuickCheck tests a program against a specification: we show that QuickCheck's specification language is sufficiently powerful to represent common forms of specifications: algebraic, model-based (both functional and relational), and pre-/post-conditional. Moreover, all these forms of specification can be used directly for testing. We define a new language of monadic properties, and make a link between program testing and the notion of observational equivalence."
},
{
"paper_title": "Haddock, a Haskell documentation tool",
"paper_authors": [
"Simon Marlow"
],
"paper_abstract": "This paper describes Haddock, a tool for automatically generating documentation from Haskell source code. Haddock's unique approach to source code annotations provides a useful separation between the implementation of a library and the interface (and hence also the documentation) of that library, so that as far as possible the documentation annotations in the source code do not affect the programmer's freedom over the structure of the implementation. The internal structure and implementation of Haddock is also discussed."
},
{
"paper_title": "A lightweight implementation of generics and dynamics",
"paper_authors": [
"James Cheney",
"Ralf Hinze"
],
"paper_abstract": "The recent years have seen a number of proposals for extending statically typed languages by dynamics or generics. Most proposals --- if not all --- require significant extensions to the underlying language. In this paper we show that this need not be the case. We propose a particularly lightweight extension that supports both dynamics and generics. Furthermore, the two features are smoothly integrated: dynamic values, for instance, can be passed to generic functions. Our proposal makes do with a standard Hindley-Milner type system augmented by existential types. Building upon these ideas we have implemented a small library that is readily usable both with Hugs and with the Glasgow Haskell compiler."
},
{
"paper_title": "Techniques for embedding postfix languages in Haskell",
"paper_authors": [
"Chris Okasaki"
],
"paper_abstract": "One popular use for Haskell in recent years has been as a host language for domain-specific embedded languages. But how can one embed a postfix language in Haskell, given that Haskell only supports prefix and infix syntax? This paper describes several such embeddings for increasingly complex postfix languages."
}
]
}
]
},
{
"conference_title": "International Conference on Functional Programmuing",
"conference_contents": [
{
"proceeding_title": "ICFP '14:Proceedings of the 19th ACM SIGPLAN international conference on Functional programming",
"proceeding_contents": [
{
"paper_title": "Using formal methods to enable more secure vehicles: DARPA's HACMS program",
"paper_authors": [
"Kathleen Fisher"
],
"paper_abstract": "Networked embedded systems are ubiquitous in modern society. Examples include SCADA systems that manage physical infrastructure, medical devices such as pacemakers and insulin pumps, and vehicles such as airplanes and automobiles. Such devices are connected to networks for a variety of compelling reasons, including the ability to access diagnostic information conveniently, perform software updates, provide innovative features, and lower costs. Researchers and hackers have shown that these kinds of networked embedded systems are vulnerable to remote attacks and that such attacks can cause physical damage and can be hidden from monitors [1, 4]. DARPA launched the HACMS program to create technology to make such systems dramatically harder to attack successfully. Specifically, HACMS is pursuing a clean-slate, formal methods-based approach to the creation of high-assurance vehicles, where high assurance is defined to mean functionally correct and satisfying appropriate safety and security properties. Specific technologies include program synthesis, domain-specific languages, and theorem provers used as program development environments. Targeted software includes operating system components such as hypervisors, microkernels, file systems, and device drivers as well as control systems such as autopilots and adaptive cruise controls. Program researchers are leveraging existing high-assurance software including NICTA's seL4 microkernel and INRIA's CompCert compiler. Although the HACMS project is less than halfway done, the program has already achieved some remarkable success. At program kick-off, a Red Team easily hijacked the baseline open-source quadcopter that HACMS researchers are using as a research platform. At the end of eighteen months, the Red Team was not able to hijack the newly-minted \"SMACCMCopter\" running high-assurance HACMS code, despite being given six weeks and full access to the source code of the copter. An expert in penetration testing called the SMACCMCopter \"the most secure UAV on the planet.\" In this talk, I will describe the HACMS program: its motivation, the underlying technologies, current results, and future directions."
},
{
"paper_title": "Building embedded systems with embedded DSLs",
"paper_authors": [
"Patrick C. Hickey",
"Lee Pike",
"Trevor Elliott",
"James Bielman",
"John Launchbury"
],
"paper_abstract": "We report on our experiences in synthesizing a fully-featured autopilot from embedded domain-specific languages (EDSLs) hosted in Haskell. The autopilot is approximately 50k lines of C code generated from 10k lines of EDSL code and includes control laws, mode logic, encrypted communications system, and device drivers. The autopilot was built in less than two engineer years. This is the story of how EDSLs provided the productivity and safety gains to do large-scale low-level embedded programming and lessons we learned in doing so."
},
{
"paper_title": "Concurrent NetCore: from policies to pipelines",
"paper_authors": [
"Cole Schlesinger",
"Michael Greenberg",
"David Walker"
],
"paper_abstract": "In a Software-Defined Network (SDN), a central, computationally powerful controller manages a set of distributed, computationally simple switches. The controller computes a policy describing how each switch should route packets and populates packet-processing tables on each switch with rules to enact the routing policy. As network conditions change, the controller continues to add and remove rules from switches to adjust the policy as needed. Recently, the SDN landscape has begun to change as several proposals for new, reconfigurable switching architectures, such as RMT [5] and FlexPipe [14] have emerged. These platforms provide switch programmers with many, flexible tables for storing packet-processing rules, and they offer programmers control over the packet fields that each table can analyze and act on. These reconfigurable switch architectures support a richer SDN model in which a switch configuration phase precedes the rule population phase [4]. In the configuration phase, the controller sends the switch a graph describing the layout and capabilities of the packet processing tables it will require during the population phase. Armed with this foreknowledge, the switch can allocate its hardware (or software) resources more efficiently. We present a new, typed language, called Concurrent NetCore, for specifying routing policies and graphs of packet-processing tables. Concurrent NetCore includes features for specifying sequential, conditional and concurrent control-flow between packet-processing tables. We develop a fine-grained operational model for the language and prove this model coincides with a higher-level denotational model when programs are well-typed. We also prove several additional properties of well-typed programs, including strong normalization and determinism. To illustrate the utility of the language, we develop linguistic models of both the RMT and FlexPipe architectures and we give a multi-pass compilation algorithm that translates graphs and routing policies to the RMT model."
},
{
"paper_title": "SeLINQ: tracking information across application-database boundaries",
"paper_authors": [
"Daniel Schoepe",
"Daniel Hedin",
"Andrei Sabelfeld"
],
"paper_abstract": "The root cause for confidentiality and integrity attacks against computing systems is insecure information flow. The complexity of modern systems poses a major challenge to secure end-to-end information flow, ensuring that the insecurity of a single component does not render the entire system insecure. While information flow in a variety of languages and settings has been thoroughly studied in isolation, the problem of tracking information across component boundaries has been largely out of reach of the work so far. This is unsatisfactory because tracking information across component boundaries is necessary for end-to-end security. This paper proposes a framework for uniform tracking of information flow through both the application and the underlying database. Key enabler of the uniform treatment is recent work by Cheney et al., which studies database manipulation via an embedded language-integrated query language (with Microsoft's LINQ on the backend). Because both the host language and the embedded query languages are functional F#-like languages, we are able to leverage information-flow enforcement for functional languages to obtain information-flow control for databases \"for free\", synergize it with information-flow control for applications and thus guarantee security across application-database boundaries. We develop the formal results in the form of a security type system that includes a treatment of algebraic data types and pattern matching, and establish its soundness. On the practical side, we implement the framework and demonstrate its usefulness in a case study with a realistic movie rental database."
},
{
"paper_title": "Type-based parametric analysis of program families",
"paper_authors": [
"Sheng Chen",
"Martin Erwig"
],
"paper_abstract": "Previous research on static analysis for program families has focused on lifting analyses for single, plain programs to program families by employing idiosyncratic representations. The lifting effort typically involves a significant amount of work for proving the correctness of the lifted algorithm and demonstrating its scalability. In this paper, we propose a parameterized static analysis framework for program families that can automatically lift a class of type-based static analyses for plain programs to program families. The framework consists of a parametric logical specification and a parametric variational constraint solver. We prove that a lifted algorithm is correct provided that the underlying analysis algorithm is correct. An evaluation of our framework has revealed an error in a previous manually lifted analysis. Moreover, performance tests indicate that the overhead incurred by the general framework is bounded by a factor of 2."
},
{
"paper_title": "Romeo: a system for more flexible binding-safe programming",
"paper_authors": [
"Paul Stansifer",
"Mitchell Wand"
],
"paper_abstract": "Current languages for safely manipulating values with names only support term languages with simple binding syntax. As a result, no tools exist to safely manipulate code written in those languages for which name problems are the most challenging. We address this problem with Romeo, a language that respects α-equivalence on its values, and which has access to a rich specification language for binding, inspired by attribute grammars. Our work has the complex-binding support of David Herman's λm, but is a full-fledged binding-safe language like Pure FreshML."
},
{
"paper_title": "Maximal sharing in the Lambda calculus with letrec",
"paper_authors": [
"Clemens Grabmayer",
"Jan Rochel"
],
"paper_abstract": "Increasing sharing in programs is desirable to compactify the code, and to avoid duplication of reduction work at run-time, thereby speeding up execution. We show how a maximal degree of sharing can be obtained for programs expressed as terms in the lambda calculus with letrec. We introduce a notion of 'maximal compactness' for λletrec-terms among all terms with the same infinite unfolding. Instead of defined purely syntactically, this notion is based on a graph semantics. λletrec-terms are interpreted as first-order term graphs so that unfolding equivalence between terms is preserved and reflected through bisimilarity of the term graph interpretations. Compactness of the term graphs can then be compared via functional bisimulation. We describe practical and efficient methods for the following two problems: transforming a λletrec-term into a maximally compact form; and deciding whether two λletrec-terms are unfolding-equivalent. The transformation of a λletrec-terms L into maximally compact form L0 proceeds in three steps: (i) translate L into its term graph G = [[L]] ; (ii) compute the maximally shared form of G as its bisimulation collapse G0 ; (iii) read back a λletrec-term L0 from the term graph G0 with the property [[L0]] = G0. Then L0 represents a maximally shared term graph, and it has the same unfolding as L. The procedure for deciding whether two given λletrec-terms L1 and L2 are unfolding-equivalent computes their term graph interpretations [[L1]] and [[L2]], and checks whether these are bisimilar. For illustration, we also provide a readily usable implementation."
},
{
"paper_title": "Practical and effective higher-order optimizations",
"paper_authors": [
"Lars Bergstrom",
"Matthew Fluet",
"Matthew Le",
"John Reppy",
"Nora Sandler"
],
"paper_abstract": "Inlining is an optimization that replaces a call to a function with that function's body. This optimization not only reduces the overhead of a function call, but can expose additional optimization opportunities to the compiler, such as removing redundant operations or unused conditional branches. Another optimization, copy propagation, replaces a redundant copy of a still-live variable with the original. Copy propagation can reduce the total number of live variables, reducing register pressure and memory usage, and possibly eliminating redundant memory-to-memory copies. In practice, both of these optimizations are implemented in nearly every modern compiler. These two optimizations are practical to implement and effective in first-order languages, but in languages with lexically-scoped first-class functions (aka, closures), these optimizations are not available to code programmed in a higher-order style. With higher-order functions, the analysis challenge has been that the environment at the call site must be the same as at the closure capture location, up to the free variables, or the meaning of the program may change. Olin Shivers' 1991 dissertation called this family of optimizations superΒ and he proposed one analysis technique, called reflow, to support these optimizations. Unfortunately, reflow has proven too expensive to implement in practice. Because these higher-order optimizations are not available in functional-language compilers, programmers studiously avoid uses of higher-order values that cannot be optimized (particularly in compiler benchmarks). This paper provides the first practical and effective technique for superΒ (higher-order) inlining and copy propagation, which we call unchanged variable analysis. We show that this technique is practical by implementing it in the context of a real compiler for an ML-family language and showing that the required analyses have costs below 3% of the total compilation time. This technique's effectiveness is shown through a set of benchmarks and example programs, where this analysis exposes additional potential optimization sites."
},
{
"paper_title": "Worker/wrapper/makes it/faster",
"paper_authors": [
"Jennifer Hackett",
"Graham Hutton"
],
"paper_abstract": "Much research in program optimization has focused on formal approaches to correctness: proving that the meaning of programs is preserved by the optimisation. Paradoxically, there has been comparatively little work on formal approaches to efficiency: proving that the performance of optimized programs is actually improved. This paper addresses this problem for a general-purpose optimization technique, the worker/wrapper transformation. In particular, we use the call-by-need variant of improvement theory to establish conditions under which the worker/wrapper transformation is formally guaranteed to preserve or improve the time performance of programs in lazy languages such as Haskell."
},
{
"paper_title": "Compositional semantics for composable continuations: from abortive to delimited control",
"paper_authors": [
"Paul Downen",
"Zena M. Ariola"
],
"paper_abstract": "Parigot's λμ-calculus, a system for computational reasoning about classical proofs, serves as a foundation for control operations embodied by operators like Scheme's callcc. We demonstrate that the call-by-value theory of the λμ-calculus contains a latent theory of delimited control, and that a known variant of λμ which unshackles the syntax yields a calculus of composable continuations from the existing constructs and rules for classical control. To relate to the various formulations of control effects, and to continuation-passing style, we use a form of compositional program transformations which preserves the underlying structure of equational theories, contexts, and substitution. Finally, we generalize the call-by-name and call-by-value theories of the λμ-calculus by giving a single parametric theory that encompasses both, allowing us to generate a call-by-need instance that defines a calculus of classical and delimited control with lazy evaluation and sharing."
},
{
"paper_title": "Coeffects: a calculus of context-dependent computation",
"paper_authors": [
"Tomas Petricek",
"Dominic Orchard",
"Alan Mycroft"
],
"paper_abstract": "The notion of context in functional languages no longer refers just to variables in scope. Context can capture additional properties of variables (usage patterns in linear logics; caching requirements in dataflow languages) as well as additional resources or properties of the execution environment (rebindable resources; platform version in a cross-platform application). The recently introduced notion of coeffects captures the latter, whole-context properties, but it failed to capture fine-grained per-variable properties. We remedy this by developing a generalized coeffect system with annotations indexed by a coeffect shape. By instantiating a concrete shape, our system captures previously studied flat (whole-context) coeffects, but also structural (per-variable) coeffects, making coeffect analyses more useful. We show that the structural system enjoys desirable syntactic properties and we give a categorical semantics using extended notions of indexed comonad. The examples presented in this paper are based on analysis of established language features (liveness, linear logics, dataflow, dynamic scoping) and we argue that such context-aware properties will also be useful for future development of languages for increasingly heterogeneous and distributed platforms."
},
{
"paper_title": "Behavioral software contracts",
"paper_authors": [
"Robert Bruce Findler"
],
"paper_abstract": "Programmers embrace contracts. They can use the language they know and love to formulate logical assertions about the behavior of their programs. They can use the existing IDE infrastructure to log contracts, to test, to debug, and to profile their programs. The keynote presents the challenges and rewards of supporting contracts in a modern, full-spectrum programming language. It covers technical challenges of contracts while demonstrating the non-technical motivation for contract system design choices and showing how contracts and contract research can serve practicing programmers. The remainder of this article is a literature survey of contract research, with an emphasis on recent work about higher-order contracts and blame."
},
{
"paper_title": "Soft contract verification",
"paper_authors": [
"Phúc C. Nguyen",
"Sam Tobin-Hochstadt",
"David Van Horn"
],
"paper_abstract": "Behavioral software contracts are a widely used mechanism for governing the flow of values between components. However, run-time monitoring and enforcement of contracts imposes significant overhead and delays discovery of faulty components to run-time. To overcome these issues, we present soft contract verification, which aims to statically prove either complete or partial contract correctness of components, written in an untyped, higher-order language with first-class contracts. Our approach uses higher-order symbolic execution, leveraging contracts as a source of symbolic values including unknown behavioral values, and employs an updatable heap of contract invariants to reason about flow-sensitive facts. We prove the symbolic execution soundly approximates the dynamic semantics and that verified programs can't be blamed. The approach is able to analyze first-class contracts, recursive data structures, unknown functions, and control-flow-sensitive refinements of values, which are all idiomatic in dynamic languages. It makes effective use of an off-the-shelf solver to decide problems without heavy encodings. The approach is competitive with a wide range of existing tools - including type systems, flow analyzers, and model checkers - on their own benchmarks."
},
{
"paper_title": "On teaching *how to design programs*: observations from a newcomer",
"paper_authors": [
"Norman Ramsey"
],
"paper_abstract": "This paper presents a personal, qualitative case study of a first course using How to Design Programs and its functional teaching languages. The paper reconceptualizes the book's six-step design process as an eight-step design process ending in a new \"review and refactor\" step. It recommends specific approaches to students' difficulties with function descriptions, function templates, data examples, and other parts of the design process. It connects the process to interactive \"world programs.\" It recounts significant, informative missteps in course design and delivery. Finally, it identifies some unsolved teaching problems and some potential solutions."
},
{
"paper_title": "SML# in industry: a practical ERP system development",
"paper_authors": [
"Atsushi Ohori",
"Katsuhiro Ueno",
"Kazunori Hoshi",
"Shinji Nozaki",
"Takashi Sato",
"Tasuku Makabe",
"Yuki Ito"
],
"paper_abstract": "This paper reports on our industry-academia project of using a functional language in business software production. The general motivation behind the project is our ultimate goal of adopting an ML-style higher-order typed functional language in a wide range of ordinary software development in industry. To probe the feasibility and identify various practical problems and needs, we have conducted a 15 month pilot project for developing an enterprise resource planning (ERP) system in SML#. The project has successfully completed as we have planned, demonstrating the feasibility of SML#. In particular, seamless integration of SQL and direct C language interface are shown to be useful in reliable and efficient development of a data intensive business application. During the program development, we have found several useful functional programming patterns and a number of possible extensions of an ML-style language with records. This paper reports on the project details and the lessons learned from the project."
},
{
"paper_title": "Lem: reusable engineering of real-world semantics",
"paper_authors": [
"Dominic P. Mulligan",
"Scott Owens",
"Kathryn E. Gray",
"Tom Ridge",
"Peter Sewell"
],
"paper_abstract": "Recent years have seen remarkable successes in rigorous engineering: using mathematically rigorous semantic models (not just idealised calculi) of real-world processors, programming languages, protocols, and security mechanisms, for testing, proof, analysis, and design. Building these models is challenging, requiring experimentation, dialogue with vendors or standards bodies, and validation; their scale adds engineering issues akin to those of programming to the task of writing clear and usable mathematics. But language and tool support for specification is lacking. Proof assistants can be used but bring their own difficulties, and a model produced in one, perhaps requiring many person-years effort and maintained over an extended period, cannot be used by those familiar with another. We introduce Lem, a language for engineering reusable large-scale semantic models. The Lem design takes inspiration both from functional programming languages and from proof assistants, and Lem definitions are translatable into OCaml for testing, Coq, HOL4, and Isabelle/HOL for proof, and LaTeX and HTML for presentation. This requires a delicate balance of expressiveness, careful library design, and implementation of transformations - akin to compilation, but subject to the constraint of producing usable and human-readable code for each target. Lem's effectiveness is demonstrated by its use in practice."
},
{
"paper_title": "Safe zero-cost coercions for Haskell",
"paper_authors": [
"Joachim Breitner",
"Richard A. Eisenberg",
"Simon Peyton Jones",
"Stephanie Weirich"
],
"paper_abstract": "Generative type abstractions -- present in Haskell, OCaml, and other languages -- are useful concepts to help prevent programmer errors. They serve to create new types that are distinct at compile time but share a run-time representation with some base type. We present a new mechanism that allows for zero-cost conversions between generative type abstractions and their representations, even when such types are deeply nested. We prove type safety in the presence of these conversions and have implemented our work in GHC."
},
{
"paper_title": "Hindley-milner elaboration in applicative style: functional pearl",
"paper_authors": [
"François Pottier"
],
"paper_abstract": "Type inference - the problem of determining whether a program is well-typed - is well-understood. In contrast, elaboration - the task of constructing an explicitly-typed representation of the program - seems to have received relatively little attention, even though, in a non-local type inference system, it is non-trivial. We show that the constraint-based presentation of Hindley-Milner type inference can be extended to deal with elaboration, while preserving its elegance. This involves introducing a new notion of \"constraint with a value\", which forms an applicative functor."
},
{
"paper_title": "Settable and non-interfering signal functions for FRP: how a first-order switch is more than enough",
"paper_authors": [
"Daniel Winograd-Cort",
"Paul Hudak"
],
"paper_abstract": "Functional Reactive Programming (FRP) provides a method for programming continuous, reactive systems by utilizing signal functions that, abstractly, transform continuous input signals into continuous output signals. These signals may also be streams of events, and indeed, by allowing signal functions themselves to be the values carried by these events (in essence, signals of signal functions), one can conveniently make discrete changes in program behavior by \"switching\" into and out of these signal functions. This higher-order notion of switching is common among many FRP systems, in particular those based on arrows, such as Yampa. Although convenient, the power of switching is often an overkill and can pose problems for certain types of program optimization (such as causal commutative arrows [14]), as it causes the structure of the program to change dynamically at run-time. Without a notion of just-in-time compilation or related idea, which itself is beset with problems, such optimizations are not possible at compile time. This paper introduces two new ideas that obviate, in a predominance of cases, the need for switching. The first is a non-interference law for arrows with choice that allows an arrowized FRP program to dynamically alter its own structure (within statically limited bounds) as well as abandon unused streams. The other idea is a notion of a settable signal function that allows a signal function to capture its present state and later be restarted from some previous state. With these two features, canonical uses of higher-order switchers can be replaced with a suitable first-order design, thus enabling a broader range of static optimizations."
},
{
"paper_title": "Functional programming for dynamic and large data with self-adjusting computation",
"paper_authors": [
"Yan Chen",
"Umut A. Acar",
"Kanat Tangwongsan"
],
"paper_abstract": "Combining type theory, language design, and empirical work, we present techniques for computing with large and dynamically changing datasets. Based on lambda calculus, our techniques are suitable for expressing a diverse set of algorithms on large datasets and, via self-adjusting computation, enable computations to respond automatically to changes in their data. To improve the scalability of self-adjusting computation, we present a type system for precise dependency tracking that minimizes the time and space for storing dependency metadata. The type system eliminates an important assumption of prior work that can lead to recording spurious dependencies. We present a type-directed translation algorithm that generates correct self-adjusting programs without relying on this assumption. We then show a probabilistic-chunking technique to further decrease space usage by controlling the fundamental space-time tradeoff in self-adjusting computation. We implement and evaluate these techniques, showing promising results on challenging benchmarks involving large graphs."
},
{
"paper_title": "Depending on types",
"paper_authors": [
"Stephanie Weirich"
],
"paper_abstract": "Is Haskell a dependently typed programming language? Should it be? GHC's many type-system features, such as Generalized Algebraic Datatypes (GADTs), datatype promotion, multiparameter type classes, and type families, give programmers the ability to encode domain-specific invariants in their types. Clever Haskell programmers have used these features to enhance the reasoning capabilities of static type checking. But really, how far have we come? Could we do more? In this talk, I will discuss dependently typed programming in Haskell, through examples, analysis and comparisons with modern full-spectrum dependently typed languages, such as Coq, Agda and Idris. What sorts of dependently typed programming can be done in Haskell now? What could GHC learn from these languages? Conversely, what lessons can GHC offer in return?"
},
{
"paper_title": "Homotopical patch theory",
"paper_authors": [
"Carlo Angiuli",
"Edward Morehouse",
"Daniel R. Licata",
"Robert Harper"
],
"paper_abstract": "Homotopy type theory is an extension of Martin-Löf type theory, based on a correspondence with homotopy theory and higher category theory. In homotopy type theory, the propositional equality type becomes proof-relevant, and corresponds to paths in a space. This allows for a new class of datatypes, called higher inductive types, which are specified by constructors not only for points but also for paths. In this paper, we consider a programming application of higher inductive types. Version control systems such as Darcs are based on the notion of patches - syntactic representations of edits to a repository. We show how patch theory can be developed in homotopy type theory. Our formulation separates formal theories of patches from their interpretation as edits to repositories. A patch theory is presented as a higher inductive type. Models of a patch theory are given by maps out of that type, which, being functors, automatically preserve the structure of patches. Several standard tools of homotopy theory come into play, demonstrating the use of these methods in a practical programming context."
},
{
"paper_title": "Pattern matching without K",
"paper_authors": [
"Jesper Cockx",
"Dominique Devriese",
"Frank Piessens"
],
"paper_abstract": "Dependent pattern matching is an intuitive way to write programs and proofs in dependently typed languages. It is reminiscent of both pattern matching in functional languages and case analysis in on-paper mathematics. However, in general it is incompatible with new type theories such as homotopy type theory (HoTT). As a consequence, proofs in such theories are typically harder to write and to understand. The source of this incompatibility is the reliance of dependent pattern matching on the so-called K axiom - also known as the uniqueness of identity proofs - which is inadmissible in HoTT. The Agda language supports an experimental criterion to detect definitions by pattern matching that make use of the K axiom, but so far it lacked a formal correctness proof. In this paper, we propose a new criterion for dependent pattern matching without K, and prove it correct by a translation to eliminators in the style of Goguen et al. (2006). Our criterion both allows more good definitions than existing proposals, and solves a previously undetected problem in the criterion offered by Agda. It has been implemented in Agda and is the first to be supported by a formal proof. Thus it brings the benefits of dependent pattern matching to contexts where we cannot assume K, such as HoTT. It also points the way to new forms of dependent pattern matching, for example on higher inductive types."
},
{
"paper_title": "Refinement types for Haskell",
"paper_authors": [
"Niki Vazou",
"Eric L. Seidel",
"Ranjit Jhala",
"Dimitrios Vytiniotis",
"Simon Peyton-Jones"
],
"paper_abstract": "SMT-based checking of refinement types for call-by-value languages is a well-studied subject. Unfortunately, the classical translation of refinement types to verification conditions is unsound under lazy evaluation. When checking an expression, such systems implicitly assume that all the free variables in the expression are bound to values. This property is trivially guaranteed by eager, but does not hold under lazy, evaluation. Thus, to be sound and precise, a refinement type system for Haskell and the corresponding verification conditions must take into account which subset of binders actually reduces to values. We present a stratified type system that labels binders as potentially diverging or not, and that (circularly) uses refinement types to verify the labeling. We have implemented our system in LIQUIDHASKELL and present an experimental evaluation of our approach on more than 10,000 lines of widely used Haskell libraries. We show that LIQUIDHASKELL is able to prove 96% of all recursive functions terminating, while requiring a modest 1.7 lines of termination-annotations per 100 lines of code."
},
{
"paper_title": "A theory of gradual effect systems",
"paper_authors": [
"Felipe Bañados Schwerter",
"Ronald Garcia",
"Éric Tanter"
],
"paper_abstract": "Effect systems have the potential to help software developers, but their practical adoption has been very limited. We conjecture that this limited adoption is due in part to the difficulty of transitioning from a system where effects are implicit and unrestricted to a system with a static effect discipline, which must settle for conservative checking in order to be decidable. To address this hindrance, we develop a theory of gradual effect checking, which makes it possible to incrementally annotate and statically check effects, while still rejecting statically inconsistent programs. We extend the generic type-and-effect framework of Marino and Millstein with a notion of unknown effects, which turns out to be significantly more subtle than unknown types in traditional gradual typing. We appeal to abstract interpretation to develop and validate the concepts of gradual effect checking. We also demonstrate how an effect system formulated in Marino and Millstein's framework can be automatically extended to support gradual checking."
},
{
"paper_title": "How to keep your neighbours in order",
"paper_authors": [
"Conor Thomas McBride"
],
"paper_abstract": "I present a datatype-generic treatment of recursive container types whose elements are guaranteed to be stored in increasing order, with the ordering invariant rolled out systematically. Intervals, lists and binary search trees are instances of the generic treatment. On the journey to this treatment, I report a variety of failed experiments and the transferable learning experiences they triggered. I demonstrate that a total element ordering is enough to deliver insertion and flattening algorithms, and show that (with care about the formulation of the types) the implementations remain as usual. Agda's instance arguments and pattern synonyms maximize the proof search done by the typechecker and minimize the appearance of proofs in program text, often eradicating them entirely. Generalizing to indexed recursive container types, invariants such as size and balance can be expressed in addition to ordering. By way of example, I implement insertion and deletion for 2-3 trees, ensuring both order and balance by the discipline of type checking."
},
{
"paper_title": "A relational framework for higher-order shape analysis",
"paper_authors": [
"Gowtham Kaki",
"Suresh Jagannathan"
],
"paper_abstract": "We propose the integration of a relational specification framework within a dependent type system capable of verifying complex invariants over the shapes of algebraic datatypes. Our approach is based on the observation that structural properties of such datatypes can often be naturally expressed as inductively-defined relations over the recursive structure evident in their definitions. By interpreting constructor applications (abstractly) in a relational domain, we can define expressive relational abstractions for a variety of complex data structures, whose structural and shape invariants can be automatically verified. Our specification language also allows for definitions of parametricrelations for polymorphic data types that enable highly composable specifications and naturally generalizes to higher-order polymorphic functions. We describe an algorithm that translates relational specifications into a decidable fragment of first-order logic that can be efficiently discharged by an SMT solver. We have implemented these ideas in a type checker called CATALYST that is incorporated within the MLton SML compiler. Experimental results and case studies indicate that our verification strategy is both practical and effective."
},
{
"paper_title": "There is no fork: an abstraction for efficient, concurrent, and concise data access",
"paper_authors": [
"Simon Marlow",
"Louis Brandy",
"Jonathan Coens",
"Jon Purdy"
],
"paper_abstract": "We describe a new programming idiom for concurrency, based on Applicative Functors, where concurrency is implicit in the Applicative While it is generally applicable, our technique was designed with a particular application in mind: an internal service at Facebook that identifies particular types of content and takes actions based on it. Our application has a large body of business logic that fetches data from several different external sources. The framework described in this paper enables the business logic to execute efficiently by automatically fetching data concurrently; we present some preliminary results."
},
{
"paper_title": "Folding domain-specific languages: deep and shallow embeddings (functional Pearl)",
"paper_authors": [
"Jeremy Gibbons",
"Nicolas Wu"
],
"paper_abstract": "A domain-specific language can be implemented by embedding within a general-purpose host language. This embedding may be deep or shallow, depending on whether terms in the language construct syntactic or semantic representations. The deep and shallow styles are closely related, and intimately connected to folds; in this paper, we explore that connection."
},
{
"paper_title": "Krivine nets: a semantic foundation for distributed execution",
"paper_authors": [
"Olle Fredriksson",
"Dan R. Ghica"
],
"paper_abstract": "We define a new approach to compilation to distributed architectures based on networks of abstract machines. Using it we can implement a generalised and fully transparent form of Remote Procedure Call that supports calling higher-order functions across node boundaries, without sending actual code. Our starting point is the classic Krivine machine, which implements reduction for untyped call-by-name PCF. We successively add the features that we need for distributed execution and show the correctness of each addition. Then we construct a two-level operational semantics, where the high level is a network of communicating machines, and the low level is given by local machine transitions. Using these networks, we arrive at our final system, the Krivine Net. We show that Krivine Nets give a correct distributed implementation of the Krivine machine, which preserves both termination and non-termination properties. All the technical results have been formalised and proved correct in Agda. We also implement a prototype compiler which we compare with previous distributing compilers based on Girard's Geometry of Interaction and on Game Semantics."
},
{
"paper_title": "Distilling abstract machines",
"paper_authors": [
"Beniamino Accattoli",
"Pablo Barenbaum",
"Damiano Mazza"
],
"paper_abstract": "It is well-known that many environment-based abstract machines can be seen as strategies in lambda calculi with explicit substitutions (ES). Recently, graphical syntaxes and linear logic led to the linear substitution calculus (LSC), a new approach to ES that is halfway between small-step calculi and traditional calculi with ES. This paper studies the relationship between the LSC and environment-based abstract machines. While traditional calculi with ES simulate abstract machines, the LSC rather distills them: some transitions are simulated while others vanish, as they map to a notion of structural congruence. The distillation process unveils that abstract machines in fact implement weak linear head reduction, a notion of evaluation having a central role in the theory of linear logic. We show that such a pattern applies uniformly in call-by-name, call-by-value, and call-by-need, catching many machines in the literature. We start by distilling the KAM, the CEK, and a sketch of the ZINC, and then provide simplified versions of the SECD, the lazy KAM, and Sestoft's machine. Along the way we also introduce some new machines with global environments. Moreover, we show that distillation preserves the time complexity of the executions, i.e. the LSC is a complexity-preserving abstraction of abstract machines."
}
]
},
{
"proceeding_title": "ICFP '13:Proceedings of the 18th ACM SIGPLAN international conference on Functional programming",
"proceeding_contents": [
{
"paper_title": "Interactive programming with dependent types",
"paper_authors": [
"Ulf Norell"
],
"paper_abstract": "In dependently typed languages run-time values can appear in types, making it possible to give programs more precise types than in languages without dependent types. This can range from keeping track of simple invariants like the length of a list, to full functional correctness. In addition to having some correctness guarantees on the final program, assigning more precise types to programs means that you can get more assistance from the type checker while writing them. This is what I focus on here, demonstrating how the programming environment of Agda can help you when developing dependently typed programs."
},
{
"paper_title": "Verified decision procedures for MSO on words based on derivatives of regular expressions",
"paper_authors": [
"Dmitriy Traytel",
"Tobias Nipkow"
],
"paper_abstract": "Monadic second-order logic on finite words (MSO) is a decidable yet expressive logic into which many decision problems can be encoded. Since MSO formulas correspond to regular languages, equivalence of MSO formulas can be reduced to the equivalence of some regular structures (e.g. automata). This paper presents a verified functional decision procedure for MSO formulas that is not based on automata but on regular expressions. Functional languages are ideally suited for this task: regular expressions are data types and functions on them are defined by pattern matching and recursion and are verified by structural induction. Decision procedures for regular expression equivalence have been formalized before, usually based on Brzozowski derivatives. Yet, for a straightforward embedding of MSO formulas into regular expressions an extension of regular expressions with a projection operation is required. We prove total correctness and completeness of an equivalence checker for regular expressions extended in that way. We also define a language-preserving translation of formulas into regular expressions with respect to two different semantics of MSO. Our results have been formalized and verified in the theorem prover Isabelle. Using Isabelle's code generation facility, this yields purely functional, formally verified programs that decide equivalence of MSO formulas."
},
{
"paper_title": "C-SHORe: a collapsible approach to higher-order verification",
"paper_authors": [
"Christopher Broadbent",
"Arnaud Carayol",
"Matthew Hague",
"Olivier Serre"
],
"paper_abstract": "Higher-order recursion schemes (HORS) have recently received much attention as a useful abstraction of higher-order functional programs with a number of new verification techniques employing HORS model-checking as their centrepiece. This paper contributes to the ongoing quest for a truly scalable model-checker for HORS by offering a different, automata theoretic perspective. We introduce the first practical model-checking algorithm that acts on a generalisation of pushdown automata equi-expressive with HORS called collapsible pushdown systems (CPDS). At its core is a substantial modification of a recently studied saturation algorithm for CPDS. In particular it is able to use information gathered from an approximate forward reachability analysis to guide its backward search. Moreover, we introduce an algorithm that prunes the CPDS prior to model-checking and a method for extracting counter-examples in negative instances. We compare our tool with the state-of-the-art verification tools for HORS and obtain encouraging results. In contrast to some of the main competition tackling the same problem, our algorithm is fixed-parameter tractable, and we also offer significantly improved performance over the only previously published tool of which we are aware that also enjoys this property. The tool and additional material are available from http://cshore.cs.rhul.ac.uk."
},
{
"paper_title": "Automatic SIMD vectorization for Haskell",
"paper_authors": [
"Leaf Petersen",
"Dominic Orchard",
"Neal Glew"
],
"paper_abstract": "Expressing algorithms using immutable arrays greatly simplifies the challenges of automatic SIMD vectorization, since several important classes of dependency violations cannot occur. The Haskell programming language provides libraries for programming with immutable arrays, and compiler support for optimizing them to eliminate the overhead of intermediate temporary arrays. We describe an implementation of automatic SIMD vectorization in a Haskell compiler which gives substantial vector speedups for a range of programs written in a natural programming style. We compare performance with that of programs compiled by the Glasgow Haskell Compiler."
},
{
"paper_title": "Exploiting vector instructions with generalized stream fusio",
"paper_authors": [
"Geoffrey Mainland",
"Roman Leshchinskiy",
"Simon Peyton Jones"
],
"paper_abstract": "Stream fusion is a powerful technique for automatically transforming high-level sequence-processing functions into efficient implementations. It has been used to great effect in Haskell libraries for manipulating byte arrays, Unicode text, and unboxed vectors. However, some operations, like vector append, still do not perform well within the standard stream fusion framework. Others, like SIMD computation using the SSE and AVX instructions available on modern x86 chips, do not seem to fit in the framework at all. In this paper we introduce generalized stream fusion, which solves these issues. The key insight is to bundle together multiple stream representations, each tuned for a particular class of stream consumer. We also describe a stream representation suited for efficient computation with SSE instructions. Our ideas are implemented in modified versions of the GHC compiler and vector library. Benchmarks show that high-level Haskell code written using our compiler and libraries can produce code that is faster than both compiler- and hand-vectorized C."
},
{
"paper_title": "Optimising purely functional GPU programs",
"paper_authors": [
"Trevor L. McDonell",
"Manuel M.T. Chakravarty",
"Gabriele Keller",
"Ben Lippmeier"
],
"paper_abstract": "Purely functional, embedded array programs are a good match for SIMD hardware, such as GPUs. However, the naive compilation of such programs quickly leads to both code explosion and an excessive use of intermediate data structures. The resulting slow-down is not acceptable on target hardware that is usually chosen to achieve high performance. In this paper, we discuss two optimisation techniques, sharing recovery and array fusion, that tackle code explosion and eliminate superfluous intermediate structures. Both techniques are well known from other contexts, but they present unique challenges for an embedded language compiled for execution on a GPU. We present novel methods for implementing sharing recovery and array fusion, and demonstrate their effectiveness on a set of benchmarks."
},
{
"paper_title": "Type-theory in color",
"paper_authors": [
"Jean-Philippe Bernardy",
"Moulin Guilhem"
],
"paper_abstract": "Dependent type-theory aims to become the standard way to formalize mathematics at the same time as displacing traditional platforms for high-assurance programming. However, current implementations of type theory are still lacking, in the sense that some obvious truths require explicit proofs, making type-theory awkward to use for many applications, both in formalization and programming. In particular, notions of erasure are poorly supported. In this paper we propose an extension of type-theory with colored terms, color erasure and interpretation of colored types as predicates. The result is a more powerful type-theory: some definitions and proofs may be omitted as they become trivial, it becomes easier to program with precise types, and some parametricity results can be internalized."
},
{
"paper_title": "Typed syntactic meta-programming",
"paper_authors": [
"Dominique Devriese",
"Frank Piessens"
],
"paper_abstract": "We present a novel set of meta-programming primitives for use in a dependently-typed functional language. The types of our meta-programs provide strong and precise guarantees about their termination, correctness and completeness. Our system supports type-safe construction and analysis of terms, types and typing contexts. Unlike alternative approaches, they are written in the same style as normal programs and use the language's standard functional computational model. We formalise the new meta-programming primitives, implement them as an extension of Agda, and provide evidence of usefulness by means of two compelling applications in the fields of datatype-generic programming and proof tactics."
},
{
"paper_title": "Mtac: a monad for typed tactic programming in Coq",
"paper_authors": [
"Beta Ziliani",
"Derek Dreyer",
"Neelakantan R. Krishnaswami",
"Aleksandar Nanevski",
"Viktor Vafeiadis"
],
"paper_abstract": "Effective support for custom proof automation is essential for large scale interactive proof development. However, existing languages for automation via *tactics* either (a) provide no way to specify the behavior of tactics within the base logic of the accompanying theorem prover, or (b) rely on advanced type-theoretic machinery that is not easily integrated into established theorem provers. We present Mtac, a lightweight but powerful extension to Coq that supports dependently-typed tactic programming. Mtac tactics have access to all the features of ordinary Coq programming, as well as a new set of typed tactical primitives. We avoid the need to touch the trusted kernel typechecker of Coq by encapsulating uses of these new tactical primitives in a *monad*, and instrumenting Coq so that it executes monadic tactics during type inference."
},
{
"paper_title": "Fun with semirings: a functional pearl on the abuse of linear algebra",
"paper_authors": [
"Stephen Dolan"
],
"paper_abstract": "Describing a problem using classical linear algebra is a very well-known problem-solving technique. If your question can be formulated as a question about real or complex matrices, then the answer can often be found by standard techniques. It's less well-known that very similar techniques still apply where instead of real or complex numbers we have a closed semiring, which is a structure with some analogue of addition and multiplication that need not support subtraction or division. We define a typeclass in Haskell for describing closed semirings, and implement a few functions for manipulating matrices and polynomials over them. We then show how these functions can be used to calculate transitive closures, find shortest or longest or widest paths in a graph, analyse the data flow of imperative programs, optimally pack knapsacks, and perform discrete event simulations, all by just providing an appropriate underlying closed semiring."
},
{
"paper_title": "Efficient divide-and-conquer parsing of practical context-free languages",
"paper_authors": [
"Jean-Philippe Bernardy",
"Koen Claessen"
],
"paper_abstract": "We present a divide-and-conquer algorithm for parsing context-free languages efficiently. Our algorithm is an instance of Valiant's (1975), who reduced the problem of parsing to matrix multiplications. We show that, while the conquer step of Valiant's is O(n3) in the worst case, it improves to O(logn3), under certain conditions satisfied by many useful inputs. These conditions occur for example in program texts written by humans. The improvement happens because the multiplications involve an overwhelming majority of empty matrices. This result is relevant to modern computing: divide-and-conquer algorithms can be parallelized relatively easily."
},
{
"paper_title": "Functional geometry and the Traité de Lutherie: functional pearl",
"paper_authors": [
"Harry George Mairson"
],
"paper_abstract": "We describe a functional programming approach to the design of outlines of eighteenth-century string instruments. The approach is based on the research described in François Denis's book, Traité de lutherie. The programming vernacular for Denis's instructions, which we call functional geometry, is meant to reiterate the historically justified language and techniques of this musical instrument design. The programming metaphor is entirely Euclidean, involving straightedge and compass constructions, with few (if any) numbers, and no Cartesian equations or grid. As such, it is also an interesting approach to teaching programming and mathematics without numerical calculation or equational reasoning. The advantage of this language-based, functional approach to lutherie is founded in the abstract characterization of common patterns in instrument design. These patterns include not only the abstraction of common straightedge and compass constructions, but of higher-order conceptualization of the instrument design process. We also discuss the role of arithmetic, geometric, harmonic, and subharmonic proportions, and the use of their rational approximants."
},
{
"paper_title": "Programming and reasoning with algebraic effects and dependent types",
"paper_authors": [
"Edwin Brady"
],
"paper_abstract": "One often cited benefit of pure functional programming is that pure code is easier to test and reason about, both formally and informally. However, real programs have side-effects including state management, exceptions and interactions with the outside world. Haskell solves this problem using monads to capture details of possibly side-effecting computations --- it provides monads for capturing state, I/O, exceptions, non-determinism, libraries for practical purposes such as CGI and parsing, and many others, as well as monad transformers for combining multiple effects. Unfortunately, useful as monads are, they do not compose very well. Monad transformers can quickly become unwieldy when there are lots of effects to manage, leading to a temptation in larger programs to combine everything into one coarse-grained state and exception monad. In this paper I describe an alternative approach based on handling algebraic effects, implemented in the IDRIS programming language. I show how to describe side effecting computations, how to write programs which compose multiple fine-grained effects, and how, using dependent types, we can use this approach to reason about states in effectful programs."
},
{
"paper_title": "Handlers in action",
"paper_authors": [
"Ohad Kammar",
"Sam Lindley",
"Nicolas Oury"
],
"paper_abstract": "Plotkin and Pretnar's handlers for algebraic effects occupy a sweet spot in the design space of abstractions for effectful computation. By separating effect signatures from their implementation, algebraic effects provide a high degree of modularity, allowing programmers to express effectful programs independently of the concrete interpretation of their effects. A handler is an interpretation of the effects of an algebraic computation. The handler abstraction adapts well to multiple settings: pure or impure, strict or lazy, static types or dynamic types. This is a position paper whose main aim is to popularise the handler abstraction. We give a gentle introduction to its use, a collection of illustrative examples, and a straightforward operational semantics. We describe our Haskell implementation of handlers in detail, outline the ideas behind our OCaml, SML, and Racket implementations, and present experimental results comparing handlers with existing code."
},
{
"paper_title": "Computer science as a school subject",
"paper_authors": [
"Simon Peyton Jones"
],
"paper_abstract": "Computer science is one of the richest, most exciting disciplines on the planet, yet any teenager will tell you that ICT (as it is called in UK schools --- \"information and communication technology\") is focused almost entirely on the use and application of computers, and in practice covers nothing about how computers work, nor programming, nor anything of the discipline of computer science as we understand it. Over the last two decades, computing at school has drifted from writing adventure games on the BBC Micro to writing business plans in Excel. This is bad for our young people's education, and it is bad for our economy. Nor is this phenomenon restricted to the UK: many countries are struggling with the same issues. Our young people should be educated not only in the application and use of digital technology, but also in how it works, and its foundational principles. Lacking such knowledge renders them powerless in the face of complex and opaque technology, disenfranchises them from making informed decisions about the digital society, and deprives our nations of a well-qualified stream of students enthusiastic and able to envision and design new digital systems. Can anything be done, given the enormous inertia of our various countries' educational systems? Sometimes, yes. After a decade of stasis, change has come to the UK. Over the last 18 months, there has been a wholesale reform of the English school computing curriculum, and substantial movement in Scotland and Wales. It now seems likely that computer science will, for the first time, become part of every child's education. This change has been driven not by institutions or by the government, but by a grass-roots movement of parents, teachers, university academics, software developers, and others. A key agent in this grass-roots movement---although not the only one---is the Computing At School Working Group (CAS). In this talk I will describe how CAS was born and developed, and the radical changes that have taken place since in the UK. I hope that this may be encouraging for those pushing water uphill in other parts of the world, and I will also try to draw out some lessons from our experience that may be useful to others."
},
{
"paper_title": "Correctness of an STM Haskell implementation",
"paper_authors": [
"Manfred Schmidt-Schauß",
"David Sabel"
],
"paper_abstract": "A concurrent implementation of software transactional memory in Concurrent Haskell using a call-by-need functional language with processes and futures is given. The description of the small-step operational semantics is precise and explicit, and employs an early abort of conflicting transactions. A proof of correctness of the implementation is given for a contextual semantics with may- and should-convergence. This implies that our implementation is a correct evaluator for an abstract specification equipped with a big-step semantics."
},
{
"paper_title": "Programming with permissions in Mezzo",
"paper_authors": [
"François Pottier",
"Jonathan Protzenko"
],
"paper_abstract": "We present Mezzo, a typed programming language of ML lineage. Mezzo is equipped with a novel static discipline of duplicable and affine permissions, which controls aliasing and ownership. This rules out certain mistakes, including representation exposure and data races, and enables new idioms, such as gradual initialization, memory re-use, and (type)state changes. Although the core static discipline disallows sharing a mutable data structure, Mezzo offers several ways of working around this restriction, including a novel dynamic ownership control mechanism which we dub \"adoption and abandon\"."
},
{
"paper_title": "Wellfounded recursion with copatterns: a unified approach to termination and productivity",
"paper_authors": [
"Andreas M. Abel",
"Brigitte Pientka"
],
"paper_abstract": "In this paper, we study strong normalization of a core language based on System F-omega which supports programming with finite and infinite structures. Building on our prior work, finite data such as finite lists and trees are defined via constructors and manipulated via pattern matching, while infinite data such as streams and infinite trees is defined by observations and synthesized via copattern matching. In this work, we take a type-based approach to strong normalization by tracking size information about finite and infinite data in the type. This guarantees compositionality. More importantly, the duality of pattern and copatterns provide a unifying semantic concept which allows us for the first time to elegantly and uniformly support both well-founded induction and coinduction by mere rewriting. The strong normalization proof is structured around Girard's reducibility candidates. As such our system allows for non-determinism and does not rely on coverage. Since System F-omega is general enough that it can be the target of compilation for the Calculus of Constructions, this work is a significant step towards representing observation-centric infinite data in proof assistants such as Coq and Agda."
},
{
"paper_title": "Productive coprogramming with guarded recursion",
"paper_authors": [
"Robert Atkey",
"Conor McBride"
],
"paper_abstract": "Total functional programming offers the beguiling vision that, just by virtue of the compiler accepting a program, we are guaranteed that it will always terminate. In the case of programs that are not intended to terminate, e.g., servers, we are guaranteed that programs will always be productive. Productivity means that, even if a program generates an infinite amount of data, each piece will be generated in finite time. The theoretical underpinning for productive programming with infinite output is provided by the category theoretic notion of final coalgebras. Hence, we speak of coprogramming with non-well-founded codata, as a dual to programming with well-founded data like finite lists and trees. Systems that offer facilities for productive coprogramming, such as the proof assistants Coq and Agda, currently do so through syntactic guardedness checkers. Syntactic guardedness checkers ensure that all self-recursive calls are guarded by a use of a constructor. Such a check ensures productivity. Unfortunately, these syntactic checks are not compositional, and severely complicate coprogramming. Guarded recursion, originally due to Nakano, is tantalising as a basis for a flexible and compositional type-based approach to coprogramming. However, as we show, by itself, guarded recursion is not suitable for coprogramming due to the fact that there is no way to make finite observations on pieces of infinite data. In this paper, we introduce the concept of clock variables that index Nakano's guarded recursion. Clock variables allow us to \"close over\" the generation of infinite data, and to make finite observations, something that is not possible with guarded recursion alone."
},
{
"paper_title": "Unifying structured recursion schemes",
"paper_authors": [
"Ralf Hinze",
"Nicolas Wu",
"Jeremy Gibbons"
],
"paper_abstract": "Folds over inductive datatypes are well understood and widely used. In their plain form, they are quite restricted; but many disparate generalisations have been proposed that enjoy similar calculational benefits. There have also been attempts to unify the various generalisations: two prominent such unifications are the 'recursion schemes from comonads' of Uustalu, Vene and Pardo, and our own 'adjoint folds'. Until now, these two unified schemes have appeared incompatible. We show that this appearance is illusory: in fact, adjoint folds subsume recursion schemes from comonads. The proof of this claim involves standard constructions in category theory that are nevertheless not well known in functional programming: Eilenberg-Moore categories and bialgebras."
},
{
"paper_title": "Higher-order functional reactive programming without spacetime leaks",
"paper_authors": [
"Neelakantan R. Krishnaswami"
],
"paper_abstract": "Functional reactive programming (FRP) is an elegant approach to declaratively specify reactive systems. However, the powerful abstractions of FRP have historically made it difficult to predict and control the resource usage of programs written in this style. In this paper, we give a new language for higher-order reactive programming. Our language generalizes and simplifies prior type systems for reactive programming, by supporting the use of streams of streams, first-class functions, and higher-order operations. We also support many temporal operations beyond streams, such as terminatable streams, events, and even resumptions with first-class schedulers. Furthermore, our language supports an efficient implementation strategy permitting us to eagerly deallocate old values and statically rule out spacetime leaks, a notorious source of inefficiency in reactive programs. Furthermore, these memory guarantees are achieved without the use of a complex substructural type discipline. We also show that our implementation strategy of eager deallocation is safe, by showing the soundness of our type system with a novel step-indexed Kripke logical relation."
},
{
"paper_title": "Functional reactive programming with liveness guarantees",
"paper_authors": [
"Alan Jeffrey"
],
"paper_abstract": "Functional Reactive Programming (FRP) is an approach to the development of reactive systems which provides a pure functional interface, but which may be implemented as an abstraction of an imperative event-driven layer. FRP systems typically provide a model of behaviours (total time-indexed values, implemented as pull systems) and event sources (partial time-indexed values, implemented as push systems). In this paper, we investigate a type system for event-driven FRP programs which provide liveness guarantees, that is every input event is guaranteed to generate an output event. We show that FRP can be implemented on top of a model of sets and relations, and that the isomorphism between event sources and behaviours corresponds to the isomorphism between relations and set-valued functions. We then implement sets and relations using a model of continuations using the usual double-negation CPS transform. The implementation of behaviours as pull systems based on futures, and of event sources as push systems based on the observer pattern, thus arises from first principles. We also discuss a Java implementation of the FRP model."
},
{
"paper_title": "A short cut to parallelization theorems",
"paper_authors": [
"Akimasa Morihata"
],
"paper_abstract": "The third list-homomorphism theorem states that if a function is both foldr and foldl, it has a divide-and-conquer parallel implementation as well. In this paper, we develop a theory for obtaining such parallelization theorems. The key is a new proof of the third list-homomorphism theorem based on shortcut deforestation. The proof implies that there exists a divide-and-conquer parallel program of the form of h(x 'merge' y) = h1 x odot h2 y, where h is the subject of parallelization, merge is the operation of integrating independent substructures, h1 and h2 are computations applied to substructures, possibly in parallel, and odot merges the results calculated for substructures, if (i) h can be specified by two certain forms of iterative programs, and (ii) merge can be implemented by a function of a certain polymorphic type. Therefore, when requirement (ii) is fulfilled, h has a divide-and-conquer implementation if h has two certain forms of implementations. We show that our approach is applicable to structure-consuming operations by catamorphisms (folds), structure-generating operations by anamorphisms (unfolds), and their generalizations called hylomorphisms."
},
{
"paper_title": "Using circular programs for higher-order syntax: functional pearl",
"paper_authors": [
"Emil Axelsson",
"Koen Claessen"
],
"paper_abstract": "This pearl presents a novel technique for constructing a first-order syntax tree directly from a higher-order interface. We exploit circular programming to generate names for new variables, resulting in a simple yet efficient method. Our motivating application is the design of embedded languages supporting variable binding, where it is convenient to use higher-order syntax when constructing programs, but first-order syntax when processing or transforming programs."
},
{
"paper_title": "Weak optimality, and the meaning of sharing",
"paper_authors": [
"Thibaut Balabonski"
],
"paper_abstract": "In this paper we investigate laziness and optimal evaluation strategies for functional programming languages. We consider the weak lambda-calculus as a basis of functional programming languages, and we adapt to this setting the concepts of optimal reductions that were defined for the full lambda-calculus. We prove that the usual implementation of call-by-need using sharing is optimal, that is, normalizing any lambda-term with call-by-need requires exactly the same number of reduction steps as the shortest reduction sequence in the weak lambda-calculus without sharing. Furthermore, we prove that optimal reduction sequences without sharing are not computable. Hence sharing is the only computable means to reach weak optimality."
},
{
"paper_title": "System FC with explicit kind equality",
"paper_authors": [
"Stephanie Weirich",
"Justin Hsu",
"Richard A. Eisenberg"
],
"paper_abstract": "System FC, the core language of the Glasgow Haskell Compiler, is an explicitly-typed variant of System F with first-class type equality proofs called coercions. This extensible proof system forms the foundation for type system extensions such as type families (type-level functions) and Generalized Algebraic Datatypes (GADTs). Such features, in conjunction with kind polymorphism and datatype promotion, support expressive compile-time reasoning. However, the core language lacks explicit kind equality proofs. As a result, type-level computation does not have access to kind-level functions or promoted GADTs, the type-level analogues to expression-level features that have been so useful. In this paper, we eliminate such discrepancies by introducing kind equalities to System FC. Our approach is based on dependent type systems with heterogeneous equality and the \"Type-in-Type\" axiom, yet it preserves the metatheoretic properties of FC. In particular, type checking is simple, decidable and syntax directed. We prove the preservation and progress theorems for the extended language."
},
{
"paper_title": "The constrained-monad problem",
"paper_authors": [
"Neil Sculthorpe",
"Jan Bracker",
"George Giorgidze",
"Andy Gill"
],
"paper_abstract": "In Haskell, there are many data types that would form monads were it not for the presence of type-class constraints on the operations on that data type. This is a frustrating problem in practice, because there is a considerable amount of support and infrastructure for monads that these data types cannot use. Using several examples, we show that a monadic computation can be restructured into a normal form such that the standard monad class can be used. The technique is not specific to monads, and we show how it can also be applied to other structures, such as applicative functors. One significant use case for this technique is domain-specific languages, where it is often desirable to compile a deep embedding of a computation to some other language, which requires restricting the types that can appear in that computation."
},
{
"paper_title": "Simple and compositional reification of monadic embedded languages",
"paper_authors": [
"Josef David Svenningsson",
"Bo Joel Svensson"
],
"paper_abstract": "When writing embedded domain specific languages in Haskell, it is often convenient to be able to make an instance of the Monad class to take advantage of the do-notation and the extensive monad libraries. Commonly it is desirable to compile such languages rather than just interpret them. This introduces the problem of monad reification, i.e. observing the structure of the monadic computation. We present a solution to the monad reification problem and illustrate it with a small robot control language. Monad reification is not new but the novelty of our approach is in its directness, simplicity and compositionality."
},
{
"paper_title": "Structural recursion for querying ordered graphs",
"paper_authors": [
"Soichiro Hidaka",
"Kazuyuki Asada",
"Zhenjiang Hu",
"Hiroyuki Kato",
"Keisuke Nakano"
],
"paper_abstract": "Structural recursion, in the form of, for example, folds on lists and catamorphisms on algebraic data structures including trees, plays an important role in functional programming, by providing a systematic way for constructing and manipulating functional programs. It is, however, a challenge to define structural recursions for graph data structures, the most ubiquitous sort of data in computing. This is because unlike lists and trees, graphs are essentially not inductive and cannot be formalized as an initial algebra in general. In this paper, we borrow from the database community the idea of structural recursion on how to restrict recursions on infinite unordered regular trees so that they preserve the finiteness property and become terminating, which are desirable properties for query languages. We propose a new graph transformation language called lambdaFG for transforming and querying ordered graphs, based on the well-defined bisimulation relation on ordered graphs with special epsilon-edges. The language lambdaFG is a higher order graph transformation language that extends the simply typed lambda calculus with graph constructors and more powerful structural recursions, which is extended for transformations on the sibling dimension. It not only gives a general framework for manipulating graphs and reasoning about them, but also provides a solution to the open problem of how to define a structural recursion on ordered graphs, with the help of the bisimilarity for ordered graphs with epsilon-edges."
},
{
"paper_title": "Modular monadic meta-theory",
"paper_authors": [
"Benjamin Delaware",
"Steven Keuchel",
"Tom Schrijvers",
"Bruno C.d.S. Oliveira"
],
"paper_abstract": "This paper presents 3MT, a framework for modular mechanized meta-theory of languages with effects. Using 3MT, individual language features and their corresponding definitions -- semantic functions, theorem statements and proofs-- can be built separately and then reused to create different languages with fully mechanized meta-theory. 3MT combines modular datatypes and monads to define denotational semantics with effects on a per-feature basis, without fixing the particular set of effects or language constructs. One well-established problem with type soundness proofs for denotational semantics is that they are notoriously brittle with respect to the addition of new effects. The statement of type soundness for a language depends intimately on the effects it uses, making it particularly challenging to achieve modularity. 3MT solves this long-standing problem by splitting these theorems into two separate and reusable parts: a feature theorem that captures the well-typing of denotations produced by the semantic function of an individual feature with respect to only the effects used, and an effect theorem that adapts well-typings of denotations to a fixed superset of effects. The proof of type soundness for a particular language simply combines these theorems for its features and the combination of their effects. To establish both theorems, 3MT uses two key reasoning techniques: modular induction and algebraic laws about effects. Several effectful language features, including references and errors, illustrate the capabilities of 3MT. A case study reuses these features to build fully mechanized definitions and proofs for 28 languages, including several versions of mini-ML with effects."
},
{
"paper_title": "Modular and automated type-soundness verification for language extensions",
"paper_authors": [
"Florian Lorenzen",
"Sebastian Erdweg"
],
"paper_abstract": "Language extensions introduce high-level programming constructs that protect programmers from low-level details and repetitive tasks. For such an abstraction barrier to be sustainable, it is important that no errors are reported in terms of generated code. A typical strategy is to check the original user code prior to translation into a low-level encoding, applying the assumption that the translation does not introduce new errors. Unfortunately, such assumption is untenable in general, but in particular in the context of extensible programming languages, such as Racket or SugarJ, that allow regular programmers to define language extensions. In this paper, we present a formalism for building and automatically verifying the type-soundness of syntactic language extensions. To build a type-sound language extension with our formalism, a developer declares an extended syntax, type rules for the extended syntax, and translation rules into the (possibly further extended) base language. Our formalism then validates that the user-defined type rules are sufficient to guarantee that the code generated by the translation rules cannot contain any type errors. This effectively ensures that an initial type check prior to translation precludes type errors in generated code. We have implemented a core system in PLT Redex and we have developed a syntactically extensible variant of System Fw that we extend with let notation, monadic do blocks, and algebraic data types. Our formalism verifies the soundness of each extension automatically."
},
{
"paper_title": "A nanopass framework for commercial compiler development",
"paper_authors": [
"Andrew W. Keep",
"R. Kent Dybvig"
],
"paper_abstract": "Contemporary compilers must typically handle sophisticated high-level source languages, generate efficient code for multiple hardware architectures and operating systems, and support source-level debugging, profiling, and other program development tools. As a result, compilers tend to be among the most complex of software systems. Nanopass frameworks are designed to help manage this complexity. A nanopass compiler is comprised of many single-task passes with formally defined intermediate languages. The perceived downside of a nanopass compiler is that the extra passes will lead to substantially longer compilation times. To determine whether this is the case, we have created a plug replacement for the commercial Chez Scheme compiler, implemented using an updated nanopass framework, and we have compared the speed of the new compiler and the code it generates against the original compiler for a large set of benchmark programs. This paper describes the updated nanopass framework, the new compiler, and the results of our experiments. The compiler produces faster code than the original, averaging 15-27% depending on architecture and optimization level, due to a more sophisticated but slower register allocator and improvements to several optimizations. Compilation times average well within a factor of two of the original compiler, despite the slower register allocator and the replacement of five passes of the original 10 with over 50 nanopasses."
},
{
"paper_title": "Experience report: applying random testing to a base type environment",
"paper_authors": [
"Vincent St-Amour",
"Neil Toronto"
],
"paper_abstract": "As programmers, programming in typed languages increases our confidence in the correctness of our programs. As type system designers, soundness proofs increase our confidence in the correctness of our type systems. There is more to typed languages than their typing rules, however. To be usable, a typed language needs to provide a well-furnished standard library and to specify types for its exports. As software artifacts, these base type environments can rival typecheckers in complexity. Our experience with the Typed Racket base environment---which accounts for 31% of the code in the Typed Racket implementation---teaches us that writing type environments can be just as error-prone as writing typecheckers. We report on our experience over the past two years of using random testing to increase our confidence in the correctness of the Typed Racket base environment."
},
{
"paper_title": "Experience report: functional programming of mHealth applications",
"paper_authors": [
"Christian L. Petersen",
"Matthias Gorges",
"Dustin Dunsmuir",
"Mark Ansermino",
"Guy A. Dumont"
],
"paper_abstract": "A modular framework for the development of medical applications that promotes deterministic, robust and correct code is presented. The system is based on the portable Gambit Scheme programming language and provides a flexible cross-platform environment for developing graphical applications on mobile devices as well as medical instrumentation interfaces running on embedded platforms. Real world applications of this framework for mobile diagnostics, telemonitoring and automated drug infusions are reported. The source code for the core framework is open source and available at: https://github.com/part-cw/lambdanative."
},
{
"paper_title": "Hoare-style reasoning with (algebraic) continuations",
"paper_authors": [
"Germán Andrés Delbianco",
"Aleksandar Nanevski"
],
"paper_abstract": "Continuations are programming abstractions that allow for manipulating the \"future\" of a computation. Amongst their many applications, they enable implementing unstructured program flow through higher-order control operators such as callcc. In this paper we develop a Hoare-style logic for the verification of programs with higher-order control, in the presence of dynamic state. This is done by designing a dependent type theory with first class callcc and abort operators, where pre- and postconditions of programs are tracked through types. Our operators are algebraic in the sense of Plotkin and Power, and Jaskelioff, to reduce the annotation burden and enable verification by symbolic evaluation. We illustrate working with the logic by verifying a number of characteristic examples."
},
{
"paper_title": "Unifying refinement and hoare-style reasoning in a logic for higher-order concurrency",
"paper_authors": [
"Aaron Turon",
"Derek Dreyer",
"Lars Birkedal"
],
"paper_abstract": "Modular programming and modular verification go hand in hand, but most existing logics for concurrency ignore two crucial forms of modularity: *higher-order functions*, which are essential for building reusable components, and *granularity abstraction*, a key technique for hiding the intricacies of fine-grained concurrent data structures from the clients of those data structures. In this paper, we present CaReSL, the first logic to support the use of granularity abstraction for modular verification of higher-order concurrent programs. After motivating the features of CaReSL through a variety of illustrative examples, we demonstrate its effectiveness by using it to tackle a significant case study: the first formal proof of (partial) correctness for Hendler et al.'s \"flat combining\" algorithm."
},
{
"paper_title": "The bedrock structured programming system: combining generative metaprogramming and hoare logic in an extensible program verifier",
"paper_authors": [
"Adam Chlipala"
],
"paper_abstract": "We report on the design and implementation of an extensible programming language and its intrinsic support for formal verification. Our language is targeted at low-level programming of infrastructure like operating systems and runtime systems. It is based on a cross-platform core combining characteristics of assembly languages and compiler intermediate languages. From this foundation, we take literally the saying that C is a \"macro assembly language\": we introduce an expressive notion of certified low-level macros, sufficient to build up the usual features of C and beyond as macros with no special support in the core. Furthermore, our macros have integrated support for strongest postcondition calculation and verification condition generation, so that we can provide a high-productivity formal verification environment within Coq for programs composed from any combination of macros. Our macro interface is expressive enough to support features that low-level programs usually only access through external tools with no formal guarantees, such as declarative parsing or SQL-inspired querying. The abstraction level of these macros only imposes a compile-time cost, via the execution of functional Coq programs that compute programs in our intermediate language; but the run-time cost is not substantially greater than for more conventional C code. We describe our experiences constructing a full C-like language stack using macros, with some experiments on the verifiability and performance of individual programs running on that stack."
},
{
"paper_title": "A practical theory of language-integrated query",
"paper_authors": [
"James Cheney",
"Sam Lindley",
"Philip Wadler"
],
"paper_abstract": "Language-integrated query is receiving renewed attention, in part because of its support through Microsoft's LINQ framework. We present a practical theory of language-integrated query based on quotation and normalisation of quoted terms. Our technique supports join queries, abstraction over values and predicates, composition of queries, dynamic generation of queries, and queries with nested intermediate data. Higher-order features prove useful even for constructing first-order queries. We prove a theorem characterising when a host query is guaranteed to generate a single SQL query. We present experimental results confirming our technique works, even in situations where Microsoft's LINQ framework either fails to produce an SQL query or, in one case, produces an avalanche of SQL queries."
},
{
"paper_title": "Calculating threesomes, with blame",
"paper_authors": [
"Ronald Garcia"
],
"paper_abstract": "Coercions and threesomes both enable a language to combine static and dynamic types while avoiding cast-based space leaks. Coercion calculi elegantly specify space-efficient cast behavior, even when augmented with blame tracking, but implementing their semantics directly is difficult. Threesomes, on the other hand, have a straightforward recursive implementation, but endowing them with blame tracking is challenging. In this paper, we show that you can use that elegant spec to produce that straightforward implementation: we use the coercion calculus to derive threesomes with blame. In particular, we construct novel threesome calculi for blame tracking strategies that detect errors earlier, catch more errors, and reflect an intuitive conception of safe and unsafe casts based on traditional subtyping."
},
{
"paper_title": "Complete and easy bidirectional typechecking for higher-rank polymorphism",
"paper_authors": [
"Joshua Dunfield",
"Neelakantan R. Krishnaswami"
],
"paper_abstract": "Bidirectional typechecking, in which terms either synthesize a type or are checked against a known type, has become popular for its scalability (unlike Damas-Milner type inference, bidirectional typing remains decidable even for very expressive type systems), its error reporting, and its relative ease of implementation. Following design principles from proof theory, bidirectional typing can be applied to many type constructs. The principles underlying a bidirectional approach to polymorphism, however, are less obvious. We give a declarative, bidirectional account of higher-rank polymorphism, grounded in proof theory; this calculus enjoys many properties such as eta-reduction and predictability of annotations. We give an algorithm for implementing the declarative system; our algorithm is remarkably simple and well-behaved, despite being both sound and complete."
},
{
"paper_title": "Optimizing abstract abstract machines",
"paper_authors": [
"J. Ian Johnson",
"Nicholas Labich",
"Matthew Might",
"David Van Horn"
],
"paper_abstract": "The technique of abstracting abstract machines (AAM) provides a systematic approach for deriving computable approximations of evaluators that are easily proved sound. This article contributes a complementary step-by-step process for subsequently going from a naive analyzer derived under the AAM approach, to an efficient and correct implementation. The end result of the process is a two to three order-of-magnitude improvement over the systematically derived analyzer, making it competitive with hand-optimized implementations that compute fundamentally less precise results."
},
{
"paper_title": "Testing noninterference, quickly",
"paper_authors": [
"Catalin Hritcu",
"John Hughes",
"Benjamin C. Pierce",
"Antal Spector-Zabusky",
"Dimitrios Vytiniotis",
"Arthur Azevedo de Amorim",
"Leonidas Lampropoulos"
],
"paper_abstract": "Information-flow control mechanisms are difficult to design and labor intensive to prove correct. To reduce the time wasted on proof attempts doomed to fail due to broken definitions, we advocate modern random testing techniques for finding counterexamples during the design process. We show how to use QuickCheck, a property-based random-testing tool, to guide the design of a simple information-flow abstract machine. We find that both sophisticated strategies for generating well-distributed random programs and readily falsifiable formulations of noninterference properties are critically important. We propose several approaches and evaluate their effectiveness on a collection of injected bugs of varying subtlety. We also present an effective technique for shrinking large counterexamples to minimal, easily comprehensible ones. Taken together, our best methods enable us to quickly and automatically generate simple counterexamples for all these bugs."
}
]
},
{
"proceeding_title": "ICFP '12:Proceedings of the 17th ACM SIGPLAN international conference on Functional programming",
"proceeding_contents": [
{
"paper_title": "Agda-curious?: an exploration of programming with dependent types",
"paper_authors": [
"Conor Thomas McBride"
],
"paper_abstract": "I explore programming with the dependently typed functional language, AGDA. I present the progress which AGDA has made, demonstrate its usage in a small development, reflect critically on the state of the art, and speculate about the way ahead. I do not seek to persuade you to adopt AGDA as your primary tool for systems development, but argue that AGDA stimulates new useful ways to think about programming problems and deserves not just curiosity but interest, support and contribution."
},
{
"paper_title": "Verified heap theorem prover by paramodulation",
"paper_authors": [
"Gordon Stewart",
"Lennart Beringer",
"Andrew W. Appel"
],
"paper_abstract": "We present VeriStar, a verified theorem prover for a decidable subset of separation logic. Together with VeriSmall [3], a proved-sound Smallfoot-style program analysis for C minor, VeriStar demonstrates that fully machine-checked static analyses equipped with efficient theorem provers are now within the reach of formal methods. As a pair, VeriStar and VeriSmall represent the first application of the Verified Software Toolchain [4], a tightly integrated collection of machine-verified program logics and compilers giving foundational correctness guarantees. VeriStar is (1) purely functional, (2) machine-checked, (3) end-to-end, (4) efficient and (5) modular. By purely functional, we mean it is implemented in Gallina, the pure functional programming language embedded in the Coq theorem prover. By machine-checked, we mean it has a proof in Coq that when the prover says \"valid\", the checked entailment holds in a proved-sound separation logic for C minor. By end-to-end, we mean that when the static analysis+theorem prover says a C minor program is safe, the program will be compiled to a semantically equivalent assembly program that runs on real hardware. By efficient, we mean that the prover implements a state-of-the-art algorithm for deciding heap entailments and uses highly tuned verified functional data structures. By modular, we mean that VeriStar can be retrofitted to other static analyses as a plug-compatible entailment checker and its soundness proof can easily be ported to other separation logics."
},
{
"paper_title": "Formal verification of monad transformers",
"paper_authors": [
"Brian Huffman"
],
"paper_abstract": "We present techniques for reasoning about constructor classes that (like the monad class) fix polymorphic operations and assert polymorphic axioms. We do not require a logic with first-class type constructors, first-class polymorphism, or type quantification; instead, we rely on a domain-theoretic model of the type system in a universal domain to provide these features. These ideas are implemented in the Tycon library for the Isabelle theorem prover, which builds on the HOLCF library of domain theory. The Tycon library provides various axiomatic type constructor classes, including functors and monads. It also provides automation for instantiating those classes, and for defining further subclasses. We use the Tycon library to formalize three Haskell monad transformers: the error transformer, the writer transformer, and the resumption transformer. The error and writer transformers do not universally preserve the monad laws; however, we establish datatype invariants for each, showing that they are valid monads when viewed as abstract datatypes."
},
{
"paper_title": "Elaborating intersection and union types",
"paper_authors": [
"Joshua Dunfield"
],
"paper_abstract": "Designing and implementing typed programming languages is hard. Every new type system feature requires extending the metatheory and implementation, which are often complicated and fragile. To ease this process, we would like to provide general mechanisms that subsume many different features. In modern type systems, parametric polymorphism is fundamental, but intersection polymorphism has gained little traction in programming languages. Most practical intersection type systems have supported only refinement intersections, which increase the expressiveness of types (more precise properties can be checked) without altering the expressiveness of terms; refinement intersections can simply be erased during compilation. In contrast, unrestricted intersections increase the expressiveness of terms, and can be used to encode diverse language features, promising an economy of both theory and implementation. We describe a foundation for compiling unrestricted intersection and union types: an elaboration type system that generates ordinary λ-calculus terms. The key feature is a Forsythe-like merge construct. With this construct, not all reductions of the source program preserve types; however, we prove that ordinary call-by-value evaluation of the elaborated program corresponds to a type-preserving evaluation of the source program. We also describe a prototype implementation and applications of unrestricted intersections and unions: records, operator overloading, and simulating dynamic typing."
},
{
"paper_title": "An error-tolerant type system for variational lambda calculus",
"paper_authors": [
"Sheng Chen",
"Martin Erwig",
"Eric Walkingshaw"
],
"paper_abstract": "Conditional compilation and software product line technologies make it possible to generate a huge number of different programs from a single software project. Typing each of these programs individually is usually impossible due to the sheer number of possible variants. Our previous work has addressed this problem with a type system for variational lambda calculus (VLC), an extension of lambda calculus with basic constructs for introducing and organizing variation. Although our type inference algorithm is more efficient than the brute-force strategy of inferring the types of each variant individually, it is less robust since type inference will fail for the entire variational expression if any one variant contains a type error. In this work, we extend our type system to operate on VLC expressions containing type errors. This extension directly supports locating ill-typed variants and the incremental development of variational programs. It also has many subtle implications for the unification of variational types. We show that our extended type system possesses a principal typing property and that the underlying unification problem is unitary. Our unification algorithm computes partial unifiers that lead to result types that (1) contain errors in as few variants as possible and (2) are most general. Finally, we perform an empirical evaluation to determine the overhead of this extension compared to our previous work, to demonstrate the improvements over the brute-force approach, and to explore the effects of various error distributions on the inference process."
},
{
"paper_title": "Superficially substructural types",
"paper_authors": [
"Neelakantan R. Krishnaswami",
"Aaron Turon",
"Derek Dreyer",
"Deepak Garg"
],
"paper_abstract": "Many substructural type systems have been proposed for controlling access to shared state in higher-order languages. Central to these systems is the notion of a *resource*, which may be split into disjoint pieces that different parts of a program can manipulate independently without worrying about interfering with one another. Some systems support a *logical* notion of resource (such as permissions), under which two resources may be considered disjoint even if they govern the *same* piece of state. However, in nearly all existing systems, the notions of resource and disjointness are fixed at the outset, baked into the model of the language, and fairly coarse-grained in the kinds of sharing they enable. In this paper, inspired by recent work on \"fictional disjointness\" in separation logic, we propose a simple and flexible way of enabling any module in a program to create its own custom type of splittable resource (represented as a commutative monoid), thus providing fine-grained control over how the module's private state is shared with its clients. This functionality can be incorporated into an otherwise standard substructural type system by means of a new typing rule we call *the sharing rule*, whose soundness we prove semantically via a novel resource-oriented Kripke logical relation."
},
{
"paper_title": "Shake before building: replacing make with haskell",
"paper_authors": [
"Neil Mitchell"
],
"paper_abstract": "Most complex software projects are compiled using a build tool (e.g. make), which runs commands in an order satisfying user-defined dependencies. Unfortunately, most build tools require all dependencies to be specified before the build starts. This restriction makes many dependency patterns difficult to express, especially those involving files generated at build time. We show how to eliminate this restriction, allowing additional dependencies to be specified while building. We have implemented our ideas in the Haskell library Shake, and have used Shake to write a complex build system which compiles millions of lines of code."
},
{
"paper_title": "Practical typed lazy contracts",
"paper_authors": [
"Olaf Chitil"
],
"paper_abstract": "Until now there has been no support for specifying and enforcing contracts within a lazy functional program. That is a shame, because contracts consist of pre- and post-conditions for functions that go beyond the standard static types. This paper presents the design and implementation of a small, easy-to-use, purely functional contract library for Haskell, which, when a contract is violated, also provides more useful information than the classical blaming of one contract partner. From now on lazy functional languages can profit from the assurances in the development of correct programs that contracts provide."
},
{
"paper_title": "Functional programming with structured graphs",
"paper_authors": [
"Bruno C.d.S. Oliveira",
"William R. Cook"
],
"paper_abstract": "This paper presents a new functional programming model for graph structures called structured graphs. Structured graphs extend conventional algebraic datatypes with explicit definition and manipulation of cycles and/or sharing, and offer a practical and convenient way to program graphs in functional programming languages like Haskell. The representation of sharing and cycles (edges) employs recursive binders and uses an encoding inspired by parametric higher-order abstract syntax. Unlike traditional approaches based on mutable references or node/edge lists, well-formedness of the graph structure is ensured statically and reasoning can be done with standard functional programming techniques. Since the binding structure is generic, we can define many useful generic combinators for manipulating structured graphs. We give applications and show how to reason about structured graphs."
},
{
"paper_title": "Painless programming combining reduction and search: design principles for embedding decision procedures in high-level languages",
"paper_authors": [
"Timothy E. Sheard"
],
"paper_abstract": "We describe the Funlogic system which extends a functional language with existentially quantified declarations. An existential declaration introduces a variable and a set of constraints that its value should meet. Existential variables are bound to conforming values by a decision procedure. Funlogic embeds multiple external decision procedures using a common framework. Design principles for embedding decision procedures are developed and illustrated for three different decision procedures from widely varying domains."
},
{
"paper_title": "Transporting functions across ornaments",
"paper_authors": [
"Pierre-Evariste Dagand",
"Conor McBride"
],
"paper_abstract": "Programming with dependent types is a blessing and a curse. It is a blessing to be able to bake invariants into the definition of datatypes: we can finally write correct-by-construction software. However, this extreme accuracy is also a curse: a datatype is the combination of a structuring medium together with a special purpose logic. These domain-specific logics hamper any effort of code reuse among similarly structured data. In this paper, we exorcise our datatypes by adapting the notion of ornament to our universe of inductive families. We then show how code reuse can be achieved by ornamenting functions. Using these functional ornaments, we capture the relationship between functions such as the addition of natural numbers and the concatenation of lists. With this knowledge, we demonstrate how the implementation of the former informs the implementation of the latter: the user can ask the definition of addition to be lifted to lists and she will only be asked the details necessary to carry on adding lists rather than numbers. Our presentation is formalised in a type theory with a universe of datatypes and all our constructions have been implemented as generic programs, requiring no extension to the type theory."
},
{
"paper_title": "Proof-producing synthesis of ML from higher-order logic",
"paper_authors": [
"Magnus O. Myreen",
"Scott Owens"
],
"paper_abstract": "The higher-order logic found in proof assistants such as Coq and various HOL systems provides a convenient setting for the development and verification of pure functional programs. However, to efficiently run these programs, they must be converted (or \"extracted\") to functional programs in a programming language such as ML or Haskell. With current techniques, this step, which must be trusted, relates similar looking objects that have very different semantic definitions, such as the set-theoretic model of a logic and the operational semantics of a programming language. In this paper, we show how to increase the trustworthiness of this step with an automated technique. Given a functional program expressed in higher-order logic, our technique provides the corresponding program for a functional language defined with an operational semantics, and it provides a mechanically checked theorem relating the two. This theorem can then be used to transfer verified properties of the logical function to the program. We have implemented our technique in the HOL4 theorem prover, translating functions to a core subset of Standard ML, and have applied it to examples including functional data structures, a parser generator, cryptographic algorithms, and a garbage collector."
},
{
"paper_title": "Operational semantics using the partiality monad",
"paper_authors": [
"Nils Anders Danielsson"
],
"paper_abstract": "The operational semantics of a partial, functional language is often given as a relation rather than as a function. The latter approach is arguably more natural: if the language is functional, why not take advantage of this when defining the semantics? One can immediately see that a functional semantics is deterministic and, in a constructive setting, computable. This paper shows how one can use the coinductive partiality monad to define big-step or small-step operational semantics for lambda-calculi and virtual machines as total, computable functions (total definitional interpreters). To demonstrate that the resulting semantics are useful type soundness and compiler correctness results are also proved. The results have been implemented and checked using Agda, a dependently typed programming language and proof assistant."
},
{
"paper_title": "High performance embedded domain specific languages",
"paper_authors": [
"Kunle Olukotun"
],
"paper_abstract": "Today, all high-performance computer architectures are parallel and heterogeneous; a combination of multiple CPUs, GPUs and specialized processors. This creates a complex programming problem for application developers. Domain-specific languages (DSLs) are a promising solution to this problem because they provide an avenue for application-specific abstractions to be mapped directly to low level architecture-specific programming models providing high programmer productivity and high execution performance. In this talk I will describe our approach to building high performance DSLs, which is based on embedding in Scala, light-wieght modular staging and a DSL infrastructure called Delite. I will describe how we transform impure functional programs into efficient first-order low-level code using domain specific optimization, parallelism optimization, locality optimization, scalar optimization, and architecture-specific code generation. All optimizations and transformations are implemented in an extensible DSL compiler architecture that minimizes the programmer effort required to develop a new DSL."
},
{
"paper_title": "Pure type systems with corecursion on streams: from finite to infinitary normalisation",
"paper_authors": [
"Paula G. Severi",
"Fer-Jan J. de Vries"
],
"paper_abstract": "In this paper, we use types for ensuring that programs involving streams are well-behaved. We extend pure type systems with a type constructor for streams, a modal operator next and a fixed point operator for expressing corecursion. This extension is called Pure Type Systems with Corecursion (CoPTS). The typed lambda calculus for reactive programs defined by Krishnaswami and Benton can be obtained as a CoPTS. CoPTSs allow us to study a wide range of typed lambda calculi extended with corecursion using only one framework. In particular, we study this extension for the calculus of constructions which is the underlying formal language of Coq. We use the machinery of infinitary rewriting and formalise the idea of well-behaved programs using the concept of infinitary normalisation. The set of finite and infinite terms is defined as a metric completion. We establish a precise connection between the modal operator (• A) and the metric at a syntactic level by relating a variable of type (• A) with the depth of all its occurrences in a term. This syntactic connection between the modal operator and the depth is the key to the proofs of infinitary weak and strong normalisation."
},
{
"paper_title": "On the complexity of equivalence of specifications of infinite objects",
"paper_authors": [
"Jörg Endrullis",
"Dimitri Hendriks",
"Rena Bakhshi"
],
"paper_abstract": "We study the complexity of deciding the equality of infinite objects specified by systems of equations, and of infinite objects specified by λ-terms. For equational specifications there are several natural notions of equality: equality in all models, equality of the sets of solutions, and equality of normal forms for productive specifications. For λ-terms we investigate Böhm-tree equality and various notions of observational equality. We pinpoint the complexity of each of these notions in the arithmetical or analytical hierarchy. We show that the complexity of deciding equality in all models subsumes the entire analytical hierarchy. This holds already for the most simple infinite objects, viz. streams over {0,1}, and stands in sharp contrast to the low arithmetical ϖ02-completeness of equality of equationally specified streams derived in [17] employing a different notion of equality."
},
{
"paper_title": "Experience report: a do-it-yourself high-assurance compiler",
"paper_authors": [
"Lee Pike",
"Nis Wegmann",
"Sebastian Niller",
"Alwyn Goodloe"
],
"paper_abstract": "Embedded domain-specific languages (EDSLs) are an approach for quickly building new languages while maintaining the advantages of a rich metalanguage. We argue in this experience report that the \"EDSL approach\" can surprisingly ease the task of building a high-assurance compiler. We do not strive to build a fully formally-verified tool-chain, but take a \"do-it-yourself\" approach to increase our confidence in compiler-correctness without too much effort. Copilot is an EDSL developed by Galois, Inc. and the National Institute of Aerospace under contract to NASA for the purpose of runtime monitoring of flight-critical avionics. We report our experience in using type-checking, QuickCheck, and model-checking \"off-the-shelf\" to quickly increase confidence in our EDSL tool-chain."
},
{
"paper_title": "Equality proofs and deferred type errors: a compiler pearl",
"paper_authors": [
"Dimitrios Vytiniotis",
"Simon Peyton Jones",
"José Pedro Magalhães"
],
"paper_abstract": "The Glasgow Haskell Compiler is an optimizing compiler that expresses and manipulates first-class equality proofs in its intermediate language. We describe a simple, elegant technique that exploits these equality proofs to support deferred type errors. The technique requires us to treat equality proofs as possibly-divergent terms; we show how to do so without losing either soundness or the zero-overhead cost model that the programmer expects."
},
{
"paper_title": "Efficient lookup-table protocol in secure multiparty computation",
"paper_authors": [
"John Launchbury",
"Iavor S. Diatchki",
"Thomas DuBuisson",
"Andy Adams-Moran"
],
"paper_abstract": "Secure multiparty computation (SMC) permits a collection of parties to compute a collaborative result, without any of the parties gaining any knowledge about the inputs provided by other parties. Specifications for SMC are commonly presented as boolean circuits, where optimizations come mostly from reducing the number of multiply-operations (including and-gates) - these are the operations which incur significant cost, either in computation overhead or in communication between the parties. Instead, we take a language-oriented approach, and consequently are able to explore many other kinds of optimizations. We present an efficient and general purpose SMC table-lookup algorithm that can serve as a direct alternative to circuits. Looking up a private (i.e. shared, or encrypted) n-bit argument in a public table requires log(n) parallel-and operations. We use the advanced encryption standard algorithm (AES) as a driving motivation, and by introducing different kinds of parallelization techniques, produce the fastest current SMC implementation of AES, improving the best previously reported results by well over an order of magnitude."
},
{
"paper_title": "Addressing covert termination and timing channels in concurrent information flow systems",
"paper_authors": [
"Deian Stefan",
"Alejandro Russo",
"Pablo Buiras",
"Amit Levy",
"John C. Mitchell",
"David Maziéres"
],
"paper_abstract": "When termination of a program is observable by an adversary, confidential information may be leaked by terminating accordingly. While this termination covert channel has limited bandwidth for sequential programs, it is a more dangerous source of information leakage in concurrent settings. We address concurrent termination and timing channels by presenting a dynamic information-flow control system that mitigates and eliminates these channels while allowing termination and timing to depend on secret values. Intuitively, we leverage concurrency by placing such potentially sensitive actions in separate threads. While termination and timing of these threads may expose secret values, our system requires any thread observing these properties to raise its information-flow label accordingly, preventing leaks to lower-labeled contexts. We implement this approach in a Haskell library and demonstrate its applicability by building a web server that uses information-flow control to restrict untrusted web applications."
},
{
"paper_title": "Sneaking around concatMap: efficient combinators for dynamic programming",
"paper_authors": [
"Christian Höner zu Siederdissen"
],
"paper_abstract": "We present a framework of dynamic programming combinators that provides a high-level environment to describe the recursions typical of dynamic programming over sequence data in a style very similar to algebraic dynamic programming (ADP). Using a combination of type-level programming and stream fusion leads to a substantial increase in performance, without sacrificing much of the convenience and theoretical underpinnings of ADP. We draw examples from the field of computational biology, more specifically RNA secondary structure prediction, to demonstrate how to use these combinators and what differences exist between this library, ADP, and other approaches. The final version of the combinator library allows writing algorithms with performance close to hand-optimized C code."
},
{
"paper_title": "Experience report: Haskell in computational biology",
"paper_authors": [
"Noah M. Daniels",
"Andrew Gallant",
"Norman Ramsey"
],
"paper_abstract": "Haskell gives computational biologists the flexibility and rapid prototyping of a scripting language, plus the performance of native code. In our experience, higher-order functions, lazy evaluation, and monads really worked, but profiling and debugging presented obstacles. Also, Haskell libraries vary greatly: memoization combinators and parallel-evaluation strategies helped us a lot, but other, nameless libraries mostly got in our way. Despite the obstacles and the uncertain quality of some libraries, Haskell's ecosystem made it easy for us to develop new algorithms in computational biology."
},
{
"paper_title": "A meta-scheduler for the par-monad: composable scheduling for the heterogeneous cloud",
"paper_authors": [
"Adam Foltzer",
"Abhishek Kulkarni",
"Rebecca Swords",
"Sajith Sasidharan",
"Eric Jiang",
"Ryan Newton"
],
"paper_abstract": "Modern parallel computing hardware demands increasingly specialized attention to the details of scheduling and load balancing across heterogeneous execution resources that may include GPU and cloud environments, in addition to traditional CPUs. Many existing solutions address the challenges of particular resources, but do so in isolation, and in general do not compose within larger systems. We propose a general, composable abstraction for execution resources, along with a continuation-based meta-scheduler that harnesses those resources in the context of a deterministic parallel programming library for Haskell. We demonstrate performance benefits of combined CPU/GPU scheduling over either alone, and of combined multithreaded/distributed scheduling over existing distributed programming approaches for Haskell."
},
{
"paper_title": "Nested data-parallelism on the gpu",
"paper_authors": [
"Lars Bergstrom",
"John Reppy"
],
"paper_abstract": "Graphics processing units (GPUs) provide both memory bandwidth and arithmetic performance far greater than that available on CPUs but, because of their Single-Instruction-Multiple-Data (SIMD) architecture, they are hard to program. Most of the programs ported to GPUs thus far use traditional data-level parallelism, performing only operations that operate uniformly over vectors. NESL is a first-order functional language that was designed to allow programmers to write irregular-parallel programs - such as parallel divide-and-conquer algorithms - for wide-vector parallel computers. This paper presents our port of the NESL implementation to work on GPUs and provides empirical evidence that nested data-parallelism (NDP) on GPUs significantly outperforms CPU-based implementations and matches or beats newer GPU languages that support only flat parallelism. While our performance does not match that of hand-tuned CUDA programs, we argue that the notational conciseness of NESL is worth the loss in performance. This work provides the first language implementation that directly supports NDP on a GPU."
},
{
"paper_title": "Work efficient higher-order vectorisation",
"paper_authors": [
"Ben Lippmeier",
"Manuel M.T. Chakravarty",
"Gabriele Keller",
"Roman Leshchinskiy",
"Simon Peyton Jones"
],
"paper_abstract": "Existing approaches to higher-order vectorisation, also known as flattening nested data parallelism, do not preserve the asymptotic work complexity of the source program. Straightforward examples, such as sparse matrix-vector multiplication, can suffer a severe blow-up in both time and space, which limits the practicality of this method. We discuss why this problem arises, identify the mis-handling of index space transforms as the root cause, and present a solution using a refined representation of nested arrays. We have implemented this solution in Data Parallel Haskell (DPH) and present benchmarks showing that realistic programs, which used to suffer the blow-up, now have the correct asymptotic work complexity. In some cases, the asymptotic complexity of the vectorised program is even better than the original."
},
{
"paper_title": "Tales from the jungle",
"paper_authors": [
"Peter Sewell"
],
"paper_abstract": "We rely on a computational infrastructure that is a densely interwined mass of software and hardware: programming languages, network protocols, operating systems, and processors. It has accumulated great complexity, from a combination of engineering design decisions, contingent historical choices, and sheer scale, yet it is defined at best by prose specifications, or, all too often, just by the common implementations. Can we do better? More specifically, can we apply rigorous methods to this mainstream infrastructure, taking the accumulated complexity seriously, and if we do, does it help? My colleagues and I have looked at these questions in several contexts: the TCP/IP network protocols with their Sockets API; programming language design, including the Java module system and the C11/C++11 concurrency model; the hardware concurrency behaviour of x86, IBM POWER, and ARM multiprocessors; and compilation of concurrent code. In this talk I will draw some lessons from what did and did not succeed, looking especially at the empirical nature of some of the work, at the social process of engagement with the various different communities, and at the mathematical and software tools we used. Domain-specific modelling languages (based on functional programming ideas) and proof assistants were invaluable for working with the large and loose specifications involved: idioms within HOL4 for TCP, our Ott tool for programming language specification, and Owens's Lem tool for portable semantic definitions, with HOL4, Isabelle, and Coq, for the relaxed-memory concurrency semantics work. Our experience with these suggests something of what is needed to make full-scale rigorous semantics a commonplace reality."
},
{
"paper_title": "Propositions as sessions",
"paper_authors": [
"Philip Wadler"
],
"paper_abstract": "Continuing a line of work by Abramsky (1994), by Bellin and Scott (1994), and by Caires and Pfenning (2010), among others, this paper presents CP, a calculus in which propositions of classical linear logic correspond to session types. Continuing a line of work by Honda (1993), by Honda, Kubo, and Vasconcelos (1998), and by Gay and Vasconcelos (2010), among others, this paper presents GV, a linear functional language with session types, and presents a translation from GV into CP. The translation formalises for the first time a connection between a standard presentation of session types and linear logic, and shows how a modification to the standard presentation yield a language free from deadlock, where deadlock freedom follows from the correspondence to linear logic."
},
{
"paper_title": "Typing unmarshalling without marshalling types",
"paper_authors": [
"Grégoire Henry",
"Michel Mauny",
"Emmanuel Chailloux",
"Pascal Manoury"
],
"paper_abstract": "Unmarshalling primitives in statically typed language require, in order to preserve type safety, to dynamically verify the compatibility between the incoming values and the statically expected type. In the context of programming languages based on parametric polymorphism and uniform data representation, we propose a relation of compatibility between (unmarshalled) memory graphs and types. It is defined as constraints over nodes of the memory graph. Then, we propose an algorithm to check the compatibility between a memory graph and a type. It is described as a constraint solver based on a rewriting system. We have shown that the proposed algorithm is sound and semi-complete in presence of algebraic data types, mutable data, polymorphic sharing, cycles, and functional values, however, in its general form, it may not terminate. We have implemented a prototype tailored for the OCaml compiler [17] that always terminates and still seems sufficiently complete in practice."
},
{
"paper_title": "Deconstraining DSLs",
"paper_authors": [
"Will Jones",
"Tony Field",
"Tristan Allwood"
],
"paper_abstract": "Strongly-typed functional languages provide a powerful framework for embedding Domain-Specific Languages (DSLs). However, building type-safe functions defined over an embedded DSL can introduce application-specific type constraints that end up being imposed on the DSL data types themselves. At best, these constraints are unwieldy and at worst they can limit the range of DSL expressions that can be built. We present a simple solution to this problem that allows application-specific constraints to be specified at the point of use of a DSL expression rather than when the DSL's embedding types are defined. Our solution applies equally to both tagged and tagless representations and, importantly, also works in the presence of higher-rank types."
},
{
"paper_title": "Explicitly heterogeneous metaprogramming with MetaHaskell",
"paper_authors": [
"Geoffrey Mainland"
],
"paper_abstract": "Languages with support for metaprogramming, like MetaOCaml, offer a principled approach to code generation by guaranteeing that well-typed metaprograms produce well-typed programs. However, many problem domains where metaprogramming can fruitfully be applied require generating code in languages like C, CUDA, or assembly. Rather than resorting to add-hoc code generation techniques, these applications should be directly supported by explicitly heterogeneous metaprogramming languages. We present MetaHaskell, an extension of Haskell 98 that provides modular syntactic and type system support for type safe metaprogramming with multiple object languages. Adding a new object language to MetaHaskell requires only minor modifications to the host language to support type-level quantification over object language types and propagation of type equality constraints. We demonstrate the flexibility of our approach through three object languages: a core ML language, a linear variant of the core ML language, and a subset of C. All three languages support metaprogramming with open terms and guarantee that well-typed MetaHaskell programs will only produce closed object terms that are well-typed. The essence of MetaHaskell is captured in a type system for a simplified metalanguage. MetaHaskell, as well as all three object languages, are fully implemented in the mhc bytecode compiler."
},
{
"paper_title": "A generic abstract syntax model for embedded languages",
"paper_authors": [
"Emil Axelsson"
],
"paper_abstract": "Representing a syntax tree using a data type often involves having many similar-looking constructors. Functions operating on such types often end up having many similar-looking cases. Different languages often make use of similar-looking constructions. We propose a generic model of abstract syntax trees capable of representing a wide range of typed languages. Syntactic constructs can be composed in a modular fashion enabling reuse of abstract syntax and syntactic processing within and across languages. Building on previous methods of encoding extensible data types in Haskell, our model is a pragmatic solution to Wadler's \"expression problem\". Its practicality has been confirmed by its use in the implementation of the embedded language Feldspar."
},
{
"paper_title": "Automatic amortised analysis of dynamic memory allocation for lazy functional programs",
"paper_authors": [
"Hugo Simões",
"Pedro Vasconcelos",
"Mário Florido",
"Steffen Jost",
"Kevin Hammond"
],
"paper_abstract": "This paper describes the first successful attempt, of which we are aware, to define an automatic, type-based static analysis of resource bounds for lazy functional programs. Our analysis uses the automatic amortisation approach developed by Hofmann and Jost, which was previously restricted to eager evaluation. In this paper, we extend this work to a lazy setting by capturing the costs of unevaluated expressions in type annotations and by amortising the payment of these costs using a notion of lazy potential. We present our analysis as a proof system for predicting heap allocations of a minimal functional language (including higher-order functions and recursive data types) and define a formal cost model based on Launchbury's natural semantics for lazy evaluation. We prove the soundness of our analysis with respect to the cost model. Our approach is illustrated by a number of representative and non-trivial examples that have been analysed using a prototype implementation of our analysis."
},
{
"paper_title": "Introspective pushdown analysis of higher-order programs",
"paper_authors": [
"Christopher Earl",
"Ilya Sergey",
"Matthew Might",
"David Van Horn"
],
"paper_abstract": "In the static analysis of functional programs, pushdown flow analysis and abstract garbage collection skirt just inside the boundaries of soundness and decidability. Alone, each method reduces analysis times and boosts precision by orders of magnitude. This work illuminates and conquers the theoretical challenges that stand in the way of combining the power of these techniques. The challenge in marrying these techniques is not subtle: computing the reachable control states of a pushdown system relies on limiting access during transition to the top of the stack; abstract garbage collection, on the other hand, needs full access to the entire stack to compute a root set, just as concrete collection does. Introspective pushdown systems resolve this conflict. Introspective pushdown systems provide enough access to the stack to allow abstract garbage collection, but they remain restricted enough to compute control-state reachability, thereby enabling the sound and precise product of pushdown analysis and abstract garbage collection. Experiments reveal synergistic interplay between the techniques, and the fusion demonstrates \"better-than-both-worlds\" precision."
},
{
"paper_title": "A traversal-based algorithm for higher-order model checking",
"paper_authors": [
"Robin P. Neatherway",
"Steven J. Ramsay",
"Chih-Hao Luke Ong"
],
"paper_abstract": "Higher-order model checking - the model checking of trees generated by higher-order recursion schemes (HORS) - is a natural generalisation of finite-state and pushdown model checking. Recent work has shown that it can serve as a basis for software model checking for functional languages such as ML and Haskell. In this paper, we introduce higher-order recursion schemes with cases (HORSC), which extend HORS with a definition-by-cases construct (to express program branching based on data) and non-determinism (to express abstractions of behaviours). This paper is a study of the universal HORSC model checking problem for deterministic trivial automata: does the automaton accept every tree in the tree language generated by the given HORSC? We first characterise the model checking problem by an intersection type system extended with a carefully restricted form of union types. We then present an algorithm for deciding the model checking problem, which is based on the notion of traversals induced by the fully abstract game semantics of these schemes, but presented as a goal-directed construction of derivations in the intersection and union type system. We view HORSC model checking as a suitable backend engine for an approach to verifying functional programs. We have implemented the algorithm in a tool called TravMC, and demonstrated its effectiveness on a test suite of programs, including abstract models of functional programs obtained via an abstraction-refinement procedure from pattern-matching recursion schemes."
},
{
"paper_title": "Functional programs that explain their work",
"paper_authors": [
"Roly Perera",
"Umut A. Acar",
"James Cheney",
"Paul Blain Levy"
],
"paper_abstract": "We present techniques that enable higher-order functional computations to \"explain\" their work by answering questions about how parts of their output were calculated. As explanations, we consider the traditional notion of program slices, which we show can be inadequate, and propose a new notion: trace slices. We present techniques for specifying flexible and rich slicing criteria based on partial expressions, parts of which have been replaced by holes. We characterise program slices in an algorithm-independent fashion and show that a least slice for a given criterion exists. We then present an algorithm, called unevaluation, for computing least program slices from computations reified as traces. Observing a limitation of program slices, we develop a notion of trace slice as another form of explanation and present an algorithm for computing them. The unevaluation algorithm can be applied to any subtrace of a trace slice to compute a program slice whose evaluation generates that subtrace. This close correspondence between programs, traces, and their slices can enable the programmer to understand a computation interactively, in terms of the programming language in which the computation is expressed. We present an implementation in the form of a tool, discuss some important practical implementation concerns and present some techniques for addressing them."
}
]
},
{
"proceeding_title": "ICFP '11:Proceedings of the 16th ACM SIGPLAN international conference on Functional programming",
"proceeding_contents": [
{
"paper_title": "Towards a comprehensive theory of monadic effects",
"paper_authors": [
"Andrzej Filinski"
],
"paper_abstract": "It has been more than 20 years since monads were proposed as a unifying concept for computational effects, in both formal semantics and functional programs. Over that period, there has been substantial incremental progress on several fronts within the ensuing research area, including denotational, operational, and axiomatic characterizations of effects; principles and frameworks for combining effects; prescriptive vs. descriptive effect-type systems; specification vs. implementation of effects; and realizations of effect-related theoretical constructions in practical functional languages, both eager and lazy. Yet few would confidently claim that programs with computational effects are by now as well understood, and as thoroughly supported by formal reasoning techniques, as types and terms in purely functional settings. This talk outlines (one view of) the landscape of effectful functional programming, and attempts to assess our collective progress towards the goal of a broad yet coherent theory of monadic effects. We are not quite there yet, but intriguingly, many potential ingredients of such a theory have been repeatedly discovered and developed, with only minor variations, in seemingly unrelated contexts. Some stronger-than-expected ties between the research topics mentioned above also instill hope that there is indeed a natural, comprehensive theory of monadic effects, waiting to be fully explicated."
},
{
"paper_title": "Just do it: simple monadic equational reasoning",
"paper_authors": [
"Jeremy Gibbons",
"Ralf Hinze"
],
"paper_abstract": "One of the appeals of pure functional programming is that it is so amenable to equational reasoning. One of the problems of pure functional programming is that it rules out computational effects. Moggi and Wadler showed how to get round this problem by using monads to encapsulate the effects, leading in essence to a phase distinction - a pure functional evaluation yielding an impure imperative computation. Still, it has not been clear how to reconcile that phase distinction with the continuing appeal of functional programming; does the impure imperative part become inaccessible to equational reasoning? We think not; and to back that up, we present a simple axiomatic approach to reasoning about programs with computational effects."
},
{
"paper_title": "Lightweight monadic programming in ML",
"paper_authors": [
"Nikhil Swamy",
"Nataliya Guts",
"Daan Leijen",
"Michael Hicks"
],
"paper_abstract": "Many useful programming constructions can be expressed as monads. Examples include probabilistic modeling, functional reactive programming, parsing, and information flow tracking, not to mention effectful functionality like state and I/O. In this paper, we present a type-based rewriting algorithm to make programming with arbitrary monads as easy as using ML's built-in support for state and I/O. Developers write programs using monadic values of type m τ as if they were of type τ, and our algorithm inserts the necessary binds, units, and monad-to-monad morphisms so that the program type checks. Our algorithm, based on Jones' qualified types, produces principal types. But principal types are sometimes problematic: the program's semantics could depend on the choice of instantiation when more than one instantiation is valid. In such situations we are able to simplify the types to remove any ambiguity but without adversely affecting typability; thus we can accept strictly more programs. Moreover, we have proved that this simplification is efficient (linear in the number of constraints) and coherent: while our algorithm induces a particular rewriting, all related rewritings will have the same semantics. We have implemented our approach for a core functional language and applied it successfully to simple examples from the domains listed above, which are used as illustrations throughout the paper."
},
{
"paper_title": "Functional programming through deep time: modeling the first complex ecosystems on earth",
"paper_authors": [
"Emily G. Mitchell"
],
"paper_abstract": "The ecology of Earth's first large organisms is an unsolved problem in palaeontology. This experience report discusses the determination of which ecosystems could have been feasible, by considering the biological feedbacks within them. Haskell was used to model the ecosystems for these first large organisms - the Ediacara biota. For verification of the results, the statistical language R was used. Neither Haskell nor R would have been sufficient for this work - Haskell's libraries for statistics are weak, while R lacks the structure for expressing algorithms in a maintainable manner. This work is the first to quantify all feedback loops in an ecosystem, and has generated considerable interest from both the ecological and palaeontological communities."
},
{
"paper_title": "Monads, zippers and views: virtualizing the monad stack",
"paper_authors": [
"Tom Schrijvers",
"Bruno C.d.S. Oliveira"
],
"paper_abstract": "We make monadic components more reusable and robust to changes by employing two new techniques for virtualizing the monad stack: the monad zipper and monad views. The monad zipper is a higher-order monad transformer that creates virtual monad stacks by ignoring particular layers in a concrete stack. Monad views provide a general framework for monad stack virtualization: they take the monad zipper one step further and integrate it with a wide range of other virtualizations. For instance, particular views allow restricted access to monads in the stack. Furthermore, monad views provide components with a call-by-reference-like mechanism for accessing particular layers of the monad stack. With our two new mechanisms, the monadic effects required by components no longer need to be literally reflected in the concrete monad stack. This makes these components more reusable and robust to changes."
},
{
"paper_title": "A semantic model for graphical user interfaces",
"paper_authors": [
"Neelakantan R. Krishnaswami",
"Nick Benton"
],
"paper_abstract": "We give a denotational model for graphical user interface (GUI) programming using the Cartesian closed category of ultrametric spaces. The ultrametric structure enforces causality restrictions on reactive systems and allows well-founded recursive definitions by a generalization of guardedness. We capture the arbitrariness of user input (e.g., a user gets to decide the stream of clicks she sends to a program) by making use of the fact that the closed subsets of an ultrametric space themselves form an ultrametric space, allowing us to interpret nondeterminism with a \"powerspace\" monad. Algebras for the powerspace monad yield a model of intuitionistic linear logic, which we exploit in the definition of a mixed linear/non-linear domain-specific language for writing GUI programs. The non-linear part of the language is used for writing reactive stream-processing functions whilst the linear sublanguage naturally captures the generativity and usage constraints on the various linear objects in GUIs, such as the elements of a DOM or scene graph. We have implemented this DSL as an extension to OCaml, and give examples demonstrating that programs in this style can be short and readable."
},
{
"paper_title": "Modular rollback through control logging: a pair of twin functional pearls",
"paper_authors": [
"Olin Shivers",
"Aaron J. Turon"
],
"paper_abstract": "We present a technique, based on the use of first-class control operators, enabling programs to maintain and invoke rollback logs for sequences of reversible effects. Our technique is modular, in that it provides complete separation between some library of effectful operations, and a client, \"driver\" program which invokes and rolls back sequences of these operations. In particular, the checkpoint mechanism, which is entirely encapsulated within the effect library, logs not only the library's effects, but also the client's control state. Thus, logging and rollback can be almost completely transparent to the client code. This separation of concerns manifests itself nicely when we must implement software with sophisticated error handling. We illustrate with two examples that exploit the architecture to disentangle some core parsing task from its error management. The parser code is completely separate from the error-correction code, although the two components are deeply intertwined at run time."
},
{
"paper_title": "Pushdown flow analysis of first-class control",
"paper_authors": [
"Dimitrios Vardoulakis",
"Olin Shivers"
],
"paper_abstract": "Pushdown models are better than control-flow graphs for higher-order flow analysis. They faithfully model the call/return structure of a program, which results in fewer spurious flows and increased precision. However, pushdown models require that calls and returns in the analyzed program nest properly. As a result, they cannot be used to analyze language constructs that break call/return nesting such as generators, coroutines, call/cc, etc. In this paper, we extend the CFA2 flow analysis to create the first pushdown flow analysis for languages with first-class control. We modify the abstract semantics of CFA2 to allow continuations to escape to, and be restored from, the heap. We then present a summarization algorithm that handles escaping continuations via a new kind of summary edge. We prove that the algorithm is sound with respect to the abstract semantics."
},
{
"paper_title": "Subtyping delimited continuations",
"paper_authors": [
"Marek Materzok",
"Dariusz Biernacki"
],
"paper_abstract": "We present a type system with subtyping for first-class delimited continuations that generalizes Danvy and Filinski's type system for shift and reset by maintaining explicit information about the types of contexts in the metacontext. We exploit this generalization by considering the control operators known as shift0 and reset0 that can access arbitrary contexts in the metacontext. We use subtyping to control the level of information about the metacontext the expression actually requires and in particular to coerce pure expressions into effectful ones. For this type system we prove strong type soundness and termination of evaluation and we present a provably correct type reconstruction algorithm. We also introduce two CPS translations for shift0 and reset0: one targeting the untyped lambda calculus, and another - type-directed - targeting the simply-typed lambda calculus. The latter translation preserves typability and is selective in that it keeps pure expressions in direct style."
},
{
"paper_title": "Set-theoretic foundation of parametric polymorphism and subtyping",
"paper_authors": [
"Giuseppe Castagna",
"Zhiwu Xu"
],
"paper_abstract": "We define and study parametric polymorphism for a type system with recursive, product, union, intersection, negation, and function types. We first recall why the definition of such a system was considered hard \"when not impossible\" and then present the main ideas at the basis of our solution. In particular, we introduce the notion of \"convexity\" on which our solution is built up and discuss its connections with parametricity as defined by Reynolds to whose study our work sheds new light."
},
{
"paper_title": "Parametric polymorphism and semantic subtyping: the logical connection",
"paper_authors": [
"Nils Gesbert",
"Pierre Genevès",
"Nabil Layaïda"
],
"paper_abstract": "We consider a type algebra equipped with recursive, product, function, intersection, union, and complement types together with type variables and implicit universal quantification over them. We consider the subtyping relation recently defined by Castagna and Xu over such type expressions and show how this relation can be decided in EXPTIME, answering an open question. The novelty, originality and strength of our solution reside in introducing a logical modeling for the semantic subtyping framework. We model semantic subtyping in a tree logic and use a satisfiability-testing algorithm in order to decide subtyping. We report on practical experiments made with a full implementation of the system. This provides a powerful polymorphic type system aiming at maintaining full static type-safety of functional programs that manipulate trees, even with higher-order functions, which is particularly useful in the context of XML."
},
{
"paper_title": "Balanced trees inhabiting functional parallel programming",
"paper_authors": [
"Akimasa Morihata",
"Kiminori Matsuzaki"
],
"paper_abstract": "Divide-and-conquer is an important technique in parallel programming. However, algebraic data structures do not fit divide-and-conquer parallelism. For example, the usual pointer-based implementation of lists cannot efficiently be divided at their middle, which prevents us from developing list-iterating divide-and-conquer parallel programs. Tree-iterating programs possibly face a similar problem, because trees might be ill-balanced and list-like shapes. This paper examines parallel programming based on balanced trees: we consider balanced-tree structures and develop recursive functions on them. By virtue of their balancing nature, either bottom-up or top-down recursive functions exploit divide-and-conquer parallelism. Our main contribution is to demonstrate the promise of this approach. We propose a way of systematically developing balanced trees from parallel algorithms, and then, we show that efficient parallel programs on them can be developed by equational reasoning powered by Reynolds' relational parametricity. We consider functions that operate either lists or binary trees, and show that our methods can uniformly deal with both cases. The developed parallel programs are purely functional, correct by construction, and sometimes even simpler than known algorithms."
},
{
"paper_title": "Implicit self-adjusting computation for purely functional programs",
"paper_authors": [
"Yan Chen",
"Joshua Dunfield",
"Matthew A. Hammer",
"Umut A. Acar"
],
"paper_abstract": "Computational problems that involve dynamic data, such as physics simulations and program development environments, have been an important subject of study in programming languages. Building on this work, recent advances in self-adjusting computation have developed techniques that enable programs to respond automatically and efficiently to dynamic changes in their inputs. Self-adjusting programs have been shown to be efficient for a reasonably broad range of problems but the approach still requires an explicit programming style, where the programmer must use specific monadic types and primitives to identify, create and operate on data that can change over time. We describe techniques for automatically translating purely functional programs into self-adjusting programs. In this implicit approach, the programmer need only annotate the (top-level) input types of the programs to be translated. Type inference finds all other types, and a type-directed translation rewrites the source program into an explicitly self-adjusting target program. The type system is related to information-flow type systems and enjoys decidable type inference via constraint solving. We prove that the translation outputs well-typed self-adjusting programs and preserves the source program's input-output behavior, guaranteeing that translated programs respond correctly to all changes to their data. Using a cost semantics, we also prove that the translation preserves the asymptotic complexity of the source program."
},
{
"paper_title": "Programming assurance cases in Agda",
"paper_authors": [
"Makoto Takeyama"
],
"paper_abstract": "Agda is a modern functional programming language equipped with an interactive proof assistant as its developing environment. Its features include dependent types, type universe, inductive and coinductive families of types, pattern matching, records, and nested parameterized modules. Based on the \"propositions as types, proofs as programs\" correspondence in Martin-Löf's Type Theory, Agda lets users to construct, verify, and execute a smooth mixture of programs and proofs. Using Agda is similar to using an editor in a modern IDE. Users have more direct control over how programs / proofs are written than in automation-oriented systems using command-scripts for proof construction. Agda thus encourages users to express their ideas with more sophisticated dependently typed programming and less logical proofs. Programming techniques for readability and maintainability now translate to techniques for writing verified documents for human communication. Agda has been developed at Chalmers University of Technology by Ulf Norell and others. A growing international community of developers and users applies it in research, education, and industry. At AIST in Japan, we aim to introduce its merits to construction, verification, maintenance, and run-time evaluation of \"assurance cases\", which are documented bodies of systems assurance arguments used as the hub for assurance- and risk-communication among stakeholders. The talk gives an overview of Agda and presents our current effort on programming assurance cases in Agda."
},
{
"paper_title": "On the bright side of type classes: instance arguments in Agda",
"paper_authors": [
"Dominique Devriese",
"Frank Piessens"
],
"paper_abstract": "We present instance arguments: an alternative to type classes and related features in the dependently typed, purely functional programming language/proof assistant Agda. They are a new, general type of function arguments, resolved from call-site scope in a type-directed way. The mechanism is inspired by both Scala's implicits and Agda's existing implicit arguments, but differs from both in important ways. Our mechanism is designed and implemented for Agda, but our design choices can be applied to other programming languages as well. Like Scala's implicits, we do not provide a separate structure for type classes and their instances, but instead rely on Agda's standard dependently typed records, so that standard language mechanisms provide features that are missing or expensive in other proposals. Like Scala, we support the equivalent of local instances. Unlike Scala, functions taking our new arguments are first-class citizens and can be abstracted over and manipulated in standard ways. Compared to other proposals, we avoid the pitfall of introducing a separate type-level computational model through the instance search mechanism. All values in scope are automatically candidates for instance resolution. A final novelty of our approach is that existing Agda libraries using records gain the benefits of type classes without any modification. We discuss our implementation in Agda (to be part of Agda 2.2.12) and we use monads as an example to show how it allows existing concepts in the Agda standard library to be used in a similar way as corresponding Haskell code using type classes. We also demonstrate and discuss equivalents and alternatives to some advanced type class-related patterns from the literature and some new patterns specific to our system."
},
{
"paper_title": "Functional modelling of musical harmony: an experience report",
"paper_authors": [
"José Pedro Magalhães",
"W. Bas de Haas"
],
"paper_abstract": "Music theory has been essential in composing and performing music for centuries. Within Western tonal music, from the early Baroque on to modern-day jazz and pop music, the function of chords within a chord sequence can be explained by harmony theory. Although Western tonal harmony theory is a thoroughly studied area, formalising this theory is a hard problem. We present a formalisation of the rules of tonal harmony as a Haskell (generalized) algebraic datatype. Given a sequence of chord labels, the harmonic function of a chord in its tonal context is automatically derived. For this, we use several advanced functional programming techniques, such as type-level computations, datatype-generic programming, and error-correcting parsers. As a detailed example, we show how our model can be used to improve content-based retrieval of jazz songs. We explain why Haskell is the perfect match for these tasks, and compare our implementation to an earlier solution in Java. We also point out shortcomings of the language and libraries that limit our work, and discuss future developments which may ameliorate our solution."
},
{
"paper_title": "How to make ad hoc proof automation less ad hoc",
"paper_authors": [
"Georges Gonthier",
"Beta Ziliani",
"Aleksandar Nanevski",
"Derek Dreyer"
],
"paper_abstract": "Most interactive theorem provers provide support for some form of user-customizable proof automation. In a number of popular systems, such as Coq and Isabelle, this automation is achieved primarily through tactics, which are programmed in a separate language from that of the prover's base logic. While tactics are clearly useful in practice, they can be difficult to maintain and compose because, unlike lemmas, their behavior cannot be specified within the expressive type system of the prover itself. We propose a novel approach to proof automation in Coq that allows the user to specify the behavior of custom automated routines in terms of Coq's own type system. Our approach involves a sophisticated application of Coq's canonical structures, which generalize Haskell type classes and facilitate a flexible style of dependently-typed logic programming. Specifically, just as Haskell type classes are used to infer the canonical implementation of an overloaded term at a given type, canonical structures can be used to infer the canonical proof of an overloaded lemma for a given instantiation of its parameters. We present a series of design patterns for canonical structure programming that enable one to carefully and predictably coax Coq's type inference engine into triggering the execution of user-supplied algorithms during unification, and we illustrate these patterns through several realistic examples drawn from Hoare Type Theory. We assume no prior knowledge of Coq and describe the relevant aspects of Coq type inference from first principles."
},
{
"paper_title": "Temporal higher-order contracts",
"paper_authors": [
"Tim Disney",
"Cormac Flanagan",
"Jay McCarthy"
],
"paper_abstract": "Behavioral contracts are embraced by software engineers because they document module interfaces, detect interface violations, and help identify faulty modules (packages, classes, functions, etc). This paper extends prior higher-order contract systems to also express and enforce temporal properties, which are common in software systems with imperative state, but which are mostly left implicit or are at best informally specified. The paper presents both a programmatic contract API as well as a temporal contract language, and reports on experience and performance results from implementing these contracts in Racket. Our development formalizes module behavior as a trace of events such as function calls and returns. Our contract system provides both non-interference (where contracts cannot influence correct executions) and also a notion of completeness (where contracts can enforce any decidable, prefix-closed predicate on event traces)."
},
{
"paper_title": "Parsing with derivatives: a functional pearl",
"paper_authors": [
"Matthew Might",
"David Darais",
"Daniel Spiewak"
],
"paper_abstract": "We present a functional approach to parsing unrestricted context-free grammars based on Brzozowski's derivative of regular expressions. If we consider context-free grammars as recursive regular expressions, Brzozowski's equational theory extends without modification to context-free grammars (and it generalizes to parser combinators). The supporting actors in this story are three concepts familiar to functional programmers - laziness, memoization and fixed points; these allow Brzozowski's original equations to be transliterated into purely functional code in about 30 lines spread over three functions. Yet, this almost impossibly brief implementation has a drawback: its performance is sour - in both theory and practice. The culprit? Each derivative can double the size of a grammar, and with it, the cost of the next derivative. Fortunately, much of the new structure inflicted by the derivative is either dead on arrival, or it dies after the very next derivative. To eliminate it, we once again exploit laziness and memoization to transliterate an equational theory that prunes such debris into working code. Thanks to this compaction, parsing times become reasonable in practice. We equip the functional programmer with two equational theories that, when combined, make for an abbreviated understanding and implementation of a system for parsing context-free languages."
},
{
"paper_title": "An efficient non-moving garbage collector for functional languages",
"paper_authors": [
"Katsuhiro Ueno",
"Atsushi Ohori",
"Toshiaki Otomo"
],
"paper_abstract": "Motivated by developing a memory management system that allows functional languages to seamlessly inter-operate with C, we propose an efficient non-moving garbage collection algorithm based on bitmap marking and report its implementation and performance evaluation. In our method, the heap consists of sub-heaps Hi | c ≤ i ≤ B of exponentially increasing allocation sizes (Hi for 2i bytes) and a special sub-heap for exceptionally large objects. Actual space for each sub-heap is dynamically allocated and reclaimed from a pool of fixed size allocation segments. In each allocation segment, the algorithm maintains a bitmap representing the set of live objects. Allocation is done by searching for the next free bit in the bitmap. By adding meta-level bitmaps that summarize the contents of bitmaps hierarchically and maintaining the current bit position in the bitmap hierarchy, the next free bit can be found in a small constant time for most cases, and in log32(segmentSize) time in the worst case on a 32-bit architecture. The collection is done by clearing the bitmaps and tracing live objects. The algorithm can be extended to generational GC by maintaining multiple bitmaps for the same heap space. The proposed method does not require compaction and objects are not moved at all. This property is significant for a functional language to inter-operate with C, and it should also be beneficial in supporting multiple native threads. The proposed method has been implemented in a full-scale Standard ML compiler. Our benchmark tests show that our non-moving collector performs as efficiently as a generational copying collector designed for functional languages."
},
{
"paper_title": "Deriving an efficient FPGA implementation of a low density parity check forward error corrector",
"paper_authors": [
"Andy Gill",
"Andrew Farmer"
],
"paper_abstract": "Creating correct hardware is hard. Though there is much talk of using formal and semi-formal methods to develop designs and implementations, in practice most implementations are written without the support of any formal or semi-formal methodology. Having such a methodology brings many benefits, including improved likelihood of a correct implementation, lowering the cost of design exploration and lowering the cost of certification. In this paper, we introduce a semi formal methodology for connecting executable specifications written in the functional language Haskell to efficient VHDL implementations. The connection is performed by manual edits, using semi-formal equational reasoning facilitated by the worker/wrapper transformation, and directed using commutable functors. We explain our methodology on a full-scale example, an efficient Low-Density Parity Check forward error correcting code, which has been implemented on a Virtex-5 FPGA."
},
{
"paper_title": "Geometry of synthesis iv: compiling affine recursion into static hardware",
"paper_authors": [
"Dan R. Ghica",
"Alex Smith",
"Satnam Singh"
],
"paper_abstract": "Abramsky's Geometry of Interaction interpretation (GoI) is a logical-directed way to reconcile the process and functional views of computation, and can lead to a dataflow-style semantics of programming languages that is both operational (i.e. effective) and denotational (i.e. inductive on the language syntax). The key idea of Ghica's Geometry of Synthesis (GoS) approach is that for certain programming languages (namely Reynolds's affine Syntactic Control of Interference - SCI) the GoI processes-like interpretation of the language can be given a finitary representation, for both internal state and tokens. A physical realisation of this representation becomes a semantics-directed compiler for SCI into hardware. In this paper we examine the issue of compiling affine recursive programs into hardware using the GoS method. We give syntax and compilation techniques for unfolding recursive computation in space or in time and we illustrate it with simple benchmark-style examples. We examine the performance of the benchmarks against conventional CPU-based execution models."
},
{
"paper_title": "A hierarchy of mendler style recursion combinators: taming inductive datatypes with negative occurrences",
"paper_authors": [
"Ki Yung Ahn",
"Tim Sheard"
],
"paper_abstract": "The Mendler style catamorphism (which corresponds to weak induction) always terminates even for negative inductive datatypes. The Mendler style histomorphism (which corresponds to strong induction) is known to terminate for positive inductive datatypes. To our knowledge, the literature is silent on its termination properties for negative datatypes. In this paper, we prove that histomorphisms do not always termintate by showing a counter-example. We also enrich the Mendler collection of recursion combinators by defining a new form of Mendler style catamorphism (msfcata), which terminates for all inductive datatypes, that is more expressive than the original. We organize the collection of combinators by placing them into a hierarchy of ever increasing generality, and describing the termination properties of each point on the hierarchy. We also provide many examples (including a case study on a negative inductive datatype), which illustrate both the expressive power and beauty of the Mendler style. One lesson we learn from this work is that weak induction applies to negative inductive datatypes but strong induction is problematic. We provide a proof of weak induction by exhibiting an embedding of our new combinator into Fω. We pose the open question: Is there a safe way to apply strong induction to negative inductive datatypes?"
},
{
"paper_title": "Typed self-interpretation by pattern matching",
"paper_authors": [
"Barry Jay",
"Jens Palsberg"
],
"paper_abstract": "Self-interpreters can be roughly divided into two sorts: self-recognisers that recover the input program from a canonical representation, and self-enactors that execute the input program. Major progress for statically-typed languages was achieved in 2009 by Rendel, Ostermann, and Hofer who presented the first typed self-recogniser that allows representations of different terms to have different types. A key feature of their type system is a type:type rule that renders the kind system of their language inconsistent. In this paper we present the first statically-typed language that not only allows representations of different terms to have different types, and supports a self-recogniser, but also supports a self-enactor. Our language is a factorisation calculus in the style of Jay and Given-Wilson, a combinatory calculus with a factorisation operator that is powerful enough to support the pattern-matching functions necessary for a self-interpreter. This allows us to avoid a type:type rule. Indeed, the types of System F are sufficient. We have implemented our approach and our experiments support the theory."
},
{
"paper_title": "Using camlp4 for presenting dynamic mathematics on the web: DynaMoW, an OCaml language extension for the run-time generation of mathematical contents and their presentation on the web",
"paper_authors": [
"Frédéric Chyzak",
"Alexis Darrasse"
],
"paper_abstract": "We report on the design and implementation of a programming tool, DynaMoW, to control interactive and incremental mathematical calculations to be presented on the web. This tool is implemented as a language extension of OCaml using Camlp4. Fragments of mathematical code written for a computer-algebra system as well as fragments of mathematical web documents are embedded directly and naturally inside OCaml code. A DynaMoW-based application is made of independent web services, whose parameter types are checked by the OCaml extension. The approach is illustrated by two implementations of online mathematical encyclopedias on top of DynaMoW."
},
{
"paper_title": "Secure distributed programming with value-dependent types",
"paper_authors": [
"Nikhil Swamy",
"Juan Chen",
"Cédric Fournet",
"Pierre-Yves Strub",
"Karthikeyan Bhargavan",
"Jean Yang"
],
"paper_abstract": "Distributed applications are difficult to program reliably and securely. Dependently typed functional languages promise to prevent broad classes of errors and vulnerabilities, and to enable program verification to proceed side-by-side with development. However, as recursion, effects, and rich libraries are added, using types to reason about programs, specifications, and proofs becomes challenging. We present F*, a full-fledged design and implementation of a new dependently typed language for secure distributed programming. Unlike prior languages, F* provides arbitrary recursion while maintaining a logically consistent core; it enables modular reasoning about state and other effects using affine types; and it supports proofs of refinement properties using a mixture of cryptographic evidence and logical proof terms. The key mechanism is a new kind system that tracks several sub-languages within F* and controls their interaction. F* subsumes two previous languages, F7 and Fine. We prove type soundness (with proofs mechanized in Coq) and logical consistency for F*. We have implemented a compiler that translates F* to .NET bytecode, based on a prototype for Fine. F* provides access to libraries for concurrency, networking, cryptography, and interoperability with C#, F#, and the other .NET languages. The compiler produces verifiable binaries with 60% code size overhead for proofs and types, as much as a 45x improvement over the Fine compiler, while still enabling efficient bytecode verification. To date, we have programmed and verified more than 20,000 lines of F* including (1) new schemes for multi-party sessions; (2) a zero-knowledge privacy-preserving payment protocol; (3) a provenance-aware curated database; (4) a suite of 17 web-browser extensions verified for authorization properties; and (5) a cloud-hosted multi-tier web application with a verified reference monitor."
},
{
"paper_title": "Frenetic: a network programming language",
"paper_authors": [
"Nate Foster",
"Rob Harrison",
"Michael J. Freedman",
"Christopher Monsanto",
"Jennifer Rexford",
"Alec Story",
"David Walker"
],
"paper_abstract": "Modern networks provide a variety of interrelated services including routing, traffic monitoring, load balancing, and access control. Unfortunately, the languages used to program today's networks lack modern features - they are usually defined at the low level of abstraction supplied by the underlying hardware and they fail to provide even rudimentary support for modular programming. As a result, network programs tend to be complicated, error-prone, and difficult to maintain. This paper presents Frenetic, a high-level language for programming distributed collections of network switches. Frenetic provides a declarative query language for classifying and aggregating network traffic as well as a functional reactive combinator library for describing high-level packet-forwarding policies. Unlike prior work in this domain, these constructs are - by design - fully compositional, which facilitates modular reasoning and enables code reuse. This important property is enabled by Frenetic's novel run-time system which manages all of the details related to installing, uninstalling, and querying low-level packet-processing rules on physical switches. Overall, this paper makes three main contributions: (1) We analyze the state-of-the art in languages for programming networks and identify the key limitations; (2) We present a language design that addresses these limitations, using a series of examples to motivate and validate our choices; (3) We describe an implementation of the language and evaluate its performance on several benchmarks."
},
{
"paper_title": "Forest: a language and toolkit for programming with filestores",
"paper_authors": [
"Kathleen Fisher",
"Nate Foster",
"David Walker",
"Kenny Q. Zhu"
],
"paper_abstract": "A filestore is a structured collection of data files housed in a conventional hierarchical file system. Many applications use filestores as a poor-man's database, and the correct execution of these applications requires that the collection of files, directories, and symbolic links stored on disk satisfy a variety of precise invariants. Moreover, all of these structures must have acceptable ownership, permission, and timestamp attributes. Unfortunately, current programming languages do not provide support for documenting assumptions about filestores, detecting errors in them, or safely loading from and storing to them. This paper describes the design, implementation, and semantics of Forest, a new domain-specific language for describing filestores. The language uses a type-based metaphor to specify the expected structure, attributes, and invariants of filestores. Forest generates loading and storing functions that make it easy to connect data on disk to an isomorphic representation in memory that can be manipulated as if it were any other data structure. Forest also generates metadata that describes the degree to which the structures on the disk conform to the specification, making error detection easy. In a nutshell, Forest extends the rigorous discipline of typed programming languages to the untyped world of file systems. We have implemented Forest as an embedded domain-specific language in Haskell. In addition to generating infrastructure for reading, writing and checking file systems, our implementation generates type class instances that make it easy to build generic tools that operate over arbitrary filestores. We illustrate the utility of this infrastructure by building a file system visualizer, a file access checker, a generic query interface, description-directed variants of several standard UNIX shell tools and (circularly) a simple Forest description inference engine. Finally, we formalize a core fragment of Forest in a semantics inspired by classical tree logics and prove round-tripping laws showing that the loading and storing functions behave sensibly."
},
{
"paper_title": "Making standard ML a practical database programming language",
"paper_authors": [
"Atsushi Ohori",
"Katsuhiro Ueno"
],
"paper_abstract": "Integrating a database query language into a programming language is becoming increasingly important in recently emerging high-level cloud computing and other applications, where efficient and sophisticated data manipulation is required during computation. This paper reports on seamless integration of SQL into SML# - an extension of Standard ML. In the integrated language, the type system always infers a principal type for any type consistent SQL expression. This makes SQL queries first-class citizens, which can be freely combined with any other language constructs definable in Standard ML. For a program involving SQL queries, the compiler separates SQL queries and delegates their evaluation to a database server, e.g. PostgreSQL or MySQL in the currently implemented version. The type system of our language is largely based on Machiavelli, which demonstrates that ML with record polymorphism can represent type structure of SQL. In order to develop a practical language, however, a number of technical challenges have to be overcome, including static enforcement of server connection consistency, proper treatment of overloaded SQL primitives, query compilation, and runtime connection management. This paper describes the necessary extensions to the type system and compilation, and reports on the details of its implementation."
},
{
"paper_title": "Nameless, painless",
"paper_authors": [
"Nicolas Pouillard"
],
"paper_abstract": "De Bruijn indices are a well known technique for programming with names and binders. They provide a representation that is both simple and canonical. However, programming errors tend to be really easy to make. We propose a safer programming interface implemented as a library. Whereas indexing the types of names and terms by a numerical bound is a famous technique, we index them by worlds, a different notion of index that is both finer and more abstract. While being more finely typed, our approach incurs no loss of expressiveness or efficiency. Via parametricity we obtain properties about functions polymorphic on worlds. For instance, well-typed world-polymorphic functions over open λ-terms commute with any renaming of the free variables. Our whole development is conducted within Agda, from the code of the library, to its soundness proof and the properties of external functions. The soundness of our library is demonstrated via the construction of a logical relations argument."
},
{
"paper_title": "Binders unbound",
"paper_authors": [
"Stephanie Weirich",
"Brent A. Yorgey",
"Tim Sheard"
],
"paper_abstract": "Implementors of compilers, program refactorers, theorem provers, proof checkers, and other systems that manipulate syntax know that dealing with name binding is difficult to do well. Operations such as α-equivalence and capture-avoiding substitution seem simple, yet subtle bugs often go undetected. Furthermore, their implementations are tedious, requiring \"boilerplate\" code that must be updated whenever the object language definition changes. Many researchers have therefore sought to specify binding syntax declaratively, so that tools can correctly handle the details behind the scenes. This idea has been the inspiration for many new systems (such as Beluga, Delphin, FreshML, FreshOCaml, Cαml, FreshLib, and Ott) but there is still room for improvement in expressivity, simplicity and convenience. In this paper, we present a new domain-specific language, Unbound, for specifying binding structure. Our language is particularly expressive - it supports multiple atom types, pattern binders, type annotations, recursive binders, and nested binding (necessary for telescopes, a feature found in dependently-typed languages). However, our specification language is also simple, consisting of just five basic combinators. We provide a formal semantics for this language derived from a locally nameless representation and prove that it satisfies a number of desirable properties. We also present an implementation of our binding specification language as a GHC Haskell library implementing an embedded domain specific language (EDSL). By using Haskell type constructors to represent binding combinators, we implement the EDSL succinctly using datatype-generic programming. Our implementation supports a number of features necessary for practical programming, including flexibility in the treatment of user-defined types, best-effort name preservation (for error messages), and integration with Haskell's monad transformer library."
},
{
"paper_title": "Recursion principles for syntax with bindings and substitution",
"paper_authors": [
"Andrei Popescu",
"Elsa L. Gunter"
],
"paper_abstract": "We characterize the data type of terms with bindings, freshness and substitution, as an initial model in a suitable Horn theory. This characterization yields a convenient recursive definition principle, which we have formalized in Isabelle/HOL and employed in a series of case studies taken from the λ-calculus literature."
},
{
"paper_title": "Proving the unique fixed-point principle correct: an adventure with category theory",
"paper_authors": [
"Ralf Hinze",
"Daniel W.H. James"
],
"paper_abstract": "Say you want to prove something about an infinite data-structure, such as a stream or an infinite tree, but you would rather not subject yourself to coinduction. The unique fixed-point principle is an easy-to-use, calculational alternative. The proof technique rests on the fact that certain recursion equations have unique solutions; if two elements of a coinductive type satisfy the same equation of this kind, then they are equal. In this paper we precisely characterize the conditions that guarantee a unique solution. Significantly, we do so not with a syntactic criterion, but with a semantic one that stems from the categorical notion of naturality. Our development is based on distributive laws and bialgebras, and draws heavily on Turi and Plotkin's pioneering work on mathematical operational semantics. Along the way, we break down the design space in two dimensions, leading to a total of nine points. Each gives rise to varying degrees of expressiveness, and we will discuss three in depth. Furthermore, our development is generic in the syntax of equations and in the behaviour they encode - we are not caged in the world of streams."
},
{
"paper_title": "Linearity and PCF: a semantic insight!",
"paper_authors": [
"Marco Gaboardi",
"Luca Paolini",
"Mauro Piccolo"
],
"paper_abstract": "Linearity is a multi-faceted and ubiquitous notion in the analysis and the development of programming language concepts. We study linearity in a denotational perspective by picking out programs that correspond to linear functions between coherence spaces. We introduce a language, named SlPCF*, that increases the higher-order expressivity of a linear core of PCF by means of new operators related to exception handling and parallel evaluation. SlPCF* allows us to program all the finite elements of the model and, consequently, it entails a full abstraction result that makes the reasoning on the equivalence between programs simpler. Denotational linearity provides also crucial information for the operational evaluation of programs. We formalize two evaluation machineries for the language. The first one is an abstract and concise operational semantics designed with the aim of explaining the new operators, and is based on an infinite-branching search of the evaluation space. The second one is more concrete and it prunes such a space, by exploiting the linear assumptions. This can also be regarded as a base for an implementation."
},
{
"paper_title": "Generalising and dualising the third list-homomorphism theorem: functional pearl",
"paper_authors": [
"Shin-Cheng Mu",
"Akimasa Morihata"
],
"paper_abstract": "The third list-homomorphism theorem says that a function is a list homomorphism if it can be described as an instance of both a foldr and a foldl. We prove a dual theorem for unfolds and generalise both theorems to trees: if a function generating a list can be described both as an unfoldr and an unfoldl, the list can be generated from the middle, and a function that processes or builds a tree both upwards and downwards may independently process/build a subtree and its one-hole context. The point-free, relational formalism helps to reveal the beautiful symmetry hidden in the theorem."
},
{
"paper_title": "Incremental updates for efficient bidirectional transformations",
"paper_authors": [
"Meng Wang",
"Jeremy Gibbons",
"Nicolas Wu"
],
"paper_abstract": "A bidirectional transformation is a pair of mappings between source and view data objects, one in each direction. When the view is modified, the source is updated accordingly. The key to handling large data objects that are subject to relatively small modifications is to process the updates incrementally. Incrementality has been explored in the semi-structured settings of relational databases and graph transformations; this flexibility in structure makes it relatively easy to divide the data into separate parts that can be transformed and updated independently. The same is not true if the data is to be encoded with more general-purpose algebraic datatypes, with transformations defined as functions: dividing data into well-typed separate parts is tricky, and recursions typically create interdependencies. In this paper, we study transformations that support incremental updates, and devise a constructive process to achieve this incrementality."
},
{
"paper_title": "Modular verification of preemptive OS kernels",
"paper_authors": [
"Alexey Gotsman",
"Hongseok Yang"
],
"paper_abstract": "Most major OS kernels today run on multiprocessor systems and are preemptive: it is possible for a process running in the kernel mode to get descheduled. Existing modular techniques for verifying concurrent code are not directly applicable in this setting: they rely on scheduling being implemented correctly, and in a preemptive kernel, the correctness of the scheduler is interdependent with the correctness of the code it schedules. This interdependency is even stronger in mainstream kernels, such as Linux, FreeBSD or XNU, where the scheduler and processes interact in complex ways. We propose the first logic that is able to decompose the verification of preemptive multiprocessor kernel code into verifying the scheduler and the rest of the kernel separately, even in the presence of complex interdependencies between the two components. The logic hides the manipulation of control by the scheduler when reasoning about preemptable code and soundly inherits proof rules from concurrent separation logic to verify it thread-modularly. This is achieved by establishing a novel form of refinement between an operational semantics of the real machine and an axiomatic semantics of OS processes, where the latter assumes an abstract machine with each process executing on a separate virtual CPU. The refinement is local in the sense that the logic focuses only on the relevant state of the kernel while verifying the scheduler. We illustrate the power of our logic by verifying an example scheduler, modelled on the one from Linux 2.6.11."
},
{
"paper_title": "Characteristic formulae for the verification of imperative programs",
"paper_authors": [
"Arthur Charguéraud"
],
"paper_abstract": "In previous work, we introduced an approach to program verification based on characteristic formulae. The approach consists of generating a higher-order logic formula from the source code of a program. This characteristic formula is constructed in such a way that it gives a sound and complete description of the semantics of that program. The formula can thus be exploited in an interactive proof assistant to formally verify that the program satisfies a particular specification. This previous work was, however, only concerned with purely-functional programs. In the present paper, we describe the generalization of characteristic formulae to an imperative programming language. In this setting, characteristic formulae involve specifications expressed in the style of Separation Logic. They also integrate the frame rule, which enables local reasoning. We have implemented a tool based on characteristic formulae. This tool, called CFML, supports the verification of imperative Caml programs using the Coq proof assistant. Using CFML, we have formally verified nontrivial imperative algorithms, as well as CPS functions, higher-order iterators, and programs involving higher-order stores."
},
{
"paper_title": "An equivalence-preserving CPS translation via multi-language semantics",
"paper_authors": [
"Amal Ahmed",
"Matthias Blume"
],
"paper_abstract": "Language-based security relies on the assumption that all potential attacks follow the rules of the language in question. When programs are compiled into a different language, this is true only if the translation process preserves observational equivalence. To prove that a translation preserves equivalence, one must show that if two program fragments cannot be distinguished by any source context, then their translations cannot be distinguished by any target context. Informally, target contexts must be no more powerful than source contexts, i.e., for every target context there exists a source context that \"behaves the same.\" This seems to amount to being able to \"back-translate\" arbitrary target terms. However, that is simply not viable for practical compilers where the target language is lower-level and, thus, contains expressions that have no source equivalent. In this paper, we give a CPS translation from a less expressive source language (STLC) to a more expressive target language (System F) and prove that the translation preserves observational equivalence. The key to our equivalence-preserving compilation is the choice of the right type translation: a source type σ mandates a set of behaviors and we must ensure that its translation σ+ mandates semantically equivalent behaviors at the target level. Based on this type translation, we demonstrate how to prove that for every target term of type σ+, there exists an equivalent source term of type σ- even when sub-terms of the target term are not necessarily \"back-translatable\" themselves. A key novelty of our proof, resulting in a pleasant proof structure, is that it leverages a multi-language semantics where source and target terms may interoperate."
},
{
"paper_title": "A kripke logical relation for effect-based program transformations",
"paper_authors": [
"Jacob Thamsborg",
"Lars Birkedal"
],
"paper_abstract": "We present a Kripke logical relation for showing the correctness of program transformations based on a type-and-effect system for an ML-like programming language with higher-order store and dynamic allocation. We show how to use our model to verify a number of interesting program transformations that rely on effect annotations. Our model is constructed as a step-indexed model over the standard operational semantics of the programming language. It extends earlier work [7, 8]that has considered, respectively, dynamically allocated first-order references and higher-order store for global variables (but no dynamic allocation). It builds on ideas from region-based memory management [21], and on Kripke logical relations for higher-order store [12, 14]. Our type-and-effect system is region-based and includes a region-masking rule which allows to hide local effects. One of the key challenges in the model construction for dynamically allocated higher-order store is that the meaning of a type may change since references, conceptually speaking, may become dangling due to region-masking. We explain how our Kripke model can be used to show correctness of program transformations for programs involving references that, conceptually, are dangling."
}
]
},
{
"proceeding_title": "ICFP '10:Proceedings of the 15th ACM SIGPLAN international conference on Functional programming",
"proceeding_contents": [
{
"paper_title": "ML: metalanguage or object language?",
"paper_authors": [
"Michael J.C. Gordon"
],
"paper_abstract": "My talk will celebrate Robin Milner's contribution to functional programming via a combination of reminiscences about the early days of ML and speculations about its future."
},
{
"paper_title": "The gentle art of levitation",
"paper_authors": [
"James Chapman",
"Pierre-Évariste Dagand",
"Conor McBride",
"Peter Morris"
],
"paper_abstract": "We present a closed dependent type theory whose inductive types are given not by a scheme for generative declarations, but by encoding in a universe. Each inductive datatype arises by interpreting its description - a first-class value in a datatype of descriptions. Moreover, the latter itself has a description. Datatype-generic programming thus becomes ordinary programming. We show some of the resulting generic operations and deploy them in particular, useful ways on the datatype of datatype descriptions itself. Simulations in existing systems suggest that this apparently self-supporting setup is achievable without paradox or infinite regress."
},
{
"paper_title": "Functional pearl: every bit counts",
"paper_authors": [
"Dimitrios Vytiniotis",
"Andrew J. Kennedy"
],
"paper_abstract": "We show how the binary encoding and decoding of typed data and typed programs can be understood, programmed, and verified with the help of question-answer games. The encoding of a value is determined by the yes/no answers to a sequence of questions about that value; conversely, decoding is the interpretation of binary data as answers to the same question scheme. We introduce a general framework for writing and verifying game-based codecs. We present games for structured, recursive, polymorphic, and indexed types, building up to a representation of well-typed terms in the simply-typed λ-calculus. The framework makes novel use of isomorphisms between types in the definition of games. The definition of isomorphisms together with additional simple properties make it easy to prove that codecs derived from games never encode two distinct values using the same code, never decode two codes to the same value, and interpret any bit sequence as a valid code for a value or as a prefix of a valid code."
},
{
"paper_title": "ReCaml: execution state as the cornerstone of reconfigurations",
"paper_authors": [
"Jérémy Buisson",
"Fabien Dagnat"
],
"paper_abstract": "To fix bugs or to enhance a software system without service disruption, one has to update it dynamically during execution. Most prior dynamic software updating techniques require that the code to be changed is not running at the time of the update. However, this restriction precludes any change to the outermost loops of servers, OS scheduling loops and recursive functions. Permitting a dynamic update to more generally manipulate the program's execution state, including the runtime stack, alleviates this restriction but increases the likelihood of type errors. In this paper we present ReCaml, a language for writing dynamic updates to running programs that views execution state as a delimited continuation. ReCaml includes a novel feature for introspecting continuations called match_cont which is sufficiently powerful to implement a variety of updating policies. We have formalized the core of ReCaml and proved it sound (using the Coq proof assistant), thus ensuring that state-manipulating updates preserve type-safe execution of the updated program. We have implemented ReCaml as an extension to the Caml bytecode interpreter and used it for several examples."
},
{
"paper_title": "Lolliproc: to concurrency from classical linear logic via curry-howard and control",
"paper_authors": [
"Karl Mazurak",
"Steve Zdancewic"
],
"paper_abstract": "While many type systems based on the intuitionistic fragment of linear logic have been proposed, applications in programming languages of the full power of linear logic - including double-negation elimination - have remained elusive. Meanwhile, linearity has been used in many type systems for concurrent programs - e.g., session types - which suggests applicability to the problems of concurrent programming, but the ways in which linearity has interacted with concurrency primitives in lambda calculi have remained somewhat ad-hoc. In this paper we connect classical linear logic and concurrent functional programming in the language Lolliproc, which provides simple primitives for concurrency that have a direct logical interpretation and that combine to provide the functionality of session types. Lolliproc features a simple process calculus \"under the hood\" but hides the machinery of processes from programmers. We illustrate Lolliproc by example and prove soundness, strong normalization, and confluence results, which, among other things, guarantees freedom from deadlocks and race conditions."
},
{
"paper_title": "Abstracting abstract machines",
"paper_authors": [
"David Van Horn",
"Matthew Might"
],
"paper_abstract": "We describe a derivational approach to abstract interpretation that yields novel and transparently sound static analyses when applied to well-established abstract machines. To demonstrate the technique and support our claim, we transform the CEK machine of Felleisen and Friedman, a lazy variant of Krivine's machine, and the stack-inspecting CM machine of Clements and Felleisen into abstract interpretations of themselves. The resulting analyses bound temporal ordering of program events; predict return-flow and stack-inspection behavior; and approximate the flow and evaluation of by-need parameters. For all of these machines, we find that a series of well-known concrete machine refactorings, plus a technique we call store-allocated continuations, leads to machines that abstract into static analyses simply by bounding their stores. We demonstrate that the technique scales up uniformly to allow static analysis of realistic language features, including tail calls, conditionals, side effects, exceptions, first-class continuations, and even garbage collection."
},
{
"paper_title": "Polyvariant flow analysis with higher-ranked polymorphic types and higher-order effect operators",
"paper_authors": [
"Stefan Holdermans",
"Jurriaan Hage"
],
"paper_abstract": "We present a type and effect system for flow analysis that makes essential use of higher-ranked polymorphism. We show that, for higher-order functions, the expressiveness of higher-ranked types enables us to improve on the precision of conventional let-polymorphic analyses. Modularity and decidability of the analysis are guaranteed by making the analysis of each program parametric in the analyses of its inputs; in particular, we have that higher-order functions give rise to higher-order operations on effects. As flow typing is archetypical to a whole class of type and effect systems, our approach can be used to boost the precision of a wide range of type-based program analyses for higher-order languages."
},
{
"paper_title": "The reduceron reconfigured",
"paper_authors": [
"Matthew Naylor",
"Colin Runciman"
],
"paper_abstract": "The leading implementations of graph reduction all target conventional processors designed for low-level imperative execution. In this paper, we present a processor specially designed to perform graph-reduction. Our processor -- the Reduceron -- is implemented using off-the-shelf reconfigurable hardware. We highlight the low-level parallelism present in sequential graph reduction, and show how parallel memories and dynamic analyses are used in the Reduceron to achieve an average reduction rate of 0.55 function applications per clock-cycle."
},
{
"paper_title": "Using functional programming within an industrial product group: perspectives and perceptions",
"paper_authors": [
"David Scott",
"Richard Sharp",
"Thomas Gazagnaire",
"Anil Madhavapeddy"
],
"paper_abstract": "We present a case-study of using OCaml within a large product development project, focussing on both the technical and non-technical issues that arose as a result. We draw comparisons between the OCaml team and the other teams that worked on the project, providing comparative data on hiring patterns and cross-team code contribution."
},
{
"paper_title": "Lazy tree splitting",
"paper_authors": [
"Lars Bergstrom",
"Mike Rainey",
"John Reppy",
"Adam Shaw",
"Matthew Fluet"
],
"paper_abstract": "Nested data-parallelism (NDP) is a declarative style for programming irregular parallel applications. NDP languages provide language features favoring the NDP style, efficient compilation of NDP programs, and various common NDP operations like parallel maps, filters, and sum-like reductions. In this paper, we describe the implementation of NDP in Parallel ML (PML), part of the Manticore project. Managing the parallel decomposition of work is one of the main challenges of implementing NDP. If the decomposition creates too many small chunks of work, performance will be eroded by too much parallel overhead. If, on the other hand, there are too few large chunks of work, there will be too much sequential processing and processors will sit idle. Recently the technique of Lazy Binary Splitting was proposed for dynamic parallel decomposition of work on flat arrays, with promising results. We adapt Lazy Binary Splitting to parallel processing of binary trees, which we use to represent parallel arrays in PML. We call our technique Lazy Tree Splitting (LTS). One of its main advantages is its performance robustness: per-program tuning is not required to achieve good performance across varying platforms. We describe LTS-based implementations of standard NDP operations, and we present experimental data demonstrating the scalability of LTS across a range of benchmarks."
},
{
"paper_title": "Semantic subtyping with an SMT solver",
"paper_authors": [
"Gavin M. Bierman",
"Andrew D. Gordon",
"Cătălin Hriţcu",
"David Langworthy"
],
"paper_abstract": "We study a first-order functional language with the novel combination of the ideas of refinement type (the subset of a type to satisfy a Boolean expression) and type-test (a Boolean expression testing whether a value belongs to a type). Our core calculus can express a rich variety of typing idioms; for example, intersection, union, negation, singleton, nullable, variant, and algebraic types are all derivable. We formulate a semantics in which expressions denote terms, and types are interpreted as first-order logic formulas. Subtyping is defined as valid implication between the semantics of types. The formulas are interpreted in a specific model that we axiomatize using standard first-order theories. On this basis, we present a novel type-checking algorithm able to eliminate many dynamic tests and to detect many errors statically. The key idea is to rely on an SMT solver to compute subtyping efficiently. Moreover, interpreting types as formulas allows us to call the SMT solver at run-time to compute instances of types."
},
{
"paper_title": "Logical types for untyped languages",
"paper_authors": [
"Sam Tobin-Hochstadt",
"Matthias Felleisen"
],
"paper_abstract": "Programmers reason about their programs using a wide variety of formal and informal methods. Programmers in untyped languages such as Scheme or Erlang are able to use any such method to reason about the type behavior of their programs. Our type system for Scheme accommodates common reasoning methods by assigning variable occurrences a subtype of their declared type based on the predicates prior to the occurrence, a discipline dubbed occurrence typing. It thus enables programmers to enrich existing Scheme code with types, while requiring few changes to the code itself. Three years of practical experience has revealed serious shortcomings of our type system. In particular, it relied on a system of ad-hoc rules to relate combinations of predicates, it could not reason about subcomponents of data structures, and it could not follow sophisticated reasoning about the relationship among predicate tests, all of which are used in existing code. In this paper, we reformulate occurrence typing to eliminate these shortcomings. The new formulation derives propositional logic formulas that hold when an expression evaluates to true or false, respectively. A simple proof system is then used to determine types of variable occurrences from these propositions. Our implementation of this revised occurrence type system thus copes with many more untyped programming idioms than the original system."
},
{
"paper_title": "TeachScheme!: a checkpoint",
"paper_authors": [
"Matthias Felleisen"
],
"paper_abstract": "In 1995, my team and I decided to create an outreach project that would use our research on functional programming to change the K-12 computer science curriculum. We had two different goals in mind. On the one hand, our curriculum should rely on mathematics to teach programming, and it d exploit programming to teach mathematics. All students - not just those who major in computer science - should benefit. On the other hand, our course should demonstrate that introductory programming can focus on program design, not just a specific syntax. We also wished to create a smooth path from a design-oriented introductory course all the way to courses on large software projects. My talk presents a checkpoint of our project, starting with our major scientific goal, a comprehensive theory of program design. Our work on this theory progresses through the development of program design courses for all age groups. At this point, we offer curricular materials for middle schools, high schools, three college-level freshman courses, and a junior-level course on constructing large components. We regularly use these materials to train K-12 teachers, after-school volunteers, and college faculty; thus far, we have reached hundreds of instructors, who in turn have dealt with thousands of students in their classrooms."
},
{
"paper_title": "Higher-order representation of substructural logics",
"paper_authors": [
"Karl Crary"
],
"paper_abstract": "We present a technique for higher-order representation of substructural logics such as linear or modal logic. We show that such logics can be encoded in the (ordinary) Logical Framework, without any linear or modal extensions. Using this encoding, metatheoretic proofs about such logics can easily be developed in the Twelf proof assistant."
},
{
"paper_title": "The impact of higher-order state and control effects on local relational reasoning",
"paper_authors": [
"Derek Dreyer",
"Georg Neis",
"Lars Birkedal"
],
"paper_abstract": "Reasoning about program equivalence is one of the oldest problems in semantics. In recent years, useful techniques have been developed, based on bisimulations and logical relations, for reasoning about equivalence in the setting of increasingly realistic languages - languages nearly as complex as ML or Haskell. Much of the recent work in this direction has considered the interesting representation independence principles enabled by the use of local state, but it is also important to understand the principles that powerful features like higher-order state and control effects disable. This latter topic has been broached extensively within the framework of game semantics, resulting in what Abramsky dubbed the \"semantic cube\": fully abstract game-semantic characterizations of various axes in the design space of ML-like languages. But when it comes to reasoning about many actual examples, game semantics does not yet supply a useful technique for proving equivalences. In this paper, we marry the aspirations of the semantic cube to the powerful proof method of step-indexed Kripke logical relations. Building on recent work of Ahmed, Dreyer, and Rossberg, we define the first fully abstract logical relation for an ML-like language with recursive types, abstract types, general references and call/cc. We then show how, under orthogonal restrictions to the expressive power our language - namely, the restriction to first-order state and/or the removal of call/cc - we can enhance the proving power of our possible-worlds model in correspondingly orthogonal ways, and we demonstrate this proving power on a range of interesting examples. Central to our story is the use of state transition systems to model the way in which properties of local state evolve over time."
},
{
"paper_title": "Distance makes the types grow stronger: a calculus for differential privacy",
"paper_authors": [
"Jason Reed",
"Benjamin C. Pierce"
],
"paper_abstract": "We want assurances that sensitive information will not be disclosed when aggregate data derived from a database is published. Differential privacy offers a strong statistical guarantee that the effect of the presence of any individual in a database will be negligible, even when an adversary has auxiliary knowledge. Much of the prior work in this area consists of proving algorithms to be differentially private one at a time; we propose to streamline this process with a functional language whose type system automatically guarantees differential privacy, allowing the programmer to write complex privacy-safe query programs in a flexible and compositional way. The key novelty is the way our type system captures function sensitivity, a measure of how much a function can magnify the distance between similar inputs: well-typed programs not only can't go wrong, they can't go too far on nearby inputs. Moreover, by introducing a monad for random computations, we can show that the established definition of differential privacy falls out naturally as a special case of this soundness principle. We develop examples including known differentially private algorithms, privacy-aware variants of standard functional programming idioms, and compositionality principles for differential privacy."
},
{
"paper_title": "Security-typed programming within dependently typed programming",
"paper_authors": [
"Jamie Morgenstern",
"Daniel R. Licata"
],
"paper_abstract": "Several recent security-typed programming languages, such as Aura, PCML5, and Fine, allow programmers to express and enforce access control and information flow policies. In this paper, we show that security-typed programming can be embedded as a library within a general-purpose dependently typed programming language, Agda. Our library, Aglet, accounts for the major features of existing security-typed programming languages, such as decentralized access control, typed proof-carrying authorization, ephemeral and dynamic policies, authentication, spatial distribution, and information flow. The implementation of Aglet consists of the following ingredients: First, we represent the syntax and proofs of an authorization logic, Garg and Pfenning's BL0, using dependent types. Second, we implement a proof search procedure, based on a focused sequent calculus, to ease the burden of constructing proofs. Third, we represent computations using a monad indexed by pre- and post-conditions drawn from the authorization logic, which permits ephemeral policies that change during execution. We describe the implementation of our library and illustrate its use on a number of the benchmark examples considered in the literature."
},
{
"paper_title": "Combining syntactic and semantic bidirectionalization",
"paper_authors": [
"Janis Voigtländer",
"Zhenjiang Hu",
"Kazutaka Matsuda",
"Meng Wang"
],
"paper_abstract": "Matsuda et al. [2007, ICFP] and Voigtländer [2009, POPL] introduced two techniques that given a source-to-view function provide an update propagation function mapping an original source and an updated view back to an updated source, subject to standard consistency conditions. Being fundamentally different in approach, both techniques have their respective strengths and weaknesses. Here we develop a synthesis of the two techniques to good effect. On the intersection of their applicability domains we achieve more than what a simple union of applying the techniques side by side delivers."
},
{
"paper_title": "Matching lenses: alignment and view update",
"paper_authors": [
"Davi M.J. Barbosa",
"Julien Cretin",
"Nate Foster",
"Michael Greenberg",
"Benjamin C. Pierce"
],
"paper_abstract": "Bidirectional programming languages are a practical approach to the view update problem. Programs in these languages, called lenses, define both a view and an update policy - i.e., every program can be read as a function mapping sources to views as well as one mapping updated views back to updated sources. One thorny issue that has not received sufficient attention in the design of bidirectional languages is alignment. In general, to correctly propagate an update to a view, a lens needs to match up the pieces of the view with the corresponding pieces of the underlying source, even after data has been inserted, deleted, or reordered. However, existing bidirectional languages either support only simple strategies that fail on many examples of practical interest, or else propose specific strategies that are baked deeply into the underlying theory. We propose a general framework of matching lenses that parameterizes lenses over arbitrary heuristics for calculating alignments. We enrich the types of lenses with \"chunks\" identifying reorderable pieces of the source and view that should be re-aligned after an update, and we formulate behavioral laws that capture essential constraints on the handling of chunks. We develop a core language of matching lenses for strings, together with a set of \"alignment combinators\" that implement a variety of alignment strategies."
},
{
"paper_title": "Bidirectionalizing graph transformations",
"paper_authors": [
"Soichiro Hidaka",
"Zhenjiang Hu",
"Kazuhiro Inaba",
"Hiroyuki Kato",
"Kazutaka Matsuda",
"Keisuke Nakano"
],
"paper_abstract": "Bidirectional transformations provide a novel mechanism for synchronizing and maintaining the consistency of information between input and output. Despite many promising results on bidirectional transformations, these have been limited to the context of relational or XML (tree-like) databases. We challenge the problem of bidirectional transformations within the context of graphs, by proposing a formal definition of a well-behaved bidirectional semantics for UnCAL, i.e., a graph algebra for the known UnQL graph query language. The key to our successful formalization is full utilization of both the recursive and bulk semantics of structural recursion on graphs. We carefully refine the existing forward evaluation of structural recursion so that it can produce sufficient trace information for later backward evaluation. We use the trace information for backward evaluation to reflect in-place updates and deletions on the view to the source, and adopt the universal resolving algorithm for inverse computation and the narrowing technique to tackle the difficult problem with insertion. We prove our bidirectional evaluation is well-behaved. Our current implementation is available online and confirms the usefulness of our approach with nontrivial applications."
},
{
"paper_title": "A fresh look at programming with names and binders",
"paper_authors": [
"Nicolas Pouillard",
"François Pottier"
],
"paper_abstract": "A wide range of computer programs, including compilers and theorem provers, manipulate data structures that involve names and binding. However, the design of programming idioms which allow performing these manipulations in a safe and natural style has, to a large extent, remained elusive. In this paper, we present a novel approach to the problem. Our proposal can be viewed either as a programming language design or as a library: in fact, it is currently implemented within Agda. It provides a safe and expressive means of programming with names and binders. It is abstract enough to support multiple concrete implementations: we present one in nominal style and one in de Bruijn style. We use logical relations to prove that \"well-typed programs do not mix names with different scope\". We exhibit an adequate encoding of Pitts-style nominal terms into our system."
},
{
"paper_title": "Experience report: growing programming languages for beginning students",
"paper_authors": [
"Marcus Crestani",
"Michael Sperber"
],
"paper_abstract": "A student learning how to program learns best when the programming language and programming environment cater to her specific needs. These needs are different from the requirements of a professional programmer. Consequently, the design of teaching languages poses challenges different from the design of professional languages. Using a functional language by itself gives advantages over more popular, professional languages, but fully exploiting these advantages requires careful adaptation to the needs of the students' as-is, these languages do not support the students nearly as well as they could. This paper describes our experience adopting the didactic approach of How to Design Programs, focussing on the design process for our own set of teaching languages. We have observed students as they try to program as part of our introductory course, and used these observations to significantly improve the design of these languages. This paper describes the changes we have made, and the journey we took to get there."
},
{
"paper_title": "Fortifying macros",
"paper_authors": [
"Ryan Culpepper",
"Matthias Felleisen"
],
"paper_abstract": "Existing macro systems force programmers to make a choice between clarity of specification and robustness. If they choose clarity, they must forgo validating significant parts of the specification and thus produce low-quality language extensions. If they choose robustness, they must write in a style that mingles the implementation with the specification and therefore obscures the latter. This paper introduces a new language for writing macros. With the new macro system, programmers naturally write robust language extensions using easy-to-understand specifications. The system translates these specifications into validators that detect misuses - including violations of context-sensitive constraints - and automatically synthesize appropriate feedback, eliminating the need for ad hoc validation code."
},
{
"paper_title": "Functional parallel algorithms",
"paper_authors": [
"Guy E. Blelloch"
],
"paper_abstract": "Functional programming presents several important advantages in the design, analysis and implementation of parallel algorithms: It discourages iteration and encourages decomposition. It supports persistence and hence easy speculation. It encourages higher-order aggregate operations. It is well suited for defining cost models tied to the programming language rather than the machine. Implementations can avoid false sharing. Implementations can use cheaper weak consistency models. And most importantly, it supports safe deterministic parallelism. In fact functional programming supports a level of abstraction in which parallel algorithms are often as easy to design and analyze as sequential algorithms. The recent widespread advent of parallel machines therefore presents a great opportunity for functional programming languages. However, any changes will require significant education at all levels and involvement of the functional programming community. In this talk I will discuss an approach to designing and analyzing parallel algorithms in a strict functional and fully deterministic setting. Key ideas include a cost model defined in term of analyzing work and span, the use of divide-and-conquer and contraction, the need for arrays (immutable) to achieve asymptotic efficiency, and the power of (deterministic) randomized algorithms. These are all ideas I believe can be taught at any level."
},
{
"paper_title": "Specifying and verifying sparse matrix codes",
"paper_authors": [
"Gilad Arnold",
"Johannes Hölzl",
"Ali Sinan Köksal",
"Rastislav Bodík",
"Mooly Sagiv"
],
"paper_abstract": "Sparse matrix formats are typically implemented with low-level imperative programs. The optimized nature of these implementations hides the structural organization of the sparse format and complicates its verification. We define a variable-free functional language (LL) in which even advanced formats can be expressed naturally, as a pipeline-style composition of smaller construction steps. We translate LL programs to Isabelle/HOL and describe a proof system based on parametric predicates for tracking relationship between mathematical vectors and their concrete representations. This proof theory automatically verifies full functional correctness of many formats. We show that it is reusable and extensible to hierarchical sparse formats."
},
{
"paper_title": "Regular, shape-polymorphic, parallel arrays in Haskell",
"paper_authors": [
"Gabriele Keller",
"Manuel M.T. Chakravarty",
"Roman Leshchinskiy",
"Simon Peyton Jones",
"Ben Lippmeier"
],
"paper_abstract": "We present a novel approach to regular, multi-dimensional arrays in Haskell. The main highlights of our approach are that it (1) is purely functional, (2) supports reuse through shape polymorphism, (3) avoids unnecessary intermediate structures rather than relying on subsequent loop fusion, and (4) supports transparent parallelisation. We show how to embed two forms of shape polymorphism into Haskell's type system using type classes and type families. In particular, we discuss the generalisation of regular array transformations to arrays of higher rank, and introduce a type-safe specification of array slices. We discuss the runtime performance of our approach for three standard array algorithms. We achieve absolute performance comparable to handwritten C code. At the same time, our implementation scales well up to 8 processor cores."
},
{
"paper_title": "A certified framework for compiling and executing garbage-collected languages",
"paper_authors": [
"Andrew McCreight",
"Tim Chevalier",
"Andrew Tolmach"
],
"paper_abstract": "We describe the design, implementation, and use of a machine-certified framework for correct compilation and execution of programs in garbage-collected languages. Our framework extends Leroy's Coq-certified Compcert compiler and Cminor intermediate language. We add: (i) a new intermediate language, GCminor, that includes primitives for allocating memory in a garbage-collected heap and for specifying GC roots; (ii) a precise, low-level specification for a Cminor library for garbage collection; and (iii) a proven semantics-preserving translation from GCminor to Cminor plus the GC library. GCminor neatly encapsulates the interface between mutator and collector code, while remaining simple and flexible enough to be used with a wide variety of source languages and collector styles. Front ends targeting GCminor can be implemented using any compiler technology and any desired degree of verification, including full semantics preservation, type preservation, or informal trust. As an example application of our framework, we describe a compiler for Haskell that translates the Glasgow Haskell Compiler's Core intermediate language to GCminor. To support a simple but useful memory safety argument for this compiler, the front end uses a novel combination of type preservation and runtime checks, which is of independent interest."
},
{
"paper_title": "Total parser combinators",
"paper_authors": [
"Nils Anders Danielsson"
],
"paper_abstract": "A monadic parser combinator library which guarantees termination of parsing, while still allowing many forms of left recursion, is described. The library's interface is similar to those of many other parser combinator libraries, with two important differences: one is that the interface clearly specifies which parts of the constructed parsers may be infinite, and which parts have to be finite, using dependent types and a combination of induction and coinduction; and the other is that the parser type is unusually informative. The library comes with a formal semantics, using which it is proved that the parser combinators are as expressive as possible. The implementation is supported by a machine-checked correctness proof."
},
{
"paper_title": "Scrapping your inefficient engine: using partial evaluation to improve domain-specific language implementation",
"paper_authors": [
"Edwin C. Brady",
"Kevin Hammond"
],
"paper_abstract": "Partial evaluation aims to improve the efficiency of a program by specialising it with respect to some known inputs. In this paper, we show that partial evaluation can be an effective and, unusually, easy to use technique for the efficient implementation of embedded domain-specific languages. We achieve this by exploiting dependent types and by following some simple rules in the definition of the interpreter for the domain-specific language. We present experimental evidence that partial evaluation of programs in domain-specific languages can yield efficient residual programs whose performance is competitive with their Java and C equivalents and which are also, through the use of dependent types, verifiably resource-safe. Using our technique, it follows that a verifiably correct and resource-safe program can also be an efficient program"
},
{
"paper_title": "Rethinking supercompilation",
"paper_authors": [
"Neil Mitchell"
],
"paper_abstract": "Supercompilation is a program optimisation technique that is particularly effective at eliminating unnecessary overheads. We have designed a new supercompiler, making many novel choices, including different termination criteria and handling of let bindings. The result is a supercompiler that focuses on simplicity, compiles programs quickly and optimises programs well. We have benchmarked our supercompiler, with some programs running more than twice as fast than when compiled with GHC."
},
{
"paper_title": "Program verification through characteristic formulae",
"paper_authors": [
"Arthur Charguéraud"
],
"paper_abstract": "This paper describes CFML, the first program verification tool based on characteristic formulae. Given the source code of a pure Caml program, this tool generates a logical formula that implies any valid post-condition for that program. One can then prove that the program satisfies a given specification by reasoning interactively about the characteristic formula using a proof assistant such as Coq. Our characteristic formulae improve over Honda et al's total characteristic assertion pairs in that they are expressible in standard higher-order logic, allowing to exploit them in practice to verify programs using existing proof assistants. Our technique has been applied to formally verify more than half of the content of Okasaki's Purely Functional Data Structures reference book"
},
{
"paper_title": "VeriML: typed computation of logical terms inside a language with effects",
"paper_authors": [
"Antonis Stampoulis",
"Zhong Shao"
],
"paper_abstract": "Modern proof assistants such as Coq and Isabelle provide high degrees of expressiveness and assurance because they support formal reasoning in higher-order logic and supply explicit machine-checkable proof objects. Unfortunately, large scale proof development in these proof assistants is still an extremely difficult and time-consuming task. One major weakness of these proof assistants is the lack of a single language where users can develop complex tactics and decision procedures using a rich programming model and in a typeful manner. This limits the scalability of the proof development process, as users avoid developing domain-specific tactics and decision procedures. In this paper, we present VeriML - a novel language design that couples a type-safe effectful computational language with first-class support for manipulating logical terms such as propositions and proofs. The main idea behind our design is to integrate a rich logical framework - similar to the one supported by Coq - inside a computational language inspired by ML. The language design is such that the added features are orthogonal to the rest of the computational language, and also do not require significant additions to the logic language, so soundness is guaranteed. We have built a prototype implementation of VeriML including both its type-checker and an interpreter. We demonstrate the effectiveness of our design by showing a number of type-safe tactics and decision procedures written in VeriML."
},
{
"paper_title": "Parametricity and dependent types",
"paper_authors": [
"Jean-Philippe Bernardy",
"Patrik Jansson",
"Ross Paterson"
],
"paper_abstract": "Reynolds' abstraction theorem shows how a typing judgement in System F can be translated into a relational statement (in second order predicate logic) about inhabitants of the type. We (in second order predicate logic) about inhabitants of the type. We obtain a similar result for a single lambda calculus (a pure type system), in which terms, types and their relations are expressed. Working within a single system dispenses with the need for an interpretation layer, allowing for an unusually simple presentation. While the unification puts some constraints on the type system (which we spell out), the result applies to many interesting cases, including dependently-typed ones."
},
{
"paper_title": "A play on regular expressions: functional pearl",
"paper_authors": [
"Sebastian Fischer",
"Frank Huch",
"Thomas Wilke"
],
"paper_abstract": "Cody, Hazel, and Theo, two experienced Haskell programmers and an expert in automata theory, develop an elegant Haskell program for matching regular expressions: (i) the program is purely functional; (ii) it is overloaded over arbitrary semirings, which not only allows to solve the ordinary matching problem but also supports other applications like computing leftmost longest matchings or the number of matchings, all with a single algorithm; (iii) it is more powerful than other matchers, as it can be used for parsing every context-free language by taking advantage of laziness. The developed program is based on an old technique to turn regular expressions into finite automata which makes it efficient both in terms of worst-case time and space bounds and actual performance: despite its simplicity, the Haskell implementation can compete with a recently published professional C++ program for the same problem."
},
{
"paper_title": "Experience report: Haskell as a reagent: results and observations on the use of Haskell in a python project",
"paper_authors": [
"Iustin Pop"
],
"paper_abstract": "In system administration, the languages of choice for solving automation tasks are scripting languages, owing to their flexibility, extensive library support and quick development cycle. Functional programming is more likely to be found in software development teams and the academic world. This separation means that system administrators cannot use the most effective tool for a given problem; in an ideal world, we should be able to mix and match different languages, based on the problem at hand. This experience report details our initial introduction and use of Haskell in a mature, medium size project implemented in Python. We also analyse the interaction between the two languages, and show how Haskell has excelled at solving a particular type of real-world problems"
},
{
"paper_title": "Instance chains: type class programming without overlapping instances",
"paper_authors": [
"J. Garrett Morris",
"Mark P. Jones"
],
"paper_abstract": "Type classes have found a wide variety of uses in Haskell programs, from simple overloading of operators (such as equality or ordering) to complex invariants used to implement type-safe heterogeneous lists or limited subtyping. Unfortunately, many of the richer uses of type classes require extensions to the class system that have been incompletely described in the research literature and are not universally accepted within the Haskell community. This paper describes a new type class system, implemented in a prototype tool called ilab, that simplifies and enhances Haskell-style type-class programming. In ilab, we replace overlapping instances with a new feature, instance chains, allowing explicit alternation and failure in instance declarations. We describe a technique for ascribing semantics to type class systems, relating classes, instances, and class constraints (such as kind signatures or functional dependencies) directly to a set-theoretic model of relations on types. Finally, we give a semantics for ilab and describe its implementation."
}
]
},
{
"proceeding_title": "ICFP '09:Proceedings of the 14th ACM SIGPLAN international conference on Functional programming",
"proceeding_contents": [
{
"paper_title": "ICFP09 PC Chairs Report",
"paper_authors": [
"Andrew Tolmach"
]
},
{
"paper_title": "In Memoriam Peter Landin",
"paper_authors": [
"Phil Wadler",
"Olivier Danvy"
]
},
{
"paper_title": "Most Influential ICFP99 Paper Award"
},
{
"paper_title": "Report on the Twelfth ICFP Programming Contest",
"paper_authors": [
"Andy Gill"
]
},
{
"paper_title": "SIGPLAN Programming Languages Achievement Award Rod Burstall",
"paper_authors": [
"Phil Wadler"
]
},
{
"paper_title": "Organizing functional code for parallel execution or, foldl and foldr considered slightly harmful",
"paper_authors": [
"Guy L. Steele, Jr."
],
"paper_abstract": "Alan Perlis, inverting OscarWilde's famous quip about cynics, once suggested, decades ago, that a Lisp programmer is one who knows the value of everything and the cost of nothing. Now that the conference on Lisp and Functional Programming has become ICFP, some may think that OCaml and Haskell programmers have inherited this (now undeserved) epigram. I do believe that as multicore processors are becoming prominent, and soon ubiquitous, it behooves all programmers to rethink their programming style, strategies, and tactics, so that their code may have excellent performance. For the last six years I have been part of a team working on a programming language, Fortress, that has borrowed ideas not only from Fortran, not only from Java, not only from Algol and Alphard and CLU, not only from MADCAP and MODCAP and MIRFAC and the Klerer-May system-but also from Haskell, and I would like to repay the favor. In this talk I will discuss three ideas (none original with me) that I have found to be especially powerful in organizing Fortress programs so that they may be executed equally effectively either sequentially or in parallel: user-defined associative operators (and, more generally, user-defined monoids); conjugate transforms of data; and monoid-caching trees (as described, for example, by Hinze and Paterson). I will exhibit pleasant little code examples (some original with me) that make use of these ideas."
},
{
"paper_title": "Functional pearl: la tour d'Hanoï",
"paper_authors": [
"Ralf Hinze"
],
"paper_abstract": "This pearl aims to demonstrate the ideas of wholemeal and projective programming using the Towers of Hanoi puzzle as a running example. The puzzle has its own beauty, which we hope to expose along the way."
},
{
"paper_title": "Purely functional lazy non-deterministic programming",
"paper_authors": [
"Sebastian Fischer",
"Oleg Kiselyov",
"Chung-chieh Shan"
],
"paper_abstract": "Functional logic programming and probabilistic programming have demonstrated the broad benefits of combining laziness (non-strict evaluation with sharing of the results) with non-determinism. Yet these benefits are seldom enjoyed in functional programming, because the existing features for non-strictness, sharing, and non-determinism in functional languages are tricky to combine. We present a practical way to write purely functional lazy non-deterministic programs that are efficient and perspicuous. We achieve this goal by embedding the programs into existing languages (such as Haskell, SML, and OCaml) with high-quality implementations, by making choices lazily and representing data with non-deterministic components, by working with custom monadic data types and search strategies, and by providing equational laws for the programmer to reason about their code."
},
{
"paper_title": "Safe functional reactive programming through dependent types",
"paper_authors": [
"Neil Sculthorpe",
"Henrik Nilsson"
],
"paper_abstract": "Functional Reactive Programming (FRP) is an approach to reactive programming where systems are structured as networks of functions operating on signals. FRP is based on the synchronous data-flow paradigm and supports both continuous-time and discrete-time signals (hybrid systems). What sets FRP apart from most other languages for similar applications is its support for systems with dynamic structure and for higher-order reactive constructs. Statically guaranteeing correctness properties of programs is an attractive proposition. This is true in particular for typical application domains for reactive programming such as embedded systems. To that end, many existing reactive languages have type systems or other static checks that guarantee domain-specific properties, such as feedback loops always being well-formed. However, they are limited in their capabilities to support dynamism and higher-order data-flow compared with FRP. Thus, the onus of ensuring such properties of FRP programs has so far been on the programmer as established static techniques do not suffice. In this paper, we show how dependent types allow this concern to be addressed. We present an implementation of FRP embedded in the dependently-typed language Agda, leveraging the type system of the host language to craft a domain-specific (dependent) type system for FRP. The implementation constitutes a discrete, operational semantics of FRP, and as it passes the Agda type, coverage, and termination checks, we know the operational semantics is total, which means our type system is safe."
},
{
"paper_title": "Causal commutative arrows and their optimization",
"paper_authors": [
"Hai Liu",
"Eric Cheng",
"Paul Hudak"
],
"paper_abstract": "Arrows are a popular form of abstract computation. Being more general than monads, they are more broadly applicable, and in particular are a good abstraction for signal processing and dataflow computations. Most notably, arrows form the basis for a domain specific language called Yampa, which has been used in a variety of concrete applications, including animation, robotics, sound synthesis, control systems, and graphical user interfaces. Our primary interest is in better understanding the class of abstract computations captured by Yampa. Unfortunately, arrows are not concrete enough to do this with precision. To remedy this situation we introduce the concept of commutative arrows that capture a kind of non-interference property of concurrent computations. We also add an init operator, and identify a crucial law that captures the causal nature of arrow effects. We call the resulting computational model causal commutative arrows. To study this class of computations in more detail, we define an extension to the simply typed lambda calculus called causal commutative arrows (CCA), and study its properties. Our key contribution is the identification of a normal form for CCA called causal commutative normal form (CCNF). By defining a normalization procedure we have developed an optimization strategy that yields dramatic improvements in performance over conventional implementations of arrows. We have implemented this technique in Haskell, and conducted benchmarks that validate the effectiveness of our approach. When combined with stream fusion, the overall methodology can result in speed-ups of greater than two orders of magnitude."
},
{
"paper_title": "A functional I/O system or, fun for freshman kids",
"paper_authors": [
"Matthias Felleisen",
"Robert Bruce Findler",
"Matthew Flatt",
"Shriram Krishnamurthi"
],
"paper_abstract": "Functional programming languages ought to play a central role in mathematics education for middle schools (age range: 10-14). After all, functional programming is a form of algebra and programming is a creative activity about problem solving. Introducing it into mathematics courses would make pre-algebra course come alive. If input and output were invisible, students could implement fun simulations, animations, and even interactive and distributed games all while using nothing more than plain mathematics. We have implemented this vision with a simple framework for purely functional I/O. Using this framework, students design, implement, and test plain mathematical functions over numbers, booleans, string, and images. Then the framework wires them up to devices and performs all the translation from external information to internal data (and vice versa)--just like every other operating system. Once middle school students are hooked on this form of programming, our curriculum provides a smooth path for them from pre-algebra to freshman courses in college on object-oriented design and theorem proving."
},
{
"paper_title": "Experience report: embedded, parallel computer-vision with a functional DSL",
"paper_authors": [
"Ryan R. Newton",
"Teresa Ko"
],
"paper_abstract": "This paper presents our experience using a domain-specific functional language, WaveScript, to build embedded sensing applications used in scientific research. We focus on a recent computervision application for detecting birds in their natural environment. The application was ported from a prototype in C++. In reimplementing the application, we gained a much cleaner factoring of its functionality (through higher-order functions and better interfaces to libraries) and a near-linear parallel speed-up with no additional effort. These benefits are offset by one substantial downside: the lack of familiarity with the language of the original vision researchers, who understandably tried to use the language in the familiar way they use C++ and thus ran into various problems."
},
{
"paper_title": "Runtime support for multicore Haskell",
"paper_authors": [
"Simon Marlow",
"Simon Peyton Jones",
"Satnam Singh"
],
"paper_abstract": "Purely functional programs should run well on parallel hardware because of the absence of side effects, but it has proved hard to realise this potential in practice. Plenty of papers describe promising ideas, but vastly fewer describe real implementations with good wall-clock performance. We describe just such an implementation, and quantitatively explore some of the complex design tradeoffs that make such implementations hard to build. Our measurements are necessarily detailed and specific, but they are reproducible, and we believe that they offer some general insights."
},
{
"paper_title": "Effective interactive proofs for higher-order imperative programs",
"paper_authors": [
"Adam Chlipala",
"Gregory Malecha",
"Greg Morrisett",
"Avraham Shinnar",
"Ryan Wisnesky"
],
"paper_abstract": "We present a new approach for constructing and verifying higher-order, imperative programs using the Coq proof assistant. We build on the past work on the Ynot system, which is based on Hoare Type Theory. That original system was a proof of concept, where every program verification was accomplished via laborious manual proofs, with much code devoted to uninteresting low-level details. In this paper, we present a re-implementation of Ynot which makes it possible to implement fully-verified, higher-order imperative programs with reasonable proof burden. At the same time, our new system is implemented entirely in Coq source files, showcasing the versatility of that proof assistant as a platform for research on language design and verification. Both versions of the system have been evaluated with case studies in the verification of imperative data structures, such as hash tables with higher-order iterators. The verification burden in our new system is reduced by at least an order of magnitude compared to the old system, by replacing manual proof with automation. The core of the automation is a simplification procedure for implications in higher-order separation logic, with hooks that allow programmers to add domain-specific simplification rules. We argue for the effectiveness of our infrastructure by verifying a number of data structures and a packrat parser, and we compare to similar efforts within other projects. Compared to competing approaches to data structure verification, our system includes much less code that must be trusted; namely, about a hundred lines of Coq code defining a program logic. All of our theorems and decision procedures have or build machine-checkable correctness proofs from first principles, removing opportunities for tool bugs to create faulty verifications."
},
{
"paper_title": "Experience report: seL4: formally verifying a high-performance microkernel",
"paper_authors": [
"Gerwin Klein",
"Philip Derrin",
"Kevin Elphinstone"
],
"paper_abstract": "We report on our experience using Haskell as an executable specification language in the formal verification of the seL4 microkernel. The verification connects an abstract operational specification in the theorem prover Isabelle/HOL to a C implementation of the microkernel. We describe how this project differs from other efforts, and examine the effect of using Haskell in a large-scale formal verification. The kernel comprises 8,700 lines of C code; the verification more than 150,000 lines of proof script."
},
{
"paper_title": "Biorthogonality, step-indexing and compiler correctness",
"paper_authors": [
"Nick Benton",
"Chung-Kil Hur"
],
"paper_abstract": "We define logical relations between the denotational semantics of a simply typed functional language with recursion and the operational behaviour of low-level programs in a variant SECD machine. The relations, which are defined using biorthogonality and stepindexing, capture what it means for a piece of low-level code to implement a mathematical, domain-theoretic function and are used to prove correctness of a simple compiler. The results have been formalized in the Coq proof assistant."
},
{
"paper_title": "Scribble: closing the book on ad hoc documentation tools",
"paper_authors": [
"Matthew Flatt",
"Eli Barzilay",
"Robert Bruce Findler"
],
"paper_abstract": "Scribble is a system for writing library documentation, user guides, and tutorials. It builds on PLT Scheme's technology for language extension, and at its heart is a new approach to connecting prose references with library bindings. Besides the base system, we have built Scribble libraries for JavaDoc-style API documentation, literate programming, and conference papers. We have used Scribble to produce thousands of pages of documentation for PLT Scheme; the new documentation is more complete, more accessible, and better organized, thanks in large part to Scribble's flexibility and the ease with which we cross-reference information across levels. This paper reports on the use of Scribble and on its design as both an extension and an extensible part of PLT Scheme."
},
{
"paper_title": "Lambda, the ultimate TA: using a proof assistant to teach programming language foundations",
"paper_authors": [
"Benjamin C. Pierce"
],
"paper_abstract": "Ambitious experiments using proof assistants for programming language research and teaching are all the rage. In this talk, I'll report on one now underway at the University of Pennsylvania and several other sites: a one-semester graduate course in the theory of programming languages presented entirely - every lecture, every homework assignment - in Coq. I'll try to give a sense of what the course is like for both instructors and students, describe some of the most interesting challenges in developing it, and explain why I now believe such machine-assisted courses are the way of the future."
},
{
"paper_title": "A universe of binding and computation",
"paper_authors": [
"Daniel R. Licata",
"Robert Harper"
],
"paper_abstract": "We construct a logical framework supporting datatypes that mix binding and computation, implemented as a universe in the dependently typed programming language Agda 2. We represent binding pronominally, using well-scoped de Bruijn indices, so that types can be used to reason about the scoping of variables. We equip our universe with datatype-generic implementations of weakening, substitution, exchange, contraction, and subordination-based strengthening, so that programmers need not reimplement these operations for each individual language they define. In our mixed, pronominal setting, weakening and substitution hold only under some conditions on types, but we show that these conditions can be discharged automatically in many cases. Finally, we program a variety of standard difficult test cases from the literature, such as normalization-by-evaluation for the untyped lambda-calculus, demonstrating that we can express detailed invariants about variable usage in a program's type while still writing clean and clear code."
},
{
"paper_title": "Non-parametric parametricity",
"paper_authors": [
"Georg Neis",
"Derek Dreyer",
"Andreas Rossberg"
],
"paper_abstract": "Type abstraction and intensional type analysis are features seemingly at odds-type abstraction is intended to guarantee parametricity and representation independence, while type analysis is inherently non-parametric. Recently, however, several researchers have proposed and implemented \"dynamic type generation\" as a way to reconcile these features. The idea is that, when one defines an abstract type, one should also be able to generate at run time a fresh type name, which may be used as a dynamic representative of the abstract type for purposes of type analysis. The question remains: in a language with non-parametric polymorphism, does dynamic type generation provide us with the same kinds of abstraction guarantees that we get from parametric polymorphism? Our goal is to provide a rigorous answer to this question. We define a step-indexed Kripke logical relation for a language with both non-parametric polymorphism (in the form of type-safe cast) and dynamic type generation. Our logical relation enables us to establish parametricity and representation independence results, even in a non-parametric setting, by attaching arbitrary relational interpretations to dynamically-generated type names. In addition, we explore how programs that are provably equivalent in a more traditional parametric logical relation may be \"wrapped\" systematically to produce terms that are related by our non-parametric relation, and vice versa. This leads us to a novel \"polarized\" form of our logical relation, which enables us to distinguish formally between positive and negative notions of parametricity."
},
{
"paper_title": "Finding race conditions in Erlang with QuickCheck and PULSE",
"paper_authors": [
"Koen Claessen",
"Michal Palka",
"Nicholas Smallbone",
"John Hughes",
"Hans Svensson",
"Thomas Arts",
"Ulf Wiger"
],
"paper_abstract": "We address the problem of testing and debugging concurrent, distributed Erlang applications. In concurrent programs, race conditions are a common class of bugs and are very hard to find in practice. Traditional unit testing is normally unable to help finding all race conditions, because their occurrence depends so much on timing. Therefore, race conditions are often found during system testing, where due to the vast amount of code under test, it is often hard to diagnose the error resulting from race conditions. We present three tools (QuickCheck, PULSE, and a visualizer) that in combination can be used to test and debug concurrent programs in unit testing with a much better possibility of detecting race conditions. We evaluate our method on an industrial concurrent case study and illustrate how we find and analyze the race conditions."
},
{
"paper_title": "Partial memoization of concurrency and communication",
"paper_authors": [
"Lukasz Ziarek",
"KC Sivaramakrishnan",
"Suresh Jagannathan"
],
"paper_abstract": "Memoization is a well-known optimization technique used to eliminate redundant calls for pure functions. If a call to a function f with argument v yields result r, a subsequent call to f with v can be immediately reduced to r without the need to re-evaluate f's body. Understanding memoization in the presence of concurrency and communication is significantly more challenging. For example, if f communicates with other threads, it is not sufficient to simply record its input/output behavior; we must also track inter-thread dependencies induced by these communication actions. Subsequent calls to f can be elided only if we can identify an interleaving of actions from these call-sites that lead to states in which these dependencies are satisfied. Similar issues arise if f spawns additional threads. In this paper, we consider the memoization problem for a higher-order concurrent language whose threads may communicate through synchronous message-based communication. To avoid the need to perform unbounded state space search that may be necessary to determine if all communication dependencies manifest in an earlier call can be satisfied in a later one, we introduce a weaker notion of memoization called partial memoization that gives implementations the freedom to avoid performing some part, if not all, of a previously memoized call. To validate the effectiveness of our ideas, we consider the benefits of memoization for reducing the overhead of recomputation for streaming, server-based, and transactional applications executed on a multi-core machine. We show that on a variety of workloads, memoization can lead to substantial performance improvements without incurring high memory costs."
},
{
"paper_title": "Free theorems involving type constructor classes: functional pearl",
"paper_authors": [
"Janis Voigtländer"
],
"paper_abstract": "Free theorems are a charm, allowing the derivation of useful statements about programs from their (polymorphic) types alone. We show how to reap such theorems not only from polymorphism over ordinary types, but also from polymorphism over type constructors restricted by class constraints. Our prime application area is that of monads, which form the probably most popular type constructor class of Haskell. To demonstrate the broader scope, we also deal with a transparent way of introducing difference lists into a program, endowed with a neat and general correctness proof."
},
{
"paper_title": "Experience report: Haskell in the 'real world': writing a commercial application in a lazy functional lanuage",
"paper_authors": [
"Curt J. Sampson"
],
"paper_abstract": "I describe the initial attempt of experienced business software developers with minimal functional programming background to write a non-trivial, business-critical application entirely in Haskell. Some parts of the application domain are well suited to a mathematically-oriented language; others are more typically done in languages such as C++. I discuss the advantages and difficulties of Haskell in these circumstances, with a particular focus on issues that commercial developers find important but that may receive less attention from the academic community. I conclude that, while academic implementations of \"advanced\" programming languages arguably may lag somewhat behind implementations of commercial languages in certain ways important to businesses, this appears relatively easy to fix, and that the other advantages that they offer make them a good, albeit long-term, investment for companies where effective IT implementation can offer a crucial advantage to success."
},
{
"paper_title": "Beautiful differentiation",
"paper_authors": [
"Conal M. Elliott"
],
"paper_abstract": "Automatic differentiation (AD) is a precise, efficient, and convenient method for computing derivatives of functions. Its forward-mode implementation can be quite simple even when extended to compute all of the higher-order derivatives as well. The higher-dimensional case has also been tackled, though with extra complexity. This paper develops an implementation of higher-dimensional, higher-order, forward-mode AD in the extremely general and elegant setting of calculus on manifolds and derives that implementation from a simple and precise specification. In order to motivate and discover the implementation, the paper poses the question \"What does AD mean, independently of implementation?\" An answer arises in the form of naturality of sampling a function and its derivative. Automatic differentiation flows out of this naturality condition, together with the chain rule. Graduating from first-order to higher-order AD corresponds to sampling all derivatives instead of just one. Next, the setting is expanded to arbitrary vector spaces, in which derivative values are linear maps. The specification of AD adapts to this elegant and very general setting, which even simplifies the development."
},
{
"paper_title": "OXenstored: an efficient hierarchical and transactional database using functional programming with reference cell comparisons",
"paper_authors": [
"Thomas Gazagnaire",
"Vincent Hanquez"
],
"paper_abstract": "We describe in this paper our implementation of the Xenstored service which is part of the Xen architecture. Xenstored maintains a hierarchical and transactional database, used for storing and managing configuration values. We demonstrate in this paper that mixing functional data-structures together with reference cell comparison, which is a limited form of pointer comparison, is: (i) safe; and (ii) efficient. This demonstration is based, first, on an axiomatization of operations on the tree-like structure we used to represent the Xenstored database. From this axiomatization, we then derive an efficient algorithm for coalescing concurrent transactions modifying that structure. Finally, we experimentally compare the performance of our implementation, that we called OXenstored, and the C implementation of the Xenstored service distributed with the Xen hypervisor sources: the results show that Oxenstored is much more efficient than its C counterpart. As a direct result of this work, OXenstored will be included in future releases of Xenserver, the virtualization product distributed by Citrix Systems, where it will replace the current implementation of the Xenstored service."
},
{
"paper_title": "Experience report: using objective caml to develop safety-critical embedded tools in a certification framework",
"paper_authors": [
"Bruno Pagano",
"Olivier Andrieu",
"Thomas Moniot",
"Benjamin Canou",
"Emmanuel Chailloux",
"Philippe Wang",
"Pascal Manoury",
"Jean-Louis Colaço"
],
"paper_abstract": "High-level tools have become unavoidable in industrial software development processes. Safety-critical embedded programs don't escape this trend. In the context of safety-critical embedded systems, the development processes follow strict guidelines and requirements. The development quality assurance applies as much to the final embedded code, as to the tools themselves. The French company Esterel Technologies decided in 2006 to base its new SCADE SUITE 6TM certifiable code generator on Objective Caml. This paper outlines how it has been challenging in the context of safety critical software development by the rigorous norms DO-178B, IEC 61508, EN 50128 and such."
},
{
"paper_title": "Identifying query incompatibilities with evolving XML schemas",
"paper_authors": [
"Pierre Genevès",
"Nabil Layaïda",
"Vincent Quint"
],
"paper_abstract": "During the life cycle of an XML application, both schemas and queries may change from one version to another. Schema evolutions may affect query results and potentially the validity of produced data. Nowadays, a challenge is to assess and accommodate the impact of these changes in evolving XML applications. Such questions arise naturally in XML static analyzers. These analyzers often rely on decision procedures such as inclusion between XML schemas, query containment and satisfiability. However, existing decision procedures cannot be used directly in this context. The reason is that they are unable to distinguish information related to the evolution from information corresponding to bugs. This paper proposes a predicate language within a logical framework that can be used to make this distinction. We present a system for monitoring the effect of schema evolutions on the set of admissible documents and on the results of queries. The system is very powerful in analyzing various scenarios where the result of a query may not be anymore what was expected. Specifically, the system is based on a set of predicates which allow a fine-grained analysis for a wide range of forward and backward compatibility issues. Moreover, the system can produce counterexamples and witness documents which are useful for debugging purposes. The current implementation has been tested with realistic use cases, where it allows identifying queries that must be reformulated in order to produce the expected results across successive schema versions."
},
{
"paper_title": "Commutative monads, diagrams and knots",
"paper_authors": [
"Dan P. Piponi"
],
"paper_abstract": "There is certain diverse class of diagram that is found in a variety of branches of mathematics and which all share this property: there is a common scheme for translating all of these diagrams into useful functional code. These diagrams include Bayesian networks, quantum computer circuits [1], trace diagrams for multilinear algebra [2], Feynman diagrams and even knot diagrams [3]. I will show how a common thread lying behind these diagrams is the presence of a commutative monad and I will show how we can use this fact to translate these diagrams directly into Haskell code making use of do-notation for monads. I will also show a number of examples of such translated code at work and use it to solve problems ranging from Bayesian inference to the topological problem of untangling tangled strings. Along the way I hope to give a little insight into the subjects mentioned above and illustrate how a functional programming language can be a valuable tool in mathematical research and experimentation."
},
{
"paper_title": "Generic programming with fixed points for mutually recursive datatypes",
"paper_authors": [
"Alexey Rodriguez Yakushev",
"Stefan Holdermans",
"Andres Löh",
"Johan Jeuring"
],
"paper_abstract": "Many datatype-generic functions need access to the recursive positions in the structure of the datatype, and therefore adopt a fixed point view on datatypes. Examples include variants of fold that traverse the data following the recursive structure, or the Zipper data structure that enables navigation along the recursive positions. However, Hindley-Milner-inspired type systems with algebraic datatypes make it difficult to express fixed points for anything but regular datatypes. Many real-life examples such as abstract syntax trees are in fact systems of mutually recursive datatypes and therefore excluded. Using Haskell's GADTs and type families, we describe a technique that allows a fixed-point view for systems of mutually recursive datatypes. We demonstrate that our approach is widely applicable by giving several examples of generic functions for this view, most prominently the Zipper."
},
{
"paper_title": "Attribute grammars fly first-class: how to do aspect oriented programming in Haskell",
"paper_authors": [
"Marcos Viera",
"S. Doaitse Swierstra",
"Wouter Swierstra"
],
"paper_abstract": "Attribute Grammars (AGs), a general-purpose formalism for describing recursive computations over data types, avoid the trade-off which arises when building software incrementally: should it be easy to add new data types and data type alternatives or to add new operations on existing data types? However, AGs are usually implemented as a pre-processor, leaving e.g. type checking to later processing phases and making interactive development, proper error reporting and debugging difficult. Embedding AG into Haskell as a combinator library solves these problems. Previous attempts at embedding AGs as a domain-specific language were based on extensible records and thus exploiting Haskell's type system to check the well formedness of the AG, but fell short in compactness and the possibility to abstract over oft occurring AG patterns. Other attempts used a very generic mapping for which the AG well-formedness could not be statically checked. We present a typed embedding of AG in Haskell satisfying all these requirements. The key lies in using HList-like typed heterogeneous collections (extensible polymorphic records) and expressing AG well-formedness conditions as type-level predicates (i.e., type-class constraints). By further type-level programming we can also express common programming patterns, corresponding to the typical use cases of monads such as Reader, Writer and State. The paper presents a realistic example of type-class-based type-level programming in Haskell."
},
{
"paper_title": "Parallel concurrent ML",
"paper_authors": [
"John Reppy",
"Claudio V. Russo",
"Yingqi Xiao"
],
"paper_abstract": "Concurrent ML (CML) is a high-level message-passing language that supports the construction of first-class synchronous abstractions called events. This mechanism has proven quite effective over the years and has been incorporated in a number of other languages. While CML provides a concurrent programming model, its implementation has always been limited to uniprocessors. This limitation is exploited in the implementation of the synchronization protocol that underlies the event mechanism, but with the advent of cheap parallel processing on the desktop (and laptop), it is time for Parallel CML. Parallel implementations of CML-like primitives for Java and Haskell exist, but build on high-level synchronization constructs that are unlikely to perform well. This paper presents a novel, parallel implementation of CML that exploits a purpose-built optimistic concurrency protocol designed for both correctness and performance on shared-memory multiprocessors. This work extends and completes an earlier protocol that supported just a strict subset of CML with synchronization on input, but not output events. Our main contributions are a model-checked reference implementation of the protocol and two concrete implementations. This paper focuses on Manticore's functional, continuation-based implementation but briefly discusses an independent, thread-based implementation written in C# and running on Microsoft's stock, parallel runtime. Although very different in detail, both derive from the same design. Experimental evaluation of the Manticore implementation reveals good performance, dispite the extra overhead of multiprocessor synchronization."
},
{
"paper_title": "A concurrent ML library in concurrent Haskell",
"paper_authors": [
"Avik Chaudhuri"
],
"paper_abstract": "In Concurrent ML, synchronization abstractions can be defined and passed as values, much like functions in ML. This mechanism admits a powerful, modular style of concurrent programming, called higher-order concurrent programming. Unfortunately, it is not clear whether this style of programming is possible in languages such as Concurrent Haskell, that support only first-order message passing. Indeed, the implementation of synchronization abstractions in Concurrent ML relies on fairly low-level, language-specific details. In this paper we show, constructively, that synchronization abstractions can be supported in a language that supports only first-order message passing. Specifically, we implement a library that makes Concurrent ML-style programming possible in Concurrent Haskell. We begin with a core, formal implementation of synchronization abstractions in the π-calculus. Then, we extend this implementation to encode all of Concurrent ML's concurrency primitives (and more!) in Concurrent Haskell. Our implementation is surprisingly efficient, even without possible optimizations. In several small, informal experiments, our library seems to outperform OCaml's standard library of Concurrent ML-style primitives. At the heart of our implementation is a new distributed synchronization protocol that we prove correct. Unlike several previous translations of synchronization abstractions in concurrent languages, we remain faithful to the standard semantics for Concurrent ML's concurrency primitives. For example, we retain the symmetry of choose, which can express selective communication. As a corollary, we establish that implementing selective communication on distributed machines is no harder than implementing first-order message passing on such machines."
},
{
"paper_title": "Experience report: OCaml for an industrial-strength static analysis framework",
"paper_authors": [
"Pascal Cuoq",
"Julien Signoles",
"Patrick Baudin",
"Richard Bonichon",
"Géraud Canet",
"Loïc Correnson",
"Benjamin Monate",
"Virgile Prevosto",
"Armand Puccetti"
],
"paper_abstract": "This experience report describes the choice of OCaml as the implementation language for Frama-C, a framework for the static analysis of C programs. OCaml became the implementation language for Frama-C because it is expressive. Most of the reasons listed in the remaining of this article are secondary reasons, features which are not specific to OCaml (modularity, availability of a C parser, control over the use of resources...) but could have prevented the use of OCaml for this project if they had been missing."
},
{
"paper_title": "Control-flow analysis of function calls and returns by abstract interpretation",
"paper_authors": [
"Jan Midtgaard",
"Thomas P. Jensen"
],
"paper_abstract": "We derive a control-flow analysis that approximates the interprocedural control-flow of both function calls and returns in the presence of first-class functions and tail-call optimization. In addition to an abstract environment, our analysis computes for each expression an abstract control stack, effectively approximating where function calls return across optimized tail calls. The analysis is systematically calculated by abstract interpretation of the stack-based CaEK abstract machine of Flanagan et al. using a series of Galois connections. Abstract interpretation provides a unifying setting in which we 1) prove the analysis equivalent to the composition of a continuation-passing style (CPS) transformation followed by an abstract interpretation of a stack-less CPS machine, and 2) extract an equivalent constraint-based formulation, thereby providing a rational reconstruction of a constraint-based control-flow analysis from abstract interpretation principles."
},
{
"paper_title": "Automatically RESTful web applications: marking modular serializable continuations",
"paper_authors": [
"Jay A. McCarthy"
],
"paper_abstract": "Continuation-based Web servers provide distinct advantages over traditional Web application development: expressive power and modularity. This power leads to fewer errors and more interesting applications. Furthermore, these Web servers are more than prototypes; they are used in some real commercial applications. Unfortunately, they pay a heavy price for the additional power in the form of lack of scalability. We fix this key problem with a modular program transformation that produces scalable, continuation-based Web programs based on the REST architecture. Our programs use the same features as non-scalable, continuation-based Web programs, so we do not sacrifice expressive power for performance. In particular, we allow continuation marks in Web programs. Our system uses 10 percent (or less) of the memory required by previous approaches."
},
{
"paper_title": "Experience report: ocsigen, a web programming framework",
"paper_authors": [
"Vincent Balat",
"Jérôme Vouillon",
"Boris Yakobowski"
],
"paper_abstract": "The evolution of Web sites towards very dynamic applications makes it necessary to reconsider current Web programming technologies. We believe that Web development would benefit greatly from more abstract paradigms and that a more semantical approach would result in huge gains in expressiveness. In particular, functional programming provides a really elegant solution to some important Web interaction problems, but few frameworks take advantage of it. The Ocsigen project is an attempt to provide global solutions to these needs. We present our experience in designing this general framework for Web programming, written in Objective Caml. It provides a fully featured Web server and a framework for programming Web applications, with the aim of improving expressiveness and safety. This is done by taking advantage of functional programming and static typing as much as possible."
},
{
"paper_title": "Implementing first-class polymorphic delimited continuations by a type-directed selective CPS-transform",
"paper_authors": [
"Tiark Rompf",
"Ingo Maier",
"Martin Odersky"
],
"paper_abstract": "We describe the implementation of first-class polymorphic delimited continuations in the programming language Scala. We use Scala's pluggable typing architecture to implement a simple type and effect system, which discriminates expressions with control effects from those without and accurately tracks answer type modification incurred by control effects. To tackle the problem of implementing first-class continuations under the adverse conditions brought upon by the Java VM, we employ a selective CPS transform, which is driven entirely by effect-annotated types and leaves pure code in direct style. Benchmarks indicate that this high-level approach performs competitively."
},
{
"paper_title": "A theory of typed coercions and its applications",
"paper_authors": [
"Nikhil Swamy",
"Michael Hicks",
"Gavin M. Bierman"
],
"paper_abstract": "A number of important program rewriting scenarios can be recast as type-directed coercion insertion. These range from more theoretical applications such as coercive subtyping and supporting overloading in type theories, to more practical applications such as integrating static and dynamically typed code using gradual typing, and inlining code to enforce security policies such as access control and provenance tracking. In this paper we give a general theory of type-directed coercion insertion. We specifically explore the inherent tradeoff between expressiveness and ambiguity--the more powerful the strategy for generating coercions, the greater the possibility of several, semantically distinct rewritings for a given program. We consider increasingly powerful coercion generation strategies, work out example applications supported by the increased power (including those mentioned above), and identify the inherent ambiguity problems of each setting, along with various techniques to tame the ambiguities."
},
{
"paper_title": "Complete and decidable type inference for GADTs",
"paper_authors": [
"Tom Schrijvers",
"Simon Peyton Jones",
"Martin Sulzmann",
"Dimitrios Vytiniotis"
],
"paper_abstract": "GADTs have proven to be an invaluable language extension, for ensuring data invariants and program correctness among others. Unfortunately, they pose a tough problem for type inference: we lose the principal-type property, which is necessary for modular type inference. We present a novel and simplified type inference approach for local type assumptions from GADT pattern matches. Our approach is complete and decidable, while more liberal than previous such approaches."
}
]
},
{
"proceeding_title": "ICFP '08:Proceedings of the 13th ACM SIGPLAN international conference on Functional programming",
"proceeding_contents": [
{
"paper_title": "ICFP 2009 Announcement",
"paper_authors": [
"Phil Wadler"
]
},
{
"paper_title": "From OCaml to Javascript at Skydeck.",
"paper_authors": [
"Jake Donham"
]
},
{
"paper_title": "Is Haskell Ready for Everyday Computing?",
"paper_authors": [
"Jeff Polakow"
]
},
{
"paper_title": "Haskell' Status Report",
"paper_authors": [
"Simon Marlow"
]
},
{
"paper_title": "Report on the Eleventh ICFP Programming Contest",
"paper_authors": [
"Tim Sheard",
"Tim Chevalier",
"Chuan-kai Lin",
"Garrett Morris",
"Emerson Murphy-Hill",
"Any Gill",
"John Reppy",
"Lars Bergstrom",
"Mike Rainey",
"Adam Shaw",
"Virgin Gheorghiu"
]
},
{
"paper_title": "Invited talk: The Future of Erlang",
"paper_authors": [
"Kenneth Lundin"
]
},
{
"paper_title": "PC Chair's Report",
"paper_authors": [
"Peter Theimann"
]
},
{
"paper_title": "Buy a Feature: An Adventure in Immutability and Actors.",
"paper_authors": [
"David Pollak"
]
},
{
"paper_title": "Informal Five Minute Presentations",
"paper_authors": [
"Tee Teoh"
]
},
{
"paper_title": "Ad Serving with Erlang.",
"paper_authors": [
"Bob Ippolito"
]
},
{
"paper_title": "Functions to Junctions: Ultra Low Power Chip Design With Some Help From Haskell.",
"paper_authors": [
"Gregory Wright"
]
},
{
"paper_title": "Controlling Hybrid Vehicles with Haskell",
"paper_authors": [
"Tom Hawkins"
]
},
{
"paper_title": "Most Influential ICFP'98 Paper Award",
"paper_authors": [
"Kathleen Fisher"
]
},
{
"paper_title": "Lazy and speculative execution in computer systems",
"paper_authors": [
"Butler W. Lampson"
],
"paper_abstract": "The distinction between lazy and eager (or strict) evaluation has been studied in programming languages since Algol 60's call by name, as a way to avoid unnecessary work and to deal gracefully with infinite structures such as streams. It is deeply integrated in some languages, notably Haskell, and can be simulated in many languages by wrapping a lazy expression in a lambda. Less well studied is the role of laziness, and its opposite, speculation, in computer systems, both hardware and software. A wide range of techniques can be understood as applications of these two ideas. Laziness is the idea behind: Redo logging for maintaining persistent state and replicated state machines: the log represents the current state, but it is evaluated only after a failure or to bring a replica online. Copy-on-write schemes for maintaining multiple versions of a large, slowly changing state, usually in a database or file system. Write buffers and writeback caches in memory and file systems, which are lazy about updating the main store. Relaxed memory models and eventual consistency replication schemes (which require weakening the spec). Clipping regions and expose events in graphics and window systems. Carry-save adders, which defer propagating carries until a clean result is needed. \"Infinity\" and \"Not a number\" results of floating point operations. Futures (in programming) and out of order execution (in CPUs), which launch a computation but are lazy about consuming the result. Dataflow is a generalization. \"Formatting operators\" in text editors, which apply properties such as \"italic\" to large regions of text by attaching a sequence of functions that compute the properties; the functions are not evaluated until the text needs to be displayed. Stream processing in database queries, Unix pipes, etc., which conceptually applies operators to unbounded sequences of data, but rearranges the computation when possible to apply a sequence of operators to each data item in turn. Speculation is the idea behind: Optimistic concurrency control in databases, and more recently in transactional memory Prefetching in memory and file systems. Branch prediction, and speculative execution in general in modern CPUs. Data speculation, which works especially well when the data is cached but might be updated by a concurrent process. This is a form of optimistic concurrency control. Exponential backoff schemes for scheduling a resource, most notably in LANs such as WiFi or classical Ethernet. All forms of caching, which speculate that it's worth filling up some memory with data in the hope that it will be used again. In both cases it is usual to insist that the laziness or speculation is strictly a matter of scheduling that doesn't affect the result of a computation but only improves the performance. Sometimes, however, the spec is weakened, for example in eventual consistency. I will discuss many of these examples in detail and examine what they have in common, how they differ, and what factors govern the effectiveness of laziness and speculation in computer systems."
},
{
"paper_title": "FLUX: functional updates for XML",
"paper_authors": [
"James Cheney"
],
"paper_abstract": "XML database query languages have been studied extensively, but XML database updates have received relatively little attention, and pose many challenges to language design. We are developing an XML update language called FLUX, which stands for FunctionaL Updates for XML, drawing upon ideas from functional programming languages. In prior work, we have introduced a core language for FLUX with a clear operational semantics and a sound, decidable static type system based on regular expression types. Our initial proposal had several limitations. First, it lacked support for recursive types or update procedures. Second, although a high-level source language can easily be translated to the core language, it is difficult to propagate meaningful type errors from the core language back to the source. Third, certain updates are wellformed yet contain path errors, or \"dead\" subexpressions which never do any useful work. It would be useful to detect path errors, since they often represent errors or optimization opportunities. In this paper, we address all three limitations. Specifically, we present an improved, sound type system that handles recursion. We also formalize a source update language and give a translation to the core language that preserves and reflects typability. We also develop a path-error analysis (a form of dead-code analysis) for updates."
},
{
"paper_title": "Typed iterators for XML",
"paper_authors": [
"Giuseppe Castagna",
"Kim Nguyen"
],
"paper_abstract": "XML transformations are very sensitive to types: XML types describe the tags and attributes of XML elements as well as the number, kind, and order of their sub-elements. Therefore, operations, even simple ones, that modify these features may affect the types of documents. Operations on XML documents are performed by iterators that, to be useful, need to be typed by a kind of polymorphism that goes beyond what currently exists. For this reason these iterators are not programmed but, rather, hard-coded in the languages. However, this approach soon reaches its limits, as the hard-coded iterators cannot cover fairly standard usage scenarios. As a solution to this problem we propose a generic language to define iterators for XML data. This language can either be used as a compilation target (e.g., for XPATH) or it can be grafted on any statically typed host programming language (as long as this has product types) to endow it with XML processing capabilities. We show that our language mostly offers the required degree of polymorphism, study its formal properties, and show its expressiveness and practical impact by providing several usage examples and encodings."
},
{
"paper_title": "AURA: a programming language for authorization and audit",
"paper_authors": [
"Limin Jia",
"Jeffrey A. Vaughan",
"Karl Mazurak",
"Jianzhou Zhao",
"Luke Zarko",
"Joseph Schorr",
"Steve Zdancewic"
],
"paper_abstract": "This paper presents AURA, a programming language for access control that treats ordinary programming constructs (e.g., integers and recursive functions) and authorization logic constructs (e.g., principals and access control policies) in a uniform way. AURA is based on polymorphic DCC and uses dependent types to permit assertions that refer directly to AURA values while keeping computation out of the assertion level to ensure tractability. The main technical results of this paper include fully mechanically verified proofs of the decidability and soundness for AURA's type system, and a prototype typechecker and interpreter."
},
{
"paper_title": "The power of Pi",
"paper_authors": [
"Nicolas Oury",
"Wouter Swierstra"
],
"paper_abstract": "This paper exhibits the power of programming with dependent types by dint of embedding three domain-specific languages: Cryptol, a language for cryptographic protocols; a small data description language; and relational algebra. Each example demonstrates particular design patterns inherent to dependently-typed programming. Documenting these techniques paves the way for further research in domain-specific embedded type systems."
},
{
"paper_title": "Type checking with open type functions",
"paper_authors": [
"Tom Schrijvers",
"Simon Peyton Jones",
"Manuel Chakravarty",
"Martin Sulzmann"
],
"paper_abstract": "We report on an extension of Haskell with open type-level functions and equality constraints that unifies earlier work on GADTs, functional dependencies, and associated types. The contribution of the paper is that we identify and characterise the key technical challenge of entailment checking; and we give a novel, decidable, sound, and complete algorithm to solve it, together with some practically-important variants. Our system is implemented in GHC, and is already in active use."
},
{
"paper_title": "None",
"paper_authors": [
"Didier Rémy",
"Boris Yakobowski"
],
"paper_abstract": "MLF is a type system that seamlessly merges ML-style type inference with System-F polymorphism. We propose a system of graphic (type) constraints that can be used to perform type inference in both ML or MLF. We show that this constraint system is a small extension of the formalism of graphic types, originally introduced to represent MLF types. We give a few semantic preserving transformations on constraints and propose a strategy for applying them to solve constraints. We show that the resulting algorithm has optimal complexity for MLF type inference, and argue that, as for ML, this complexity is linear under reasonable assumptions."
},
{
"paper_title": "A type-preserving compiler in Haskell",
"paper_authors": [
"Louis-Julien Guillemette",
"Stefan Monnier"
],
"paper_abstract": "There has been a lot of interest of late for programming languages that incorporate features from dependent type systems and proof assistants, in order to capture important invariants of the program in the types. This allows type-based program verification and is a promising compromise between plain old types and full blown Hoare logic proofs. The introduction of GADTs in GHC (and more recently type families) made such dependent typing available in an industry-quality implementation, making it possible to consider its use in large scale programs. We have undertaken the construction of a complete compiler for System F, whose main property is that the GHC type checker verifies mechanically that each phase of the compiler properly preserves types. Our particular focus is on \"types rather than proofs\": reasonably few annotations that do not overwhelm the actual code. We believe it should be possible to write such a type-preserving compiler with an amount of extra code comparable to what is necessary for typical typed intermediate languages, but with the advantage of static checking. We will show in this paper the remaining hurdles to reach this goal."
},
{
"paper_title": "Experience report: playing the DSL card",
"paper_authors": [
"Mark P. Jones"
],
"paper_abstract": "This paper describes our experience using a functional language, Haskell, to build an embedded, domain-specific language (DSL) for component configuration in large-scale, real-time, embedded systems. Prior to the introduction of the DSL, engineers would describe the steps needed to configure a particular system in a handwritten XML document. In this paper, we outline the application domain, give a brief overview of the DSL that we developed, and provide concrete data to demonstrate its effectiveness. In particular, we show that the DSL has several significant benefits over the original, XML-based approach including reduced code size, increased modularity and scalability, and detection and prevention of common defects. For example, using the DSL, we were able to produce clear and intuitive descriptions of component configurations that were sometimes less than 1/30 of the size of the original XML."
},
{
"paper_title": "Generic discrimination: sorting and paritioning unshared data in linear time",
"paper_authors": [
"Fritz Henglein"
],
"paper_abstract": "We introduce the notion of discrimination as a generalization of both sorting and partitioning and show that worst-case linear-time discrimination functions (discriminators) can be defined generically, by (co-)induction on an expressive language of order denotations. The generic definition yields discriminators that generalize both distributive sorting and multiset discrimination. The generic discriminator can be coded compactly using list comprehensions, with order denotations specified using Generalized Algebraic Data Types (GADTs). A GADT-free combinator formulation of discriminators is also given. We give some examples of the uses of discriminators, including a new most-significant-digit lexicographic sorting algorithm. Discriminators generalize binary comparison functions: They operate on n arguments at a time, but do not expose more information than the underlying equivalence, respectively ordering relation on the arguments. We argue that primitive types with equality (such as references in ML) and ordered types (such as the machine integer type), should expose their equality, respectively standard ordering relation, as discriminators: Having only a binary equality test on a type requires Θ(n2) time to find all the occurrences of an element in a list of length n, for each element in the list, even if the equality test takes only constant time. A discriminator accomplishes this in linear time. Likewise, having only a (constant-time) comparison function requires Θ(n log n) time to sort a list of n elements. A discriminator can do this in linear time."
},
{
"paper_title": "Transactional events for ML",
"paper_authors": [
"Laura Effinger-Dean",
"Matthew Kehrt",
"Dan Grossman"
],
"paper_abstract": "Transactional events (TE) are an approach to concurrent programming that enriches the first-class synchronous message-passing of Concurrent ML (CML) with a combinator that allows multiple messages to be passed as part of one all-or-nothing synchronization. Donnelly and Fluet (2006) designed and implemented TE as a Haskell library and demonstrated that it enables elegant solutions to programming patterns that are awkward or impossible in CML. However, both the definition and the implementation of TE relied fundamentally on the code in a synchronization not using mutable memory, an unreasonable assumption for mostly functional languages like ML where functional interfaces may have impure implementations. We present a definition and implementation of TE that supports ML-style references and nested synchronizations, both of which were previously unnecessary due to Haskell's more restrictive type system. As in prior work, we have a high-level semantics that makes nondeterministic choices such that synchronizations succeed whenever possible and a low-level semantics that uses search to implement the high-level semantics soundly and completely. The key design trade-off in the semantics is to allow updates to mutable memory without requiring the implementation to consider all possible thread interleavings. Our solution uses first-class heaps and allows interleavings only when a message is sent or received. We have used Coq to prove the high- and low-level semantics equivalent. We have implemented our approach by modifying the Objective Caml run-time system. By modifying the run-time system, rather than relying solely on a library, we can eliminate the potential for nonterminating computations within unsuccessful synchronizations to run forever."
},
{
"paper_title": "Experience report: erlang in acoustic ray tracing",
"paper_authors": [
"Christian Convey",
"Andrew Fredricks",
"Christopher Gagner",
"Douglas Maxwell",
"Lutz Hamel"
],
"paper_abstract": "We investigated the relative merits of C++ and Erlang in the implementation of a parallel acoustic ray tracing algorithm for the U.S. Navy. We found a much smaller learning curve and better debugging environment for parallel Erlang than for pthreads-based C++ programming. Our C++ implementation outperformed the Erlang program by at least 12x. Attempts to use Erlang on the IBM Cell BE microprocessor were frustrated by Erlang's memory footprint."
},
{
"paper_title": "Implicitly-threaded parallelism in Manticore",
"paper_authors": [
"Matthew Fluet",
"Mike Rainey",
"John Reppy",
"Adam Shaw"
],
"paper_abstract": "The increasing availability of commodity multicore processors is making parallel computing available to the masses. Traditional parallel languages are largely intended for large-scale scientific computing and tend not to be well-suited to programming the applications one typically finds on a desktop system. Thus we need new parallel-language designs that address a broader spectrum of applications. In this paper, we present Manticore, a language for building parallel applications on commodity multicore hardware including a diverse collection of parallel constructs for different granularities of work. We focus on the implicitly-threaded parallel constructs in our high-level functional language. We concentrate on those elements that distinguish our design from related ones, namely, a novel parallel binding form, a nondeterministic parallel case form, and exceptions in the presence of data parallelism. These features differentiate the present work from related work on functional data parallel language designs, which has focused largely on parallel problems with regular structure and the compiler transformations --- most notably, flattening --- that make such designs feasible. We describe our implementation strategies and present some detailed examples utilizing various mechanisms of our language."
},
{
"paper_title": "Defunctionalized interpreters for programming languages",
"paper_authors": [
"Olivier Danvy"
],
"paper_abstract": "This document illustrates how functional implementations of formal semantics (structural operational semantics, reduction semantics, small-step and big-step abstract machines, natural semantics, and denotational semantics) can be transformed into each other. These transformations were foreshadowed by Reynolds in \"Definitional Interpreters for Higher-Order Programming Languages\" for functional implementations of denotational semantics, natural semantics, and big-step abstract machines using closure conversion, CPS transformation, and defunctionalization. Over the last few years, the author and his students have further observed that functional implementations of small-step and of big-step abstract machines are related using fusion by fixed-point promotion and that functional implementations of reduction semantics and of small-step abstract machines are related using refocusing and transition compression. It furthermore appears that functional implementations of structural operational semantics and of reduction semantics are related as well, also using CPS transformation and defunctionalization. This further relation provides an element of answer to Felleisen's conjecture that any structural operational semantics can be expressed as a reduction semantics: for deterministic languages, a reduction semantics is a structural operational semantics in continuation style, where the reduction context is a defunctionalized continuation. As the defunctionalized counterpart of the continuation of a one-step reduction function, a reduction context represents the rest of the reduction, just as an evaluation context represents the rest of the evaluation since it is the defunctionalized counterpart of the continuation of an evaluation function."
},
{
"paper_title": "Parametric higher-order abstract syntax for mechanized semantics",
"paper_authors": [
"Adam Chlipala"
],
"paper_abstract": "We present parametric higher-order abstract syntax (PHOAS), a new approach to formalizing the syntax of programming languages in computer proof assistants based on type theory. Like higher-order abstract syntax (HOAS), PHOAS uses the meta language's binding constructs to represent the object language's binding constructs. Unlike HOAS, PHOAS types are definable in general-purpose type theories that support traditional functional programming, like Coq's Calculus of Inductive Constructions. We walk through how Coq can be used to develop certified, executable program transformations over several statically-typed functional programming languages formalized with PHOAS; that is, each transformation has a machine-checked proof of type preservation and semantic preservation. Our examples include CPS translation and closure conversion for simply-typed lambda calculus, CPS translation for System F, and translation from a language with ML-style pattern matching to a simpler language with no variable-arity binding constructs. By avoiding the syntactic hassle associated with first-order representation techniques, we achieve a very high degree of proof automation."
},
{
"paper_title": "Typed closure conversion preserves observational equivalence",
"paper_authors": [
"Amal Ahmed",
"Matthias Blume"
],
"paper_abstract": "Language-based security relies on the assumption that all potential attacks are bound by the rules of the language in question. When programs are compiled into a different language, this is true only if the translation process preserves observational equivalence. We investigate the problem of fully abstract compilation, i.e., compilation that both preserves and reflects observational equivalence. In particular, we prove that typed closure conversion for the polymorphic »-calculus with existential and recursive types is fully abstract. Our proof uses operational techniques in the form of a step-indexed logical relation and construction of certain wrapper terms that \"back-translate\" from target values to source values. Although typed closure conversion has been assumed to be fully abstract, we are not aware of any previous result that actually proves this."
},
{
"paper_title": "Write it recursively: a generic framework for optimal path queries",
"paper_authors": [
"Akimasa Morihata",
"Kiminori Matsuzaki",
"Masato Takeichi"
],
"paper_abstract": "Optimal path queries are queries to obtain an optimal path specified by a given criterion of optimality. There have been many studies to give efficient algorithms for classes of optimal path problem. In this paper, we propose a generic framework for optimal path queries. We offer a domain-specific language to describe optimal path queries, together with an algorithm to find an optimal path specified in our language. One of the most distinct features of our framework is the use of recursive functions to specify queries. Recursive functions reinforce expressiveness of our language so that we can describe many problems including known ones; thus, we need not learn existing results. Moreover, we can derive an efficient querying algorithm from the description of a query written in recursive functions. Our algorithm is a generalization of existing algorithms, and answers a query in O(n log n) time on a graph of O(n) size. We also explain our implementation of an optimal path querying system, and report some experimental results."
},
{
"paper_title": "Efficient nondestructive equality checking for trees and graphs",
"paper_authors": [
"Michael D. Adams",
"R. Kent Dybvig"
],
"paper_abstract": "The Revised6 Report on Scheme requires its generic equivalence predicate, equal?, to terminate even on cyclic inputs. While the terminating equal? can be implemented via a DFA-equivalence or union-find algorithm, these algorithms usually require an additional pointer to be stored in each object, are not suitable for multithreaded code due to their destructive nature, and may be unacceptably slow for the small acyclic values that are the most likely inputs to the predicate. This paper presents a variant of the union-find algorithm for equal? that addresses these issues. It performs well on large and small, cyclic and acyclic inputs by interleaving a low-overhead algorithm that terminates only for acyclic inputs with a more general algorithm that handles cyclic inputs. The algorithm terminates for all inputs while never being more than a small factor slower than whichever of the acyclic or union-find algorithms would have been faster. Several intermediate algorithms are also presented, each of which might be suitable for use in a particular application, though only the final algorithm is suitable for use in a library procedure, like equal?, that must work acceptably well for all inputs."
},
{
"paper_title": "Functional pearl: streams and unique fixed points",
"paper_authors": [
"Ralf Hinze"
],
"paper_abstract": "Streams, infinite sequences of elements, live in a coworld: they are given by a coinductive data type, operations on streams are implemented by corecursive programs, and proofs are conducted using coinduction. But there is more to it: suitably restricted, stream equations possess unique solutions, a fact that is not very widely appreciated. We show that this property gives rise to a simple and attractive proof technique essentially bringing equational reasoning to the coworld. In fact, we redevelop the theory of recurrences, finite calculus and generating functions using streams and stream operators building on the cornerstone of unique solutions. The development is constructive: streams and stream operators are implemented in Haskell, usually by one-liners. The resulting calculus or library, if you wish, is elegant and fun to use. Finally, we rephrase the proof of uniqueness using generalised algebraic data types."
},
{
"paper_title": "Data-flow testing of declarative programs",
"paper_authors": [
"Sebastian Fischer",
"Herbert Kuchen"
],
"paper_abstract": "We propose a novel notion of data-flow coverage for testing declarative programs. Moreover, we extend an automatic test-case generator such that it can achieve data-flow coverage. The coverage information is obtained by instrumenting a program such that it collects coverage information during its execution. Finally, we show the benefits of data-flow based testing for a couple of example applications."
},
{
"paper_title": "Functional translation of a calculus of capabilities",
"paper_authors": [
"Arthur Charguéraud",
"François Pottier"
],
"paper_abstract": "Reasoning about imperative programs requires the ability to track aliasing and ownership properties. We present a type system that provides this ability, by using regions, capabilities, and singleton types. It is designed for a high-level calculus with higher-order functions, algebraic data structures, and references (mutable memory cells). The type system has polymorphism, yet does not require a value restriction, because capabilities act as explicit store typings. We exhibit a type-directed, type-preserving, and meaning-preserving translation of this imperative calculus into a pure calculus. Like the monadic translation, this is a store-passing translation. Here, however, the store is partitioned into multiple fragments, which are threaded through a computation only if they are relevant to it. Furthermore, the decomposition of the store into fragments can evolve dynamically to reflect ownership transfers. The translation offers deep insight about the inner workings and soundness of the type system. If coupled with a semantic model of its target calculus, it leads to a semantic model of its imperative source calculus. Furthermore, it provides a foundation for our long-term objective of designing a system for specifying and certifying imperative programs with dynamic memory allocation."
},
{
"paper_title": "Paradise: a two-stage DSL embedded in Haskell",
"paper_authors": [
"Lennart Augustsson",
"Howard Mansell",
"Ganesh Sittampalam"
],
"paper_abstract": "We have implemented a two-stage language, Paradise, for building reusable components which are used to price financial products. Paradise is embedded in Haskell and makes heavy use of type-class based overloading, allowing the second stage to be compiled into a variety of backend platforms. Paradise has enabled us to begin moving away from implementation directly in monolithic Excel spreadsheets and towards a more modular and retargetable approach."
},
{
"paper_title": "Ynot: dependent types for imperative programs",
"paper_authors": [
"Aleksandar Nanevski",
"Greg Morrisett",
"Avraham Shinnar",
"Paul Govereau",
"Lars Birkedal"
],
"paper_abstract": "We describe an axiomatic extension to the Coq proof assistant, that supports writing, reasoning about, and extracting higher-order, dependently-typed programs with side-effects. Coq already includes a powerful functional language that supports dependent types, but that language is limited to pure, total functions. The key contribution of our extension, which we call Ynot, is the added support for computations that may have effects such as non-termination, accessing a mutable store, and throwing/catching exceptions. The axioms of Ynot form a small trusted computing base which has been formally justified in our previous work on Hoare Type Theory (HTT). We show how these axioms can be combined with the powerful type and abstraction mechanisms of Coq to build higher-level reasoning mechanisms which in turn can be used to build realistic, verified software components. To substantiate this claim, we describe here a representative series of modules that implement imperative finite maps, including support for a higher-order (effectful) iterator. The implementations range from simple (e.g., association lists) to complex (e.g., hash tables) but share a common interface which abstracts the implementation details and ensures that the modules properly implement the finite map abstraction."
},
{
"paper_title": "A scheduling framework for general-purpose parallel languages",
"paper_authors": [
"Matthew Fluet",
"Mike Rainey",
"John Reppy"
],
"paper_abstract": "The trend in microprocessor design toward multicore and manycore processors means that future performance gains in software will largely come from harnessing parallelism. To realize such gains, we need languages and implementations that can enable parallelism at many different levels. For example, an application might use both explicit threads to implement course-grain parallelism for independent tasks and implicit threads for fine-grain data-parallel computation over a large array. An important aspect of this requirement is supporting a wide range of different scheduling mechanisms for parallel computation. In this paper, we describe the scheduling framework that we have designed and implemented for Manticore, a strict parallel functional language. We take a micro-kernel approach in our design: the compiler and runtime support a small collection of scheduling primitives upon which complex scheduling policies can be implemented. This framework is extremely flexible and can support a wide range of different scheduling policies. It also supports the nesting of schedulers, which is key to both supporting multiple scheduling policies in the same application and to hierarchies of speculative parallel computations. In addition to describing our framework, we also illustrate its expressiveness with several popular scheduling techniques. We present a (mostly) modular approach to extending our schedulers to support cancellation. This mechanism is essential for implementing eager and speculative parallelism. We finally evaluate our framework with a series of benchmarks and an analysis."
},
{
"paper_title": "Space profiling for parallel functional programs",
"paper_authors": [
"Daniel Spoonhower",
"Guy E. Blelloch",
"Robert Harper",
"Phillip B. Gibbons"
],
"paper_abstract": "This paper presents a semantic space profiler for parallel functional programs. Building on previous work in sequential profiling, our tools help programmers to relate runtime resource use back to program source code. Unlike many profiling tools, our profiler is based on a cost semantics. This provides a means to reason about performance without requiring a detailed understanding of the compiler or runtime system. It also provides a specification for language implementers. This is critical in that it enables us to separate cleanly the performance of the application from that of the language implementation. Some aspects of the implementation can have significant effects on performance. Our cost semantics enables programmers to understand the impact of different scheduling policies while hiding many of the details of their implementations. We show applications where the choice of scheduling policy has asymptotic effects on space use. We explain these use patterns through a demonstration of our tools. We also validate our methodology by observing similar performance in our implementation of a parallel extension of Standard ML."
},
{
"paper_title": "Polymorphism and page tables: systems programming from a functional programmer's perspective",
"paper_authors": [
"Mark P. Jones"
],
"paper_abstract": "With features that include lightweight syntax, expressive type systems, and deep semantic foundations, functional languages are now being used to develop an increasingly broad range of complex, real-world applications. In the area of systems software, however, where performance and interaction with low-level aspects of hardware are central concerns, many practitioners still eschew the advantages of higher-level languages for the potentially unsafe but predictable behavior of traditional imperative languages like C. It is ironic that critical applications such as operating systems kernels, device drivers, and VMMs - where a single bug could compromise the reliability or security of a whole system - are among the least likely to benefit from the abstractions and safety guarantees of modern language designs. Over the last few years, our group has been investigating the potential for using Haskell to develop realistic operating systems that can boot and run on bare metal. The House system, developed primarily by Thomas Hallgren and Andrew Tolmach, demonstrates that it is indeed possible to construct systems software in a functional language. But House still relies on a layer of runtime support primitives - some written using unsafe Haskell primitives and others written in C - to provide services ranging from garbage collection to control of the page table structures used by the hardware memory management unit. We would like to replace as much of this layer as possible with code written in a functional language without compromising on type or memory safety. Our experiences with House have led us to believe that a new functional language is required to reflect the needs of the systems domain more directly. Interestingly, however, we have concluded that this does not require fundamental new language design. In this invited talk, I will give an update on the current status of our project and I will describe how we are leveraging familiar components of the Haskell type system - including polymorphism, kinds, qualified types and improvement - to capture more precise details of effect usage, data representation, and termination. I will also discuss the challenges of writing and compiling performance-sensitive code written in a functional style. It was once considered radical to use C in place of assembly language to construct systems software. Is it possible that functional languages might one day become as commonplace in this application domain as C is today?"
},
{
"paper_title": "Pattern minimization problems over recursive data types",
"paper_authors": [
"Alexander Krauss"
],
"paper_abstract": "In the context of program verification in an interactive theorem prover, we study the problem of transforming function definitions with ML-style (possibly overlapping) pattern matching into minimal sets of independent equations. Since independent equations are valid unconditionally, they are better suited for the equational proof style using induction and rewriting, which is often found in proofs in theorem provers or on paper. We relate the problem to the well-known minimization problem for propositional DNF formulas and show that it is £P/2-complete. We then develop a concrete algorithm to compute minimal patterns, which naturally generalizes the standard Quine-McCluskey procedure to the domain of term patterns."
},
{
"paper_title": "None",
"paper_authors": [
"David Van Horn",
"Harry G. Mairson"
],
"paper_abstract": "We give an exact characterization of the computational complexity of the kCFA hierarchy. For any k > 0, we prove that the control flow decision problem is complete for deterministic exponential time. This theorem validates empirical observations that such control flow analysis is intractable. It also provides more general insight into the complexity of abstract interpretation."
},
{
"paper_title": "HMF: simple type inference for first-class polymorphism",
"paper_authors": [
"Daan Leijen"
],
"paper_abstract": "HMF is a conservative extension of Hindley-Milner type inference with first-class polymorphism. In contrast to other proposals, HML uses regular System F types and has a simple type inference algorithm that is just a small extension of the usual Damas-Milner algorithm W. Given the relative simplicity and expressive power, we feel that HMF can be an attractive type system in practice. There is a reference implementation of the type system available online together with a technical report containing proofs (Leijen 2007a,b)."
},
{
"paper_title": "FPH: first-class polymorphism for Haskell",
"paper_authors": [
"Dimitrios Vytiniotis",
"Stephanie Weirich",
"Simon Peyton Jones"
],
"paper_abstract": "Languages supporting polymorphism typically have ad-hoc restrictions on where polymorphic types may occur. Supporting \"firstclass\" polymorphism, by lifting those restrictions, is obviously desirable, but it is hard to achieve this without sacrificing type inference. We present a new type system for higher-rank and impredicative polymorphism that improves on earlier proposals: it is an extension of Damas-Milner; it relies only on System F types; it has a simple, declarative specification; it is robust to program transformations; and it enjoys a complete and decidable type inference algorithm."
},
{
"paper_title": "Mixin' up the ML module system",
"paper_authors": [
"Derek Dreyer",
"Andreas Rossberg"
],
"paper_abstract": "ML modules provide hierarchical namespace management, as well as fine-grained control over the propagation of type information, but they do not allow modules to be broken up into mutually recursive, separately compilable components. Mixin modules facilitate recursive linking of separately compiled components, but they are not hierarchically composable and typically do not support type abstraction. We synthesize the complementary advantages of these two mechanisms in a novel module system design we call MixML. A MixML module is like an ML structure in which some of the components are specified but not defined. In other words, it unifies the ML structure and signature languages into one. MixML seamlessly integrates hierarchical composition, translucent MLstyle data abstraction, and mixin-style recursive linking. Moreover, the design of MixML is clean and minimalist; it emphasizes how all the salient, semantically interesting features of the ML module system (as well as several proposed extensions to it) can be understood simply as stylized uses of a small set of orthogonal underlying constructs, with mixin composition playing a central role."
},
{
"paper_title": "Compiling self-adjusting programs with continuations",
"paper_authors": [
"Ruy Ley-Wild",
"Matthew Fluet",
"Umut A. Acar"
],
"paper_abstract": "Self-adjusting programs respond automatically and efficiently to input changes by tracking the dynamic data dependences of the computation and incrementally updating the output as needed. In order to identify data dependences, previously proposed approaches require the user to make use of a set of monadic primitives. Rewriting an ordinary program into a self-adjusting program with these primitives, however, can be difficult and error-prone due to various monadic and proper-usage restrictions, some of which cannot be enforced statically. Previous work therefore suggests that self-adjusting computation would benefit from direct language and compiler support. In this paper, we propose a language-based technique for writing and compiling self-adjusting programs from ordinary programs. To compile self-adjusting programs, we use a continuation-passing style (cps) transformation to automatically infer a conservative approximation of the dynamic data dependences. To prevent the inferred, approximate dependences from degrading the performance of change propagation, we generate memoized versions of cps functions that can reuse previous work even when they are invoked with different continuations. The approach offers a natural programming style that requires minimal changes to existing code, while statically enforcing the invariants required by self-adjusting computation. We validate the feasibility of our proposal by extending Standard ML and by integrating the transformation into MLton, a whole-program optimizing compiler for Standard ML. Our experiments indicate that the proposed compilation technique can produce self-adjusting programs whose performance is consistent with the asymptotic bounds and experimental results obtained via manual rewriting (up to a constant factor)."
},
{
"paper_title": "Flask: staged functional programming for sensor networks",
"paper_authors": [
"Geoffrey Mainland",
"Greg Morrisett",
"Matt Welsh"
],
"paper_abstract": "Severely resource-constrained devices present a confounding challenge to the functional programmer: we are used to having powerful abstraction facilities at our fingertips, but how can we make use of these tools on a device with an 8- or 16-bit CPU and at most tens of kilobytes of RAM? Motivated by this challenge, we have developed Flask, a domain specific language embedded in Haskell that brings the power of functional programming to sensor networks, collections of highly resource-constrained devices. Flask consists of a staging mechanism that cleanly separates node-level code from the meta-language used to generate node-level code fragments; syntactic support for embedding standard sensor network code; a restricted subset of Haskell that runs on sensor networks and constrains program space and time consumption; a higher-level \"data stream\" combinator library for quickly constructing sensor network programs; and an extensible runtime that provides commonly-used services. We demonstrate Flask through several small code examples as well as a compiler that generates node-level code to execute a network-wide query specified in a SQL-like language. We show how using Flask ensures constraints on space and time behavior. Through microbenchmarks and measurements on physical hardware, we demonstrate that Flask produces programs that are efficient in terms of CPU and memory usage and that can run effectively on existing sensor network hardware."
},
{
"paper_title": "Experience report: a pure shirt fits",
"paper_authors": [
"Ravi Nanavati"
],
"paper_abstract": "Bluespec is a hardware-design tools startup whose core technology is developed using Haskell. Haskell is an unusual choice for a startup because it adds technical risk to the inherent business risk. In the years since Bluespec's founding, we have discovered that Haskell's purity is an unexpected match for the development needs of a startup. Based on Bluespec's experience, we conclude that pure programming languages can be the source of a competitive advantage for startup software companies."
},
{
"paper_title": "Functional netlists",
"paper_authors": [
"Sungwoo Park",
"Jinha Kim",
"Hyeonseung Im"
],
"paper_abstract": "In efforts to overcome the complexity of the syntax and the lack of formal semantics of conventional hardware description languages, a number of functional hardware description languages have been developed. Like conventional hardware description languages, however, functional hardware description languages eventually convert all source programs into netlists, which describe wire connections in hardware circuits at the lowest level and conceal all high-level descriptions written into source programs. We develop a variant of the lambda calculus, called lλ (linear lambda), which may serve as a high-level substitute for netlists. In order to support higher-order functions, lλ uses a linear type system which enforces the linear use of variables of function type. The translation of lλ into structural descriptions of hardware circuits is sound and complete in the sense that it maps expressions only to realizable hardware circuits and that every realizable hardware circuit has a corresponding expression in lλ. To illustrate the use of lλ as a high-level substitute for netlists, we design a simple hardware description language that extends lλ with polymorphism, and use it to implement a Fast Fourier Transform circuit."
},
{
"paper_title": "NixOS: a purely functional Linux distribution",
"paper_authors": [
"Eelco Dolstra",
"Andres Löh"
],
"paper_abstract": "Existing package and system configuration management tools suffer from an imperative model, where system administration actions such as upgrading packages or changes to system configuration files are stateful: they destructively update the state of the system. This leads to many problems, such as the inability to roll back changes easily, to run multiple versions of a package side-by-side, to reproduce a configuration deterministically on another machine, or to reliably upgrade a system. In this paper we show that we can overcome these problems by moving to a purely functional system configuration model. This means that all static parts of a system (such as software packages, configuration files and system startup scripts) are built by pure functions and are immutable, stored in a way analogously to a heap in a purely function language. We have implemented this model in NixOS, a non-trivial Linux distribution that uses the Nix package manager to build the entire system configuration from a purely functional specification."
},
{
"paper_title": "Experience report: visualizing data through functional pipelines",
"paper_authors": [
"David J. Duke",
"Rita Borgo",
"Colin Runciman",
"Malcolm Wallace"
],
"paper_abstract": "Scientific visualization is the transformation of data into images. The pipeline model is a widely-used implementation strategy. This term refers not only to linear chains of processing stages, but more generally to demand-driven networks of components. Apparent parallels with functional programming are more than superficial: e.g. some pipelines support streams of data, and a limited form of lazy evaluation. Yet almost all visualization systems are implemented in imperative languages. We challenge this position. Using Haskell, we have reconstructed several fundamental visualization techniques, with encouraging results both in terms of novel insight and performance. In this paper we set the context for our modest rebellion, report some of our results, and reflect on the lessons that we have learned."
},
{
"paper_title": "Quotient lenses",
"paper_authors": [
"J. Nathan Foster",
"Alexandre Pilkiewicz",
"Benjamin C. Pierce"
],
"paper_abstract": "There are now a number of BIDIRECTIONAL PROGRAMMING LANGUAGES, where every program can be read both as a forward transformation mapping one data structure to another and as a reverse transformation mapping an edited output back to a correspondingly edited input. Besides parsimony - the two related transformations are described by just one expression - such languages are attractive because they promise strong behavioral laws about how the two transformations fit together - e.g., their composition is the identity function. It has repeatedly been observed, however, that such laws are actually a bit too strong: in practice, we do not want them \"on the nose,\" but only up to some equivalence, allowing inessential details, such as whitespace, to be modified after a round trip. Some bidirectional languages loosen their laws in this way, but only for specific, baked-in equivalences. In this work, we propose a general theory of QUOTIENT LENSES - bidirectional transformations that are well behaved modulo equivalence relations controlled by the programmer. Semantically, quotient lenses are a natural refinement of LENSES, which we have studied in previous work. At the level of syntax, we present a rich set of constructs for programming with CANONIZERS and for quotienting lenses by canonizers. We track equivalences explicitly, with the type of every quotient lens specifying the equivalences it respects. We have implemented quotient lenses as a refinement of the bidirectional string processing language Boomerang. We present a number of useful primitive canonizers for strings, and give a simple extension of Boomerang's regular-expression-based type system to statically typecheck quotient lenses. The resulting language is an expressive tool for transforming real-world, ad-hoc data formats. We demonstrate the power of our notation by developing an extended example based on the UniProt genome database format and illustrate the generality of our approach by showing how uses of quotienting in other bidirectional languages can be translated into our notation."
},
{
"paper_title": "Report on the tenth ICFP programming contest",
"paper_authors": [
"Eelco Dolstra",
"Jurriaan Hage",
"Bastiaan Heeren",
"Stefan Holdermans",
"Johan Jeuring",
"Andres Löh",
"Clara Löh",
"Arie Middelkoop",
"Alexey Rodriguez",
"John van Schie"
],
"paper_abstract": "The ICFP programming contest is a 72-hour contest, which attracts thousands of contestants from all over the world. In this report we describe what it takes to organise this contest, the main ideas behind the contest we organised, the task, how to solve it, how we created it, and how well the contestants did. This year's task was to reverse engineer the DNA of a stranded alien life form to enable it to survive on our planet. The alien's DNA had to be modified by means of a prefix that modified its meaning so that the alien's phenotype would approximate a given \"ideal\" outcome, increasing its probability of survival. About 357 teams from 39 countries solved at least part of the contest. The language of choice for discriminating hackers turned out to be C++."
}
]
},
{
"proceeding_title": "ICFP '07:Proceedings of the 12th ACM SIGPLAN international conference on Functional programming",
"proceeding_contents": [
{
"paper_title": "Ott: effective tool support for the working semanticist",
"paper_authors": [
"Peter Sewell",
"Francesco Zappa Nardelli",
"Scott Owens",
"Gilles Peskine",
"Thomas Ridge",
"Susmit Sarkar",
"Rok Strniša"
],
"paper_abstract": "It is rare to give a semantic definition of a full-scale programming language, despite the many potential benefits. Partly this is because the available metalanguages for expressing semantics - usually either L<scp>a</scp>TEX for informal mathematics, or the formal mathematics of a proof assistant - make it much harder than necessary to work with large definitions. We present a metalanguage specifically designed for this problem, and a tool, ott, that sanity-checks such definitions and compiles them into proof assistant code for Coq, HOL, Isabelle, and (in progress) Twelf, together with L<scp>a</scp>TEX code for production-quality typesetting, and OCaml boilerplate. The main innovations are:(1) metalanguage design to make definitions concise, and easy to read and edit;(2) an expressive but intuitive metalanguage for specifying binding structures; and (3) compilation to proof assistant code. This has been tested in substantial case studies, including modular specifications of calculi from the TAPL text, a Lightweight Java with Java JSR 277/294 module system proposals, and a large fragment of OCaml (around 306 rules), with machine proofs of various soundness results. Our aim with this work is to enable a phase change: making it feasible to work routinely, without heroic effort, with rigorous semantic definitions of realistic languages."
},
{
"paper_title": "None",
"paper_authors": [
"Matthieu Sozeau"
],
"paper_abstract": "Finger Trees (Hinze & Paterson, 2006) are a general purpose persistent data structure with good performance. Their genericity permits developing a wealth of structures like ordered sequences or interval trees on top of a single implementation. However, the type systems used by current functional languages do not guarantee the coherent parameterization and specialization of Finger Trees, let alone the correctness of their implementation. We present a certified implementation of Finger Trees solving these problems using the Program extension of Coq. We not only implement the structure but also prove its invariants along the way, which permit building certified structures on top of Finger Trees in an elegant way."
},
{
"paper_title": "Experience report: functional programming in c-rules",
"paper_authors": [
"Jeremy Wazny"
],
"paper_abstract": "C-Rules is a business rules management system developed by Constraint Technologies International1 (CTI), designed for use in transportation problems. Users define rules describing various aspects of a problem, such as solution costs and legality, which are then queried from a host application, typically an optimising solver. At its core, C-Rules provides a functional expression language which affords users both power and flexibility when formulating rules. In this paper we will describe our experiences of using functional programming both at the end-user level, as well as at the implementation level. We highlight some of the benefits we, and the product's users, have enjoyed from the decision to base our rule system on features such as: higher-order functions, referential transparency, and static, polymorphic typing. We also outline some of our experiences in using Haskell to build an efficient compiler for the core language."
},
{
"paper_title": "Extensible pattern matching via a lightweight language extension",
"paper_authors": [
"Don Syme",
"Gregory Neverov",
"James Margetson"
],
"paper_abstract": "Pattern matching of algebraic data types (ADTs) is a standard feature in typed functional programming languages, but it is well known that it interacts poorly with abstraction. While several partial solutions to this problem have been proposed, few have been implemented or used. This paper describes an extension to the .NET language F# called active patterns, which supports pattern matching over abstract representations of generic heterogeneous data such as XML and term structures, including where these are represented via object models in other .NET languages. Our design is the first to incorporate both ad hoc pattern matching functions for partial decompositions and \"views\" for total decompositions, and yet remains a simple and lightweight extension. We give a description of the language extension along with numerous motivating examples. Finally we describe how this feature would interact with other reasonable and related language extensions: existential types quantified at data discrimination tags, GADTs, and monadic generalizations of pattern matching."
},
{
"paper_title": "On Barron and Strachey's cartesian product function",
"paper_authors": [
"Olivier Danvy",
"Michael Spivey"
],
"paper_abstract": "Over forty years ago, David Barron and Christopher Strachey published a startlingly elegant program for the Cartesian product of a list of lists, expressing it with a three nested occurrences of the function we now call foldr. This program is remarkable for its time because of its masterful display of higher-order functions and lexical scope, and we put it forward as possibly the first ever functional pearl. We first characterize it as the result of a sequence of program transformations, and then apply similar transformations to a program for the classical power set example. We also show that using a higher-order representation of lists allows a definition of the Cartesian product function where foldr is nested only twice."
},
{
"paper_title": "Bidirectionalization transformation based on automatic derivation of view complement functions",
"paper_authors": [
"Kazutaka Matsuda",
"Zhenjiang Hu",
"Keisuke Nakano",
"Makoto Hamana",
"Masato Takeichi"
],
"paper_abstract": "Bidirectional transformation is a pair of transformations: a view function and a backward transformation. A view function maps one data structure called source onto another called view. The corresponding backward transformation reflects changes in the view to the source. Its practically useful applications include replicated data synchronization, presentation-oriented editor development, tracing software development, and view updating in the database community. However, developing a bidirectional transformation is hard, because one has to give two mappings that satisfy the bidirectional properties for system consistency. In this paper, we propose a new framework for bidirectionalization that can automatically generate a useful backward transformation from a view function while guaranteeing that the two transformations satisfy the bidirectional properties. Our framework is based on two known approaches to bidirectionalization, namely the constant complement approach from the database community and the combinator approach from the programming language community, but it has three new features: (1) unlike the constant complement approach, it can deal with transformations between algebraic data structures rather than just tables; (2) unlike the combinator approach, in which primitive bidirectional transformations have to be explicitly given, it can derive them automatically; (3) it generates a view update checker to validate updates on views, which has not been well addressed so far. The new framework has been implemented and the experimental results show that our framework has promise."
},
{
"paper_title": "Tangible functional programming",
"paper_authors": [
"Conal M. Elliott"
],
"paper_abstract": "We present a user-friendly approach to unifying program creation and execution, based on a notion of \"tangible values\" (TVs), which are visual and interactive manifestations of pure values, including functions. Programming happens by gestural composition of TVs. Our goal is to give end-users the ability to create parameterized, composable content without imposing the usual abstract and linguistic working style of programmers. We hope that such a system will put the essence of programming into the hands of many more people, and in particular people with artistic/visual creative style. In realizing this vision, we develop algebras for visual presentation and for \"deep\" function application, where function and argument may both be nested within a structure of tuples, functions, etc. Composition gestures are translated into chains of combinators that act simultaneously on statically typed values and their visualizations."
},
{
"paper_title": "Termination analysis and call graph construction for higher-order functional programs",
"paper_authors": [
"Damien Sereni"
],
"paper_abstract": "The analysis and verification of higher-order programs raises the issue of control-flow analysis for higher-order languages. The problem of constructing an accurate call graph for a higher-order program has been the topic of extensive research, and numerous methods for flow analysis, varying in complexity and precision, have been suggested. While termination analysis of higher-order programs has been studied, there has been little examination of the impact of call graph construction on the precision of termination checking. We examine the effect of various control-flow analysis techniques on a termination analysis for higher-order functional programs. We present a termination checking framework and instantiate this with three call graph constructions varying in precision and complexity, and illustrate by example the impact of the choice of call graph construction. Our second aim is to use the resulting analyses to shed light on the relationship between control-flow analyses. We prove precise inclusions between the classes of programs recognised as terminating by the same termination criterion over different call graph analyses, giving one of the first characterisations of expressive power of flow analyses for higher-order programs."
},
{
"paper_title": "Relating complexity and precision in control flow analysis",
"paper_authors": [
"David Van Horn",
"Harry G. Mairson"
],
"paper_abstract": "We analyze the computational complexity of kCFA, a hierarchy of control flow analyses that determine which functions may be applied at a given call-site. This hierarchy specifies related decision problems, quite apart from any algorithms that may implement their solutions. We identify a simple decision problem answered by this analysis and prove that in the 0CFA case, the problem is complete for polynomial time. The proof is based on a nonstandard, symmetric implementation of Boolean logic within multiplicative linear logic (MLL). We also identify a simpler version of 0CFA related to η-expansion, and prove that it is complete for logarithmic space, using arguments based on computing paths and permutations. For any fixed k>0, it is known that kCFA (and the analogous decision problem) can be computed in time exponential in the program size. For k=1, we show that the decision problem is NP-hard, and sketch why this remains true for larger fixed values of k. The proof technique depends on using the approximation of CFA as an essentially nondeterministic computing mechanism, as distinct from the exactness of normalization. When k=n, so that the \"depth\" of the control flow analysis grows linearly in the program length, we show that the decision problem is complete for exponential time. In addition, we sketch how the analysis presented here may be extended naturally to languages with control operators. All of the insights presented give clear examples of how straightforward observations about linearity, and linear logic, may in turn be used to give a greater understanding of functional programming and program analysis."
},
{
"paper_title": "Inductive reasoning about effectful data types",
"paper_authors": [
"Andrzej Filinski",
"Kristian Støvring"
],
"paper_abstract": "We present a pair of reasoning principles, definition and proof by rigid induction, which can be seen as proper generalizations of lazy-datatype induction to monadic effects other than partiality. We further show how these principles can be integrated into logical-relations arguments, and obtain as a particular instance a general and principled proof that the success-stream and failure-continuation models of backtracking are equivalent. As another application, we present a monadic model of general search trees, not necessarily traversed depth-first. The results are applicable to both lazy and eager languages, and we emphasize this by presenting most examples in both Haskell and SML."
},
{
"paper_title": "A type directed translation of MLF to system F",
"paper_authors": [
"Daan Leijen"
],
"paper_abstract": "The MLF type system by Le Botlan and Rémy is a natural extension of Hindley-Milner type inference that supports full first-class polymorphism, where types can be of higher-rank and impredicatively instantiated. Even though MLF is theoretically very attractive, it has not seen widespread adoption. We believe that this partly because it is unclear how the rich language of MLF types relate to standard System F types. In this article we give the first type directed translation of MLF terms to System F terms. Based on insight gained from this translation, we also define \"Rigid MLF\" (MLF=), a restriction of MLF where all bound values have a System F type. The expressiveness of MLF= is the same as that of boxy types, but MLF= needs fewer annotations and we give a detailed comparison between them."
},
{
"paper_title": "Declarative programming for artificial intelligence applications",
"paper_authors": [
"John W. Lloyd"
],
"paper_abstract": "In this talk, I will consider some possible extensions to existing functional programming languages that would make them more suitable for the important and growing class of artificial intelligence applications. First, I will motivate the need for these language extensions. Then I will give some technical detail about these extensions that provide the logic programming idioms, probabilistic computation, and modal computation. Some examples will be given to illustrate these ideas which have been implemented in the Bach programming language that is an extension of Haskell."
},
{
"paper_title": "McErlang: a model checker for a distributed functional programming language",
"paper_authors": [
"Lars-Åke Fredlund",
"Hans Svensson"
],
"paper_abstract": "We present a model checker for verifying distributed programs written in the Erlang programming language. Providing a model checker for Erlang is especially rewarding since the language is by now being seen as a very capable platform for developing industrial strength distributed applications with excellent failure tolerance characteristics. In contrast to most other Erlang verification attempts, we provide support for a very substantial part of the language. The model checker has full Erlang data type support, support for general process communication, node semantics (inter-process behave subtly different from intra-process communication), fault detection and fault tolerance through process linking, and can verify programs written using the OTP Erlang component library (used by most modern Erlang programs). As the model checking tool is itself implemented in Erlang we benefit from the advantages that a (dynamically typed) functional programming language offers: easy prototyping and experimentation with new verification algorithms, rich executable models that use complex data structures directly programmed in Erlang, the ability to treat executable models interchangeably as programs (to be executed directly by the Erlang interpreter) and data, and not least the possibility to cleanly structure and to cleanly combine various verification sub-tasks. In the paper we discuss the design of the tool and provide early indications on its performance."
},
{
"paper_title": "Experience report: the reactis validation tool",
"paper_authors": [
"Steve Sims",
"Daniel C. DuVarney"
],
"paper_abstract": "Reactis is a commercially successful testing and validation tool which is implemented almost entirely in Standard ML. Our experience using a functional language to develop a commercial product has led us to the conclusion that while functional languages have some disadvantages, in the case of Reactis the benefits of a functional language substantially outweigh the drawbacks."
},
{
"paper_title": "iTasks: executable specifications of interactive work flow systems for the web",
"paper_authors": [
"Rinus Plasmeijer",
"Peter Achten",
"Pieter Koopman"
],
"paper_abstract": "In this paper we introduce the iTask system: a set of combinators to specify work flows in a pure functional language at a very high level of abstraction. Work flow systems are automated systems in which tasks are coordinated that have to be executed by humans and computers. The combinators that we propose support work flow patterns commonly found in commercial work flow systems. Compared with most of these commercial systems, the iTask system offers several advantages: tasks are statically typed, tasks can be higher order, the combinators are fully compositional, dynamic and recursive work flows can be specified, and last but not least, the specification is used to generate an executable web-based multi-user work flow application. With the iTask system, useful work flows can be defined which cannot be expressed in other systems: work can be interrupted and subsequently directed to other workers for further processing. The implementation is special as well. It is based on the Clean iData toolkit which makes it possible to create fully dynamic, interactive, thin client web applications. Thanks to the generic programming techniques used in the iData toolkit, the programming effort is reduced significantly: state handling, form rendering, user interaction, and storage management is handled automatically. The iTask system allows a task to be regarded as a special kind of persistent redex being reduced by the application user via task completion. The combinators control the order in which these redexes are made available to the application user. The system rewrites the persistent task redexes in a similar way as functions are rewritten in lazy functional languages."
},
{
"paper_title": "Experience report: scheme in commercial web application development",
"paper_authors": [
"Noel Welsh",
"David Gurnell"
],
"paper_abstract": "Over the past year Untyped has developed some 40'000 lines of Scheme code for a variety of web-based applications, which receive over 10'000 hits a day. This is, to our knowledge, the largest web-based application deployment of PLT Scheme. Our experiences developing with PLT Scheme show that deficiencies in the existing infrastructure can be easily overcome, and we can exploit advanced language features to improve productivity. We conclude that PLT Scheme makes an excellent platform for developing web-based applications, and is competitive with more mainstream choices."
},
{
"paper_title": "Functional pearl: the great escape or, how to jump the border without getting caught",
"paper_authors": [
"David Herman"
],
"paper_abstract": "Filinski showed that callcc and a single mutable reference cell are sufficient to express the delimited control operators shift and reset. However, this implementation interacts poorly with dynamic bindings like exception handlers. We present a variation on Filinski's encoding of delimited continuations that behaves appropriately in the presence of exceptions and give an implementation in Standard ML of New Jersey. We prove the encoding correct with respect to the semantics of delimited dynamic binding."
},
{
"paper_title": "Adding delimited and composable control to a production programming environment",
"paper_authors": [
"Matthew Flatt",
"Gang Yu",
"Robert Bruce Findler",
"Matthias Felleisen"
],
"paper_abstract": "Operators for delimiting control and for capturing composable continuations litter the landscape of theoretical programming language research. Numerous papers explain their advantages, how the operators explain each other (or don't), and other aspects of the operators' existence. Production programming languages, however, do not support these operators, partly because their relationship to existing and demonstrably useful constructs - such as exceptions and dynamic binding - remains relatively unexplored. In this paper, we report on our effort of translating the theory of delimited and composable control into a viable implementation for a production system. The report shows how this effort involved a substantial design element, including work with a formal model, as well as significant practical exploration and engineering. The resulting version of PLT Scheme incorporates the expressive combination of delimited and composable control alongside dynamic-wind, dynamic binding, and exception handling. None of the additional operators subvert the intended benefits of existing control operators, so that programmers can freely mix and match control operators."
},
{
"paper_title": "Compiling with continuations, continued",
"paper_authors": [
"Andrew Kennedy"
],
"paper_abstract": "We present a series of CPS-based intermediate languages suitable for functional language compilation, arguing that they have practical benefits over direct-style languages based on A-normal form (ANF) or monads. Inlining of functions demonstrates the benefits most clearly: in ANF-based languages, inlining involves a re-normalization step that rearranges let expressions and possibly introduces a new 'join point' function, and in monadic languages, commuting conversions must be applied; in contrast, inlining in our CPS language is a simple substitution of variables for variables. We present a contification transformation implemented by simple rewrites on the intermediate language. Exceptions are modelled using so-called 'double-barrelled' CPS. Subtyping on exception constructors then gives a very straightforward effect analysis for exceptions. We also show how a graph-based representation of CPS terms can be implemented extremely efficiently, with linear-time term simplification."
},
{
"paper_title": "Type-safe higher-order channels in ML-like languages",
"paper_authors": [
"Sungwoo Park"
],
"paper_abstract": "As a means of transmitting not only data but also code encapsulated within functions, higher-order channels provide an advanced form of task parallelism in parallel computations. In the presence of mutable references, however, they pose a safety problem because references may be transmitted to remote threads where they are no longer valid. This paper presents an ML-like parallel language with type-safe higher-order channels. By type safety, we mean that no value written to a channel contains references, or equivalently, that no reference escapes via a channel from the thread where it is created. The type system uses a typing judgment that is capable of deciding whether the value to which a term evaluates contains references or not. The use of such a typing judgment also makes it easy to achieve another desirable feature of channels, channel locality, that associates every channel with a unique thread for serving all values addressed to it. Our type system permits mutable references in sequential computations and also ensures that mutable references never interfere with parallel computations. Thus it provides both flexibility in sequential programming and ease of implementing parallel computations."
},
{
"paper_title": "Evaluating high-level distributed language constructs",
"paper_authors": [
"Jan Nyström",
"Phil Trinder",
"David King"
],
"paper_abstract": "The paper investigates the impact of high level distributed programming language constructs on the engineering of realistic software components. Based on reengineering two non-trivial telecoms components, we compare two high-level distributed functional languages, Erlang and GdH, with conventional distributed technologies C++/CORBA and C++/UDP. We investigate several aspects of high-level distributed languages including the impact on code size of high-level constructs. We identify three language constructs that primarily contribute to the reduction in application size and quantify their impact. We provide the first evidence based on analysis of a substantial system to support the widely-held supposition that high-level constructs reduce programming effort associated with specifying distributed coordination. We investigate whether a language with sophisticated high-level fault tolerance can produce suitably robust components, and both measure and analyse the additional programming effort needed to introduce robustness. Finally, we investigate some implications of a range of type systems for engineering distributed software."
},
{
"paper_title": "Experience report: using functional programming to manage a linux distribution",
"paper_authors": [
"Clifford Beshers",
"David Fox",
"Jeremy Shaw"
],
"paper_abstract": "We report on our experience using functional programming languages in the development of a commercial GNU/Linux distribution, discussing features of several significant systems: hardware detection and system configuration; OS installer CD creation; package compilation and management. Static typing helps compensate for the lack of a complete testing lab and helps us be effective with a very small team. Most importantly, we believe that going beyond merely using functional languages to using purely functional designs really helps to create simple, effective tools."
},
{
"paper_title": "Subtyping and intersection types revisited",
"paper_authors": [
"Frank Pfenning"
],
"paper_abstract": "Church's system of simple types has proven to be remarkably robust: call-by-name, call-by-need, and call-by-value languages, with or without effects, and even logical frameworks can be based on the same typing rules. When type systems become more expressive, this unity fractures. An early example is the value restriction for parametric polymorphism which is necessary for ML but not Haskell; a later manifestation is the lack of distributivity of function types over intersections in call-by-value languages with effects. In this talk we reexamine the logical justification for systems of subtyping and intersection types and then explore the consequences in two different settings: logical frameworks and functional programming. In logical frameworks functions are pure and their definitions observable, but complications could arise from the presence of dependent types. We show that this is not the case, and that we can obtain soundness and completeness theorems for a certain axiomatization of subtyping. We also sketch a connection to the type-theoretic notion of proof irrelevance. In functional programming we investigate how the encapsulation of effects in monads interacts with subtyping and intersection types, providing an updated analysis of the value restriction and related phenomena. While at present this study is far from complete, we believe that its origin in purely logical notions will give rise to a uniform theory that can easily be adapted to specific languages and their operational interpretations."
},
{
"paper_title": "Experience report: building an eclipse-based IDE for Haskell",
"paper_authors": [
"Leif Frenzel"
],
"paper_abstract": "This paper summarizes experiences from an open source project that builds a free Haskell IDE based on Eclipse (an open source IDE platform). Eclipse is extensible and has proved to be a good basis for IDEs for several programming languages. Difficulties arise mainly because it is written in Java and requires extensions to be written in Java. This made it hard to reuse existing development tools implemented in Haskell, and turned out to be a considerable obstacle to finding contributors. Several approaches to resolve these issues are described and their advantages and disadvantages discussed."
},
{
"paper_title": "User-friendly functional programming for web mashups",
"paper_authors": [
"Rob Ennals",
"David Gay"
],
"paper_abstract": "MashMaker is a web-based tool that makes it easy for a normal user to create web mashups by browsing around, without needing to type, or plan in advance what they want to do. Like a web browser, Mashmaker allows users to create mashups by browsing, rather than writing code, and allows users to bookmark interesting things they find, forming new widgets - reusable mashup fragments. Like a spreadsheet, MashMaker mixes program and data and allows ad-hoc unstructured editing of programs. MashMaker is also a modern functional programming language with non-side effecting expressions, higher order functions, and lazy evaluation. MashMaker programs can be manipulated either textually, or through an interactive tree representation, in which a program is presented together with the values it produces. In order to cope with this unusual domain, MashMaker contains a number of deviations from normal function languages. The most notable of these is that, in order to allow the programmer to write programs directly on their data, all data is stored in a single tree, and evaluation of an expression always takes place at a specific point in this tree, which also functi
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment