Initialize

gitflow | git |
---|---|
git flow init | git init |
 | git commit --allow-empty -m "Initial commit" |
 | git checkout -b develop master |
Why do compilers even bother exploiting the undefinedness of signed overflow? And what are those mysterious cases where it helps?

A lot of people (myself included) are against transforms that aggressively exploit undefined behavior, but I think it's useful to know what compiler writers accomplish by doing this.

TL;DR: C doesn't work very well if int != register width, but (for backwards compatibility) int is 32-bit on all major 64-bit targets, and this causes quite hairy problems for code generation and optimization in some fairly common cases. Exploiting signed-overflow UB is an attempt to work around this.
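As a concrete sketch of one of those hairy cases (the function below is illustrative, not from any particular codebase): index an array with a 32-bit `int` on a 64-bit target, using a stride that could in principle overflow the counter.

```c
/* Sum every other element. The address computation a + i is 64-bit,
 * but i is a 32-bit int. If signed overflow wrapped (as with -fwrapv,
 * or as it does for unsigned types), i += 2 could wrap past INT_MAX,
 * and the compiler would have to truncate and re-sign-extend i on
 * every iteration. Because signed overflow is UB, it may assume i
 * never wraps, widen i to a 64-bit register once, and vectorize. */
float sum_even(const float *a, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i += 2)
        s += a[i];
    return s;
}
```

Comparing the assembly from `gcc -O2 -S` with and without `-fwrapv` (which makes signed overflow well-defined wrapping) typically shows exactly this difference. Declaring `i` as `size_t` or `ptrdiff_t`, i.e. register width, sidesteps the problem entirely, which is the `int != register width` point above.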
Monads and delimited control are very closely related, so it isn’t too hard to understand them in terms of one another. From a monadic point of view, the big idea is that if you have the computation `m >>= f`, then `f` is `m`’s continuation. It’s the function that is called with `m`’s result to continue execution after `m` returns.

If you have a long chain of binds, the continuation is just the composition of all of them. So, for example, if you have

`m >>= f >>= g >>= h`

then the continuation of `m` is `f >=> g >=> h`. Likewise, the continuation of `m >>= f` is `g >=> h`.
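To see this concretely, here is a minimal sketch (the `chain` helper and `main` are illustrative additions, not from the text above): the monad associativity law lets us regroup any chain of binds so that everything to the right of `m` collapses into a single Kleisli arrow, namely `m`’s continuation.

```haskell
import Control.Monad ((>=>))

-- Associativity: m >>= f >>= g >>= h  ==  m >>= (f >=> g >=> h).
-- The composed Kleisli arrow (f >=> g >=> h) is exactly m's continuation.
chain :: Monad m => m a -> (a -> m b) -> (b -> m c) -> (c -> m d) -> m d
chain m f g h = m >>= (f >=> g >=> h)

main :: IO ()
main = chain (pure 1) (pure . (+ 1)) (pure . (* 2)) print  -- prints 4
```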
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE MultiParamTypeClasses #-}
module Bad where

import Prelude
import Data.Kind ( Type )
import Generics.MultiRec.TH
/// Playground - noun: a place where people can play
/// I am the very model of a modern Judgement General
//: # Algorithm W
//: In this playground we develop a complete implementation of the classic
//: algorithm W for Hindley-Milner polymorphic type inference in Swift.
//: ## Introduction
(* Compile with:
   $ ocamlfind ocamlc -package angstrom,stdio -linkpkg tychk.ml -o tychk
   Example use:
   $ ./tychk <<EOF
   > let f = (fun x -> x) : forall a. a -> a
   > in f f
   > EOF
   input : forall a. a -> a
*)
{-# language Strict, LambdaCase, BlockArguments #-}
{-# options_ghc -Wincomplete-patterns #-}
{-
Minimal demo of "glued" evaluation in the style of Olle Fredriksson:
https://github.com/ollef/sixty

The main idea is that during elaboration, we need different evaluation
## Principal type-schemes for functional programs | |
**Luis Damas and Robin Milner, POPL '82** | |
> module W where | |
> import Data.List | |
> import Data.Maybe | |
> import Data.Function |
The idea behind program analysis is simple, right? You just want to know things about your program before it runs, usually because you don't want unexpected problems to arise (those are better in movies). Then why does looking at Wikipedia give you headaches? There are just so many approaches, tools, languages...

In this article I would like to give a glimpse of an overarching approach to program analysis, based on ideas from abstract interpretation. My goal is not to pinpoint a specific technique, but rather to show how they share common core concepts, with the differences due mostly to algorithmic challenges. In other words, static analyses have a shared goal, but making them precise and performant is the challenge.
Code is meant to be executed by a computer. Take the following very simple function:
fun cantulupe(x) = {