@seanjensengrey
Forked from anonymous/gist:127a99a05058afabddfc
Last active November 11, 2017 14:34
Shedskin

*** SHED SKIN Python-to-C++ Compiler *** Copyright 2005-2013 Mark Dufour; License GNU GPL version 3 (See LICENSE)

infer.py: perform iterative type analysis

We combine two techniques from the literature to analyze both parametric polymorphism and data polymorphism adaptively: Agesen's cartesian product algorithm [0] and Plevyak's iterative flow analysis [1] (the data-polymorphic part). For details about these algorithms, see Ole Agesen's excellent PhD thesis [2]; for details about the Shed Skin implementation, see Mark Dufour's MSc thesis [3].

The cartesian product algorithm duplicates functions (or their graph counterparts) based on the cartesian product of possible argument types, whereas iterative flow analysis duplicates classes based on imprecisions observed at assignment points. The two integers mentioned in the graph.py description keep track of duplicates along these two dimensions (first the class duplicate number, then the function duplicate number).
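The core idea of the cartesian product algorithm can be sketched in a few lines. This is a toy illustration with hypothetical names, not Shed Skin's actual data structures: for each call site, the set of possible types of each argument is known, and one function template is created per combination, so each template can be analyzed with precise, monomorphic argument types.

```python
# Minimal sketch of the CPA's duplication step (hypothetical names,
# not Shed Skin's actual representation).
from itertools import product

def cpa_templates(func_name, arg_type_sets):
    """Return one (func_name, argument_types) template per combination
    of possible argument types."""
    return [(func_name, combo) for combo in product(*arg_type_sets)]

# A function called with an int-or-float first argument and a
# str-or-bytes second argument yields 2 * 2 = 4 templates.
templates = cpa_templates('duplicate', [{'int', 'float'}, {'str', 'bytes'}])
```

Each resulting template is then analyzed separately, which is exactly where the potential explosion comes from: the number of templates is the product of the argument type-set sizes.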

The combined technique scales reasonably well, but can still explode in many cases. There are many ways to improve this. Some ideas:

  • An iterative deepening approach, merging redundant duplicates after each deepening
  • Adding and propagating filters across variables: e.g. 'a+1; a=b' implies that both a and b must be of a type that implements 'add'
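The filter idea in the second bullet can be sketched as follows. This is a hypothetical toy representation (type sets as Python sets of type names, not Shed Skin's internal form): the expression 'a+1' filters the types of a down to those supporting addition, and the assignment 'a = b' propagates the same filter to b.

```python
# Toy sketch of type filters (hypothetical representation).
SUPPORTS_ADD = {'int', 'float', 'str', 'list'}  # assumed set of addable types

def apply_filter(types, allowed):
    """Restrict a variable's inferred type set to the allowed types."""
    return types & allowed

a_types = {'int', 'list', 'NoneType'}
b_types = {'float', 'NoneType'}

a_types = apply_filter(a_types, SUPPORTS_ADD)   # 'a+1' filters a
b_types = apply_filter(b_types, SUPPORTS_ADD)   # 'a = b' propagates the filter to b
```

Pruning impossible types early like this shrinks the cartesian products the CPA has to enumerate.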

A complementary but very practical approach to (greatly) improving scalability would be to profile programs before compiling them, yielding quite precise (lower-bound) type information. Type inference can then be used to 'fill in the gaps'.
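The profiling idea could look something like the sketch below. The decorator is a hypothetical helper, not part of Shed Skin: it records the argument types actually observed at run time, giving the lower-bound information that static inference would then extend.

```python
# Hypothetical profiling helper: record observed argument types.
from collections import defaultdict
import functools

observed = defaultdict(set)

def profile_types(func):
    """Record the tuple of argument type names for every call."""
    @functools.wraps(func)
    def wrapper(*args):
        observed[func.__name__].add(tuple(type(a).__name__ for a in args))
        return func(*args)
    return wrapper

@profile_types
def scale(x, factor):
    return x * factor

scale(3, 2)      # observed: ('int', 'int')
scale(2.5, 4)    # observed: ('float', 'int')
```

The recorded type tuples are exact for the profiled runs, but only a lower bound: unexercised paths still need inference.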

iterative_dataflow_analysis():

FORWARD PHASE

  • Propagate types along the constraint graph (propagate())
  • All the while creating function duplicates using the cartesian product algorithm (cpa())
  • When creating a function duplicate, fill in allocation points with the correct type (ifa_seed_template())

BACKWARD PHASE

  • Determine which classes to duplicate, based on the imprecision points found (ifa())
  • From each imprecision point, follow the constraint graph backwards to find the involved allocation points
  • Duplicate the classes, and spread the duplicates over these allocation points
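The backward walk in the second bullet amounts to reverse reachability over the constraint graph. A toy sketch, again with hypothetical structures rather than Shed Skin's actual representation:

```python
# Toy backward walk from an imprecision point (hypothetical structures).
def find_alloc_points(imprecise, edges, alloc_points):
    """Return the allocation points reachable backwards from 'imprecise'
    along the (src, dst) constraint edges."""
    preds = {}
    for src, dst in edges:
        preds.setdefault(dst, []).append(src)
    seen, work = set(), [imprecise]
    while work:
        node = work.pop()
        if node not in seen:
            seen.add(node)
            work.extend(preds.get(node, []))
    return seen & alloc_points

# 'c' is imprecise; only alloc1 and alloc2 flow into it, so alloc3 is
# not involved and its class need not be duplicated for this point.
involved = find_alloc_points(
    'c',
    [('alloc1', 'a'), ('alloc2', 'b'), ('a', 'c'), ('b', 'c'), ('alloc3', 'd')],
    {'alloc1', 'alloc2', 'alloc3'})
```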

CLEANUP

  • Quit if there are no further imprecision points (ifa() did not find anything)
  • Otherwise, restore the constraint graph to its original state and restart
  • All the while maintaining the types for each allocation point in gx.alloc_info

Update: we now analyze programs incrementally, adding several functions and redoing the full analysis each time. This seems to greatly help keep the CPA from exploding early on.
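The incremental scheme can be sketched as follows, with a stand-in analysis function (the real analysis is the whole forward/backward machinery above; the batch size and helper names here are illustrative assumptions):

```python
# Toy sketch of incremental analysis: add functions in batches and
# redo the (here trivial) full analysis after each batch.
def analyze(funcs):
    """Stand-in for a full analysis pass over the functions seen so far."""
    return {f: len(f) for f in funcs}

def incremental_analysis(all_funcs, batch_size=2):
    analyzed, result = [], {}
    for i in range(0, len(all_funcs), batch_size):
        analyzed.extend(all_funcs[i:i + batch_size])
        result = analyze(analyzed)   # full re-analysis over the extended set
    return result

result = incremental_analysis(['f', 'gg', 'hhh', 'iiii', 'jjjjj'])
```

Earlier batches reach a stable typing before later functions are added, so imprecision is confronted a few functions at a time instead of all at once.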

[0] Agesen's cartesian product algorithm: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.8177

[1] Plevyak's iterative flow analysis: http://www.plevyak.com/ifa-submit.pdf

[2] Ole Agesen's PhD thesis: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.93.4969

[3] Mark Dufour's MSc thesis: http://mark.dufour.googlepages.com/shedskin.pdf
