Partial redundancy elimination (PRE) is an optimization that eliminates computations which are redundant on some, but not all, control-flow paths.
Example:
if (...) {
    // ...
    int a = x + y;
    // ...
} else {
    // ...
    // no use of x + y
    // ...
}
int b = x + y;
After PRE:
int t;
if (...) {
    // ...
    t = x + y;
    int a = t;
    // ...
} else {
    // ...
    // no use of x + y
    // ...
    t = x + y;
}
int b = t;
Notice that we inserted a computation of x + y in the branch that didn't originally need to compute it.
Another way to avoid redundant computations would be the following:
int t = x + y;
if (...) {
    // ...
    int a = t;
    // ...
} else {
    // ...
    // no use of x + y
    // ...
}
int b = t;
However, this transformation greatly extends t's lifetime, so it's not considered an acceptable solution. Traditional CFG/SSA-based compiler transformations take a lot of care to avoid over-extending lifetimes; they generally would only consider this kind of hoisting for loops.
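For contrast, here's a minimal sketch of the loop case where lifetime-extending hoisting is considered worthwhile: loop-invariant code motion, where an expression recomputed on every iteration is moved out of the loop. (The function names here are mine, and this is separate from the straight-line hoisting shown above.)

```c
/* Before hoisting: x + y is loop-invariant but recomputed every iteration. */
int sum_naive(int n, int x, int y) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += x + y;        /* same value computed n times */
    return s;
}

/* After hoisting: t's lifetime now spans the whole loop, but the payoff
 * (one computation instead of n) makes the extension acceptable. */
int sum_hoisted(int n, int x, int y) {
    int t = x + y;         /* hoisted out of the loop */
    int s = 0;
    for (int i = 0; i < n; i++)
        s += t;
    return s;
}
```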
You can apply this straight-line hoisting transformation to CPS terms, since it relies only on lexical scoping. Lexical scope implies def-use dominance, but you can't go in the reverse direction: you can't translate CFGs in SSA form into CPS terms without this kind of lifetime-extending hoisting. Hence I don't see a way to perform the equivalent of the PRE transformation in a CPS-based compiler. You'd need to rely on a global code motion (GCM) pass, as sea-of-nodes compilers do, to bring the lifetimes back under control, and GCM would have to operate on something more like a CFG, not on lexically scoped terms.
EDIT: The conclusion in the last paragraph is wrong. While this would be an issue if you were just rewriting direct-style terms, it isn't an issue with CPS terms. As Michel points out in the comments, the SSA-to-CPS translation of the PRE example does the right thing.
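To make that concrete, here is a rough sketch of what the translation looks like. In the SSA form of the post-PRE code, both branches jump to a join block that receives t as a block parameter; under SSA-to-CPS translation, that block parameter becomes the parameter of a join continuation. Since C has no local functions, the continuation is approximated here as a separate function, and the names are mine:

```c
/* The SSA join block, with block parameter t, becomes a continuation:
 * a function taking t as an argument. */
int join(int t) {
    int b = t;       /* int b = t; at the join point */
    return b;
}

/* Each predecessor computes x + y and passes the result to the
 * continuation, exactly mirroring the post-PRE code above. t never
 * outlives the branch that computed it, so no lifetime is extended. */
int pre_example(int cond, int x, int y) {
    if (cond) {
        int t1 = x + y;   /* the path that originally used x + y */
        int a = t1;       /* int a = t; */
        (void)a;
        return join(t1);
    } else {
        int t2 = x + y;   /* the inserted computation */
        return join(t2);
    }
}
```

The key point is that the continuation's parameter plays the same role as the SSA block parameter (phi), which is what lets the CPS form express PRE without hoisting t into an enclosing scope.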
Thanks Michel, that's so obvious I don't know why it didn't occur to me. Now I feel dumb. :)