Consider a simple definition using guards:

```haskell
foo x
  | x == C1   = {-# LIKELY 10 #-} e1
  | otherwise = {-# LIKELY 0 #-} e2
```
where `C1` is one of the constructors of `T`, which is defined with uniform default weights:

```haskell
data T = {-# LIKELY 1000 #-} C1
       | {-# LIKELY 1000 #-} C2
       | {-# LIKELY 1000 #-} C3
```
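Putting the two fragments together, a complete sketch might look as follows. This is hypothetical: the `LIKELY` pragma is proposed syntax and is not accepted by today's GHC, and the `Int` results `1` and `2` merely stand in for `e1` and `e2` (the `deriving Eq` is assumed so that `x == C1` is well-typed):

```haskell
-- Hypothetical sketch; LIKELY is proposed syntax, not valid in current GHC.
data T = {-# LIKELY 1000 #-} C1
       | {-# LIKELY 1000 #-} C2
       | {-# LIKELY 1000 #-} C3
       deriving Eq

foo :: T -> Int
foo x
  | x == C1   = {-# LIKELY 10 #-} 1  -- e1
  | otherwise = {-# LIKELY 0 #-} 2   -- e2
```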
After desugaring and initial optimization we arrive at the following core code:
```
case (case x of { C1 -> (Weight: 1000) True;
                  C2 -> (Weight: 1000) False;
                  C3 -> (Weight: 1000) False; }) of
  { True  -> (Weight: 10) e1;
    False -> (Weight: 0)  e2;
  }
```
The weights on C1/2/3 come from the default likelihoods of T.
The weights 10/0 are from the pragmas in the function definition.
This is correct so far and what we expect.
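The inner case is simply the inlined comparison `x == C1`. As a sketch (with a hypothetical helper name, and assuming the data type `T` from above), the specialized derived equality looks like:

```haskell
-- Hypothetical helper: what the call `x == C1` inlines to after
-- specializing T's derived Eq instance. The 1000/1000/1000 branch
-- weights are inherited from T's default likelihoods.
eqC1 :: T -> Bool
eqC1 x = case x of
  C1 -> True
  C2 -> False
  C3 -> False
```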
We then apply the case-of-case transformation:
```
case x of {
  C1 -> (Weight: 1000)
        case True of { True  -> (Weight: 10) e1;
                       False -> (Weight: 0)  e2; };
  C2 -> (Weight: 1000)
        case True of { True  -> (Weight: 10) e1;
                       False -> (Weight: 0)  e2; };
  C3 -> (Weight: 1000)
        case True of { True  -> (Weight: 10) e1;
                       False -> (Weight: 0)  e2; };
}
```
Then, during case-of-known-constructor, we eliminate the redundant inner cases, resulting in the following code:
```
case x of {
  C1 -> (Weight: 1000) e1;
  C2 -> (Weight: 1000) e2;
  C3 -> (Weight: 1000) e2;
}
```
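In source terms, the simplifier has effectively rewritten `foo` into the direct pattern match below (a sketch reusing `T` from above, with `1`/`2` standing in for `e1`/`e2`). Behaviour is preserved, but this is exactly the point where the user's 10/0 weights would need to be carried over:

```haskell
-- Sketch: the post-simplification shape of foo, assuming T from above.
-- Semantically equivalent to the guarded definition, but the user's
-- LIKELY 10/0 annotations are no longer attached to the alternatives.
fooSimplified :: T -> Int
fooSimplified x = case x of
  C1 -> 1  -- e1
  C2 -> 2  -- e2
  C3 -> 2  -- e2
```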
The end result is that we have dropped the user-given weights, instead using the weights from the definition of `==` (i.e. `T`'s default likelihoods).