Floating Point Arcade


A Shady Coin Toss

A shady game master approaches...

Hey there! You look like you would enjoy a good game of chance.

This machine 🎰 outputs "certified random"™️ floating point numbers.
Let's use it as a coin flip. You win on "heads", and you lose on "tails".

Floating point numbers cover a wide number range. For this game, the range [0, 0.5) is "tails",
and the range [0.5, 1) is "heads". If the random number isn't in either of those ranges,
then the flip will be redone.

At this point, it would be fair for us to be skeptical of the shady game master. Maybe we should write a quick simulation of the game.

```rust
use rand::Rng;

fn main() {
    let mut rng = rand::thread_rng();
    let mut win = 0u64;
    let mut lose = 0u64;

    while (win + lose) < 1_000_000 {
        let flip: f32 = rng.gen();

        if flip >= 0.5 && flip < 1.0 {
            win += 1;
        } else if flip >= 0.0 && flip < 0.5 {
            lose += 1;
        }
    }
    println!("win={win} lose={lose}");
}
```

One run of the simulation yields: win=500011 lose=499989. Our simulation gives us some confidence to play the game, at least for 10 rounds. We pull the lever of the "certified random" machine 🎰 10 times, and the results are: win=0 lose=10

Wow! Look at your bad luck 😉.

No, you can't look inside my "certified random" machine!
But a trusted third party has inspected and certified its proprietary randomness algorithm.

Motivated by spite, we now have the energy to learn about floating point numbers.

IEEE 754

IEEE 754 is a standard for floating point arithmetic.

32 bit floating point numbers are represented by three parts:

  • 1 sign bit to indicate if positive or negative (+/-)
  • 8 bits for the exponent (2^exp)
  • 23 bits for the "fraction", the digits trailing after the decimal point (e.g. 1.001, but not the leading "1")

These parts combine into a single floating point number:

$$ \pm\, 2^{exponent} * 1.fraction $$

Wait, how does "fraction" work? When humans write fractional numbers, we usually write in base 10. For example, 1.125 would correspond to:

| 10^0 | 10^-1 | 10^-2 | 10^-3 |
|------|-------|-------|-------|
| 1.   | 1     | 2     | 5     |

Computers use binary instead, representing numbers in base 2. For example, 1.125 in binary would be:

| 2^0 | 2^-1 | 2^-2 | 2^-3 |
|-----|------|------|------|
| 1.  | 0    | 0    | 1    |

IEEE 754 uses 23 bits for the fraction and does not store the leading "1" because it is the same for all floats. So this table has 24 places, ranging from $2^{0}$ to $2^{-23}$:

| 2^0 | 2^-1 | 2^-2 | 2^-3 | ... | 2^-21 | 2^-22 | 2^-23 |
|-----|------|------|------|-----|-------|-------|-------|
| 1.  | 0    | 0    | 1    | ... | 0     | 0     | 0     |

1.125 in this binary notation would be: 1.001000000000000000000000

This binary representation can only represent numbers with the form $1.fraction$. This is a large restriction!

Luckily there is also the 8 bit exponent component of IEEE 754 floating numbers. The exponent scales the fraction, which allows representing a wider range of numbers.

$$ 2^{exponent} * 1.fraction $$

Multiplying by powers of 2 in binary is as simple as moving the decimal point. For example, multiplying $2^3 * 1.001$ moves the decimal point 3 places to the right: $1001.$

Or another way to look at multiplying by powers of 2 is that it scales the columns of the table:

| 2^3 | 2^2 | 2^1 | 2^0 | ... | 2^-18 | 2^-19 | 2^-20 |
|-----|-----|-----|-----|-----|-------|-------|-------|
| 1   | 0   | 0   | 1   | ... | 0     | 0     | 0     |

Looking at the table, 1001. in binary would be:

$$ 1 * 2^3 + 1 * 2^0 = 9 $$

The exponent can be negative as well, shifting the decimal to the left. For example, multiplying $2^{-3} * 1.001$ moves the decimal point 3 to the left: $0.001001$

This "floating" decimal point is where floating point numbers get their name.

Exponents are stored as integers in 8 bits, with a "bias" of 127. "Bias" here means the stored 8 bit value is offset from the true exponent: subtract 127 to recover it. For example, if the 8 bit value is 200, then it is "biased" by -127, resulting in an exponent of $200-127=73$.

The exponent value (0b1111_1111) is reserved for representing infinity and NaN.

The exponent value (0b0000_0000) is reserved for subnormal numbers. With "subnormal" numbers, the leading "1" is instead assumed to be a leading "0". And the exponent is -126. This supports floating point numbers of the form: $2^{-126} * 0.fraction$

With the reserved values, and the "bias" of 127, exponents are restricted to the range [-126, +127].
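
To make the layout concrete, here is a minimal sketch (my own, not from the original article) that extracts the sign, exponent, and fraction fields from an `f32` using `to_bits`:

```rust
fn main() {
    let x = 1.125f32;
    let bits = x.to_bits();

    // 1 sign bit, then 8 exponent bits, then 23 fraction bits
    let sign = bits >> 31;
    let biased_exponent = (bits >> 23) & 0xFF;
    let fraction = bits & 0x7F_FFFF;

    // undo the bias of 127 to recover the true exponent
    let exponent = biased_exponent as i32 - 127;

    println!("sign={sign} exponent={exponent} fraction={fraction:023b}");
    // sign=0 exponent=0 fraction=00100000000000000000000
}
```

Running it on 1.125 recovers exactly the $2^0 * 1.001$ breakdown from the tables above.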

Scheme Exposed

With this background knowledge, it's now possible to guess at the scheme used by the shady game master.

Counting every floating point number in the range $[0, 1)$, bucketed at 0.0625 resolution, exposes the problem:

| bucket min | count |
|------------|-------|
| 0      | 1031798785 |
| 0.0625 | 8388608 |
| 0.125  | 4194304 |
| 0.1875 | 4194304 |
| 0.25   | 2097152 |
| 0.3125 | 2097152 |
| 0.375  | 2097152 |
| 0.4375 | 2097152 |
| 0.5    | 1048576 |
| 0.5625 | 1048576 |
| 0.625  | 1048576 |
| 0.6875 | 1048576 |
| 0.75   | 1048576 |
| 0.8125 | 1048576 |
| 0.875  | 1048576 |
| 0.9375 | 1048576 |

There are considerably more floating point numbers in $[0, 0.5)$ than there are in $[0.5, 1)$. So "tails" has much better odds!
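
The bucket counts can be reproduced by brute force, since every `u32` bit pattern below `1.0f32.to_bits()` decodes to a float in $[0, 1)$. Here is a counting sketch (my own; exact edge counts may differ by one depending on how the zeros are treated):

```rust
fn main() {
    let mut buckets = [0u64; 16];
    // every bit pattern below 1.0's pattern decodes to a float in [0, 1),
    // including +0.0 and the subnormals
    let one = 1.0f32.to_bits();
    for bits in 0..one {
        let value = f32::from_bits(bits);
        // 16 buckets, each 0.0625 wide; multiplying by 16 is exact
        let bucket = (value * 16.0) as usize;
        buckets[bucket] += 1;
    }
    for (i, count) in buckets.iter().enumerate() {
        println!("{:<7} {count}", i as f32 * 0.0625);
    }
}
```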

Oh drat! You have seen through my clever ruse.
But you have to admit, floating point numbers are weird!

Since the trick is exposed, feel free to look at the proprietary algorithm below.
As you can see, it really IS "random"! And it is almost identical to your simulation code.
But it is not "fair". Or rather, it does not produce a uniform distribution.

```rust
use rand::Rng;

fn main() {
    let mut rng = rand::thread_rng();
    let mut win = 0u64;
    let mut lose = 0u64;

    while (win + lose) < 1_000_000 {
        // THE ONLY DIFFERENCE IS HERE:
        // generating a random u32 and converting with f32::from_bits
        // does NOT produce a uniform distribution
        let flip: u32 = rng.gen();
        let flip = f32::from_bits(flip);

        if flip >= 0.5 && flip < 1.0 {
            win += 1;
        } else if flip >= 0.0 && flip < 0.5 {
            lose += 1;
        }
    }

    println!("win={win} lose={lose}");
    // win=7810 lose=992190
}
```

A Shady Lottery

But wait a minute. Didn't you run a simulation earlier that seemed to be fair and uniform?
How about a new "fair" game?

  • Use the `rand` crate instead of the shady machine 🎰, with a trusted randomness generator
  • Each player is given 100 different floating point number lotto tickets
  • Keep drawing random floats until a player holds a matching lotto ticket, making them the winner

What do you think? Is this a fair game?

Using rand instead of the shady machine 🎰 is a good step toward fairness. As we saw in the simulation, rand can generate 32 bit floating point numbers with uniform distribution in the range [0, 1). And rand is open source! So before agreeing to play the game, let's peek at the paraphrased source code:

```rust
use rand::Rng;

fn sample<R: Rng>(rng: &mut R) -> f32 {
    let float_size = std::mem::size_of::<f32>() as u32 * 8; // 32
    let fraction_bits = 23;
    let precision = fraction_bits + 1; // 24
    let unused_bits = float_size - precision; // 8
    // 2^-24
    let scale = 1.0 / ((1u32 << precision) as f32);

    // generate random 32 bits
    // 0b_????????_????????_????????_????????
    let value: u32 = rng.gen();
    // shift off 8 unused bits, leaving 24 random bits
    // 0b_00000000_????????_????????_????????
    let value = value >> unused_bits;
    // convert the integer to a float
    let value = value as f32;
    // scale the float by 2^-24
    scale * value
}
```

(source)

To reiterate the steps rand takes:

  1. generate a random, unsigned 24 bit integer, with a possible value range $[0, 2^{24})$
  2. convert that integer to a float
  3. scale the float by $2^{-24}$, which scales the possible value range to $[0, 1)$
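
A quick sanity check of those steps (my own sketch, reusing the `sample` function above): every value produced is an integer multiple of $2^{-24}$, so scaling a sample back up by $2^{24}$ always lands on a whole number.

```rust
fn main() {
    let mut rng = rand::thread_rng();
    for _ in 0..5 {
        let x = sample(&mut rng);
        // undo step 3: scale back up by 2^24
        let undone = x * (1u32 << 24) as f32;
        // steps 1 and 2 started from an integer, so nothing fractional remains
        assert_eq!(undone.fract(), 0.0);
        println!("{x} = {undone} * 2^-24");
    }
}
```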

This seems fair! But shady game master, we read the spec. So we know we shouldn't let you put any weird floating point numbers on our lottery tickets.

"Weird" floating point numbers would be like:

  • +infinity, -infinity
  • NaN (not a number)
  • $-0.0$ (yes, negative zero is a real floating point number, and it equals $0$, so it could be considered in the $[0, 1)$ range)
  • subnormal numbers

On my honor as the shady game master,
those weird floating point numbers will not be on any of the lotto tickets.

So, ready to play the lottery? GREAT!

Here are your 100 lotto tickets, and here are my 100.
Now let's keep drawing numbers at random until
one of our numbers comes up...

🤖: SHADY GAME MASTER WINS

Oh, I seem to have won again. I am starting to feel bad!
Let's play another 100 rounds so you might win some...

🤖: SHADY GAME MASTER WINS
🤖: SHADY GAME MASTER WINS
🤖: SHADY GAME MASTER WINS
🤖: SHADY GAME MASTER WINS
🤖: SHADY GAME MASTER WINS
// repeat 95 times ...

WOW! You didn't win a single round... What poor luck you are having, again! 😏

But... HOW! We used rand! We read the spec! We were careful! What numbers are even on these lotto tickets?!?!?

Looking closer, there is something a little suspicious about the numbers on our lotto tickets...

```
2^-126 * 1.00000000000000000000001
2^-126 * 1.00000000000000000000010
2^-126 * 1.00000000000000000000011
// ...
```

These numbers are indeed valid 32 bit floats, and in the [0, 1) range. But they are tiny! To be specific, they are on the order of $2^{-126}$.

Looking at how rand generates random floats, it is impossible for such a negative exponent to ever be produced. To see why, let's walk through how rand generates the smallest, non-zero value it can:

  1. generate a random integer, in this case $1$
  2. convert to float, in this case $2^0 * 1.0$
  3. scale by $2^{-24}$, leaving $2^{-24} * 1.0$

By this logic, rand will never generate a float with an exponent less than -24. And looking at our lotto tickets, all 100 tickets have exponents far below -24!
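
To put the gap in perspective, here is a quick comparison (my own sketch) of the smallest non-zero value rand's method can produce against the smallest positive normal `f32`, the scale the tickets were written at:

```rust
fn main() {
    // smallest non-zero output of rand's method: 1 * 2^-24
    let smallest_from_rand = 1.0f32 / (1u32 << 24) as f32;
    println!("{smallest_from_rand:e}"); // 5.9604645e-8

    // smallest positive normal f32: 2^-126
    println!("{:e}", f32::MIN_POSITIVE); // 1.1754944e-38
}
```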

Okay, you got me! You can't blame a shady game master for trying to pull a fast one.

I really do feel bad now, and want you to win at least once.
After all, you have worked so hard!

If I promise all of the lotto tickets have exponents in the range (-24, 0],
then will you play again?

GREAT! Here are your lotto tickets for floats in the range [0, 1),
AND with exponents between (-24, 0].

Now let's draw numbers again until one of us wins...

πŸ€–: SHADY GAME MASTER WINS
πŸ€–: SHADY GAME MASTER WINS
πŸ€–: SHADY GAME MASTER WINS
// ...

AGAIN?! What lotto numbers did that shady game master give us this time:

```
2^-2 * 1.00000000000000000000001
2^-2 * 1.00000000000000000000011
2^-2 * 1.00000000000000000000101
// ...
```

The shady game master was true to their promise. These numbers seem quite reasonable, and yet rand never produces them. If we reverse rand's steps for generating random floats, then we will notice something. Let's use our handy fractional multiplication table:

| 2^-2 | 2^-3 | 2^-4 | ... | 2^-23 | 2^-24 | 2^-25 |
|------|------|------|-----|-------|-------|-------|
| 1.   | 0    | 0    | ... | 0     | 0     | 1     |

First, undo the scaling rand uses by multiplying by $2^{24}$:

| 2^22 | 2^21 | 2^20 | ... | 2^1 | 2^0 | 2^-1 |
|------|------|------|-----|-----|-----|------|
| 1.   | 0    | 0    | ... | 0   | 0   | 1    |

Next, convert this back into an integer... Wait a minute! What about that column with $2^{-1}$? That's a fraction! Integers cannot have fractions! Why is there a fraction left over?

The problem is that these numbers exceed the "precision" rand uses when generating random floats. The giveaway is that the last bit of the fraction is set to 1: looking at the table, that last column represents $2^{-25}$, but only 24 bits of precision were used!
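
This reversal makes for a handy test. A sketch (mine, not the article's): a float in $[0, 1)$ is reachable by rand's method exactly when scaling it by $2^{24}$ leaves no fractional part.

```rust
// can rand's method (a 24 bit integer scaled by 2^-24) ever produce x?
fn reachable(x: f32) -> bool {
    // multiplying an f32 by a power of two is exact, so this recovers
    // the number rand would have had to start from before scaling
    let undone = x * (1u32 << 24) as f32;
    undone.fract() == 0.0
}

fn main() {
    // one of the shady lotto tickets: 2^-2 * 1.00000000000000000000001
    // (biased exponent 125, fraction bits 0...01)
    let ticket = f32::from_bits((125u32 << 23) | 1);
    assert!(!reachable(ticket)); // can never be drawn

    // an ordinary multiple of 2^-24 is reachable
    assert!(reachable(0.25));
}
```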

**cackling shady game master appears**

Yes, that is why none of your lottery tickets could ever win, despite using the `rand` crate.

Good on you for figuring out my tricks! After only a few defeats, you have learned a lot.
Applause is well deserved 👏👏👏!

That was the last shady game I have prepared for you. I hope your visit was entertaining.

And more importantly, I hope you learned how tricksy floating point numbers can be!

These Are Not Bugs In rand

Seeing all these examples of floating point numbers rand cannot generate might give the impression of "bugs". Shouldn't all floats be possible if they are "random"?

We have seen in the simulation that rand gives a uniform distribution of floats in [0, 1). But we also saw that valid floating point numbers are not distributed uniformly across [0, 1); they are densest near zero. That tension is why rand cannot generate every valid float. If we review our bucket counts for floating point numbers:

| bucket min | count |
|------------|-------|
| 0      | 1031798785 |
| 0.0625 | 8388608 |
| 0.125  | 4194304 |
| 0.1875 | 4194304 |
| 0.25   | 2097152 |
| 0.3125 | 2097152 |
| 0.375  | 2097152 |
| 0.4375 | 2097152 |
| 0.5    | 1048576 |
| 0.5625 | 1048576 |
| 0.625  | 1048576 |
| 0.6875 | 1048576 |
| 0.75   | 1048576 |
| 0.8125 | 1048576 |
| 0.875  | 1048576 |
| 0.9375 | 1048576 |

Because the distribution of valid floating point numbers is not uniform, achieving a uniform distribution requires excluding some valid floating point numbers.

It is good that rand does not generate all valid floating point numbers in $[0, 1)$ . Otherwise the distribution would not be uniform.

When generating "random" anything, it is important to clarify the probability density function. What is the possible range? Are probabilities uniform over the range? Normally distributed? Biased?

rand actually supplies a variety of distributions for a variety of use cases. The "default" Standard distribution for 32 bit floating point numbers, which we have seen, is uniform in the range $[0, 1)$. This seems like an intuitive default choice. So much so that it is the same distribution used by JavaScript's Math.random.
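
For example (a sketch assuming the rand 0.8 API), the same generator can sample the default `Standard` distribution or an explicit `Uniform` range:

```rust
use rand::distributions::Uniform;
use rand::Rng;

fn main() {
    let mut rng = rand::thread_rng();

    // Standard: the default we have been using, uniform f32 in [0, 1)
    let x: f32 = rng.gen();

    // Uniform: uniform over an arbitrary range, e.g. [1.0, 2.0)
    let y: f32 = rng.sample(Uniform::new(1.0f32, 2.0));

    println!("standard={x} uniform={y}");
}
```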

Conclusion

This article was originally inspired by creative uses of JavaScript's Math.random. I have seen it used in dozens of ways, like flipping coins, rolling dice, or subsampling data.

Seeing all these use cases tickled my curiosity. How exactly do floating point numbers and randomness mix together? And as we've seen, the mixture can be counterintuitive!

But I would be negligent if I did not mention that this was all for fun. If you are building a real arcade, I recommend not using floating points 😊.
