
@blixt
Last active January 14, 2024 07:01
A very simple, seedable JavaScript PRNG. NOTE: Please read comments on why this is not a good choice.
// NOTICE 2020-04-18
// Please see the comments below about why this is not a great PRNG.
// Read summary by @bryc here:
// https://github.com/bryc/code/blob/master/jshash/PRNGs.md
// Have a look at js-arbit which uses Alea:
// https://github.com/blixt/js-arbit
/**
 * Creates a pseudo-random value generator. The seed must be an integer.
 *
 * Uses an optimized version of the Park-Miller PRNG.
 * http://www.firstpr.com.au/dsp/rand31/
 */
function Random(seed) {
  this._seed = seed % 2147483647;
  if (this._seed <= 0) this._seed += 2147483646;
}
/**
 * Returns a pseudo-random value between 1 and 2^31 - 2.
 */
Random.prototype.next = function () {
  return this._seed = this._seed * 16807 % 2147483647;
};
/**
 * Returns a pseudo-random floating point number in range [0, 1).
 */
Random.prototype.nextFloat = function () {
  // We know that result of next() will be 1 to 2147483646 (inclusive).
  return (this.next() - 1) / 2147483646;
};
@chriscauley

chriscauley commented Dec 17, 2018

I don't think I'd call this a bug, but I recommend you "burn" the first number. It is very closely coupled to the seed. So if you choose a small seed (less than 100,000 by my estimate), your first float will always be ~0.04

for (let i=0;i<100;i++) {
  const seed = Math.floor(Math.random()*10000) // seed is 0-9,999
  console.log(new Random(seed).nextFloat()) // ~ 0.04
}

I discovered this because I'm making a video game and every enemy always moved down on their first move and every level had the same first item.
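A sketch of the "burn the first number" fix (an illustration, not chriscauley's exact code), using the Random constructor from the gist above:

for (let i = 0; i < 5; i++) {
  const seed = Math.floor(Math.random() * 10000);
  const rng = new Random(seed);
  rng.next(); // discard the first, seed-coupled value
  console.log(rng.nextFloat()); // no longer stuck near ~0.04
}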

@bryc

bryc commented Dec 21, 2018

@chriscauley The first number is always computed as seed*16807. With the highest seed in your example (10000), the first output is 10000*16807, which comes out to 0.07826369263070049 as a float. Besides simply ignoring the first number, you can hash the seed (to ensure it is well distributed):

// 31-bit LCG (16807 multiplier, same algorithm as OP's)
var LCG=s=>()=>((s=(16807*s)%2147483647))/2147483647;
// Knuth's simple 32-bit integer hash
var hash=n=>Math.imul(n,2654435761)>>>0;

for(let seed=1;seed<10;seed++) console.log(seed, LCG( hash(seed) )() );
1 0.5944313365034759
2 0.1888626730069518
3 0.7832940095104277
4 0.3777253460139036
5 0.9721566825173795
6 0.5665880190208554
7 0.16101935552433133
8 0.7554506920278072
9 0.34988202853128314

Though you could get a similar result simply by running the generator ~20 or so times right after initialization.


Personally I would use the following instead, because I don't trust the quality of LCGs:

// Mulberry32, a fast high quality PRNG: https://gist.github.com/tommyettinger/46a874533244883189143505d203312c
var mb32=s=>t=>(s=s+1831565813|0,t=Math.imul(s^s>>>15,1|s),t=t+Math.imul(t^t>>>7,61|t)^t,(t^t>>>14)>>>0)/2**32;

for(let seed=1;seed<10;seed++) console.log(seed, mb32( hash(seed) )() );

Another cautionary tale about LCGs

Edit: This is a bug that only affects broken LCG implementations, so you can disregard it. Sorry for any confusion; it shouldn't be an issue if the implementation is done right. It happens because the multiplication result, once truncated by Math.imul, is smaller than the modulus, which does not occur in the original.
I'm leaving up the original write-up, but don't take it seriously.

By chance I used a random seed of 3816034944 and got weird, bad behavior. Looking at the output without the division, and in hexadecimal, the low bits are always the same.

var LCG=s=>_=>(2**31-1&(s=Math.imul(16807,s))); 
var nxt = LCG(3816034944);
for(var i = 0; i < 9; i++) console.log(nxt().toString(16))

This gives the output:

596a9180
63766a80
7349f980
7d9b4280
5c2ae180
033a9a80
7c754980
782c7280
2e113180

This shows a clear pattern in the low 8 bits of the output: 1000 0000 (0x80), and it happens every time, indefinitely. This is mostly caused by using an even seed.

This happens even if you use a different multiplier:

var LCG=s=>_=>(s=Math.imul(741103597,s)>>>0);
var nxt = LCG(3816034944);
for(var i = 0; i < 9; i++) console.log(nxt().toString(16))

Gives output:

32bea080
59061680
5a485480
afadba80
bc372880
f3d3fe80
84c01c80
1589e280
baa03080

So I would really avoid LCGs now. If you really want a tiny one-liner, use mulberry32:

// mulberry32 - an actual high quality 32-bit generator
var mb32=a=>(t)=>(a=a+1831565813|0,t=Math.imul(a^a>>>15,1|a),t=t+Math.imul(t^t>>>7,61|t)^t,(t^t>>>14)>>>0)/2**32;

@nanodeath

I strongly recommend checking out this page by @bryc for information on better JS PRNG implementations.

@blixt
Author

blixt commented Apr 18, 2020

Thanks for all the comments here! I'll add a clear notice at the top of this gist asking readers to read up on it!

This is a very old gist, and after making it I found Alea to be a good performance/quality compromise in JS and made this package: https://github.com/blixt/js-arbit

I set up dieharder tests in the js-arbit repo if you wanna compare it to other PRNGs.

@nanodeath

@blixt this page happens to be the #2 result on Google for "javascript prng" 🙂

@dgreensp

dgreensp commented Mar 12, 2021

The comment by @bryc above about the problem with this PRNG always returning integers ending in 0x80 is wrong, because it uses an incorrectly "optimized" version of the PRNG.

// this is the correct PRNG (EDIT: actually no, see my comment below, the Math.imul already breaks it)
function LCG(a) {
    return function() {
      a = Math.imul(48271, a) | 0 % 2147483647;
      return (a & 2147483647) / 2147483648;
    }
}

// This is wrong.  Masking with 2**31-1 is not the same as modulo that number,
// just as x&255 is not the same as x%255, it is x%256.
var LCG=s=>()=>(2**31-1&(s=Math.imul(48271,s)))/2**31; // Same as above

So the OP is not as bad as suggested.

@bryc

bryc commented Mar 12, 2021

Edit: I must apologize, as I was displaying confirmation bias. @dgreensp is correct; the 0x80 problem I mentioned only occurs because the truncated multiplication result is smaller than the modulus. He is also right that its randomness quality is worse than that of a vanilla LCG. But even the original is quite poor quality. I would only advise using an LCG if you want to study the math properly, learn how to test randomness properly, and ultimately do not mind using a very basic PRNG that won't fool randomness tests like PractRand and TestU01.

For reference, here is a clear (and correct) implementation:

function LCG(a) {
    return function() {
      a = (a * 48271) % 2147483647;
      return (a - 1) / 2147483646;
    }
}
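A quick usage sketch of that corrected LCG (an editorial addition, not part of the original comment):

var rand = LCG(123);
console.log(rand(), rand(), rand()); // three distinct floats in [0, 1)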

Original text below:


@dgreensp

Sorry but that is nonsense. Using Math.imul forces it to be a 32-bit LCG, which makes the AND and Modulo operators do the exact same thing because the operand is never over 2147483647. I simply chose AND because it produces more simplified byte code that Firefox can JIT better. The original C code requires 64-bit long type for the multiplication stage. My JS implementation essentially changes long to int, and yes there are implications for doing that.

function LCG(a) {
    return function() {
      return a = a * 48271 % 2147483647;
    }
}

The above code is a C-compatible version, but it's slower. The only reason it's even possible is that the result of the multiplication never exceeds 48271 * 2147483647, which still fits exactly in a double; so you can't use larger LCG multipliers. JavaScript JIT compilers cannot accelerate these straight Number multiplications as easily as they can Math.imul or other 32-bit cast operations. And in all honesty, I highly doubt it makes a difference quality-wise. These old LCGs fail modern PRNG tests, it's as simple as that. There are better, faster options available.
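A quick sanity check of that claim (an editorial addition, not from the original comment): the largest intermediate product is still below Number.MAX_SAFE_INTEGER, so the plain multiplication stays exact.

console.log(48271 * 2147483647 <= Number.MAX_SAFE_INTEGER); // true (~1.04e14 vs ~9.01e15)
console.log(16807 * 2147483646 <= Number.MAX_SAFE_INTEGER); // true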

The point of my various LCG examples isn't that they 'produce the same exact output as the 1988 paper'; they are versions more optimized for use in JS. The 0x80 problem I mentioned is not in error; it will affect any 32-bit LCG, even if implemented in C/C++. It's something to be aware of when writing optimized LCGs in JS.

Edit: Here is a performance case test with some sample comparison code: https://jsbench.me/1fkm5vl0rc/1

Firefox: A is fastest, B is ~55% slower, C is ~65% slower
Chrome: A and B are mostly on par (implying smarter modulo optimization). C is ~55% slower

So that may explain why I'd use AND over Modulo.

But again; LCGs do not pass randomness tests and require 64-bit or even 128-bit multiplication for best results. PCG is basically a modern adaptation of LCG which does pass randomness tests, but it is ill-suited for JavaScript because it relies on overflowing 64-bit multiplication.

@dgreensp

@bryc Sorry, I didn't mean to use the Math.imul version of the code in my "correct" example, that was an error on my part.

The main point I was trying to make is that using 2^31 - 1 (= 2147483647) as a modulus is clever because 2147483647 is prime; in particular, you will not get a pattern in the low bits. When you switch to using & or Math.imul, it's not the same algorithm, and it's known to have far worse properties because the modulus becomes a power of two. There is at least one place above in this thread where the switch is quietly made from a prime modulus to a power-of-two modulus, keeping the numbers 16807 and 2^31 - 1 in there, and it's hard to spot because there's a comment stating it's the same algorithm.
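A small demo of that difference (an editorial sketch, not dgreensp's code): with a power-of-two modulus, the low bit of a multiplicative LCG never changes, while the prime-modulus version is free to flip it.

// Power-of-two modulus (2^32 via Math.imul) vs. prime modulus (2^31 - 1).
var pow2LCG  = s => () => (s = Math.imul(48271, s) >>> 0);
var primeLCG = s => () => (s = (s * 48271) % 2147483647);

var a = pow2LCG(12345), b = primeLCG(12345);
for (var i = 0; i < 8; i++) {
  console.log(a() & 1, b() & 1); // left column: always 1; right column: varies
}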

Anyway, for my purposes (generating some predictable randomness simply and efficiently for randomized unit tests), I actually like the original LCG.
