@tansey
Created December 7, 2011 18:49
Sampling from a Gaussian Distribution in C#
public static double SampleGaussian(Random random, double mean, double stddev)
{
    // Box-Muller transform: requires samples from a uniform distribution on (0, 1],
    // but Random.NextDouble() returns samples from [0, 1), so flip the interval.
    double x1 = 1 - random.NextDouble();
    double x2 = 1 - random.NextDouble();
    double y1 = Math.Sqrt(-2.0 * Math.Log(x1)) * Math.Cos(2.0 * Math.PI * x2);
    return y1 * stddev + mean;
}
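
A minimal usage sketch, assuming the method above is in scope; the parameter values are illustrative:

// Reuse a single Random instance across calls rather than creating one per sample.
var random = new Random();
double standardNormal = SampleGaussian(random, 0.0, 1.0);   // sample from N(0, 1)
double measurement = SampleGaussian(random, 170.0, 10.0);   // sample from N(170, 10^2)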
@ProgramistycznySwir

ProgramistycznySwir commented Jan 30, 2021

For anyone using this, it's worth mentioning that you can get a second Gaussian-distributed random number by swapping .Cos() with .Sin(). Save it in a variable so you don't have to recalculate it when you need a second sample; both are valid numbers.
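
A sketch of that idea as a pair-returning variant (SampleGaussianPair is a hypothetical helper, not part of the original gist):

// One Box-Muller draw yields two Gaussian samples by using both Cos and Sin
// of the same angle (their independence is discussed further below).
public static (double, double) SampleGaussianPair(Random random, double mean, double stddev)
{
    double x1 = 1 - random.NextDouble(); // uniform on (0, 1]
    double x2 = 1 - random.NextDouble();
    double r = Math.Sqrt(-2.0 * Math.Log(x1));
    double theta = 2.0 * Math.PI * x2;
    return (r * Math.Cos(theta) * stddev + mean, r * Math.Sin(theta) * stddev + mean);
}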

@mpv55mx1

Today I found your function for creating a random Gaussian number. I would appreciate it if you could provide a reference for generating random numbers from other types of distributions.
I will be using your function in my program and will cite you in my paper.
Best regards
Mahendra

@NightElfik

NightElfik commented Apr 3, 2023

For anyone using this, it's worth mentioning that you can get a second Gaussian-distributed random number by swapping .Cos() with .Sin(). Save it in a variable so you don't have to recalculate it when you need a second sample; both are valid numbers.

I don't think that swapping Cos for Sin would produce a new independent value though. The value might still be normally distributed, but it will be tied to the first one through the random value x2. For example, if Math.Cos(2.0 * Math.PI * x2) is large in magnitude (> 1/sqrt(2)), then Math.Sin(2.0 * Math.PI * x2) will be predictably small in magnitude (< 1/sqrt(2)).

@ProgramistycznySwir

I don't think that swapping Cos for Sin would produce a new independent value though.

Actually, they are independent (although I'm not into cryptography, so don't take my word for it); it's called the Box–Muller transform.

@NightElfik

I don't think that swapping Cos for Sin would produce a new independent value though.

Actually, they are independent (although I'm not into cryptography, so don't take my word for it); it's called the Box–Muller transform.

Interesting, thanks for the reference. I see that the wiki says: "Suppose U1 and U2 are independent samples chosen from the uniform distribution on the unit interval (0, 1). ... Then Z0 and Z1 are independent random variables with a standard normal distribution."

I'm just having a hard time wrapping my head around the fact that the only difference is the cos/sin of the same angle, and that this is somehow enough to guarantee independence. I've plotted 1000 points as a scatter plot and did not visually see any obvious correlation.
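
A check along those lines can also be done numerically; the sketch below estimates the Pearson correlation between the Cos and Sin outputs of the same Box-Muller draw (the seed and sample count are arbitrary). Since the two outputs are jointly normal, a correlation near zero is consistent with independence:

// Estimates the correlation between the two outputs of one Box-Muller draw.
// With independent samples, the Pearson correlation should be close to zero.
using System;

class BoxMullerCorrelationCheck
{
    static void Main()
    {
        var random = new Random(12345); // fixed seed, chosen arbitrarily
        const int n = 100000;
        double sumA = 0, sumB = 0, sumAB = 0, sumA2 = 0, sumB2 = 0;

        for (int i = 0; i < n; i++)
        {
            double x1 = 1 - random.NextDouble();
            double x2 = 1 - random.NextDouble();
            double r = Math.Sqrt(-2.0 * Math.Log(x1));
            double a = r * Math.Cos(2.0 * Math.PI * x2);
            double b = r * Math.Sin(2.0 * Math.PI * x2);
            sumA += a; sumB += b; sumAB += a * b;
            sumA2 += a * a; sumB2 += b * b;
        }

        double cov = sumAB / n - (sumA / n) * (sumB / n);
        double stdA = Math.Sqrt(sumA2 / n - (sumA / n) * (sumA / n));
        double stdB = Math.Sqrt(sumB2 / n - (sumB / n) * (sumB / n));
        Console.WriteLine($"Pearson correlation: {cov / (stdA * stdB)}");
    }
}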

Interestingly enough, Math.NET uses a different algorithm; they call it the "polar transform": https://github.com/mathnet/mathnet-numerics/blob/master/src/Numerics/Distributions/Normal.cs#L393

@ProgramistycznySwir

ProgramistycznySwir commented Apr 19, 2023

Interestingly enough, Math.NET uses a different algorithm

This paper compares the accuracy of different methods; it finds that the polar technique is the most accurate, so I guess the framework developers chose the more accurate method and left optimization to people who need it.
The polar technique uses rejection sampling, as it requires that (x1, x2) fall inside the unit circle. If you have an efficient way of generating (or hashing) (x1, x2) points whose magnitude is smaller than 1, this technique will be better, but otherwise Box-Muller will be faster; a sketch of the rejection step follows at the end of this comment.

Also, it depends on your use case; I use math for game dev, so in my case inaccuracies are OK.
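
For illustration, a minimal sketch of the polar (Marsaglia) variant in the same style as the gist; this is a sketch, not the Math.NET implementation:

// Polar (Marsaglia) method: rejection-sample a point (x1, x2) uniformly
// inside the unit circle, then transform it. Roughly 21% of candidate
// points fall outside the circle and are discarded.
public static double SampleGaussianPolar(Random random, double mean, double stddev)
{
    double x1, x2, w;
    do
    {
        x1 = 2.0 * random.NextDouble() - 1.0; // uniform on [-1, 1)
        x2 = 2.0 * random.NextDouble() - 1.0;
        w = x1 * x1 + x2 * x2;
    } while (w >= 1.0 || w == 0.0);

    double scale = Math.Sqrt(-2.0 * Math.Log(w) / w);
    // x1 * scale and x2 * scale are two independent standard normal samples;
    // only one is used here.
    return x1 * scale * stddev + mean;
}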

@NightElfik

This paper compares the accuracy of different methods

Fantastic reference, that is exactly something I was wondering about. Thanks for sharing!

When I was writing my first comment I was more worried about some systematic bias, not precision. For example, imagine a method that generates two random numbers as a = rand(); and b = 1.0 - a. These two numbers are both from a uniform distribution, but obviously they are strongly correlated with each other. It turns out that my intuition was not right, and the Box-Muller method does not produce biased numbers.

Interestingly enough, the paper you shared says: "Another form of the Box–Muller method is called the polar technique. This improves over the previous technique in being quicker since it makes fewer calls to the mathematical library and uses only one transcendental function, instead of three." and "This method is advantageous of this method over the first form of Box–Muller’s method in spite of the fact that the algorithm discards 21% of the values of W in step 3."
