@tupui
Last active October 9, 2022 12:20
Halton Sequence in Python
"""Halton low discrepancy sequence.
This snippet implements the Halton sequence following the generalization of
a sequence of *Van der Corput* in n-dimensions.
---------------------------
MIT License
Copyright (c) 2017 Pamphile Tupui ROY
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
import numpy as np


def primes_from_2_to(n):
    """Prime numbers from 2 to n.

    From `StackOverflow <https://stackoverflow.com/questions/2068372>`_.

    :param int n: sup bound with ``n >= 6``.
    :return: primes in 2 <= p < n.
    :rtype: list
    """
    # Sieve of Eratosthenes on a 6k +/- 1 wheel: the sieve only stores
    # numbers coprime to 2 and 3, so 2 and 3 are appended at the end.
    sieve = np.ones(n // 3 + (n % 6 == 2), dtype=bool)
    for i in range(1, int(n ** 0.5) // 3 + 1):
        if sieve[i]:
            k = 3 * i + 1 | 1
            sieve[k * k // 3::2 * k] = False
            sieve[k * (k - 2 * (i & 1) + 4) // 3::2 * k] = False
    return np.r_[2, 3, ((3 * np.nonzero(sieve)[0][1:] + 1) | 1)]

def van_der_corput(n_sample, base=2):
    """Van der Corput sequence.

    :param int n_sample: number of elements of the sequence.
    :param int base: base of the sequence.
    :return: sequence of Van der Corput.
    :rtype: list (n_samples,)
    """
    sequence = []
    for i in range(n_sample):
        # Reverse the base-b digits of i around the radix point:
        # i = d0 + d1*b + d2*b**2 + ... maps to d0/b + d1/b**2 + d2/b**3 + ...
        n_th_number, denom = 0., 1.
        while i > 0:
            i, remainder = divmod(i, base)
            denom *= base
            n_th_number += remainder / denom
        sequence.append(n_th_number)
    return sequence

def halton(dim, n_sample):
    """Halton sequence.

    :param int dim: dimension.
    :param int n_sample: number of samples.
    :return: sequence of Halton.
    :rtype: array_like (n_samples, n_features)
    """
    # Find the first `dim` primes, widening the search bound until enough
    # primes are available.
    big_number = 10
    while 'Not enough primes':
        base = primes_from_2_to(big_number)[:dim]
        if len(base) == dim:
            break
        big_number += 1000

    # Generate a sample using a Van der Corput sequence per dimension,
    # dropping the first point (the origin).
    sample = [van_der_corput(n_sample + 1, prime) for prime in base]
    sample = np.stack(sample, axis=-1)[1:]
    return sample

print(van_der_corput(10))
# [0.0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625, 0.5625]
print(halton(2, 5))
# [[ 0.5 0.33333333]
# [ 0.25 0.66666667]
# [ 0.75 0.11111111]
# [ 0.125 0.44444444]
# [ 0.625 0.77777778]]
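
# The prime sieve can be checked the same way; primes below 20:
print(primes_from_2_to(20))
# [ 2  3  5  7 11 13 17 19]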
@tupui (Author) commented Jul 24, 2018

This has been included in statsmodels/tools/sequences.py

@molinav commented Feb 21, 2019

I am experiencing a performance problem when computing Halton sequences with Python 3.6.7 and NumPy 1.13.3. The issue is triggered because halton passes np.int64 instances as the base argument of van_der_corput, and the builtin divmod inside van_der_corput is very inefficient when not used with native Python integers.

An example of what IPython3 returns on my laptop for divmod:

In [2]: native_two, numpy_two = 2, np.int64(2)

In [3]: %timeit divmod(10, native_two)
The slowest run took 25.93 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 89.8 ns per loop

In [4]: %timeit divmod(10, numpy_two)
The slowest run took 68.61 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 222 ns per loop

Because of this slowdown, I see these timings for the van_der_corput function when generating 100000 samples:

In [5]: %timeit van_der_corput(100000, native_two)
1 loop, best of 3: 325 ms per loop

In [6]: %timeit van_der_corput(100000, numpy_two)
1 loop, best of 3: 3.58 s per loop

And finally, the halton function suffers the consequences when generating e.g. 100000 samples from a 5D space:

In [7]: %timeit halton(5, 100000)
1 loop, best of 3: 10.6 s per loop

I could work around this issue in two different ways:

  • change line 81 to base = primes_from_2_to(big_number)[:dim].tolist(), converting the array into a list of native Python integers, or
  • force the argument types before line 59 with n_sample, base = int(n_sample), int(base) (I find this better because the slowdown is inside van_der_corput, not in halton); a sketch follows below.
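
A minimal sketch of that second workaround (only the added cast on the first line differs from the original van_der_corput):

def van_der_corput(n_sample, base=2):
    # Cast to native Python ints so that divmod below stays on the fast path.
    n_sample, base = int(n_sample), int(base)
    sequence = []
    for i in range(n_sample):
        n_th_number, denom = 0., 1.
        while i > 0:
            i, remainder = divmod(i, base)
            denom *= base
            n_th_number += remainder / denom
        sequence.append(n_th_number)
    return sequence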

I decided to apply the second solution, and after the change the slowdown is gone:

In [9]: %timeit halton(5, 100000)
1 loop, best of 3: 959 ms per loop

I do not know whether this issue can be reproduced with other versions, but I found it interesting enough to comment on. I hope this can be useful to others.

@tupui (Author) commented Jun 18, 2020

@molinav Thanks for finding this out! I am currently working on a PR to have this in SciPy. So this is useful.

@jdavidd commented Dec 14, 2020

Hello @tupui! How can I use this to generate a Halton sequence in a given rectangle? :)

@tupui (Author) commented Dec 15, 2020

Hi @jdavidd, once you generate a sample, you just have to scale the values from [0, 1) to [a, b), with b > a the bounds you want.
For instance, if you have two parameters where the first range is [-2, 6] and the second is [0, 5]:

bounds = np.array([[-2, 0], [6, 5]])  # rows: lower bounds, upper bounds
min_ = np.min(bounds, axis=0)
max_ = np.max(bounds, axis=0)
sample = sample * (max_ - min_) + min_

But have a look at the PR I have in scipy for more stuff like discrepancy: scipy/scipy#10844
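
For reference, this PR has since landed in SciPy as scipy.stats.qmc, so the same workflow is available from the library; a minimal sketch, assuming SciPy >= 1.7:

from scipy.stats import qmc

sampler = qmc.Halton(d=2, scramble=False)
sample = sampler.random(n=5)                 # values in [0, 1)
sample = qmc.scale(sample, [-2, 0], [6, 5])  # rescale to the bounds above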
