
@scionoftech
Created October 3, 2019 10:02
How to run a Python script on the GPU
# pip install --user numba
from numba import jit, cuda
import numpy as np
# to measure exec time
from timeit import default_timer as timer

# normal function to run on cpu
def func(a):
    for i in range(10000000):
        a[i] += 1

# function optimized to run on gpu
@jit(target="cuda")
def func2(a):
    for i in range(10000000):
        a[i] += 1

if __name__ == "__main__":
    n = 10000000
    a = np.ones(n, dtype=np.float64)
    b = np.ones(n, dtype=np.float32)

    start = timer()
    func(a)
    print("without GPU:", timer() - start)

    start = timer()
    func2(a)
    print("with GPU:", timer() - start)
@FatinShadab

This doesn't work anymore ):

@Alf71

Alf71 commented Oct 23, 2022

The only difference between the two functions seems to be the @jit(target="cuda") decorator.

@KarthikDevalla

This doesn't work anymore ):

Hey @FatinShadab, try replacing @jit(target='cuda') with @jit(target_backend='cuda').
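
For reference, here is a minimal sketch of that swap applied to the gist's func2, assuming a recent Numba release (how target_backend behaves can vary between versions). Numba's documented route for running a loop like this on the GPU explicitly is a @cuda.jit kernel, so an alternative sketch is included as well; the kernel name add_one_kernel and the launch configuration are illustrative choices, not part of the original gist.

# sketch 1: the decorator swap suggested above, applied to the gist's func2
from numba import jit

@jit(target_backend="cuda")  # was: @jit(target="cuda")
def func2(a):
    for i in range(10000000):
        a[i] += 1

# sketch 2 (alternative): an explicit CUDA kernel via numba.cuda
from numba import cuda
import numpy as np

@cuda.jit
def add_one_kernel(a):
    i = cuda.grid(1)  # absolute index of this thread across the grid
    if i < a.size:
        a[i] += 1

n = 10000000
a = np.ones(n, dtype=np.float32)
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_one_kernel[blocks, threads_per_block](a)  # Numba copies the array to and from the device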
