Ayush Kumar (kayush2O6)
kayush2O6 / dask_custring.ipynb
Last active March 14, 2019 05:12
Final version with suggested solution
kayush2O6 / steps_cudf.txt
Last active July 20, 2022 14:18
RAPIDS cuDF on Google Colab
Step 1: Verify that all the requirements needed by RAPIDS are satisfied.
* Check the GPU card (Pascal architecture or newer):
!nvidia-smi
* Check the installed CUDA version (>= 9.2):
!nvcc -V
* Check the Python and pip versions (Python 3.6):
!python -V; pip -V
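The checks above compare reported versions against minimums (CUDA >= 9.2, Python 3.6). A minimal sketch of that comparison in Python; `meets_minimum` is an illustrative helper, not part of the gist:

```python
def meets_minimum(version_str, minimum):
    """Return True if a dotted version string is at least `minimum` (a tuple)."""
    parts = tuple(int(p) for p in version_str.split("."))
    return parts >= minimum

# e.g. compare the CUDA release printed by `nvcc -V` against 9.2
print(meets_minimum("10.0", (9, 2)))  # True
print(meets_minimum("9.1", (9, 2)))   # False
```

Tuple comparison handles the "10.0 vs 9.2" case correctly, where a naive string comparison would not.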
kayush2O6 / steps_cuml.txt
Last active May 10, 2021 06:21
RAPIDS cuML on Google Colab
BEFORE INSTALLING CUML, PLEASE MAKE SURE YOU HAVE FOLLOWED THE STEPS ABOVE FOR CUDF. CUDF SHOULD BE WORKING.
Step 1: Install cuML and its dependencies.
!apt install libopenblas-base libomp-dev
!pip install cuml-cuda100
# Importing cuml at this point will give a "libcuml.so not found" error. #
NOTE: Step 2 is optional and for information only; you can skip ahead to Step 3 directly.
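The "libcuml.so not found" error means the shared library is not yet on the loader's search path. A hedged sketch of checking for it without importing cuml; `shared_lib_found` is an illustrative helper, not part of the gist:

```python
import ctypes.util

def shared_lib_found(name):
    # ctypes.util.find_library returns None when the dynamic loader cannot
    # locate lib<name>.so, which is exactly why `import cuml` fails here.
    return ctypes.util.find_library(name) is not None

print(shared_lib_found("cuml"))  # False until libcuml.so is on the loader path
```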
kayush2O6 / numba_to_pytorch.py
Last active November 12, 2022 15:37
Convert a Numba CUDA device array to a PyTorch tensor
from numba import cuda
import ctypes
import numpy as np
import torch

def devndarray2torch(dev_arr):
    # Allocate an empty CUDA tensor of the same shape (float32 assumed;
    # the gist's display was truncated, so the tail is reconstructed).
    t = torch.empty(size=dev_arr.shape, dtype=torch.float32).cuda()
    ctx = cuda.cudadrv.driver.driver.get_context()
    # 4 = number of bytes per element in case of float32
    mp = cuda.cudadrv.driver.MemoryPointer(ctx, ctypes.c_ulong(t.data_ptr()),
                                           t.numel() * 4)
    # Wrap the tensor's memory as a Numba DeviceNDArray (strides in bytes),
    # then copy the source device array into it device-to-device.
    tmp_arr = cuda.cudadrv.devicearray.DeviceNDArray(
        t.size(), [i * 4 for i in t.stride()], np.dtype("float32"),
        gpu_data=mp)
    tmp_arr.copy_to_device(dev_arr)
    return t
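The conversion hinges on turning PyTorch's element strides into the byte strides Numba expects (times 4 for float32). A small CPU-only illustration of that arithmetic; `byte_strides` is a hypothetical helper, not part of the gist:

```python
import numpy as np

def byte_strides(elem_strides, dtype):
    # Numba's DeviceNDArray wants strides in bytes; PyTorch reports them
    # in elements, so multiply each stride by the dtype's item size.
    itemsize = np.dtype(dtype).itemsize
    return [s * itemsize for s in elem_strides]

print(byte_strides([4, 1], "float32"))  # [16, 4], e.g. for a 4x4 tensor
```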