@yjmade
yjmade / cuda_shm.py
Last active August 22, 2023 07:53
Open a Torch CUDA shared-memory tensor in another process with numba
# In process 1
import torch

shape = (100000,)
a = torch.rand(shape, device="cuda:0").share_memory_()
# _share_cuda_() exports a CUDA IPC handle plus layout metadata for the storage
device_id, handle, size, offset, *_ = a.storage()._share_cuda_()
# In process 2
import numpy as np
import numba.cuda
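The body of the process-2 code is lost to the page capture above. As a hypothetical reconstruction, the receiving side can open the exported handle through the raw CUDA driver API via ctypes: `cuIpcOpenMemHandle`, `cuMemcpyDtoH_v2`, and the context calls below are real driver entry points, but the helper name, the byte-unit assumption for `size`/`offset`, and the copy-to-host step are illustrative, not the gist author's code.

```python
import ctypes

import numpy as np


class CUipcMemHandle(ctypes.Structure):
    # CUipcMemHandle is an opaque 64-byte struct in the CUDA driver API.
    _fields_ = [("reserved", ctypes.c_byte * 64)]


def read_shared_tensor(handle_bytes, nbytes, offset_bytes=0, dtype=np.float32):
    """Open the IPC handle exported by _share_cuda_() in process 1 and copy
    the storage's bytes into a host numpy array.

    Assumes nbytes/offset_bytes are byte counts; the exact tuple layout of
    _share_cuda_() varies across torch versions, so check yours.
    """
    cuda = ctypes.CDLL("libcuda.so")
    assert cuda.cuInit(0) == 0

    # Bind to the primary context of device 0 (matches "cuda:0" above).
    dev = ctypes.c_int()
    assert cuda.cuDeviceGet(ctypes.byref(dev), 0) == 0
    ctx = ctypes.c_void_p()
    assert cuda.cuDevicePrimaryCtxRetain(ctypes.byref(ctx), dev) == 0
    assert cuda.cuCtxSetCurrent(ctx) == 0

    # Map the exported allocation into this process's address space.
    handle = CUipcMemHandle.from_buffer_copy(handle_bytes)
    dptr = ctypes.c_uint64()
    lazy_peer = 1  # CU_IPC_MEM_LAZY_ENABLE_PEER_ACCESS
    assert cuda.cuIpcOpenMemHandle(ctypes.byref(dptr), handle, lazy_peer) == 0

    # Copy device memory to the host (the _v2 symbol takes 64-bit pointers).
    out = np.empty(nbytes // np.dtype(dtype).itemsize, dtype=dtype)
    assert cuda.cuMemcpyDtoH_v2(
        out.ctypes.data_as(ctypes.c_void_p),
        ctypes.c_uint64(dptr.value + offset_bytes),
        ctypes.c_size_t(out.nbytes),
    ) == 0
    assert cuda.cuIpcCloseMemHandle(dptr) == 0
    return out
```

Since the gist imports `numba.cuda`, the author likely used Numba's own IPC support instead; Numba documents `numba.cuda.open_ipc_array` for opening a handle from another process as a device array, which avoids the manual context and copy management sketched here.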
@yjmade
yjmade / README.md
Created December 5, 2016 02:58
Transfer a directory with axel without compressing it first

On the origin server:

First, install SimpleTornadoServer:

pip install SimpleTornadoServer

Then generate a manifest of the directory to be downloaded.
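The manifest command itself is cut off by the page capture, so the following is a plausible sketch of both ends, assuming a simple `find`-based manifest; the server invocation, port, and `ORIGIN` host are placeholders, and any static file server rooted at the directory would work in place of SimpleTornadoServer:

```shell
# On the origin server, from inside the directory to be transferred:
find . -type f | sed 's|^\./||' > manifest.txt

# Serve the directory over HTTP; the exact SimpleTornadoServer CLI is an
# assumption here:
#   SimpleTornadoServer

# On the receiving machine, fetch the manifest, then pull each file with
# axel (-o names the local output file), recreating the layout:
#   axel http://ORIGIN:8000/manifest.txt
#   while read -r f; do
#     mkdir -p "$(dirname "$f")"
#     axel -o "$f" "http://ORIGIN:8000/$f"
#   done < manifest.txt
```

The point of the manifest is that axel can only accelerate single-file downloads, so listing every file lets the client fetch the tree file by file without tarring it up first.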