@obilaniu
Created April 8, 2022 15:00
LD_PRELOAD'able patch for the PyTorch 1.10.x `INTERNAL ASSERT FAILED at "../aten/src/ATen/MapAllocator.cpp":263` error. The assert fires when PyTorch cannot open its shared-memory object, which can happen when two processes generate the same /torch_* handle name; this patch interposes at::NewProcessWideShmHandle() with a variant that adds a random component to the name, making such collisions practically impossible.
// Anaconda: g++-7 -D_GLIBCXX_USE_CXX11_ABI=0 -Os -Wall -fPIC hack.cpp -c -o hack_abi_old.o
// Others: g++-7 -D_GLIBCXX_USE_CXX11_ABI=1 -Os -Wall -fPIC hack.cpp -c -o hack_abi_new.o
// Linker: g++-7 -fPIC -shared hack_abi_old.o hack_abi_new.o -o hack.so
// Strip: strip hack.so
// Use: export LD_PRELOAD=/absolute/path/to/hack.so # Before executing python
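// Note: because NewProcessWideShmHandle() returns std::string, its mangled
// name differs under the old (_GLIBCXX_USE_CXX11_ABI=0) and new (=1)
// libstdc++ ABIs. Linking both object files into hack.so therefore exports
// both symbol variants, so the same preloaded library interposes correctly
// whether libtorch was built against the old or the new ABI.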
#include <atomic>
#include <string>
#include <random>
#include <stdint.h>
#include <unistd.h>

namespace at {
// Drop-in replacement for PyTorch's shared-memory handle generator.
// Handles have the form /torch_<pid>_<random>_<counter>: the name combines
// the PID, one draw from std::random_device per call, and a monotonically
// increasing atomic counter, so that neither a recycled PID nor two
// concurrent processes can reproduce an earlier name.
std::string NewProcessWideShmHandle()
{
  static std::atomic<uint64_t> counter{0};
  static std::random_device rd;
  std::string handle = "/torch_";
  handle += std::to_string(getpid());
  handle += "_";
  handle += std::to_string(rd());
  handle += "_";
  handle += std::to_string(counter.fetch_add(1, std::memory_order_relaxed));
  return handle;
}
} // namespace at
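To sanity-check the replacement before preloading it, the source can be compiled together with a small driver. The sketch below is hypothetical (check.cpp is not part of the gist) and assumes a single ABI flag for both files, e.g. g++-7 -D_GLIBCXX_USE_CXX11_ABI=1 hack.cpp check.cpp -o check. Two successive calls should print handles that share the /torch_<pid>_ prefix but differ in both the random and counter fields.

// check.cpp -- hypothetical driver, not part of the original gist.
#include <iostream>
#include <string>

namespace at {
// Declaration matching the definition in hack.cpp.
std::string NewProcessWideShmHandle();
}

int main()
{
    // Expected output (values will vary):
    //   /torch_12345_2882343476_0
    //   /torch_12345_197493099_1
    std::cout << at::NewProcessWideShmHandle() << '\n';
    std::cout << at::NewProcessWideShmHandle() << '\n';
    return 0;
}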