Johannes Blaschke (JBlaschke)

JBlaschke / reproducer.jl
Created March 18, 2024 22:43
Unknown socket reproducer for Distributed.jl
using Distributed
addprocs(2)
println(workers())

@everywhere using Distributed

@everywhere function remote_worker(fn, entries, results)
    while true
        # take data from remote channel asynchronously
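The preview is cut off inside remote_worker. For reference, here is a minimal self-contained sketch of the RemoteChannel worker pattern it appears to set up; the channel capacity, the squaring workload, the worker_loop name, and the nothing sentinel are illustrative assumptions rather than part of the original gist.

using Distributed
addprocs(2)

# Work queue and result queue shared between the driver and the workers.
entries = RemoteChannel(() -> Channel{Any}(32))
results = RemoteChannel(() -> Channel{Any}(32))

@everywhere function worker_loop(entries, results)
    while true
        item = take!(entries)          # block until work arrives
        item === nothing && break      # sentinel value shuts the worker down
        put!(results, item^2)
    end
end

# Start one asynchronous worker loop on every worker process.
for p in workers()
    remote_do(worker_loop, p, entries, results)
end

put!(entries, 3)
println(take!(results))                # prints 9
foreach(_ -> put!(entries, nothing), workers())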
JBlaschke / ld_trace.sh
Created March 2, 2024 05:57
Trace ld with timestamps
#!/bin/bash
# Redirect LD_DEBUG output (stderr) to ts (from moreutils https://github.com/madx/moreutils/blob/master/ts)
# Every process gets its own file (ld_trace.<pid>.log)
LD_DEBUG=all "$@" 2> >(ts '%Y-%m-%d %H:%M:%.S' >ld_trace.$$.log)

A metatable can be defined like this:

local t = setmetatable({}, {
  __tostring = function() return 'custom tostring behavior!' end
})

Here are the metamethods you can define, and their behavior:

Operators

JBlaschke / string.split.lua
Created May 24, 2023 22:28 — forked from jaredallard/string.split.lua
string.split in lua
-- split a string on a delimiter pattern, returning a table of substrings
function string:split(delimiter)
    local result = { }
    local from = 1
    local delim_from, delim_to = string.find( self, delimiter, from )
    while delim_from do
        table.insert( result, string.sub( self, from, delim_from - 1 ) )
        from = delim_to + 1
        delim_from, delim_to = string.find( self, delimiter, from )
    end
    table.insert( result, string.sub( self, from ) )
    return result
end
JBlaschke / compressed_tar_filesystem.jl
Last active December 21, 2022 22:00
Example of in-memory file system used to read a compressed tar archive (without extracting to disk). Based on the uncompressed version from: https://gist.github.com/KristofferC/b354b654e776f4910b03985830418723
module memtar

using Tar, CodecZlib, TranscodingStreams

mutable struct InMemoryFile
    size::Int
    pos::Int
    str::Union{Nothing, String}
end
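The full gist builds this into an in-memory file system; as a point of comparison, a much smaller sketch using the same packages can already inspect a compressed archive without extracting it to disk (the file name archive.tar.gz is a placeholder, not from the gist).

using Tar, CodecZlib

# Stream the gzip-decompressed bytes straight into Tar.list, so the archive
# contents are enumerated without writing anything to disk.
open("archive.tar.gz") do io
    for hdr in Tar.list(GzipDecompressorStream(io))
        println(hdr.path, "  ", hdr.size, " bytes")
    end
end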
JBlaschke / mpi4py_pycuda_demo.py
Created December 28, 2020 18:16 — forked from lebedov/mpi4py_pycuda_demo.py
Demo of how to pass GPU memory managed by pycuda to mpi4py.
#!/usr/bin/env python
"""
Demo of how to pass GPU memory managed by pycuda to mpi4py.
Notes
-----
This code can be used to perform peer-to-peer communication of data via
NVIDIA's GPUDirect technology if mpi4py has been built against a
CUDA-enabled MPI implementation.
JBlaschke / Make.local
Created April 30, 2020 16:14
AMReX Make configuration for using Homebrew-installed gcc
#
# AMReX Make configuration for Homebrew
#
# If it exists, the Make.local file is included at the end of Make.defs. Thus
# one can override various variables here.
#
# Do not check this file into the git repo. It is meant to be purely local.
#
JBlaschke / OpenMP_offload_object_composition.cpp
Created April 30, 2020 15:52 — forked from jmeyers314/OpenMP_offload_object_composition.cpp
OpenMP c++ GPU offload with object composition
#include <new>
#include <vector>
#include <iostream>

#pragma omp declare target
// polymorphic Abstract Base Class
class Base {
public:
    virtual double doOne(double x) = 0;  // do something interesting
    virtual Base* getDevPtr() = 0;       // get a device pointer to device shadow instance of class
JBlaschke / keybase.md
Created February 25, 2018 18:14
Keybase.io Proof of Identity

Keybase proof

I hereby claim:

To claim this, I am signing this object: