Jan Weidner jw3126

  • Freiburg, Germany
using Test
using Convex1D
using BenchmarkTools
function minimize_scaled_L1_diff(xs, ys)
    # Find t::Number that minimizes f(t) = sum(abs, t*xs - ys).
    # f is convex and piecewise linear, with piecewise constant derivative given by
    # f'(t) = Σ xi * sign(t*xi - yi)
    # One of the points ti := yi/xi (for some xi != 0) must be a minimizer;
    # if all xi == 0, then f is constant anyway.
    # Based on remarks of Mathieu Tanneau in Slack.
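The gist's Julia implementation is truncated above, but the strategy in the comments is complete enough to sketch. Here is a hypothetical Python port (function name and structure are my own, not from the gist): evaluate f at every breakpoint t = y/x and keep the best one.

```python
def minimize_scaled_l1_diff(xs, ys):
    """Return t minimizing f(t) = sum(|t*x - y|) over pairs (x, y)."""
    # f is convex and piecewise linear, so its minimum is attained at one
    # of the breakpoints t_i = y_i / x_i with x_i != 0.
    candidates = [y / x for x, y in zip(xs, ys) if x != 0]
    if not candidates:
        return 0.0  # all x_i == 0, so f is constant; any t works

    def f(t):
        return sum(abs(t * x - y) for x, y in zip(xs, ys))

    return min(candidates, key=f)

print(minimize_scaled_l1_diff([1, 1, 1], [1, 2, 3]))  # 2.0
```

This brute-force sketch evaluates f at every breakpoint, so it is O(n²); sorting the breakpoints and scanning for the sign change of f' (a weighted-median computation) would bring it down to O(n log n).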
jw3126 / flux_vs_keras.jl
Created October 2, 2020 09:08
Benchmark a 1-D convnet: Keras vs Flux
using PyCall
using Flux

function doit_keras(cfg)
    keras = pyimport("tensorflow.keras")
    # Variable-length, single-channel input; `nothing` maps to Python's None.
    inp = keras.layers.Input((nothing, 1))
    x = inp
    x = keras.layers.Conv1D(kernel_size=51, filters=50)(x)
    x = keras.layers.Conv1D(kernel_size=1, filters=1)(x)
    out = x
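The gist benchmarks this model against a Flux version (not shown in this excerpt). For reference, the same two-layer 1-D convnet can be sketched in PyTorch; this is a hypothetical third implementation for illustration, not part of the gist:

```python
import torch
from torch import nn

# Two Conv1d layers mirroring the Keras model above:
# 1 input channel -> 50 filters (kernel 51) -> 1 filter (kernel 1).
model = nn.Sequential(
    nn.Conv1d(1, 50, kernel_size=51),
    nn.Conv1d(50, 1, kernel_size=1),
)

# PyTorch uses (batch, channels, length); Keras Conv1D uses (batch, length, channels).
x = torch.randn(1, 1, 1000)
y = model(x)
print(tuple(y.shape))  # (1, 1, 950): the valid 51-wide kernel trims 50 samples
```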
jw3126 / mwe.py
Last active November 20, 2020 13:58
pytorch_lightning_ddp_gradient_checkpointing_bug
# This reproduces a pytorch_lightning issue
# where gradient checkpointing + DDP results in NaN loss.
#
# * Run with gpus=1 and it works fine.
# * Run with gpus=4 and the loss becomes NaN quickly.
#
# See also https://forums.pytorchlightning.ai/t/gradient-checkpointing-ddp-nan/398
import torch
from torch import nn
from torch.nn import functional as F
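The gist's model definition is truncated here. As a minimal sketch of the ingredient under test, activation checkpointing via `torch.utils.checkpoint` typically looks like the following (a hypothetical module of my own, not the gist's mwe.py, and run on a single process rather than under DDP):

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))

    def forward(self, x):
        # Drop intermediate activations of self.net in forward and
        # recompute them during backward, trading compute for memory.
        return checkpoint(self.net, x, use_reentrant=False)

x = torch.randn(4, 8, requires_grad=True)
Block()(x).sum().backward()
print(bool(torch.isfinite(x.grad).all()))  # True on a single process
```

The bug the gist reproduces only appears once DDP wraps such a checkpointed module across multiple GPUs.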
jw3126 / README.md
Created November 5, 2021 21:48
Evolution
