
@luzai
luzai / inp.grad.py
Created January 18, 2020 08:41
Input gradient norm regularization
if args.inp_gw != 0:
    # Gradient of the loss w.r.t. the input images, kept in the graph
    # (create_graph=True) so the penalty itself is differentiable.
    input_imgs_grad_attached = torch.autograd.grad(
        outputs=loss, inputs=imgs,
        create_graph=True, retain_graph=True,
        only_inputs=True,
    )[0]
    input_imgs_grad_attached = input_imgs_grad_attached.view(
        input_imgs_grad_attached.size(0), -1
    )
    if args.aux == 'l1_grad':
        # Assumed completion -- the gist preview truncates here; an L1
        # penalty on the per-example input gradient is one plausible body:
        loss = loss + args.inp_gw * input_imgs_grad_attached.norm(p=1, dim=1).mean()
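The snippet above is a fragment of a larger training loop. A self-contained sketch of the same idea is below; the model, data, and the `inp_gw` weight are placeholders I introduce for illustration, not the gist's actual setup.

```python
import torch
import torch.nn as nn

# Minimal sketch of input-gradient-norm regularization.
# Model, data, and penalty weight are illustrative assumptions.
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(8, 2))
imgs = torch.randn(4, 1, 2, 4, requires_grad=True)  # inputs must require grad
labels = torch.tensor([0, 1, 0, 1])
inp_gw = 0.1  # penalty weight, analogous to args.inp_gw

loss = nn.functional.cross_entropy(model(imgs), labels)

# Differentiable gradient of the loss w.r.t. the inputs.
input_grad = torch.autograd.grad(
    outputs=loss, inputs=imgs,
    create_graph=True, retain_graph=True, only_inputs=True,
)[0].view(imgs.size(0), -1)

# L1 penalty on the per-example input gradient, as in the 'l1_grad' branch.
total_loss = loss + inp_gw * input_grad.norm(p=1, dim=1).mean()
total_loss.backward()  # gradients flow through the penalty as well
```

Because the penalty is built with `create_graph=True`, `backward()` on `total_loss` also differentiates through the input-gradient term, which is what makes this a regularizer on the model's sensitivity to its inputs.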
@luzai
luzai / test_superkernel.py
Last active June 12, 2019 03:23
This implementation of the superkernel tends to cause gradient explosion.
import torch
import torch.nn as nn
import torch.nn.functional as F
from easydict import EasyDict
import numpy as np
__all__ = ['singlepath']
conf = EasyDict()
conf.conv2dmask_drop_ratio = 0.0
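The preview cuts off before the superkernel module itself. As context, a minimal sketch of the superkernel idea (one shared weight from which smaller kernel sizes are carved out, in the spirit of single-path NAS) is below; the class name, gating scheme, and shapes are my assumptions, not the gist's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperKernelConv(nn.Module):
    """Minimal superkernel sketch (assumed design): one shared 5x5 weight;
    the 3x3 candidate reuses its central slice, and a sigmoid-gated
    architecture parameter softly mixes the two kernel sizes."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 5, 5) * 0.1)
        self.gate = nn.Parameter(torch.zeros(1))  # architecture parameter

    def forward(self, x):
        g = torch.sigmoid(self.gate)
        # Mask that keeps only the central 3x3 window of the 5x5 weight.
        mask = torch.zeros_like(self.weight)
        mask[:, :, 1:4, 1:4] = 1.0
        # Soft mix: g -> full 5x5 kernel, (1 - g) -> masked 3x3 kernel.
        w = g * self.weight + (1 - g) * self.weight * mask
        return F.conv2d(x, w, padding=2)

conv = SuperKernelConv(3, 8)
y = conv(torch.randn(2, 3, 16, 16))
```

Because every candidate shares the same underlying weight, gradients from all paths accumulate into one tensor, which is one plausible reason a formulation like this can be prone to the gradient explosion the description mentions.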
@luzai
luzai / get_layer_bug.py
Last active May 8, 2017 03:06
Try to fix a shapes bug in keras/example/mnist_net2net.py
'''This is an implementation of Net2Net experiment with MNIST in
'Net2Net: Accelerating Learning via Knowledge Transfer'
by Tianqi Chen, Ian Goodfellow, and Jonathon Shlens
arXiv:1511.05641v4 [cs.LG] 23 Apr 2016
http://arxiv.org/abs/1511.05641
Notes
- What:
  + Net2Net is a group of methods to transfer knowledge from a teacher neural
    network to a student network.
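The core Net2Net operator for widening a layer can be sketched in plain NumPy; the function name, shapes, and the small two-layer ReLU network below are illustrative assumptions, not the gist's Keras code.

```python
import numpy as np

def net2wider(W1, b1, W2, new_width):
    """Net2WiderNet sketch: widen a hidden layer from h to new_width units
    by duplicating randomly chosen units and splitting their outgoing
    weights, so the widened network computes the same function."""
    h = W1.shape[1]
    # Keep the original units, then sample extras to duplicate.
    idx = np.concatenate([np.arange(h), np.random.randint(0, h, new_width - h)])
    counts = np.bincount(idx, minlength=h)  # replication count per unit
    W1_new = W1[:, idx]
    b1_new = b1[idx]
    # Divide each outgoing weight by its unit's replication count,
    # so the copies sum back to the original contribution.
    W2_new = W2[idx, :] / counts[idx][:, None]
    return W1_new, b1_new, W2_new

np.random.seed(0)
W1, b1, W2 = np.random.randn(4, 3), np.random.randn(3), np.random.randn(3, 2)
x = np.random.randn(5, 4)
relu = lambda z: np.maximum(z, 0)

W1w, b1w, W2w = net2wider(W1, b1, W2, new_width=6)
y_old = relu(x @ W1 + b1) @ W2
y_new = relu(x @ W1w + b1w) @ W2w
```

Function preservation holds because ReLU acts elementwise: each duplicated unit produces the same activation as its source, and the split outgoing weights sum to the original weight.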