PyTorch code to save activations for specific layers over an entire dataset
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as tmodels
from functools import partial
import collections
# dummy data: 10 batches of images with batch size 16
dataset = [torch.rand(16,3,224,224).cuda() for _ in range(10)]
# network: a resnet50
net = tmodels.resnet50(pretrained=True).cuda()
# a dictionary that keeps saving the activations as they come
activations = collections.defaultdict(list)
def save_activation(name, mod, inp, out):
    # detach before moving to CPU so the autograd graph is not kept alive
    activations[name].append(out.detach().cpu())
# Registering hooks for all the Conv2d layers
# Note: Hooks are called EVERY TIME the module performs a forward pass. For modules that are
# called repeatedly at different stages of the forward pass (like ReLUs), this will save different
# activations. Editing the forward pass code to save activations is the way to go for these cases.
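# (e.g. a single nn.ReLU instance reused in several blocks fires its hook once per
# call, and purely functional calls like F.relu are not modules at all, so forward
# hooks never see them)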
for name, m in net.named_modules():
    if type(m) == nn.Conv2d:
        # partial to assign the layer name to each hook
        m.register_forward_hook(partial(save_activation, name))
# forward pass through the full dataset (no_grad: we only need activations, not gradients)
net.eval()
with torch.no_grad():
    for batch in dataset:
        out = net(batch)
# concatenate all the outputs we saved to get the activations for each layer for the whole dataset
activations = {name: torch.cat(outputs, 0) for name, outputs in activations.items()}
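# each value is now a single tensor of shape (160, C, H, W): the 10 batches of 16
# dummy images concatenated along dim 0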
# just print out the sizes of the saved activations as a sanity check
for k, v in activations.items():
    print(k, v.size())
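# Cleanup note: register_forward_hook returns a RemovableHandle, so if the network
# will be used again afterwards the hooks should be removed, or they will keep
# appending to `activations`. A minimal sketch of the pattern (the `handles` list
# is not part of the original gist; hooks are re-registered here only to illustrate):
handles = [m.register_forward_hook(partial(save_activation, name))
           for name, m in net.named_modules() if type(m) == nn.Conv2d]
for handle in handles:
    handle.remove()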