Dmytro Mishkin (ducha-aiki): GitHub gists
ducha-aiki / gist:21f41b1887749b122ca7
Created December 29, 2014 09:15
ImageNet val bvlc_googlenet
I1223 12:53:51.375224 23647 caffe.cpp:134] Use GPU with device ID 0
I1223 12:53:51.488831 23647 net.cpp:42] Initializing net from parameters:
name: "GoogleNet"
layers {
top: "data"
top: "label"
name: "data"
type: DATA
data_param {
source: "/home/share/storage/datasets/imagenet/dbs/ilsvrc12_val_lmdb"
[file truncated; a second log from the same gist follows]
I1229 14:22:43.806607 23733 caffe.cpp:134] Use GPU with device ID 0
I1229 14:22:44.181466 23733 net.cpp:42] Initializing net from parameters:
name: "AlexNet"
layers {
top: "data"
top: "label"
name: "data"
type: DATA
data_param {
source: "/home/share/storage/datasets/imagenet/dbs/ilsvrc12_shai_val_lmdb"
ducha-aiki / cifar10_2K_in-place.log
Created October 21, 2015 18:37
cifar10_2K_in-place
I1021 21:37:50.680150 3320 caffe.cpp:184] Using GPUs 0
I1021 21:37:50.799264 3320 solver.cpp:47] Initializing solver from parameters:
test_iter: 10
test_interval: 1000
base_lr: 0.001
display: 100
max_iter: 5000
lr_policy: "poly"
power: 0.5
momentum: 0.9
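For reference, Caffe's "poly" learning-rate policy decays the rate as base_lr * (1 - iter/max_iter)^power, so the settings above take the rate from 0.001 down to 0 over the 5000 iterations. A minimal Python sketch of that schedule (the function name is illustrative, not from the gist):

def poly_lr(it, base_lr=0.001, max_iter=5000, power=0.5):
    # Caffe "poly" policy: polynomial decay from base_lr to 0 at max_iter
    return base_lr * (1.0 - float(it) / max_iter) ** power

# poly_lr(0) == 0.001, poly_lr(2500) is about 0.000707, poly_lr(5000) == 0.0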
ducha-aiki / cifar10_2K_not-in-place.log
Created October 21, 2015 18:37
cifar10_2K_not-in-place
I1021 21:39:31.658416 3427 caffe.cpp:184] Using GPUs 0
I1021 21:39:31.932163 3427 solver.cpp:47] Initializing solver from parameters:
test_iter: 10
test_interval: 1000
base_lr: 0.001
display: 100
max_iter: 5000
lr_policy: "poly"
power: 0.5
momentum: 0.9
layer {
name: "data"
type: "Input"
top: "data"
input_param { shape: { dim: 2 dim: 1 dim: 65 dim: 65 } }
}
layer {
name: "Gx"
type: "Convolution"
name: "CIFAR10_full"
layer {
name: "cifar"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
ducha-aiki / batched_grid_sampling_pytorch.py
Created March 5, 2018 12:36
Batched version of grid sampling for saving memory
import torch
import torch.nn.functional as F
from torch.autograd import Variable

def batched_grid_apply(img, grid, batch_size):
    n_patches = len(grid)
    if n_patches > batch_size:
        bs = batch_size
        # integer division: "/" yields a float on Python 3 and breaks range()
        n_batches = n_patches // bs + 1
        for batch_idx in range(n_batches):
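The preview stops inside the loop. A self-contained sketch of the same memory-saving idea, assuming img is a single (1, C, H, W) tensor shared by all patches and grid is (n_patches, h, w, 2) as F.grid_sample expects; chunking the grid caps how many sampled patches are alive at once:

import torch
import torch.nn.functional as F

def batched_grid_apply(img, grid, batch_size):
    # img: (1, C, H, W); grid: (n_patches, h, w, 2) in [-1, 1] coordinates
    out = []
    for start in range(0, len(grid), batch_size):
        g = grid[start:start + batch_size]
        # broadcast the single image to the chunk size; only one chunk of
        # sampled patches is resident at a time
        out.append(F.grid_sample(img.expand(len(g), -1, -1, -1), g))
    return torch.cat(out, dim=0)

The chunk size only trades peak memory for a few extra grid_sample calls; concatenating the chunks gives the same result as one full-batch call.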
ducha-aiki / res_article_hardnet
Created October 26, 2018 15:04
res_article_hardnet.m
hb_setup();
%%
res = rproc.read('scoresroot', ...
fullfile(hb_path, 'matlab', 'scores', 'default'));
norm_splits = {};
norms_path = fullfile(hb_path, 'matlab', 'data', 'best_normalizations.csv');
norms = readtable(norms_path, 'delimiter', ',');
norms.Properties.RowNames = norms.descriptor;
ducha-aiki / compact_bilinear_pooling.py
Created February 1, 2019 11:41 — forked from vadimkantorov/compact_bilinear_pooling.py
Compact Bilinear Pooling in PyTorch using the new FFT support
import torch

class CompactBilinearPooling(torch.nn.Module):
    def __init__(self, input_dim1, input_dim2, output_dim, sum_pool = True):
        super(CompactBilinearPooling, self).__init__()
        self.output_dim = output_dim
        self.sum_pool = sum_pool
        # count-sketch projection: a sparse (input_dim x output_dim) matrix with
        # one random +/-1 entry per input row, materialized densely for matmul
        generate_sketch_matrix = lambda rand_h, rand_s, input_dim, output_dim: torch.sparse.FloatTensor(
            torch.stack([torch.arange(input_dim, out = torch.LongTensor()), rand_h.long()]),
            rand_s.float(), [input_dim, output_dim]).to_dense()
        self.sketch_matrix1 = torch.nn.Parameter(generate_sketch_matrix(
            torch.randint(output_dim, size = (input_dim1,)),
            2 * torch.randint(2, size = (input_dim1,)) - 1, input_dim1, output_dim))
        self.sketch_matrix2 = torch.nn.Parameter(generate_sketch_matrix(
            torch.randint(output_dim, size = (input_dim2,)),
            2 * torch.randint(2, size = (input_dim2,)) - 1, input_dim2, output_dim))
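The preview ends before the forward pass. A minimal sketch of the missing step under the standard count-sketch/FFT formulation (sketch both inputs, multiply their spectra elementwise, inverse-transform); it assumes (B, H, W, C) inputs and uses the current torch.fft API instead of the torch.rfft calls the 2019 gist would have used:

import torch

def compact_bilinear_forward(x1, x2, sketch_matrix1, sketch_matrix2, sum_pool=True):
    # x1, x2: (B, H, W, C); sketch matrices: (C, output_dim)
    s1 = x1.matmul(sketch_matrix1)   # count-sketch of each input
    s2 = x2.matmul(sketch_matrix2)
    # circular convolution of the two sketches via the convolution theorem
    spec = torch.fft.rfft(s1, dim=-1) * torch.fft.rfft(s2, dim=-1)
    cbp = torch.fft.irfft(spec, n=s1.shape[-1], dim=-1)
    # sum-pool over spatial positions, as the module's sum_pool flag suggests
    return cbp.sum(dim=(1, 2)) if sum_pool else cbp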
ducha-aiki / cifar10_full_sigmoid_solver.prototxt
Created October 22, 2015 09:05
Examples of how to use batch_norm in caffe
# The train/test net protocol buffer definition
net: "examples/cifar10/cifar10_full_sigmoid_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of CIFAR10, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 1000 training iterations.
test_interval: 1000
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.001