x = np.array([[ 0.06169621],
[-0.05147406],
[ 0.04445121],
[-0.01159501],
[-0.03638469],
[-0.04069594],
[-0.04716281],
[-0.00189471],
[ 0.06169621],
[ 0.03906215],
@vbalnt
vbalnt / hpatches-descriptors.txt
Last active February 11, 2019 13:10
HPatches available descriptor files. Convert to nice table with https://ozh.github.io/ascii-tables/
name               description
ncc                Normalised cross correlation
sift               SIFT [Lowe IJCV 2004]
rootsift           rootSIFT [Arandjelović & Zisserman CVPR 2012]
orb                ORB [Rublee et al. ICCV 2011]
brief              BRIEF [Calonder et al. PAMI 2012]
binboost           BinBoost [Trzcinski et al. PAMI 2013]
deepdesc           DeepDesc [Simo-Serra et al. ICCV 2015]
liop               LIOP [Wang et al. ICCV 2011]
tfeat-margin-star  TFeat with margin loss [Balntas et al. BMVC 2016]
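The first entry, ncc, is simple enough to sketch directly. A minimal numpy version (the function name and patch shapes here are illustrative, not from the gist):

```python
import numpy as np

def ncc(p1, p2):
    """Normalised cross correlation between two equal-size patches."""
    a = (p1 - p1.mean()) / p1.std()  # zero-mean, unit-variance
    b = (p2 - p2.mean()) / p2.std()
    return float(np.mean(a * b))     # 1.0 for identical patches, -1.0 for inverted
```

By construction the score is invariant to affine changes in patch brightness, which is why NCC is the usual baseline in the table above.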
@vbalnt
vbalnt / git.md
Last active October 24, 2016 12:10
Git workflow
  • git pull (to get other people's changes; run this before making any changes)
  • make your changes in your local copy of the repo
  • git add * to stage your changes
  • git commit -m "your message describing the changes" to commit them
  • git push to share your changes with everyone else
@vbalnt
vbalnt / ex.lua
Last active September 30, 2016 14:30
Example of how to save HPatches descriptors for hbench
require 'cutorch'
require 'xlua'
require 'trepl'
require 'cunn'
require 'cudnn'
require 'image'
require 'nn'
require 'torch'
require 'lfs'
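The preview above only shows the Lua requires. The essential step the gist performs, saving one descriptor per patch in a plain text file that the hbench tools can read, can be sketched in Python; the CSV layout and the filename below are assumptions, so check the benchmark docs for your version:

```python
import numpy as np

def save_descrs_csv(descrs, out_path):
    """Write an (N, D) array of patch descriptors, one row per patch,
    as comma-separated values (assumed hbench-compatible layout)."""
    np.savetxt(out_path, descrs, fmt="%.6f", delimiter=",")

# Example: 4 random 128-D descriptors for one patch sequence
descrs = np.random.rand(4, 128).astype(np.float32)
save_descrs_csv(descrs, "ref.csv")
```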
@vbalnt
vbalnt / cuda-ubuntu.md
Last active March 29, 2021 09:58
Installation of CUDA & Tensorflow in Ubuntu 14.04 or 16.04
@vbalnt
vbalnt / generate_brief_tests.py
Created May 29, 2016 19:35
Original code from OpenCV to create the tests that are included in their version of the descriptor. Taken from an old OpenCV repo.
import random
import sys

tests = int(sys.argv[1])   # number of binary tests to generate
S = 32                     # patch size
S2 = S // 2                # clamp sample coordinates to this
sigma = S / 5.0            # sampling geometry II (isotropic Gaussian)
random.seed(42)            # make repeatable for simplicity

def random_coordinate():
    # Gaussian sample around the patch centre, clamped to [0, S-1]
    coord = int(round(random.gauss(0.0, sigma)))
    return min(max(coord, -S2 + 1), S2) + S2 - 1
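The clamping above keeps every sampled coordinate inside the 32x32 patch. A quick usage sketch (it restates the gist's function so the block is self-contained) generating BRIEF-style test pairs and checking the bounds:

```python
import random

S = 32
S2 = S // 2
sigma = S / 5.0
random.seed(42)

def random_coordinate():
    coord = int(round(random.gauss(0.0, sigma)))
    return min(max(coord, -S2 + 1), S2) + S2 - 1

# Each BRIEF test compares intensities at two pixel locations in the patch
n_tests = 8
tests = [(random_coordinate(), random_coordinate(),
          random_coordinate(), random_coordinate())
         for _ in range(n_tests)]

for t in tests:
    assert all(0 <= c < S for c in t)  # every coordinate stays in-patch
```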
# Run on GPU: THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python mnist_siamese_graph.py
from __future__ import print_function
import sys
import os
import time
import numpy as np
import theano
import theano.tensor as T
import lasagne
@vbalnt
vbalnt / siamese.py
Last active March 1, 2019 01:50
train on siamese graph - custom mini batches
'''Train a Siamese MLP on pairs of digits from the MNIST dataset.
It follows Hadsell-et-al.'06 [1] by computing the Euclidean distance on the
output of the shared network and by optimizing the contrastive loss (see paper
for more details).
[1] "Dimensionality Reduction by Learning an Invariant Mapping"
http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
Run on GPU: THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python mnist_siamese_graph.py
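The contrastive loss the docstring refers to (Hadsell et al. '06) can be sketched in plain numpy; the margin value and the label convention here (1 = similar pair) are my own choices, not taken from the gist:

```python
import numpy as np

def contrastive_loss(a, b, y, margin=1.0):
    """Contrastive loss of Hadsell et al. '06 (sketch).
    a, b: (N, D) embeddings of the two sides of each pair.
    y: (N,) labels, 1 for similar pairs, 0 for dissimilar."""
    d = np.linalg.norm(a - b, axis=1)              # Euclidean distance
    pos = 0.5 * d ** 2                             # pull similar pairs together
    neg = 0.5 * np.maximum(0.0, margin - d) ** 2   # push dissimilar pairs apart, up to margin
    return float(np.mean(y * pos + (1 - y) * neg))
```

A similar pair with identical embeddings costs nothing, and a dissimilar pair further apart than the margin also costs nothing, which is what lets the shared network settle.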
#ifndef _UTILS_H_
#define _UTILS_H_
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include "vlfeat/generic.h"
#include "vlfeat/sift.h"
#include "vlfeat/imopv.h"
#include "pgm.h"
/* #include <plplot/plplot.h> */
@vbalnt
vbalnt / train.c
Last active September 7, 2015 15:44
The training code sample for the offline test selection for BOLD - Binary Online Learned Descriptor (CVPR 2015)
/* Creation of the test array & call of the learning function */
bintest orb_tests[n_tests];
learn_orb_tests_g2(training_data, orb_tests, 1024, 32, 10000);
/* fwrite_bintests(orb_tests, 1024, "orb1024.descr"); */

/* The actual learning function */
void learn_orb_tests_g2(dataset data, bintest *ltests, int dims, int patch_size, int nlearn)
{
bintest *all_tests;
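The body of learn_orb_tests_g2 is cut off in the preview. In the ORB-style offline selection that BOLD builds on, candidate intensity-comparison tests are ranked by how close their mean response is to 0.5 and kept greedily while weakly correlated with those already chosen. A toy Python sketch under those assumptions (not the gist's C code; names and thresholds are mine):

```python
import numpy as np

def learn_tests(patches, n_keep=8, n_candidates=200, seed=0):
    """Greedy ORB-style binary-test selection (illustrative sketch).
    patches: (N, S, S) array of training patches.
    Returns (n_keep, 2) flat pixel-index pairs."""
    rng = np.random.default_rng(seed)
    N, S, _ = patches.shape
    flat = patches.reshape(N, -1)
    cands = rng.integers(0, S * S, size=(n_candidates, 2))
    # Response of every candidate test on every patch: 1 if pixel a < pixel b
    resp = (flat[:, cands[:, 0]] < flat[:, cands[:, 1]]).astype(float)
    # Rank candidates by |mean - 0.5|: most discriminative first
    order = np.argsort(np.abs(resp.mean(axis=0) - 0.5))
    kept = []
    for idx in order:
        r = resp[:, idx]
        if r.std() == 0:
            continue  # degenerate test (constant response)
        # Keep only if weakly correlated with the tests chosen so far
        if all(abs(np.corrcoef(r, resp[:, j])[0, 1]) < 0.5 for j in kept):
            kept.append(idx)
        if len(kept) == n_keep:
            break
    return cands[kept]
```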