@chicm
chicm / loader.py
Created November 15, 2019 06:18
pseudo label
import os
import collections
import time
import cv2
import numpy as np
import tqdm
from PIL import Image
from functools import partial

train_on_gpu = True
@chicm
chicm / ensemble.py
Last active September 25, 2019 12:35
detection ensemble
"""
Ensembling methods for object detection.
"""
"""
General Ensemble - find overlapping boxes of the same class and average their positions
while adding their confidences. Can weigh different detectors with different weights.
No real learning here, although the weights and iou_thresh can be optimized.
Input:
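A minimal sketch of the box-averaging idea the docstring describes, assuming detections are `(box, class, confidence)` tuples with `box = (x1, y1, x2, y2)`. The function names and the greedy grouping strategy here are illustrative, not the gist's actual API:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def merge_boxes(detections, iou_thresh=0.5):
    """Greedily group same-class boxes overlapping above iou_thresh,
    average their coordinates weighted by confidence, sum confidences."""
    merged, used = [], [False] * len(detections)
    for i, (box_i, cls_i, conf_i) in enumerate(detections):
        if used[i]:
            continue
        group = [(box_i, conf_i)]
        used[i] = True
        for j in range(i + 1, len(detections)):
            box_j, cls_j, conf_j = detections[j]
            if not used[j] and cls_j == cls_i and iou(box_i, box_j) > iou_thresh:
                group.append((box_j, conf_j))
                used[j] = True
        total = sum(c for _, c in group)
        avg = tuple(sum(b[k] * c for b, c in group) / total for k in range(4))
        merged.append((avg, cls_i, total))
    return merged
```

As the docstring notes, there is no learning here; `iou_thresh` and per-detector weights are the only knobs to tune.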
@chicm
chicm / split.py
Last active October 4, 2017 06:15
split chinese characters
with open('h5c/h5ctrain.zh', 'r') as f:
    text = f.readlines()
text = [line.strip() for line in text]
print(text[:10])

split_text = []
for line in text:
    newline = ''
    for c in line:
        newline += c + ' '  # insert a space after each character
    split_text.append(newline.strip())
@chicm
chicm / process_resource.py
Last active October 11, 2017 07:31
process resource file
import glob, os

def load_properties(props, filepath, sep='=', comment_char='#'):
    """
    Read the file passed as parameter as a properties file.
    """
    if not os.path.exists(filepath):
        return
    with open(filepath, "rt") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith(comment_char):
                key, value = line.split(sep, 1)
                props[key.strip()] = value.strip()
@chicm
chicm / gist:b8ec4762da7cd5daadc50c8e69361c1f
Created October 2, 2017 19:07 — forked from karpathy/gist:587454dc0146a6ae21fc
An efficient, batched LSTM.
"""
This is a batched LSTM forward and backward pass
"""
import numpy as np
import code
class LSTM:
@staticmethod
def init(input_size, hidden_size, fancy_forget_bias_init = 3):
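The gist implements a full batched forward and backward pass. As a rough sketch of what one batched forward timestep computes, assuming the single-matrix layout above (row 0 holds biases, then the input and hidden weights; names here are illustrative, not the gist's internals):

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, WLSTM):
    """One batched LSTM timestep.
    x: (B, D) inputs, h_prev/c_prev: (B, H) previous states,
    WLSTM: (1 + D + H, 4H) with row 0 holding the biases."""
    B, H = h_prev.shape
    # prepend a constant 1 so the bias row is picked up by one matmul
    inp = np.hstack([np.ones((B, 1)), x, h_prev])
    gates = inp @ WLSTM                        # (B, 4H): all gates at once
    i = 1 / (1 + np.exp(-gates[:, :H]))        # input gate
    f = 1 / (1 + np.exp(-gates[:, H:2*H]))     # forget gate
    o = 1 / (1 + np.exp(-gates[:, 2*H:3*H]))   # output gate
    g = np.tanh(gates[:, 3*H:])                # candidate cell state
    c = f * c_prev + i * g                     # new cell state
    h = o * np.tanh(c)                         # new hidden state
    return h, c
```

Batching the bias, input, and recurrent weights into one matrix is what makes the whole timestep a single matrix multiply.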
@chicm
chicm / plank_notes
Created June 23, 2017 18:16 — forked from erogol/plank_notes
Kaggle Plankton Classification winner's approach notes
FROM: http://benanne.github.io/2015/03/17/plankton.html
Meta-Tricks:
- Use 10% for validation with STRATIFIED SAMPLING (my mistake)
- Cyclic Pooling
- Leaky ReLU = max(x, a*x) with learned a
  - reduces overfitting with a ~= 1/3
- Orthogonal initialization http://arxiv.org/pdf/1312.6120v3.pdf
- Use larger weight decay for larger models since otherwise some layers might diverge
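The "learned a" activation in the notes (a PReLU-style leaky ReLU) is just `max(x, a*x)`; a one-line numpy sketch, with `a` defaulting to the ~1/3 the notes mention:

```python
import numpy as np

def leaky_relu(x, a=1 / 3.0):
    """max(x, a*x): identity for x >= 0, slope a for x < 0.
    In the learned variant, a is a trainable parameter rather than a constant."""
    return np.maximum(x, a * x)
```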
'''This script goes along the blog post
"Building powerful image classification models using very little data"
from blog.keras.io.
It uses data that can be downloaded at:
https://www.kaggle.com/c/dogs-vs-cats/data
In our setup, we:
- created a data/ folder
- created train/ and validation/ subfolders inside data/
- created cats/ and dogs/ subfolders inside train/ and validation/
- put the cat pictures index 0-999 in data/train/cats
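The folder layout described in the docstring can be set up with a short script. The paths and the index cutoff mirror the description above; the function name and the `n_train` parameter are illustrative, not code from the gist:

```python
import shutil
from pathlib import Path

def build_layout(source_dir, data_dir, n_train=1000):
    """Create data/{train,validation}/{cats,dogs} and copy images by index:
    cat.0.jpg .. cat.(n_train-1).jpg go to train, the rest to validation."""
    source, data = Path(source_dir), Path(data_dir)
    for split in ("train", "validation"):
        for cls in ("cats", "dogs"):
            (data / split / cls).mkdir(parents=True, exist_ok=True)
    for img in source.glob("*.jpg"):
        cls, idx, _ = img.name.split(".")  # Kaggle names look like "cat.123.jpg"
        split = "train" if int(idx) < n_train else "validation"
        shutil.copy(img, data / split / (cls + "s") / img.name)
```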
@chicm
chicm / readme.md
Created March 30, 2017 07:23 — forked from baraldilorenzo/readme.md
VGG-16 pre-trained model for Keras

## VGG16 model for Keras

This is the Keras model of the 16-layer network used by the VGG team in the ILSVRC-2014 competition.

It has been obtained by directly converting the Caffe model provided by the authors.

Details about the network architecture can be found in the following arXiv paper:

Very Deep Convolutional Networks for Large-Scale Image Recognition

K. Simonyan, A. Zisserman
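The 16-layer network is configuration D from the paper; its published conv widths and FC sizes can be used to sanity-check the commonly quoted ~138M parameter count. The list and function below are a sketch for that check, not part of the Keras model file:

```python
# VGG-16 (configuration D): conv output channels per block, all 3x3 kernels.
CONV_BLOCKS = [[64, 64], [128, 128], [256, 256, 256],
               [512, 512, 512], [512, 512, 512]]

def vgg16_param_count(in_channels=3, fc_input=512 * 7 * 7):
    """Count weights + biases for the 3x3 convs, then FC-4096, FC-4096, FC-1000."""
    total, c_in = 0, in_channels
    for block in CONV_BLOCKS:
        for c_out in block:
            total += 3 * 3 * c_in * c_out + c_out  # kernel weights + biases
            c_in = c_out
    for fc_in, fc_out in [(fc_input, 4096), (4096, 4096), (4096, 1000)]:
        total += fc_in * fc_out + fc_out
    return total
```

This yields 138,357,544 parameters, matching the figure usually cited for VGG-16.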

#include <bits/stdc++.h>
#define abs(x) ((x) < 0 ? -(x) : (x))
#define uint unsigned int
using namespace std;

// Euclidean algorithm: repeatedly replace the larger value with the remainder.
int gcd(int x, int y) {
    if (x > y) swap(x, y);
    while (x > 0) {
        int z = x;
        x = y % x;
        y = z;
    }
    return y;
}