
@semi-supervised-paper
semi-supervised-paper / hsvrgb-cpp
Created August 12, 2020 12:52 — forked from fairlight1337/hsvrgb-cpp
Simple RGB/HSV conversion in C++
// Copyright (c) 2014, Jan Winkler <winkler@cs.uni-bremen.de>
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright
// notice, this list of conditions and the following disclaimer in the
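The preview shows only the BSD license header. For reference, a minimal Python sketch of the same HSV-to-RGB math (hue in degrees, s and v in [0, 1]); this is an illustration of the conversion, not the gist's C++ code:

def hsv_to_rgb(h, s, v):
    # h in [0, 360), s and v in [0, 1]; returns r, g, b in [0, 1]
    c = v * s                                  # chroma
    hp = (h % 360.0) / 60.0                    # 60-degree sector of the hue
    x = c * (1.0 - abs(hp % 2.0 - 1.0))
    r, g, b = [(c, x, 0), (x, c, 0), (0, c, x),
               (0, x, c), (x, 0, c), (c, 0, x)][int(hp) % 6]
    m = v - c                                  # lift to match the value
    return r + m, g + m, b + m

print(hsv_to_rgb(0.0, 1.0, 1.0))   # pure red -> (1.0, 0.0, 0.0)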
@semi-supervised-paper
semi-supervised-paper / KMeans
Last active April 4, 2020 04:37
Python Implementation of LR and KMeans
import numpy as np
from matplotlib import pyplot as plt
# three Gaussian blobs centred on (1, 1), (5, 5) and (8, 1)
center_1 = np.array([1, 1])
center_2 = np.array([5, 5])
center_3 = np.array([8, 1])
data_1 = np.random.randn(200, 2) + center_1
data_2 = np.random.randn(200, 2) + center_2
data_3 = np.random.randn(200, 2) + center_3
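The preview stops at the synthetic data. A minimal sketch of the k-means step that would typically follow (Lloyd's algorithm with k = 3 on the three blobs above; the variable names are mine, not necessarily the gist's):

data = np.concatenate((data_1, data_2, data_3), axis=0)
k = 3
centers = data[np.random.choice(len(data), k, replace=False)]   # random init from the data
for _ in range(100):
    # assign each point to its nearest center
    dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # move each center to the mean of its cluster
    new_centers = np.array([data[labels == j].mean(axis=0) for j in range(k)])
    if np.allclose(new_centers, centers):
        break
    centers = new_centers

plt.scatter(data[:, 0], data[:, 1], c=labels, s=7)
plt.scatter(centers[:, 0], centers[:, 1], c='red', marker='x')
plt.show()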
@semi-supervised-paper
semi-supervised-paper / install faiss without conda
Created January 17, 2020 07:33
install faiss without conda
import site
import os

# confirm the CUDA toolkit version matches the prebuilt faiss package
os.system('/usr/local/cuda/bin/nvcc --version')
print('site.getsitepackages() :', site.getsitepackages())

# copy the prebuilt faiss files into site-packages, then install the runtime dependencies
os.system('cd faiss-gpu-1.4.0-py36_cuda9.0.176_1/ && cp -r lib/python3.6/site-packages/* {} && pip install mkl && pip install --upgrade scikit-learn'.format(site.getsitepackages()[0]))

# sanity check: both mkl and faiss should now import
import mkl
mkl.get_max_threads()
import faiss
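A quick sanity check once the import succeeds, using the standard faiss API (the array sizes here are arbitrary):

import numpy as np

d = 64
xb = np.random.random((1000, d)).astype('float32')   # database vectors
xq = np.random.random((5, d)).astype('float32')      # query vectors

index = faiss.IndexFlatL2(d)               # exact L2 index
index.add(xb)
distances, indices = index.search(xq, 4)   # 4 nearest neighbours per query
print(indices)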
@semi-supervised-paper
semi-supervised-paper / top-k-top-p.py
Created October 15, 2019 06:08 — forked from thomwolf/top-k-top-p.py
Sample the next token from a probability distribution using top-k and/or nucleus (top-p) sampling
def top_k_top_p_filtering(logits, top_k=0, top_p=0.0, filter_value=-float('Inf')):
    """ Filter a distribution of logits using top-k and/or nucleus (top-p) filtering
        Args:
            logits: logits distribution shape (vocabulary size)
            top_k > 0: keep only top k tokens with highest probability (top-k filtering).
            top_p > 0.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering).
                Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
    """
    assert logits.dim() == 1  # batch size 1 for now - could be updated for more but the code would be less clear
    top_k = min(top_k, logits.size(-1))  # Safety check
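The preview is cut off after the safety check; in the full gist the positions outside the top-k / top-p set are assigned filter_value and the logits are returned. A rough usage sketch (PyTorch assumed, names of my own choosing):

import torch
import torch.nn.functional as F

logits = torch.randn(50257)                        # e.g. a GPT-2-sized vocabulary
filtered = top_k_top_p_filtering(logits.clone(), top_k=40, top_p=0.9)
probs = F.softmax(filtered, dim=-1)                # filtered positions get probability 0
next_token = torch.multinomial(probs, num_samples=1)
print(next_token.item())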
#!/bin/bash
# download and unzip dataset
#wget http://cs231n.stanford.edu/tiny-imagenet-200.zip
unzip tiny-imagenet-200.zip
current="$(pwd)/tiny-imagenet-200"
# training data
cd "$current/train"
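The preview ends at the training split, which already ships as one folder per class. The validation split usually needs the same layout; a hedged Python sketch of that step, assuming the standard tiny-imagenet-200 structure with val/val_annotations.txt:

import os, shutil

val_dir = 'tiny-imagenet-200/val'
with open(os.path.join(val_dir, 'val_annotations.txt')) as f:
    for line in f:
        filename, wnid = line.split('\t')[:2]          # image file and its class id
        class_dir = os.path.join(val_dir, wnid, 'images')
        os.makedirs(class_dir, exist_ok=True)
        shutil.move(os.path.join(val_dir, 'images', filename),
                    os.path.join(class_dir, filename))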
@semi-supervised-paper
semi-supervised-paper / cifar-100 label to label_name
Created August 18, 2019 07:07
Given a CIFAR-100 label, output the corresponding label name
coarse_label_names
0 aquatic_mammals
1 fish
2 flowers
3 food_containers
4 fruit_and_vegetables
5 household_electrical_devices
6 household_furniture
7 insects
8 large_carnivores
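The same names can be read programmatically from the meta file of the CIFAR-100 python archive (the path below assumes the extracted cifar-100-python directory):

import pickle

with open('cifar-100-python/meta', 'rb') as f:
    meta = pickle.load(f, encoding='latin1')

coarse_names = meta['coarse_label_names']   # 20 superclass names
fine_names = meta['fine_label_names']       # 100 class names

print(coarse_names[0])   # aquatic_mammals
print(fine_names[0])     # apple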
@semi-supervised-paper
semi-supervised-paper / t_SNE_PCA.py
Created August 15, 2019 05:08
Draw a PCA or t-SNE plot given a numpy embedding and the corresponding labels
from tensorboardX import SummaryWriter
import torch
import os

# np_embeddings: (N, D) numpy array of features; np_labels: (N,) array of their labels
with SummaryWriter(log_dir='./test', comment='test') as writer:
    writer.add_embedding(
        torch.autograd.Variable(torch.FloatTensor(np_embeddings)),
        metadata=np_labels.tolist(),
        global_step=0)

# parent directory of `path`
os.path.abspath(os.path.join(path, ".."))
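If TensorBoard is not available, roughly the same picture can be drawn directly with scikit-learn and matplotlib (np_embeddings and np_labels as above; a sketch, not the gist's own code):

import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# pick one of the two projections
points = PCA(n_components=2).fit_transform(np_embeddings)
# points = TSNE(n_components=2).fit_transform(np_embeddings)

plt.scatter(points[:, 0], points[:, 1], c=np_labels, s=5, cmap='tab10')
plt.colorbar()
plt.savefig('embedding_2d.png')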
@semi-supervised-paper
semi-supervised-paper / bash_float.sh
Created July 26, 2019 08:57
Floating-point calculations in a shell script
#!/bin/bash
# the shell only does integer arithmetic; pipe each expression to bc for floats
func(){
    echo "$1 $2"
}

for i in 0.01 0.03 0.05
do
    # each call runs in the background; wait collects them all at the end
    func "$i" "$(bc <<< "$i * 120.0")" &
done
wait
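The calls run in the background, so the output order is not guaranteed; wait keeps the script alive until all of them finish. For comparison, the same loop needs no external tool in Python:

for i in (0.01, 0.03, 0.05):
    print(i, round(i * 120.0, 2))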
import theano.tensor as T
import numpy as np
import theano
import os

x = T.fmatrix('x')   # (batch, num_classes) scores
y = T.ivector('y')   # integer class labels

# fill every entry with 0.1 / num_classes (the uniform part of a smoothed target)
true_dist = T.zeros_like(x)
true_dist = T.fill(true_dist, 0.1 / x.shape[1])
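The fill of 0.1 / x.shape[1] is the uniform part of what looks like a label-smoothed target. A hedged sketch of how such a snippet typically continues (my own continuation, not necessarily the original gist's):

# add the remaining 0.9 mass to the true class of each row, so every row sums to 1
true_dist = T.inc_subtensor(true_dist[T.arange(y.shape[0]), y], 0.9)

# cross-entropy between softmax(x) and the smoothed targets
loss = T.nnet.categorical_crossentropy(T.nnet.softmax(x), true_dist).mean()
f = theano.function([x, y], loss)

print(f(np.random.randn(4, 10).astype('float32'),
        np.array([0, 1, 2, 3], dtype='int32')))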