
Binesh Bannerjee (bnsh)

@bnsh
bnsh / compute_masked_mean.p
Created March 13, 2021 13:06
I'm trying to compute the mean value over a batch of masked values. I'd like to do it without the loops and dictionaries I've used so far.
#! /usr/bin/env python3
# vim: expandtab shiftwidth=4 tabstop=4
"""
My application outputs a 2-dimensional quantity, but is batched. I generate masks for the outputs that I want it to predict in the absence of having that data.
The masking works: I can do input[masks] = 0, I can read output[masks] to get the values of only the masked outputs, and I can do
loss = crit(output[masks], target[masks]) to get the loss for only the values that are in the mask.
My issue is that I want to sum over the masks. So, in the demo below, I have a batch size of 4 and
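A minimal sketch of one loop-free way to do this reduction (hypothetical shapes; x and masks here stand in for the gist's batched 2-D output and boolean masks, not its actual variable names):

import torch

batch, rows, cols = 4, 3, 5
x = torch.randn(batch, rows, cols)
masks = torch.rand(batch, rows, cols) > 0.5  # boolean mask, same shape as x

# Zero out the unmasked entries, then reduce each batch element without loops.
masked_sum = (x * masks).sum(dim=(1, 2))
counts = masks.sum(dim=(1, 2)).clamp(min=1)  # guard against an empty mask
masked_mean = masked_sum / counts            # one mean per batch element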
@bnsh
bnsh / Dockerfile
Created February 16, 2021 09:30
This Dockerfile demonstrates an apparent bug in pylint==2.6.1.
# docker build -t binesh/pylint-bug .
# docker run --rm binesh/pylint-bug python3.8 -m pylint -r n /tmp/demo1.py
# docker run --rm binesh/pylint-bug python3.8 -m pylint -r n /tmp/demo2.py
FROM python:3.8
LABEL maintainer="Binesh Bannerjee <binesh_binesh@hotmail.com>"
RUN /usr/local/bin/python3.8 -m pip install -U pylint==2.6.1
RUN echo "def func(blah):" > /tmp/demo1.py && \
@bnsh
bnsh / double-install.docker
Created February 16, 2021 04:09
Vault doesn't work within Docker on the first install, but somehow works on the *second* install.
FROM ubuntu:20.04
LABEL maintainer="Binesh Bannerjee <binesh_binesh@hotmail.com>"
RUN \
apt-get update -y \
&& apt-get install -y curl gnupg software-properties-common \
&& curl -fsSL https://apt.releases.hashicorp.com/gpg | apt-key add - \
&& apt-add-repository -y "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main" \
&& apt-get update -y && apt-get install -y vault
@bnsh
bnsh / Dockerfile
Created January 30, 2021 20:05
Training a model with sparse parameters.
FROM pytorch/pytorch:latest
LABEL maintainer="Binesh Bannerjee <binesh_binesh@hotmail.com>"
COPY test.py /tmp
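test.py itself is not shown in the preview; a minimal sketch of what training with sparse parameters typically looks like in PyTorch (hypothetical model and sizes, not the gist's code):

import torch
import torch.nn as nn
import torch.optim as optim

# nn.Embedding(sparse=True) yields sparse gradients for its weight matrix.
emb = nn.Embedding(num_embeddings=1000, embedding_dim=16, sparse=True)
head = nn.Linear(16, 1)

# SparseAdam consumes the sparse gradients; a dense optimizer covers the rest.
opt_sparse = optim.SparseAdam(list(emb.parameters()), lr=1e-3)
opt_dense = optim.Adam(head.parameters(), lr=1e-3)

ids = torch.randint(0, 1000, (32,))
target = torch.randn(32, 1)
loss = nn.functional.mse_loss(head(emb(ids)), target)
opt_sparse.zero_grad(); opt_dense.zero_grad()
loss.backward()
opt_sparse.step(); opt_dense.step()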
@bnsh
bnsh / Dockerfile
Created December 17, 2020 03:30
This demo shows pylint ultimately failing with RecursionError: maximum recursion depth exceeded.
FROM python:3.8
LABEL maintainer="Binesh Bannerjee <binesh_binesh@hotmail.com>"
RUN /usr/local/bin/python3.8 -m pip install -U pylint pandas
RUN echo "import pandas as pd" > /tmp/demo.py && \
echo "pd.merge_asof()" >> /tmp/demo.py
RUN /usr/local/bin/python3.8 -m pip freeze
#! /usr/bin/env python3
# vim: expandtab shiftwidth=4 tabstop=4
"""This program uses an xor network to test MyDataParallel"""
import argparse
from collections import OrderedDict
import torch
import torch.nn as nn
import torch.optim as optim
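The preview cuts off at the imports; a minimal sketch of the kind of xor network such a test might exercise (hypothetical layer sizes — MyDataParallel itself is not shown):

import torch
import torch.nn as nn

# A tiny two-layer network that can represent XOR (hypothetical sizes).
xor_net = nn.Sequential(
    nn.Linear(2, 8),
    nn.Tanh(),
    nn.Linear(8, 1),
    nn.Sigmoid(),
)

inputs = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
targets = torch.tensor([[0.], [1.], [1.], [0.]])
print(xor_net(inputs).shape)  # torch.Size([4, 1])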
@bnsh
bnsh / logmidpoints.js
Created March 30, 2020 03:59
This function (logmidpoints) generates a list of midpoints on a log scale.
/* vim: expandtab shiftwidth=4 tabstop=4
*/
/*
 * Given a min value, a max value, and a number of steps, return an array of that
 * many points from min value to max value, spaced evenly in log space. min value
 * _must_ be > 0; likely you want to use 1 as min value (the log of 0 is
 * -Infinity, so you can't use 0).
 */
function logmidpoints(min, max, steps) {
    var points = [];
    var logmin = Math.log(min);
    var logmax = Math.log(max);
    for (var i = 0; i < steps; i++) {
        // Step linearly through log space, then map back with exp.
        points.push(Math.exp(logmin + (logmax - logmin) * i / (steps - 1)));
    }
    return points;
}
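For comparison, NumPy ships the same idea as numpy.geomspace; a quick check (not part of the gist):

import numpy as np

# geomspace returns `num` points spaced evenly on a log scale, endpoints included.
print(np.geomspace(1, 1000, 4))  # [   1.   10.  100. 1000.]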
@bnsh
bnsh / lltm.cpp
Last active January 6, 2019 08:10
Problems compiling the LLTM demo module from PyTorch.
#include <torch/torch.h>
#include <iostream>
#include <vector>
at::Tensor d_sigmoid(at::Tensor z) {
auto s = at::sigmoid(z);
return (1-s) * s;
}
std::vector<at::Tensor> lltm_forward(
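As a quick sanity check of the d_sigmoid formula above, the same identity can be verified against PyTorch's autograd from Python (a sketch, independent of the C++ build problem):

import torch

# d/dz sigmoid(z) = s * (1 - s), the identity used by d_sigmoid above.
z = torch.randn(5, requires_grad=True)
s = torch.sigmoid(z)
(autograd_grad,) = torch.autograd.grad(s.sum(), z)
print(torch.allclose((1 - s) * s, autograd_grad))  # True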
@bnsh
bnsh / PseudoLightFMPerfect.py
Last active December 4, 2018 18:39
This program contains two things: 1. a precision that _will_ output 1.0 if all the ranks are indeed perfect, regardless of the number of items the user had rated, and 2. a pseudo "classifier" that always outputs exactly the right predictions, effectively by cheating. (Just to show what the best one could do with ranking, and how it might c…
#! /usr/bin/env python3
"""This is the MovieLens example from lightfm"""
import numpy as np
from scipy.sparse import csc_matrix
from lightfm.datasets import fetch_movielens
from lightfm import LightFM
from lightfm.evaluation import precision_at_k
#! /usr/bin/env python3
"""Is this a bug in lightfm's precision_at_k function?
If there are #entries < k for a particular user, it seems as if that user can _never_ have a precision
> (#positive_interactions / k) for that user. So, the maximum precision is _not_ in fact 1.0, it's whatever the average
of _all_ the (#positive_interactions / k) values are throughout the entire training set.
I have a proposed solution at the end. I wonder if it works.
"""