Vihari Piratla (vihari)

vihari / kaggle-ai-science.md
Last active February 17, 2024 12:25
Decoding the prize-winning solutions of the Kaggle AI Science Challenge

If you are an AI nerd, you have probably heard about AI2's (Allen Institute for Artificial Intelligence) Aristo project to train a computer to pass the standardized tests faced by an eighth-grade student.
To simplify the problem, it is narrowed down to just one subject (science, what else?), and the test involves selecting one of four possible choices for every question.

A Kaggle challenge with a first prize of $50,000 was launched for this task and ended in February 2016. In my view, the problem is not of the incremental, easy-to-tweak kind; it is a real leap forward (or is it? That depends on the adopted solutions).
Some interesting things about the challenge:

  • The challenge is somewhat similar to the Jeopardy! game (only easier) and looks like something IBM Watson could easily take a crack at.
vihari / tf_print.py
Last active April 10, 2019 09:06
TensorFlow's tf.Print to stdout instead of the default stderr
"""
The default tf.Print op goes to STDERR
Use the function below to direct the output to stdout instead
Usage:
> x=tf.ones([1, 2])
> y=tf.zeros([1, 3])
> p = x*x
> p = tf_print(p, [x, y], "hello")
> p.eval()
hello [[ 1.  1.]]
hello [[ 0.  0.  0.]]
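The listing cuts off before the function itself. A minimal sketch of how such a tf_print can be written (assuming TF 1.x graph mode; this is an illustration rather than necessarily the gist's exact body) is to route each tensor through tf.py_func, so Python's stdout is used, and attach the prints as control dependencies:

import sys
import tensorflow as tf

def tf_print(op, tensors, message=""):
    # Print each tensor's value via Python (stdout) instead of tf.Print's stderr logging.
    def _print(x):
        sys.stdout.write("%s %s\n" % (message, x))
        return x

    prints = [tf.py_func(_print, [t], t.dtype) for t in tensors]
    # Run the prints whenever `op` is evaluated.
    with tf.control_dependencies(prints):
        op = tf.identity(op)
    return op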
vihari / common-crawl-search.pl
Created July 4, 2017 12:57
A Perl script to grep the CommonCrawl dataset on Amazon's S3 storage. Configure your AWS account (http://tech.marksblogg.com/petabytes-of-website-data-spark-emr.html) before using the script.
#!/usr/bin/perl -w
# set the query
$query = "www.google.com\\\/maps\\\/embed";
# path to CommonCrawl dataset
$S3_URL = "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/";
# list the segment directories under the crawl prefix
$all = `aws s3 ls $S3_URL|perl -ane 'print "\$F[1]\n"'`;
print "Launching search for: $query...\n";
@segs = split(/[\n\s]+/, $all);
$nf=0;
for ($i=0;$i<=$#segs;$i++){
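The preview stops inside the segment loop. Purely as an illustration of what each iteration has to do, here is a Python sketch with made-up names that assumes the script streams each segment's gzipped WAT files through zgrep; it is not the remainder of the Perl script:

import subprocess

S3_URL = "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/"
QUERY = "www.google.com\\/maps\\/embed"

# List the segment directories (last whitespace-separated field of `aws s3 ls` output).
listing = subprocess.check_output(["aws", "s3", "ls", S3_URL]).decode()
segments = [line.split()[-1] for line in listing.splitlines() if line.strip()]

for seg in segments:
    wat_prefix = S3_URL + seg + "wat/"
    files = subprocess.check_output(["aws", "s3", "ls", wat_prefix]).decode()
    for line in files.splitlines():
        name = line.split()[-1]
        # Stream the gzipped file and count matching lines; zgrep exits 1 on no match,
        # so the return code is deliberately not checked.
        cmd = "aws s3 cp %s%s - | zgrep -c '%s'" % (wat_prefix, name, QUERY)
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout.strip()
        if out and out != "0":
            print("%s%s: %s matches" % (wat_prefix, name, out))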
#!/usr/bin/python
"""
I have written this script to check whether the optimal policy obtained by linear programming (LP) is the same as the one obtained by a gradient-based method.
The gradient-based approach is implemented with TensorFlow and the LP with SciPy.
I found that n of the n*k (greater-than-or-equal) constraints become equalities in the solution returned by linprog, which is not the case for the gradient-based approach. This shows that the constraints alone do not sufficiently specify the solution; LP still works because of the way it arrives at its solution.
TF - TensorFlow's gradient descent
Run: python <script> [LP|TF]
"""
import tensorflow as tf
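The preview ends right after the TensorFlow import. For reference, the LP side of such a comparison can be set up with scipy.optimize.linprog roughly as below; the random MDP and all names here are mine, so read it as a sketch of the standard value-function LP rather than the gist's code:

import numpy as np
from scipy.optimize import linprog

n, k, gamma = 5, 3, 0.9                       # states, actions, discount (illustrative)
rng = np.random.RandomState(0)
P = rng.dirichlet(np.ones(n), size=(n, k))    # P[s, a] is a distribution over next states
R = rng.rand(n, k)                            # immediate rewards

# Optimal value function LP: minimise sum_s V(s) subject to
#   V(s) >= R(s, a) + gamma * sum_s' P(s'|s, a) V(s')   for every (s, a).
# linprog wants A_ub @ V <= b_ub, so each constraint is rewritten as
#   (gamma * P[s, a] - e_s) @ V <= -R[s, a].
A_ub, b_ub = [], []
for s in range(n):
    for a in range(k):
        row = gamma * P[s, a]
        row[s] -= 1.0
        A_ub.append(row)
        b_ub.append(-R[s, a])
A_ub, b_ub = np.array(A_ub), np.array(b_ub)

res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n)
V = res.x
# Typically exactly n of these n*k inequalities are tight at the optimum
# (one per state, for the maximising action); this is the behaviour the
# docstring above reports for the linprog solution.
tight = np.isclose(A_ub @ V, b_ub, atol=1e-6).sum()
print("V* =", V, "| tight constraints:", tight)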
vihari / preceptron.py
Created September 23, 2017 06:16
Perceptron Convergence Analysis
#!/usr/bin/python
"""
Is the convergence rate of the perceptron update dependent on the input dimensionality?
"""
import numpy as np
N = 100   # number of points
lr = 1    # learning rate
for sz in [5, 10, 100, 500, 1000, 5000, 10000]:
    dat = np.random.normal(scale=10, size=[N, sz])
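The preview cuts off right after the data is sampled. One way the experiment could be completed, under the assumption (mine, not necessarily the gist's) that the labels come from a random linear separator so the data is separable by construction:

import numpy as np

N, lr = 100, 1
for sz in [5, 10, 100, 500, 1000, 5000, 10000]:
    rng = np.random.RandomState(0)
    dat = rng.normal(scale=10, size=[N, sz])
    w_true = rng.normal(size=[sz])
    labels = np.sign(dat @ w_true)            # separable by construction

    w, updates = np.zeros(sz), 0
    for epoch in range(1000):                 # cap the number of passes
        mistakes = 0
        for x, y in zip(dat, labels):
            if y * (w @ x) <= 0:              # misclassified: perceptron update
                w += lr * y * x
                mistakes += 1
                updates += 1
        if mistakes == 0:
            break
    print("dim=%d converged after %d updates" % (sz, updates))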
vihari / export_tfrecord.py
Created May 10, 2018 08:35
Script to export to TFRecords
"""
Exports data as TFRecords into save_dir.
train_data, validation_data and test_data are lists of tuples: (image_data, label, domain id, file_path (if available))
"""
def export_tfrecord(save_dir, train_data, validation_data, test_data):
    import math
    import itertools
    import random
    random.shuffle(train_data)

def rankL(np_rank):
    # Harmonic number of the rank held in the last element of np_rank: sum_{k=1..r} 1/k
    r = int(np_rank[-1])
    _l = 0
    for k in range(1, r+1):
        _l += 1./k
    return np.float32(_l)

"""
labels are assumed to be one-hot encoded
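Since the preview stops before any records are written, here is a hedged sketch of how one split could be serialised with the TF 1.x tf.python_io.TFRecordWriter API; the feature names and the write_split helper are illustrative rather than the gist's own:

import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def write_split(path, data):
    # `data` is a list of (image_data, label, domain id, file_path) tuples as in the
    # docstring above; image_data is assumed to already be encoded bytes, label is
    # taken to be an integer class index in this sketch, and file_path is ignored.
    with tf.python_io.TFRecordWriter(path) as writer:
        for image_data, label, domain_id, _ in data:
            example = tf.train.Example(features=tf.train.Features(feature={
                "image": _bytes_feature(image_data),
                "label": _int64_feature(int(label)),
                "domain": _int64_feature(int(domain_id)),
            }))
            writer.write(example.SerializeToString())

# e.g. write_split(save_dir + "/train.tfrecord", train_data)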
vihari / pytorch_csd.py
Last active February 23, 2022 13:59
PyTorch version of CSD
def csd(self, embeds, labels, domains, num_classes, num_domains, K=1, is_training=False, scope=""):
    """CSD layer to be used as a replacement for your final classification layer
    Args:
        embeds (tensor): final layer representations of dim 2
        labels (tensor): tensor with label index of dim 1
        domains (tensor): tensor with domain index of dim 1 -- set to all zeros when testing
        num_classes (int): Number of label classes: scalar
        num_domains (int): Number of domains: scalar
        K (int): Number of domain-specific components to use; should be >=1 and <=num_domains-1
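The preview ends inside the docstring. As a rough illustration of the idea behind a CSD-style head (a weight component shared across domains plus K domain-specific components mixed by per-domain coefficients, trained with losses on both the common-only and the full classifier and an orthogonality penalty), here is a simplified PyTorch sketch; it reflects my own reading of CSD and is not the gist's code:

import torch
import torch.nn.functional as F

class CSDHead(torch.nn.Module):
    # Simplified CSD-style classification head (illustrative, not the original).
    def __init__(self, dim, num_classes, num_domains, K=1):
        super().__init__()
        # K+1 weight components: index 0 is the common one, the rest are domain-specific.
        self.sms = torch.nn.Parameter(0.01 * torch.randn(K + 1, dim, num_classes))
        self.sm_biases = torch.nn.Parameter(torch.zeros(K + 1, num_classes))
        # Per-domain mixing coefficients for the K specific components.
        self.embs = torch.nn.Parameter(0.01 * torch.randn(num_domains, K))
        self.cs_wt = 0.5      # relative weight of the specialised loss (a guess)

    def forward(self, embeds, labels, domains):
        # embeds: [batch, dim]; labels and domains: LongTensors of shape [batch].
        common_wt = torch.ones(embeds.shape[0], 1, device=embeds.device)
        c_wts = torch.cat([common_wt, self.embs[domains]], dim=1)      # [batch, K+1]

        w = torch.einsum("bk,kdc->bdc", c_wts, self.sms)               # per-example weights
        b = torch.einsum("bk,kc->bc", c_wts, self.sm_biases)
        logits_specialised = torch.einsum("bd,bdc->bc", embeds, w) + b

        # The common-only classifier is the one used at test time.
        logits_common = embeds @ self.sms[0] + self.sm_biases[0]

        loss = F.cross_entropy(logits_common, labels) \
             + self.cs_wt * F.cross_entropy(logits_specialised, labels)

        # Push the K+1 components towards mutual orthogonality.
        W = self.sms.reshape(self.sms.shape[0], -1)
        gram = W @ W.t()
        loss = loss + 1e-2 * ((gram - torch.diag(torch.diagonal(gram))) ** 2).sum()
        return logits_common, loss

# Usage sketch: head = CSDHead(dim=128, num_classes=10, num_domains=4, K=1)
#               logits, loss = head(embeds, labels, domains)   # feed domains=0 at test time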
vihari / tf_csd.py
Created June 9, 2020 16:30
TF version of CSD
def csd(embeds, label_placeholder, domain_placeholder, num_classes, num_domains, K=1, is_training=False, scope=""):
    """CSD layer to be used as a replacement for your final classification layer
    Args:
        embeds (tensor): final layer representations of dim 2
        label_placeholder (tensor): tf tensor with label index of dim 1
        domain_placeholder (tensor): tf tensor with domain index of dim 1 -- set to all zeros when testing
        num_classes (int): Number of label classes: scalar
        num_domains (int): Number of domains: scalar
        K (int): Number of domain-specific components to use; should be >=1 and <=num_domains-1
vihari / ai_alignment.md
Last active February 19, 2024 12:38
Short Summary of AI Alignment

The following is a short summary of AI alignment that you may find handy.

Imagine a maid robot with which we are interacting.

  • Outer alignment problem, aka Reward hacking, task misspecification, specification gaming.

    You ask for a coffee. It understands the assignment, but grabs the coffee from your father and hands it to you. You got the coffee, but not in the way you wanted.
    Problem: Your values and preferences are not encoded.
    Challenging part: How do we specify innumerably many preferences and ensure they are adhered to?
    Methods: Tune it to be honest, harmless and helpful: RLHF. Feedback at scale for super-intelligence: scalable oversight, weak-to-strong generalisation, super-alignment. Explain the process instead of simply specifying the outcome: process-based feedback.