One Paragraph of project description goes here
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.
| """Information Retrieval metrics | |
| Useful Resources: | |
| http://www.cs.utexas.edu/~mooney/ir-course/slides/Evaluation.ppt | |
| http://www.nii.ac.jp/TechReports/05-014E.pdf | |
| http://www.stanford.edu/class/cs276/handouts/EvaluationNew-handout-6-per.pdf | |
| http://hal.archives-ouvertes.fr/docs/00/72/67/60/PDF/07-busa-fekete.pdf | |
| Learning to Rank for Information Retrieval (Tie-Yan Liu) | |
| """ | |
| import numpy as np |
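# A minimal sketch of one of these metrics (precision at k), assuming binary
# relevance judgments supplied in ranked order; the function name and signature
# are illustrative, not necessarily this module's actual API:
def precision_at_k(relevance, k):
    """Fraction of the top-k ranked results that are relevant."""
    r = np.asarray(relevance)[:k]
    return float(np.mean(r != 0)) if r.size else 0.0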
#!/usr/bin/env python
# see http://matpalm.com/blog/2015/03/28/theano_word_embeddings/
import theano
import theano.tensor as T
import numpy as np
import random

# a small embedding matrix: 6 vocabulary entries, 2 dimensions each
E = np.asarray(np.random.randn(6, 2), dtype='float32')
t_E = theano.shared(E)
t_idxs = T.ivector()
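# A plausible continuation (assumed, following the linked post): index rows of
# the shared embedding matrix by the symbolic indices and compile the lookup.
t_embedding_output = t_E[t_idxs]
embed = theano.function([t_idxs], t_embedding_output)
print(embed(np.asarray([0, 3], dtype='int32')))  # prints two 2-d embedding rows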
| """ | |
| Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy) | |
| BSD License | |
| """ | |
| import numpy as np | |
| # data I/O | |
| data = open('input.txt', 'r').read() # should be simple plain text file | |
| chars = list(set(data)) | |
| data_size, vocab_size = len(data), len(chars) |
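print('data has %d characters, %d unique.' % (data_size, vocab_size))
# build lookup tables between characters and integer indices, as in the
# original gist, so characters can be fed to the RNN as one-hot vectors
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}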
#!/bin/bash
# This is required for `notify-send` to work from within a cron.
# http://askubuntu.com/questions/298608/notify-send-doesnt-work-from-crontab/346580#346580
eval "export $(egrep -z DBUS_SESSION_BUS_ADDRESS /proc/$(pgrep -u $LOGNAME gnome-session)/environ)";
# syncAndWink
#
# Syncs all remotely-tracked branches of the git repo passed as the first argument ($1).
# It also pulls any new branches and tags attached to the repo; a sketch of the body follows.
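# A minimal sketch of the body implied by the description above (the original
# script body is not shown here, so this is an assumption):
cd "$1" || exit 1
git fetch --all --prune --tags            # sync all remote-tracking branches and tags
for remote in $(git branch -r | grep -v '\->'); do
    git branch --track "${remote#origin/}" "$remote" 2>/dev/null  # pick up new branches
done
git pull --all                            # fetch all remotes and update the current branch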
'''Trains a multi-output deep NN on the MNIST dataset using crossentropy and
policy gradients (REINFORCE).
The goal of this example is twofold:
* Show how to use policy gradients for training
* Show how to use generators with multi-output models
# Policy gradients
This is a Reinforcement Learning technique [1] that trains the model by
following the gradient of the logarithm of the action taken, scaled by the
advantage (reward - baseline) of that action.
# Generators
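# A minimal numpy sketch of the loss described under "Policy gradients" above
# (an illustration of the idea, not this example's actual Keras code):
import numpy as np

def reinforce_loss(action_probs, actions, rewards, baseline):
    # advantage-weighted negative log-likelihood of the actions taken
    advantage = rewards - baseline
    log_p = np.log(action_probs[np.arange(len(actions)), actions])
    return -np.mean(log_p * advantage)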
#!/bin/bash
### steps ####
# Verify the system has a CUDA-capable GPU
# Download and install the NVIDIA CUDA toolkit and cuDNN
# Set up environment variables
# Verify the installation
###
### To verify your GPU is CUDA-enabled, check:
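# (one common check, assumed here; if this prints an NVIDIA device, the GPU is
# very likely CUDA-capable)
lspci | grep -i nvidia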
Yoav Goldberg, Jan 30, 2021
This new paper from the Google Research Ethics Team (by Sambasivan, Arnesen, Hutchinson, Doshi, and Prabhakaran) touches on a very important topic: research (and presumably also applied) work on algorithmic fairness---and, more broadly, AI ethics---is US-centric[*], reflecting US subgroups, values, and methods. But AI is also applied elsewhere (for example, in India). Do the methods and results developed for/in the US transfer? The answer is, of course, no, and the paper does a good job of showing it. If you are the kind of person who is impressed by the number of citations, this one has 220, a much higher number than another paper (not) from Google Research that became popular recently and boasts many citations. I think this current paper (let's call it "the India Paper") is substantially more important, given that it raises a very serious issue that