
import sys
import random
import numpy as np

# DyNet reads its configuration from sys.argv, so these flags must be
# appended *before* the dynet import below.
sys.argv += ["--dynet-mem", "1000", "--dynet-seed", "10", "--dynet-gpu-ids", "1"]
from dynet import *

# Seed Python's and numpy's RNGs as well, for full reproducibility.
random.seed(10)
np.random.seed(20)
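As a sanity check, the seeding pattern above can be verified with plain `random` and `numpy` alone (a minimal sketch, with DyNet itself omitted): re-seeding with the same values makes subsequent draws identical.

```python
import random
import numpy as np

def draw():
    # Re-seed exactly as in the snippet above, then sample once from each RNG.
    random.seed(10)
    np.random.seed(20)
    return random.random(), np.random.rand(3)

r1, a1 = draw()
r2, a2 = draw()
assert r1 == r2                 # Python RNG reproduces the same float
assert np.allclose(a1, a2)      # numpy RNG reproduces the same vector
```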

Putting papers on arxiv early vs the protections of blind review

The tension between putting papers on arxiv as soon as possible and the double-blind peer review process is ever-present. Some people favor the fast pace of progress facilitated by making papers available before or during the peer review process, while others favor the protection of double-blind reviewing (more precisely, of author-anonymous reviewing; reviewer anonymity is not part of the debate).

As I now serve on an ACL committee tasked with assessing this tension, I've spent a longer-than-usual time thinking about it, and came up with an analysis which I find informative, and which others may also find useful. These are my personal opinions, and are not representative of the committee, though naturally I will share them there as well.

The analysis examines the dynamics of review bias due to author identities being exposed through a pre-print, and its effect on other authors at the same conference. The conclusion, as usual with me,

@yoavg
yoavg / searle.md
Last active January 5, 2025 10:43
On Searle's Chinese Room Argument

On Searle's "Chinese Room" argument

When I first heard of Searle's "Chinese Room" argument, some twenty+ years ago, I had roughly the following dialog:


"Imagine there is a room with instructions, and someone slips a note written in Chinese into this room, and you don't know Chinese, but you follow the instructions in the room and based on the instructions you produce a different note in Chinese and send it back out, and whoever sent you the original note thinks your note is a perfect response."

Oh, so the person outside doesn't know Chinese either?

"No no, they do know Chinese, you produced a perfect answer"

Thoughts and some criticism on "Re-imagining Algorithmic Fairness in India and Beyond".

Yoav Goldberg, Jan 30, 2021

This new paper from Google Research Ethics Team (by Sambasivan, Arnesen, Hutchinson, Doshi, and Prabhakaran) touches on a very important topic: research (and supposedly also applied) work on algorithmic fairness---and more broadly AI-ethics---is US-centric[*], reflecting US subgroups, values, and methods. But AI is also applied elsewhere (for example, India). Do the methods and results developed for/in the US transfer? The answer is, of course, no, and the paper does a good job of showing it. If you are the kind of person who is impressed by the number of citations, this one has 220, a much higher number than another paper (not) from Google Research that became popular recently and which boasts many citations. I think this current paper (let's call it "the India Paper") is substantially more important, given that it raises a very serious issue that

@yoavg
yoavg / stochastic-critique.md
Last active January 5, 2025 10:43
A criticism of Stochastic Parrots

A criticism of "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"

Yoav Goldberg, Jan 23, 2021.

The FAccT paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Bender, Gebru, McMillan-Major and Shmitchell has been the center of a controversy recently. The final version is now out, and, owing in no small part to this controversy, will undoubtedly become very widely read. I read an earlier draft of the paper, and I think that the new and updated final version is much improved in many ways: kudos to the authors for this upgrade. I also agree with and endorse most of the content. This is important stuff, you should read it.

However, I do find some aspects of the paper (and the resulting discourse around it and around technology) to be problematic. These weren't clear to me when initially reading the first draft several months ago, but they became very clear to me now. These points are for the most part

@yoavg
yoavg / ngram-lm-leak.ipynb
Created February 27, 2018 22:51
Simple 4gram-lm also "leak secrets"
# 4gram language models share secrets too...

_Yoav Goldberg, 28 Feb, 2018._

In [a recent research paper](https://arxiv.org/pdf/1802.08232.pdf) titled "The Secret Sharer:
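The gist's point can be illustrated with a toy count-based 4-gram model (a hypothetical sketch, not the notebook's actual code; the corpus and `predict` helper are invented for illustration): a model trained on text containing a one-off "secret" will complete that secret verbatim when prompted with its prefix.

```python
from collections import defaultdict

def train_4gram(tokens):
    # Count, for each 3-token context, how often each next token follows it.
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(tokens) - 3):
        ctx = tuple(tokens[i:i + 3])
        counts[ctx][tokens[i + 3]] += 1
    return counts

def predict(counts, ctx):
    # Return the most frequent continuation of a 3-token context, if any.
    nxt = counts[tuple(ctx)]
    return max(nxt, key=nxt.get) if nxt else None

# Ordinary text plus a single occurrence of a "secret" number sequence.
corpus = ("the cat sat on the mat . " * 5 + "my secret number is 7 2 4 1 . ").split()
model = train_4gram(corpus)

# Given the prefix of the secret, the model reproduces it token by token:
predict(model, ["secret", "number", "is"])   # -> "7"
predict(model, ["number", "is", "7"])        # -> "2"
```

Even this trivial model has memorized the secret exactly, which is the gist's point: memorization of rare training strings is not unique to neural language models.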
@yoavg
yoavg / ACL.js
Created September 9, 2015 08:07
Updated the Zotero ACL translator to include titles as well as author names in the selection list.
{
"translatorID": "f4a5876a-3e53-40e2-9032-d99a30d7a6fc",
"label": "ACL",
"creator": "Nathan Schneider, Yoav Goldberg",
"target": "^https?://(www[.])?aclweb\\.org/anthology/[^#]+",
"minVersion": "1.0.8",
"maxVersion": "",
"priority": 100,
"inRepository": true,
"translatorType": 4,