@riceissa
riceissa / yule.py
Last active August 16, 2020 08:04
yule.py
#!/usr/bin/env python3
import matplotlib.pyplot as plt
import numpy as np

NUM_CITIES = 1000
city_pops = [1] * NUM_CITIES  # initialize NUM_CITIES cities each with population 1
for person in range(100000):
    # Yule/preferential attachment: each new person joins an existing city
    # with probability proportional to its current population
    prob_vector = np.divide(city_pops, sum(city_pops))
    city = np.random.choice(NUM_CITIES, p=prob_vector)
    city_pops[city] += 1

plt.hist(city_pops, bins=50, log=True)
plt.show()
#!/usr/bin/env python3
import numpy as np
import matplotlib.pyplot as plt

def f(x, r):
    """One iteration of the logistic map."""
    return r*x*(1-x)

eq_xs = []
eq_ys = []
#!/usr/bin/env python3
import numpy as np
from numpy.random import normal
import matplotlib.pyplot as plt
from random import shuffle
# arguments = [-1, 3, 4, 1, 6, 7, 3, 5, 2, -3, -2, 7, 4, 2, 4]
# arguments.sort()
# arguments = np.random.uniform(low=-5, high=10, size=50)
@riceissa
riceissa / blah.md
Last active January 20, 2020 00:09
Setting up Linode block storage with nginx, MySQL, php-fpm on Ubuntu https://github.com/vipulnaik/working-drafts/issues/6

Linode instance

Follow https://www.linode.com/docs/getting-started/ to create a new Linode and set it up.

I am using Ubuntu 18.04 LTS.

I've set the hostname to testlinode.

For my test instance, I am skipping many steps that are unnecessary here (e.g. I don't need a non-root account).

@riceissa
riceissa / dump.php
Created December 20, 2019 00:36
testing mysql regex via php
<pre>
<?php
include_once("backend/globalVariables/passwordFile.inc");
$queriesList = array(
    'select "Ought" regexp ?' => 1,
    'select "Forethought" regexp ?' => 0,
    'select "Forethought|Ought" regexp ?' => 1,
    'select "Open Phil|Ought" regexp ?' => 1,
    'select "Open Phil|Ought|MIRI" regexp ?' => 1,
    'select "Open Phil|ForethOught|MIRI" regexp ?' => 0,
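The preview cuts off before showing the pattern bound to the `?` placeholder. One pattern consistent with every expected 0/1 result above (my guess, not taken from the gist) matches `Ought` only as a whole pipe-delimited field, never as a substring of a longer word. The logic can be sanity-checked locally with Python's `re` module:

```python
import re

# Hypothetical pattern -- not from the gist, but consistent with all of
# the expected results above.  Note that MySQL's REGEXP is
# case-insensitive by default while Python's re is case-sensitive; for
# this boundary-anchored pattern the results come out the same either way.
PATTERN = r"(^|\|)Ought($|\|)"

cases = {
    "Ought": 1,
    "Forethought": 0,
    "Forethought|Ought": 1,
    "Open Phil|Ought": 1,
    "Open Phil|Ought|MIRI": 1,
    "Open Phil|ForethOught|MIRI": 0,
}

for subject, expected in cases.items():
    got = 1 if re.search(PATTERN, subject) else 0
    assert got == expected, (subject, got, expected)
```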
@riceissa
riceissa / anki_algorithm.py
Last active December 15, 2023 09:36
my current understanding of Anki's spacing algorithm
"""
This is my understanding of the Anki scheduling algorithm, which I mostly
got from watching https://www.youtube.com/watch?v=lz60qTP2Gx0
and https://www.youtube.com/watch?v=1XaJjbCSXT0
and from reading
https://faqs.ankiweb.net/what-spaced-repetition-algorithm.html
There is also https://github.com/dae/anki/blob/master/anki/sched.py but I find
it really hard to understand.
Things I don't bother to implement here: the random fudge factor (that Anki
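The core per-review update, as I understand it, can be sketched as follows. The constants are Anki's documented defaults; fuzz, the interval modifier, learning steps, and the lapse "new interval" setting are all omitted:

```python
# Sketch of the per-review update for a mature (review) card.
# Constants are Anki's documented defaults; this is a simplification,
# not Anki's actual implementation.
def next_interval(interval, ease, answer):
    """Return (new_interval_days, new_ease) for a review card.

    `ease` is a multiplier, e.g. 2.5 for the default 250%.
    `answer` is one of "again", "hard", "good", "easy".
    """
    if answer == "again":
        # card lapses: interval resets, ease drops 20 percentage points
        return 1, max(1.3, ease - 0.20)
    if answer == "hard":
        return interval * 1.2, max(1.3, ease - 0.15)
    if answer == "good":
        return interval * ease, ease
    if answer == "easy":
        # 1.3 is the default "easy bonus"; ease gains 15 points
        return interval * ease * 1.3, ease + 0.15
    raise ValueError(answer)
```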
@riceissa
riceissa / dump.md
Last active November 22, 2019 01:20
Resetting ease factor for a deck in Anki

Personal background

A while ago I changed the default ease factor for my math problems deck to something like 160% (from the default of 250%), probably because I was worried that if my card intervals grew too large too quickly, I would end up with too many problems I didn't know how to do well. Instead, I ended up seeing the same problems over and over, and the deck became so un-fun that I kept avoiding reviewing it (I currently have 43 due cards, which is a lot because each problem takes up to 30 minutes to complete).

After researching Anki's deck options, I realized that the problem was the ease factor
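To see why 160% made reviews pile up, compare how intervals grow at the two ease factors. This is a rough sketch that ignores Anki's fuzz, interval modifier, and hard/easy adjustments:

```python
# After each successful ("good") review, Anki multiplies the interval by
# the ease factor.  Compare growth at 160% vs the default 250%,
# starting from a 1-day interval.
def intervals(ease, reviews, start=1.0):
    out, ivl = [], start
    for _ in range(reviews):
        ivl *= ease
        out.append(round(ivl, 1))
    return out

print("ease 160%:", intervals(1.6, 6))
print("ease 250%:", intervals(2.5, 6))
```

After six reviews the 250% deck's intervals are months long while the 160% deck's are still a couple of weeks, so the same cards keep coming back.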

@riceissa
riceissa / post.md
Last active October 22, 2019 09:22
Deliberation as a method to find the "actual preferences" of humans

Some recent discussion about what Paul Christiano means by "short-term preferences" got me thinking more generally about deliberation as a method of figuring out the human user's or users' "actual preferences". (I can't give a definition of "actual preferences" because we have such a poor understanding of meta-ethics that we don't even know what the term should mean or if they even exist.)

To set the framing of this post: We want good outcomes from AI. To get this, we probably want to figure out the human user's or users' "actual preferences" at some point. There are several options for this:

  • Directly solve meta-ethics. We figure out whether there are normative facts about what we should value, and use this solution to clarify what "actual preferences" means and to find the human's or humans' "actual prefere
@riceissa
riceissa / dump.md
Last active October 30, 2019 22:51
Attempt to pass Paul's ITT for strategy-stealing stuff

warning: I'm currently making a bunch of changes to this

Understanding strategy-stealing in the Corrigible Contender scenario

Here is my current best guess for Paul's strategy-stealing position:

There is a tension between (a) doing things that the human user understands; and (b) being competitive, doing the "optimal" thing for the long term, stealing unaligned AIs' strategies, etc. Paul resolves this tension by giving up on (a), and focusing just on (b). This means that the human user will basically not understand what's going on in the world (the world is changing too quickly and too dramatically, the aligned AI is taking actions that are too difficult to understand, etc.).

If we were talking about the Sovereign Singleton scenario (I will be using terminology from Wei Dai's success stories post), giving up on (a) seems fine, since the AI would have a CEV-like specification of the human user's values. But in the Corrigibl

#!/usr/bin/env python3
import datetime
import mysql.connector
# import matplotlib
# matplotlib.use('Agg')
import matplotlib.pyplot as plt
cnx = mysql.connector.connect(user='issa', database='donations')