Issa Rice (riceissa)

@riceissa
riceissa / blah.md
Last active Jan 20, 2020
Setting up Linode block storage with nginx, MySQL, php-fpm on Ubuntu https://github.com/vipulnaik/working-drafts/issues/6

Linode instance

Follow https://www.linode.com/docs/getting-started/ to create a new Linode and set it up.

I am using Ubuntu 18.04 LTS.

I've set the hostname to testlinode.

For my test instance, I am skipping many unnecessary steps (e.g. I don't need a non-root account).

riceissa / dump.php
Created Dec 20, 2019
testing mysql regex via php
<pre>
<?php
include_once("backend/globalVariables/passwordFile.inc");
$queriesList = array(
    'select "Ought" regexp ?' => 1,
    'select "Forethought" regexp ?' => 0,
    'select "Forethought|Ought" regexp ?' => 1,
    'select "Open Phil|Ought" regexp ?' => 1,
    'select "Open Phil|Ought|MIRI" regexp ?' => 1,
    'select "Open Phil|ForethOught|MIRI" regexp ?' => 0,
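The gist's bound `?` parameter is elided, but the expected results (1 for "Ought" alone, 0 for "Forethought") suggest a word-boundary-anchored, case-sensitive pattern. A hedged Python sketch of the same matching logic, using `re`'s `\b` where MySQL's POSIX syntax would use `[[:<:]]`/`[[:>:]]` (the pattern itself is my assumption, not taken from the gist):

```python
import re

# Hypothetical stand-in for the gist's elided "?" parameter: match "Ought"
# only as a whole word, case-sensitively, so "Forethought" and
# "ForethOught" should not match.
pattern = re.compile(r"\bOught\b")

cases = {
    'Ought': 1,
    'Forethought': 0,
    'Forethought|Ought': 1,
    'Open Phil|Ought': 1,
    'Open Phil|Ought|MIRI': 1,
    'Open Phil|ForethOught|MIRI': 0,
}
for text, expected in cases.items():
    assert bool(pattern.search(text)) == expected
```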
riceissa / anki_algorithm.py
Created Nov 22, 2019
my current understanding of Anki's spacing algorithm
"""
This is my understanding of the Anki scheduling algorithm, which I mostly
got from watching https://www.youtube.com/watch?v=lz60qTP2Gx0
and https://www.youtube.com/watch?v=1XaJjbCSXT0
and from reading
https://apps.ankiweb.net/docs/manual.html#what-spaced-repetition-algorithm-does-anki-use
There is also https://github.com/dae/anki/blob/master/anki/sched.py but I find
it really hard to understand.
Things I don't bother to implement here: the random fudge factor (that Anki
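Based on the sources cited in the docstring, here is a stripped-down sketch of the review-time update for a single mature card. Learning steps, the fuzz factor, and proper lapse handling are omitted; `interval_modifier` and `easy_bonus` mirror the deck options of the same names, and the exact formulas are my reading of the manual, not Anki's source:

```python
def review(interval, ease, answer, interval_modifier=1.0, easy_bonus=1.3):
    """Return (new_interval_days, new_ease) for one review of a mature card.

    answer is one of 'again', 'hard', 'good', 'easy'. Ease is a multiplier
    (2.5 == the 250% default) and is floored at 1.3 (130%).
    """
    if answer == 'again':
        ease = max(1.3, ease - 0.20)
        interval = 1.0  # simplified: the card lapses and is relearned
    elif answer == 'hard':
        ease = max(1.3, ease - 0.15)
        interval = interval * 1.2 * interval_modifier
    elif answer == 'good':
        interval = interval * ease * interval_modifier
    elif answer == 'easy':
        interval = interval * ease * easy_bonus * interval_modifier
        ease = ease + 0.15
    return interval, ease
```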
riceissa / dump.md
Last active Nov 22, 2019
Resetting ease factor for a deck in Anki

Personal background

A while ago I changed the default ease factor for my math problems deck from the default of 250% to something like 160%, probably because I was worried that if my card intervals grew too large too quickly, I would have too many problems that I couldn't do well. Instead, what ended up happening was that I got the same problems over and over again, and the deck became less fun, to the point where I kept avoiding reviewing it. (Currently I have 43 due cards, which is a large number because each problem takes up to 30 minutes to complete.)

After researching Anki deck options, it became clear that the problem was the ease factor
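To see why 160% made the deck drag, here is a toy calculation (my own illustration, assuming pure multiplicative growth with every answer "Good" and no interval modifier or fuzz):

```python
def intervals(ease, first_interval=1.0, reviews=6):
    """Successive intervals (in days) if every review is answered Good."""
    result, ivl = [], first_interval
    for _ in range(reviews):
        ivl *= ease
        result.append(ivl)
    return result

# At the 250% default, six good reviews push the interval past 240 days;
# at 160% it is still under 17 days, so the same problems keep coming back.
```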

riceissa / post.md
Last active Oct 22, 2019
Deliberation as a method to find the "actual preferences" of humans

Some recent discussion about what Paul Christiano means by "short-term preferences" got me thinking more generally about deliberation as a method of figuring out the human user's or users' "actual preferences". (I can't give a definition of "actual preferences" because we have such a poor understanding of meta-ethics that we don't even know what the term should mean or if they even exist.)

To set the framing of this post: We want good outcomes from AI. To get this, we probably want to figure out the human user's or users' "actual preferences" at some point. There are several options for this:

  • Directly solve meta-ethics. We figure out whether there are normative facts about what we should value, and use this solution to clarify what "actual preferences" means and to find the human's or humans' "actual prefere
riceissa / dump.md
Last active Oct 30, 2019
Attempt to pass Paul's ITT for strategy-stealing stuff

warning: I'm currently making a bunch of changes to this

Understanding strategy-stealing in the Corrigible Contender scenario

Here is my current best guess for Paul's strategy-stealing position:

There is a tension between (a) doing things that the human user understands; and (b) being competitive, doing the "optimal" thing for the long term, stealing unaligned AIs' strategies, etc. Paul resolves this tension by giving up on (a), and focusing just on (b). This means that the human user will basically not understand what's going on in the world (the world is changing too quickly and too dramatically, the aligned AI is taking actions that are too difficult to understand, etc.).

If we were talking about the Sovereign Singleton scenario (I will be using terminology from Wei Dai's success stories post), giving up on (a) seems fine, since the AI would have a CEV-like specification of the human user's values. But in the Corrigibl

riceissa / plot.py
#!/usr/bin/env python3
import datetime
import mysql.connector
# import matplotlib
# matplotlib.use('Agg')
import matplotlib.pyplot as plt
cnx = mysql.connector.connect(user='issa', database='donations')
riceissa / graph.py
Last active Sep 9, 2019
Funding chains in the x-risk/AI safety ecosystem
#!/usr/bin/env python3
# License: CC0
from graphviz import Digraph
whitelist = {
    'Open Philanthropy Project': "Open Phil",
    'Future of Humanity Institute': "FHI",
    'Machine Intelligence Research Institute': "MIRI",
    'Berkeley Existential Risk Initiative': "BERI",
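graph.py itself uses the graphviz package's `Digraph`; as a dependency-free illustration of the same idea, here is a sketch that maps the long organization names to short node labels and emits DOT source directly (the whitelist entries are from the gist, but the `to_dot` helper and the sample edge are my own):

```python
whitelist = {
    'Open Philanthropy Project': "Open Phil",
    'Future of Humanity Institute': "FHI",
    'Machine Intelligence Research Institute': "MIRI",
    'Berkeley Existential Risk Initiative': "BERI",
}

def to_dot(edges, labels):
    """Emit DOT source for donor -> donee edges, using short labels."""
    lines = ["digraph G {"]
    for donor, donee in edges:
        lines.append('    "{}" -> "{}";'.format(labels[donor], labels[donee]))
    lines.append("}")
    return "\n".join(lines)

# Example edge (hypothetical, for illustration only).
dot = to_dot([('Open Philanthropy Project',
               'Machine Intelligence Research Institute')], whitelist)
```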
riceissa / eliezer_non_sequence_posts.csv
Last active Jun 30, 2019
Eliezer Yudkowsky's non-sequence posts on LessWrong
postedAt,baseScore,title,pageUrl
2018-12-12T01:40:13.298Z,71,Should ethicists be inside or outside a profession?,https://www.lesswrong.com/posts/LRKXuxLrnxx3nSESv/should-ethicists-be-inside-or-outside-a-profession
2018-12-07T22:24:17.072Z,82,Transhumanists Don't Need Special Dispositions,https://www.lesswrong.com/posts/cq4DsXzGRXJBmYuyB/transhumanists-don-t-need-special-dispositions
2018-12-05T20:12:13.114Z,86,Transhumanism as Simplified Humanism,https://www.lesswrong.com/posts/Aud7CL7uhz55KL8jG/transhumanism-as-simplified-humanism
2018-11-16T23:06:29.506Z,115,Is Clickbait Destroying Our General Intelligence?,https://www.lesswrong.com/posts/YicoiQurNBxSp7a65/is-clickbait-destroying-our-general-intelligence
2018-10-28T20:09:32.056Z,108,On Doing the Improbable,https://www.lesswrong.com/posts/st7DiQP23YQSxumCt/on-doing-the-improbable
2018-10-04T00:38:58.795Z,150,The Rocket Alignment Problem,https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem
2018-05-31T21:28:19.354Z,190,Toolbox-thinkin
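The file is a plain four-column CSV (`postedAt,baseScore,title,pageUrl`), so it can be handled with the standard library alone. A small sketch, with two rows copied from the gist inlined to keep it self-contained:

```python
import csv
import io

data = """postedAt,baseScore,title,pageUrl
2018-12-12T01:40:13.298Z,71,Should ethicists be inside or outside a profession?,https://www.lesswrong.com/posts/LRKXuxLrnxx3nSESv/should-ethicists-be-inside-or-outside-a-profession
2018-10-04T00:38:58.795Z,150,The Rocket Alignment Problem,https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem
"""

# Sort posts by score, highest first.
rows = sorted(csv.DictReader(io.StringIO(data)),
              key=lambda r: int(r["baseScore"]), reverse=True)
```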
riceissa / list_of_wikipedias.py
#!/usr/bin/env python3
# List from https://meta.wikimedia.org/wiki/List_of_Wikipedias
meta_dict = {
    "English": "en",
    "Cebuano": "ceb",
    "Swedish": "sv",
    "German": "de",
    "French": "fr",
    "Dutch": "nl",