import trio
from typing import AsyncIterator, Awaitable, Callable, Coroutine, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

async def yield_with_delay(val: A, delay: float) -> AsyncIterator[A]:
    # Yield the same value five times, sleeping `delay` seconds before each yield.
    for i in range(5):
        await trio.sleep(delay)
        yield val
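
A minimal usage sketch (the `main` wrapper below is just illustrative, not part of the original snippet): consume the async generator with `async for` inside a function passed to `trio.run`.

async def main() -> None:
    # Illustrative driver: print each value as the generator yields it.
    async for item in yield_with_delay("tick", 0.1):
        print(item)

trio.run(main)
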
{-# LANGUAGE MultiParamTypeClasses, GADTs, FlexibleInstances #-}
{-
Here's my summary of the structure of QFT.
Note that the way I've written things is probably deeply offensive to physicists, because I treat time separately from other dimensions.
I've done it this way because IMO it makes the relationships between the ideas more obvious.
TODO: other interesting things to try to mention:

Bill Zito messaged me the following:

I think the numbers you reference in the talk might be from "Preliminary prices for human-level intelligence," AI impacts, 2015, https://aiimpacts.org/preliminary-prices-for-human-level-hardware/ which is also referenced by Luke's 2015 openphil post "what do we know about AI timelines" https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines#Trend_extrapolation

I think that Drexler's estimates based on task comparisons in CAIS are more compelling (and I've spent some time fact-checking the numbers / making revised estimates if it's of interest to you in the future) https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf?asd=sa section 40.2 estimates brain compute

Also, section 40.3.2 references some other calculations, which are also referenced in the AI impacts post: "Sandberg and Bostrom (Sandberg and Bostrom 2008), for example, consider brain emul

  • It seems like there might be various ways that the world could be radically transformed by narrow AI before it's transformed by AGI; why don't people talk about that? (Maybe CAIS is people talking about this?)
  • It seems like in AI, most of the time we've been limited by compute rather than people thinking of clever things to do with the compute. But in other fields, eg math, we're limited by things that aren't compute, obviously. Should I think of AI as being like ML or like math?
    • And not everyone agrees that we're limited by compute with AI; what's the deal with that?
  • I want to understand evolutionary theory, so that I can reason effectively about how it works to have systems which select for things
  • Sometimes creatures get cancers; this seems like an example of when a system creates an optimizing system inside of it and tries to use that system to do work for it. Cancers are particularly bad when the organism is in a setting that it's not usually in, because the optimization pressure is as stro

(I paid Tegan to spend an hour or two writing these notes)

Exon-shuffling

It’s definitely true that multiple exons can be attached to produce a novel protein; it’s definitely not the case that all novel genes were produced by exon-shuffling. Exon-shuffling wasn’t a thing for prokaryotes, and didn’t get popular with eukaryotes until the metazoans, where it got wildly popular. This is probably connected with multicellularity, since most of the proteins produced by exon-shuffling have an extracellular role. Also, the modules that end up in these proteins are mostly the ones that can fold properly on their own and are structurally stable. I have a weak sense that cellular proteins are largely conserved from early evolution, i.e. weren’t created by exon-shuffling, and they’re definitely slower to evolve. Fun fact: “most of the modular proteins were assemble

--- This survey is made with GuidedTrack.
*question: Which of the following comes closest to your view about the treatment of animals?
	Animals deserve the same rights as people to be free from harm and exploitation.

*question: Is there anything you want to clarify about that answer?
	*type: paragraph

>> same_rights_as_people = 1

I often see people get confused by binary search tree balancing. This is very fair, because all the algorithms to make it work are quite complicated, and understanding them deeply is a lot of work.

But the misconceptions I see are often not about the mechanics of BST balancing, which are legitimately really complicated, but about the motivation and goals of these mechanisms. So I’ve written a few brief notes here in the hope that they let people understand what BST balancing mechanisms are supposed to do.

The key selling point of a BST is that it gives O(log(n)) search and modification. That O(log(n)) comes from the height of the tree: every operation walks down a single root-to-leaf path, so its cost is proportional to the height. So all a balancing mechanism has to do is keep the height within a constant factor of the logarithm of the number of nodes.
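
To make that concrete, here's a minimal sketch of lookup in a plain (unbalanced) BST; the `Node` and `contains` names are just illustrative. The loop descends one level per iteration, so the work is O(height), and balancing exists only to keep that height logarithmic.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def contains(root: Optional[Node], key: int) -> bool:
    node = root
    while node is not None:  # one comparison per level, so O(height) overall
        if key == node.key:
            return True
        node = node.left if key < node.key else node.right
    return False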

Two common balancing mechanisms are AVL trees and red-black trees. In a red-black tree, the maintained invariant is that the paths from the root to a leaf can’t differ in length by more than a factor of two. So it’s fine for your left subtree

Avoid parallel arrays.

If you want to represent a bunch of people who have names and ages, it’s cleaner to do

let people = [{name: "Buck", age: 23}, {name: "Daniel", age: 25}];

than

let names = ["Buck", "Daniel"];

let ages = [23, 25];

Here’s more detail on what I want to be able to do: write class hierarchies and reason about what properties are possible for objects of particular classes. I have no idea whether something like this exists.

abstract class Animal {
  def makeNoise(): String
  
  def numberOfLegs: Int
}

abstract class Insect extends Animal {
  // e.g. all insects have six legs, so pin numberOfLegs here
  def numberOfLegs: Int = 6
}

server {
    listen 80; ## listen for ipv4; this line is default and implied
    server_name shlegeris.com www.shlegeris.com;

    location /dice/ {
        proxy_pass http://127.0.0.1:4567/;
    }

    location /music-game/ {
        proxy_pass http://bshlgrs.github.io/music-game/;
    }
}