  • It seems like there might be various ways that the world could be radically transformed by narrow AI before it's transformed by AGI; why don't people talk about that? (Maybe CAIS is people talking about this?)
  • It seems like in AI, most of the time we've been limited by compute rather than people thinking of clever things to do with the compute. But in other fields, eg math, we're limited by things that aren't compute, obviously. Should I think of AI as being like ML or like math?
    • And not everyone agrees that we're limited by compute with AI; what's the deal with that?
  • I want to understand evolutionary theory, so that I can reason effectively about how it works to have systems which select for things
    • Sometimes creatures get cancers; this seems like an example of a system creating an optimizing subsystem inside itself and trying to use that subsystem to do work for it. Cancers are particularly bad when the organism is in a setting it's not usually in, because the optimization pressure is just as strong but the anti-cancer defenses are weaker [citation needed]. Is this a metaphor for something?
    • I vaguely understand that evolution is less efficient than ML "because ML can use gradients"; how exactly does that work?
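A toy sketch of the gradients question above: minimize f(x) = x² once with gradient descent (which uses the derivative to move directly downhill) and once with evolution-style blind mutation plus selection (which only gets to evaluate f). All the numbers here (starting point, step size, mutation scale) are arbitrary choices for illustration, not anything from a real ML or biology setting.

```python
import random

def f(x):
    return x * x

def gradient_descent(steps=100, lr=0.1):
    # Uses the known derivative f'(x) = 2x to step straight downhill.
    x = 10.0
    for _ in range(steps):
        x -= lr * 2 * x
    return x

def evolve(steps=100, sigma=0.5, seed=0):
    # Blind mutation + selection: propose a random perturbation,
    # keep it only if it improves fitness. No derivative information.
    rng = random.Random(seed)
    x = 10.0
    for _ in range(steps):
        child = x + rng.gauss(0, sigma)
        if f(child) < f(x):
            x = child
    return x

print(abs(gradient_descent()), abs(evolve()))
```

With the same step budget, the gradient method converges geometrically toward the optimum, while the mutate-and-select loop wastes most proposals on directions that don't help and stalls at the scale of its mutation noise — one concrete sense in which "ML can use gradients" buys efficiency.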
  • What does the world look like if you imagine AI systems getting gradually more and more powerful? Eg if you imagine that they can do the work of an IQ 140 human for $1M per hour in one year, and the price halves every 18 months; what does that world practically look like from the perspective of someone inside it?
    • Would people notice that the world is going crazy and try to pass regulations to stop that? What would that end up like?
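The arithmetic in the scenario above (a hypothetical $1M/hour starting price, halving every 18 months) is easy to tabulate; these numbers are just the stated assumptions extrapolated, not a forecast:

```python
def price_per_hour(years, start=1_000_000.0, halving_months=18):
    # Price after `years`, halving once every `halving_months` months.
    return start * 0.5 ** (years * 12 / halving_months)

for y in range(0, 16, 3):
    print(f"year {y:2d}: ${price_per_hour(y):,.0f}/hour")
```

Under those assumptions the price falls by 4x every 3 years: $250k/hour at year 3, ~$1,000/hour at year 15 — i.e. roughly a decade and a half before the hypothetical system undercuts a well-paid human, which is the kind of gradual squeeze the question is asking about.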
  • Should I know more about things like bio risk? How am I supposed to learn about bio risk without going looking for infohazards?
  • If AGI ends up just making Google shareholders a billion times richer, how good is that as an outcome?
  • Is the world declining in quality of democratic institutions? I don't know; it seems like it is, but also everyone in every era seems to worry that democratic institutions are getting rapidly weaker.
  • It seems like the world might get really crazy for all kinds of different reasons before AGI; I should probably have a list of those ways and then think through them?
  • Is it really plausible that the first AGI would be built safely even if we knew theoretically how to do it safely? Like, there are bugs in things like plane control systems where the programmers know that it's crucially important not to have bugs; is it really plausible that AGI would get so much more safety attention than planes or crucial medical devices or whatever?