@ryantuck
Last active December 27, 2022 21:25
notes on superintelligence

superintelligence

"an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." — Nick Bostrom

  • artificial general intelligence, as opposed to specialized AI
  • either computer intelligence or human-augmented intelligence
  • kurzweil is not pessimistic - he expects multiple AIs and cloud-augmented human minds
  • silicon operates orders of magnitude faster (~1M x) than neurons and parallelizes far more easily
  • humans outperform animals through long-term planning and language use rather than raw biology - an AI's advantage over us would likely be similar
  • AIs could edit and improve themselves so rapidly that we must expect a fast take-off once machines hit human-level intelligence
  • kurzweil's law of accelerating returns - progress compounds exponentially, which may be why people underestimate how soon AI will arrive (we tend to extrapolate linearly)
  • affordable compute power rivaling all human brains combined should arrive in roughly 10 years
  • median result from surveys of experts puts AGI around 2060.
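The "extrapolate linearly" point above can be sketched numerically. This is an illustrative toy, not math from the book: all growth rates here are hypothetical, chosen only to show how far a compounding trend outruns a linear read of its first year.

```python
# Toy illustration of the law of accelerating returns: compare compound
# (exponential) growth against a linear extrapolation of the first year's
# gain. The 50%/year rate is a made-up number for illustration only.

def exponential_progress(start, growth_rate, years):
    """Capability after compounding at `growth_rate` for `years` years."""
    return start * (1 + growth_rate) ** years

def linear_progress(start, yearly_gain, years):
    """What extrapolating the first year's gain linearly would predict."""
    return start + yearly_gain * years

start = 1.0
rate = 0.5  # hypothetical 50% compound growth per year
first_year_gain = exponential_progress(start, rate, 1) - start

for years in (5, 10, 20):
    exp = exponential_progress(start, rate, years)
    lin = linear_progress(start, first_year_gain, years)
    print(f"{years:2d} years: exponential {exp:8.1f} vs linear {lin:5.1f}")
```

After 10 years the compounding curve is already ~10x what the linear guess predicts, and the gap widens every year — which is the claimed reason intuition undershoots.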

types of AI

  • Artificial narrow intelligence (weak AI) - good at chess, nothing else. currently ubiquitous.
  • Artificial general intelligence (strong AI) - human-level abilities at multiple tasks
  • Artificial superintelligence - vastly superhuman intelligence

types of ASI

  • oracle - answers questions
  • genie - carries out commands
  • sovereign - assigned open-ended problem and solves it how it sees fit

oracle and genie are inefficient but safer.

fast take off

The jump from AGI to ASI will happen extremely rapidly if an AGI can improve its own design (recursive self-improvement), which seems almost certain.
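The takeoff intuition above can be made concrete with a toy model (my illustration, not Bostrom's math): assume each round of self-improvement adds capability in proportion to the system's current level. All the numbers — the 10% improvement factor, the starting point at half of "human level" — are hypothetical.

```python
# Toy model of recursive self-improvement: capability grows in proportion
# to itself, so growth past the human level is explosive, not linear.
# improvement=0.1 and the 0.5x starting level are hypothetical parameters.

def takeoff(level, improvement=0.1, steps=50):
    """Simulate `steps` rounds where the system improves itself by a
    fraction `improvement` of its own current capability."""
    history = [level]
    for _ in range(steps):
        level += improvement * level  # self-improvement compounds
        history.append(level)
    return history

history = takeoff(0.5)  # start at half of a (hypothetical) human level of 1.0
crossed = next(i for i, v in enumerate(history) if v >= 1.0)
print(f"crosses human level at step {crossed}, "
      f"ends at {history[-1]:.1f}x human level after {len(history) - 1} steps")
```

With these made-up parameters the system takes 8 steps to reach human level, then blows past it to nearly 60x by step 50 — the "fast takeoff" shape: slow approach, explosive departure.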

An ASI would be, for practical purposes, omnipotent relative to humans.

We have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends. — Nick Bostrom

howto

  • get sufficient computational resources
  • make it smart (mimic the brain / run rapid evolution / design a narrow AI to develop a general AI)

upsides/downsides

  • in theory, any hard unsolved problem could be handled - disease, energy distribution, etc
  • if we get it wrong, we're SOL

Superintelligence would be the last invention biological man would ever need to make, since, by definition, it would be much better at inventing than we are.

ethics

  • The Coherent extrapolated volition (CEV) proposal is that it should have the values upon which humans would converge.
  • The Moral rightness (MR) proposal is that it should value moral rightness.
  • The Moral permissibility (MP) proposal is that it should value staying within the bounds of moral permissibility (and otherwise have CEV values).

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

where are we now?

  • chess, go, etc better than human
  • image classification better than human

etc


“as soon as it works, no one calls it AI anymore.” - John McCarthy

“AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.’” - Donald Knuth
