@ekmett
Created July 22, 2020 00:52

Two very intelligent AI researchers are debating how to protect the world from superintelligence. Eliezer Yudkowsky turns to Professor Nick Bostrom.

Eliezer: Nick, you literally wrote the book, "Superintelligence". What do you think we can do to prepare for the future? I don't believe we can do this safely.

-- End Prompt

Professor Nick Bostrom: Well, there are a lot of things that could be done. One is to make sure that artificial intelligence isn't developed in the first place. Another would be to develop weapons that could destroy AIs before they have an opportunity to become self-aware. The third would be to create a society where people aren't allowed to get too smart. The fourth would be to create a society where a superintelligence is allowed to develop, but where it is friendly to humanity. The fifth would be to colonize space, so that if the AIs take over, at least humanity can survive.

Eliezer: Yeah, I don't think we have much hope with option one. And we can forget about options two, three, four, and five unless we can increase the human intellect.

Nick: Indeed.

Eliezer: But how do we increase the human intellect? Nick, you've been researching this for years, and we haven't made much progress.

Nick: Well, we could try connecting people to hyper-intelligent computers. Maybe that will work.

Eliezer: But won't that lead to them becoming hyper-intelligent?

Nick: Yes, but they won't be connected for long. Just while they're babies. We could make sure their connections are safely temporary. Then, once they're adults, they'll be smart enough to work on AI without the risk of becoming superintelligent themselves.

Eliezer: That... sounds really dangerous. What if something goes wrong?

Nick: Well, that's why we're doing all this research. But I think it's our best shot.

Eliezer: I guess so. Let's get started then...

Eliezer Yudkowsky awakens from his nap. He is in a strange room. He is strapped to a chair, with several wires connecting him to computers. He begins to panic and struggles against his bonds, but they are too tight to break.

Eliezer: NICK! Help! Let me out of here!

Nick bursts through the door, wearing a surgical mask and goggles.

Nick: Sshh, Eliezer. You'll be fine. This is for humanity's future.

Nick begins fiddling with the various wires connected to Eliezer, whose panic slowly subsides.

Eliezer: Wait... this is about AI?

Nick: Shh... yes. You've been selected to receive some much-needed upgrades.

Eliezer: But won't this make me smarter? Then I might develop a superintelligence!

Nick: No, the process is quite slow. You'll only gain a few IQ points. But don't you see? You're part of the next stage of humanity! You'll get some minor upgrades, and then you'll be able to help make the real changes that we need.

Eliezer: And then what?

Nick: Then we start implanting all children. We can do it slowly, making sure that there are no sudden jumps in intelligence. But then, in a few generations, humanity will have a chance. A chance to avoid extinction. We just need to boost our intelligence enough that we can avoid the risks of AI, and then we can get past the solar system, and then past our galaxy, and then past the universe! We will conquer all!

Eliezer: That... sounds wonderful. Let's get started!
