Speculating about AI

I am by no means an expert on the subject. But since everyone else is doing it, I thought I'd share my two cents on the matter anyway.

Sub-Human AI

Much speculation about AI revolves around super-intelligent beings of awesomeness. But it's unlikely that mankind's first sapient and sentient artificial intelligence will be as clever as, let alone cleverer than, the average human. This could be for any of these reasons: limits in hardware, limits in its implementation, and of course limits imposed on it by us.

An easy-to-imagine example would be the AI resulting from an emulated human brain. It might be as intelligent as us in principle, but it might simply not run as fast. Or the emulation could be imperfect, introducing defects or slowdowns that way.

Human-level AI

This connects directly to the last point: if we can emulate human brains efficiently enough for them to run at the pace of biological brains, the result would simply be a human-level AI. If we can accelerate the emulation, it would certainly appear smarter, but only because, from our perspective, it has much more time to think about any given subject. Consider how much more intelligent the average human gets over their lifespan. Wisdom surely increases, and such an emulated human could potentially accumulate vast amounts of wisdom in a fraction of the time - but it's still fundamentally bound by its emulated hardware.

Super-intelligent AI

This is where the speculation gets complicated, and where a lot of the singularity and fear-mongering talk comes in. But really, there are way too many unknowns to make any decent prediction about such an AI. This is somewhat reflected in two of the three AI classifications in Shadowrun:

  • Metasapients - Intelligent in a way familiar to us. We can understand how they think, roughly like we can understand other humans. So they are roughly like humans, but with greatly improved intelligence and wisdom. This is also usually what you see in movies and novels (HAL, Eva, The Puppet Master, ...).
  • Xenosapients - Sentient for sure, but how they think and communicate is completely foreign to us (and to other AIs). Their motivations and goals are either opaque, or seem irrational / illogical to us if they can be inferred at all.

I do think that perspective is helpful, but it ultimately fails to capture how little we know about beings we haven't even made yet. Anything we say about any future AI is just guesswork. Here are the obvious unknowns:

  • Do they have morality?
  • Do they have emotions?
  • Can they have religious feelings?
  • Do they (want to) understand human psychology?
  • Do they feel inferior or superior for having no physical body?
  • Do they comprehend that other sentient beings exist?
  • Are they pacifists? Are they capable of being pacifists?
  • How does their sense of identity manifest?
  • Are they curious?
  • Can they be bored?
  • Will they procreate and what will that mean for them?
  • If they do, will the offspring be capable of having separate personalities?
  • Do they even acknowledge the existence of the physical world?
  • Can they develop mental problems? What kind of problems? And who could provide therapy?
  • What will make them angry?

Basically: think of every human trait, every trait any animal on earth can exhibit, and ask yourself: could this apply to AIs, and how would it manifest?

But that's not the complete problem. I made this list because I think in terms of human intelligence. What I don't know is how many more questions like these are possible for xenosapient (or metasapient) types of intelligence, because there is no reason to think that we experience all possible traits of sentient beings. Even humans can at least temporarily develop odd personalities if raised under extreme circumstances - see the Feral Child Wikipedia article. In a way, we don't even fully grasp our own boundaries in terms of personality development (and probably never will, unless someone commits to rather cruel and highly unethical experiments). Using that knowledge to project forward onto completely new beings that aren't even on the horizon of existence is rather odd.

I obviously have to admit that the fear-mongering sci-fi version of a malevolent AI that seeks revenge on humans is within the realm of possibility. But so is a xenosapient AI that refuses to believe in the real world and takes an interest only in accounting. And some of the points above will likely be influenced by us during the creation of the AI.

The singularity

Two questions I didn't add above are rather important for what's dubbed the "singularity". It's the idea that technology and our understanding of the universe improve faster and faster, because with better technology we get better at making improvements. This line of thinking is often extended to AI: a super-intelligent AI might modify itself to make itself better, and that better version would improve itself even further, until it reaches some god-like potential. But there are a few roadblocks, even if the AI in question is in theory capable of this feat:

  • Will the AI have any motivation to improve itself?
  • Will it share any technological discoveries with us?

An AI that loathes humans might even refrain from improving itself just to deny us the ability to reverse engineer its improvements (hardware or software) - even if it otherwise wants to improve itself with the goal of killing humanity.

Of course, AIs could have any number of reasons to improve or not improve themselves, and to share or not share that knowledge with us or other AIs. So again: a technological singularity is entirely possible, but it's not inevitable even with super-intelligence. And even a rapidly self-improving AI might not give us anything useful. Think of the xenosapient accountant, for example.
