
@chilloutman
Last active February 27, 2017 08:55

AI: Harmful or Helpful?

Technological optimism leads many to believe that technology inherently improves our lives. Many new technologies, such as flight, have done just that. Although the idea that humans could fly initially seemed far-fetched, technology made it work, and it has greatly benefited humanity. We may be standing on the cusp of another powerful technology: artificial intelligence. Right now we are limited to creating narrow intelligence for specialized tasks. We should mature artificial narrow intelligence by putting it into practice. Two suitable applications would be cognitive extension and cognitive amplification for humans, which would already represent a big leap in human life. That seems like a sound way to sustain technological progress. Predicting the future is hard, but many people believe this to be the beginning of something big.

While the range of biological intelligence is very likely limited by evolution, among other things, artificial intelligence may not be. A technology that powerful could have unprecedented consequences. A sufficiently complex AI that is not comprehensible to humans may act in unpredictable and dangerous ways. AIs often optimize variables to maximize some result, and in doing so their side effects could be unintentionally catastrophic.

Let’s go back to technological optimism. Most people would agree that technology is great; its advances have served us very well. But while some people choose to extrapolate past data into the future, I don’t believe it’s that simple. A change in magnitude might bring a change in kind, especially when it comes to superintelligent AI. A superintelligence is defined as being able to beat humans at every conceivable task. Problems such as the control problem might pose serious threats to humanity.

Humanity's accumulated knowledge is still ridiculously small, but it might be a starting point for an ASI. Even so, an ASI will not become god-like in an instant without performing any experiments. Theories developed by an AI might be less error-prone than humans’, but I imagine they would be far more dangerous to put into practice. Without practical results, theories are not proven correct. Reckless experimentation alone could already lead to an apocalypse.
