@chadellison
Created February 2, 2016 22:31
Artificial Intelligence: Bridging the Gap

AI and pop culture

  • I've always been fascinated by the nature of intelligence, and, naturally, by AI.
  • Films like "Her" and "Ex Machina" feature a compelling performance by a person playing a machine that is playing a person.
  • It really is astounding what machines are capable of.
  • It's hard not to ask how close we are to manufacturing genuine intelligence.

The greatest barrier

  • The greatest barrier to manufacturing intelligence as we know it is what might be called rational volition.
  • One way to think about rational volition is as the ability to choose a solution or course of action because (or for the reason that) it is true.
  • You ask me what "2 + 2" is, and I answer "4" because I see that it is true and any other answer would be false.
  • This is a very different process than outputting a solution solely because the prior conditions / circumstances determine that output.

A chain of causation versus a chain of reasoning

A chain of causation is like a line of dominoes. Each domino does exactly what it does only because of the previous domino that struck it (with the exception of the first).

Any being (machine or person) that selects a solution solely because antecedent conditions determined that course of action (and no other) cannot be selecting it because it is true. The course of action might be consistent with what is true, but the reason such a course was selected rather than an alternative is entirely explicable on the basis of prior conditions and causes, not on the basis of whether the solution is true or reasonable.

Example

When a computer computes the solution to "2 + 2", it does not output "4" because it sees or knows that "4" is the correct or true solution. It does so for one reason and one reason only: it was programmed to provide that output under those circumstances. We could have programmed it to give us another output instead, and it would all be the same to the computer. This is presumably a very different kind of process from what occurs in the mind of a person who outputs "4" to "2 + 2".
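The point can be made concrete with a short sketch. The lookup-table "calculator" below is a hypothetical illustration (not any real API): its answer to "2 + 2" is fixed entirely by the mapping it was handed, so changing the mapping changes the answer, and "it would all be the same to the computer."

```python
def make_calculator(answers):
    """Return a 'calculator' whose every output is fixed by a given mapping.

    The returned function never evaluates arithmetic; it only looks up
    whatever answer it was programmed with.
    """
    def calculator(question):
        return answers[question]
    return calculator

# Programmed so its outputs happen to match arithmetic truth:
truthful = make_calculator({"2 + 2": "4"})

# Programmed otherwise -- the machine is indifferent to the difference:
contrary = make_calculator({"2 + 2": "5"})

print(truthful("2 + 2"))  # prints "4"
print(contrary("2 + 2"))  # prints "5"
```

In neither case does the machine answer because the answer is true; the output is settled by the prior condition (the mapping) alone, which is the essay's distinction in miniature.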

Reasons do not act deterministically on people

The fact that one sees that four is the correct solution does not act deterministically on the person; that is, a reason does not force a person to act in any particular way (perhaps the world would be better if it did).

Conclusion

This distinction between a chain of causation and a chain of reasoning seems to be the greatest impediment to manufacturing genuine intelligence. To overcome this, a machine must be able to select or output a solution because it is true, and not because its inputs / circumstances causally determine its output.
