What is the Fitness Landscape for AGI?

Someone who studies evolution might discuss the "fitness landscape", a mathematical landscape that represents the reproductive success of an organism. The higher an organism's elevation on the landscape, the more "fit" it is; the lower the elevation, the less likely the organism is to survive long enough to reproduce at all.
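For the mathematically inclined, here is one conventional way to formalize that picture (the notation is mine, not from any particular source):

```latex
% One conventional formalization (notation mine): a fitness function maps
% each genotype to its expected reproductive success, and the "landscape"
% is the graph of that function over genotype space G.
f \colon G \to \mathbb{R}_{\ge 0}, \qquad
f(g) = \mathbb{E}\left[\,\text{surviving offspring of genotype } g\,\right]
% "Elevation" at a point g is f(g); peaks are local maxima of f, and
% evolution is, roughly, a biased random walk uphill on this surface.
```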

The process of evolution yields species which, over time, discover higher and higher points on the topography of animal-space. Eventually, the successful ones end up at the tops of the hills in their ecological niches, at which point they stop changing, since no higher point is reachable within the short distance a random mutation can travel.
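To make the local-optimum trap concrete, here's a minimal sketch in Python (the two-peak landscape and all its parameters are invented for illustration):

```python
import math
import random

def fitness(x: float) -> float:
    # A toy 1-D landscape: a small hill near x = 2 and a taller peak
    # near x = 8, separated by a low-fitness valley.
    return 3.0 * math.exp(-(x - 2.0) ** 2) + 5.0 * math.exp(-(x - 8.0) ** 2)

# Evolution as local search: mutations travel only a short distance,
# and only fitness-improving mutants survive.
x = 1.0  # starting genotype, on the slope of the small hill
for _ in range(10_000):
    mutant = x + random.gauss(0.0, 0.1)  # small random mutation
    if fitness(mutant) > fitness(x):
        x = mutant

print(f"settled at x = {x:.2f}, fitness = {fitness(x):.2f}")
# Reliably ends near x = 2 (fitness 3.0), never at the taller peak near
# x = 8 (fitness 5.0): every path there crosses a valley of worse fitness.
```

Run it a few times: the walker settles on the small hill essentially every run, because a mutation of size 0.1 can never bridge the valley in a single fitness-improving step.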

Jellyfish and horses both sit atop peaks on the landscape, yet they reside in very different locations, with a great valley between them. If a jellyfish slightly closer to horse-space is hatched, it will be a worse-than-average jellyfish and produce few offspring, if it manages to survive at all. The same goes for a horse slightly closer to jellyfish-space—it just wouldn't be a very good horse.

Fitness landscapes show you which evolutionary paths are possible. You cannot hop from mountain to mountain without traveling down into the valley, and evolution doesn't go down valleys because the valleys are full of impossible creatures (consider the jelly-horse example; terrestrial herbivores with squishy tentacles are in no way viable organisms).

Human inventions aren't biological creatures, and they don't reproduce on their own. But human inventions are described by ideas, and ideas reproduce inside human minds. Better ideas are copied more often and iterated upon; worse ideas sometimes never leave the neocortex and go extinct on the spot. Therefore, human ideas have a fitness landscape of their own.

So what does this have to do with the search for Artificial General Intelligence (AGI)?

Well, a lot of people try to make projections about the future of AI research: when we might invent AGI, and how that invention might come about. People also like to debate hard-takeoff vs. slow-takeoff scenarios (i.e., how fast an AGI can improve itself once it has been created).

I think all of these questions are really questions about what the AGI fitness landscape looks like.

A fast-takeoff scenario is possible only if AGI occupies the base of a steeply rising mountain. A slow-takeoff scenario happens if AGI sits on a gently rising plain.
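As a toy illustration of this point (all numbers invented, and "capability" reduced to a deliberately crude scalar), the same self-improvement rule produces both outcomes; the only difference is the local gradient:

```python
# A toy model of takeoff speed as landscape steepness (all numbers invented).
# Capability compounds: each step's improvement is proportional to the local
# gradient of the landscape and to current capability, since the system is
# working on improving itself.

def simulate(gradient: float, steps: int = 20, start: float = 1.0) -> float:
    capability = start
    for _ in range(steps):
        capability += gradient * capability  # steeper terrain, faster compounding
    return capability

fast = simulate(gradient=0.5)   # base of a steep mountain
slow = simulate(gradient=0.02)  # a gently rising plain

print(f"after 20 steps: fast ≈ {fast:.0f}x baseline, slow ≈ {slow:.2f}x baseline")
# Same rule, different terrain: roughly 3325x vs 1.49x.
```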

Chess-playing AIs occupy a very small hill on the landscape. They are much better at playing chess than a human is, but you cannot iterate them towards actual general intelligence. AGI should be able to play chess, but that's not all it should be able to do.

Can you iterate your way to AGI from inventions like IBM's Watson, which (as far as I know) is multiple machine learning modules linked together in a specific architecture? Is Watson at the base of a hill, or a mountain? How far can you get with that approach?

Can you iterate your way to AGI from deep-learning algorithms? Is there a smooth fitness gradient from our current deep-learning techniques to AGI, or is the path full of elevation spikes and topographical dead-ends? Perhaps the best deep-learning algorithm sits across a gap that can only be crossed by a single theoretical breakthrough.

Maybe all of our AI approaches are small hills surrounding the One True Mountain which requires a tremendous Eureka-moment to discover. I don't think this is likely, but it's a possibility.

I don't know the answers to these questions, since I'm not an AI researcher myself. But I wonder whether debates and predictions about the future of AI could become more productive if people built explicit models of the AGI fitness landscape, and then argued about which models were best justified.

Or maybe they are already effectively doing this and bringing up fitness landscapes overcomplicates the matter.
