@CodeOtter
Last active August 8, 2022 16:23
Create a new emotion

I wrote a book in 2005 that suggested the military wouldn't invent AI first. Instead, it would be the financial sector. I reasoned that as more and more humans come online through mobile devices, more and more data would be generated: video, audio, social, geospatial, and other domains yet to be popularized. (Looking at you, voxel clouds!)

Eventually, we would generate hundreds of exabytes per second. I reasoned that Moore's Law would hit a point where quantum effects and heat dissipation would slow it down, forcing transistor counts to scale horizontally (via clouds and specialization) instead of through innovation at the atomic scale.

This means that while storage prices go down, the processing costs to extract first-mover advantage shoot through the roof. Unlike its Soviet counterparts, the West would not be able to hire half of its population to analyze the data. It couldn't even hire 200% of the population to analyze that pipeline.

Thus, financial markets start to tackle the problem one bite at a time to find footing, and begin leveraging AI to discover patterns in economic behavior that ultimately lay an accidental but pragmatic foundation for behavioral economics. Eventually, the financial sector gets so good at it that one company makes a major breakthrough, and the world never learns of it, because the company keeps it secret to preserve its advantage in the market.

They start to use this machine to create geopolitical risk... but not in a Hegelian dialectic way. More like an "Invest in X, which causes a shortage in Y, which results in Z having strikes, and Country A complains to the WTO, which forces Country B to adjust its trade positions, forcing oil producer C to cut production, causing hedge fund D to panic, prompting country E to posture its military along the borders of country F, who then goes bankrupt trying to establish emergency defensive spending."

The AI could steer the invisible hand of the market to crush non-compliant states without firing a single bullet or deploying a single soldier. The only way this would be possible is if the machine had a "good-enough" model of human emotion... and of how those emotions play out across billions of people per second... which is possible in a future where social media, voting rights, welfare participation, and employment are all measured through mobile-device interaction history.

Thus, to build such a machine, I had to invent a simple thought experiment: "Create a new emotion."

This question is deceptively complex. One can form models and boundaries and linguistics to define an emotion... but... if I had an emotion you did not... would you understand my behavior as that emotion... or see it within a set of emotions you already possess? The answer to that highlights the difficulty of my thought experiment, and the importance of comprehending emotion in a way that can be represented statistically.

In the end, my machine not only invents new emotions, but uses them to encode information in a way that no human can decipher, shifting the realm of encryption away from the shared absolutism of Moore's Law and prime numbers... and into the natural neurobiological diversity of DNA expression.
