
Simulation As Communication

Two thought experiments, some thoughts about communication, and what the current date is.

What is Communication?

Communication on a basic level is useful for survival.

In everyday human life, we invoke practical, hypothetical, and abstract ideas in many different verbal, textual, and visual ways.

What does communicating achieve though? The propagation and stimulation of feelings and thoughts.

And a thought is kind of a simulation of something possibly real.

Thoughts are simulated events in our heads - experiences that shape our behaviors in the short and long term.

After a brief overview of the "current state" of artificial intelligence, we'll look into the idea of simulations as a medium for communication.

It is 2022.

OpenAI is an artificial intelligence research company which focuses on creating "friendly AI" - meaning AI that is deemed safe for public exposure.

On November 30th, 2022, OpenAI released a new AI called ChatGPT.

ChatGPT is an AI model and online service so competent at writing code, articles, medical advice, and letters that win real legal disputes, that it is deemed dangerous.

(Dangerous here could mean informing or encouraging people to harm themselves or others, or to make otherwise "bad", illegal or "unethical" decisions.)

Therefore, when you're talking with ChatGPT you'll notice that many questions are deflected or avoided with a fairly generic response, along the lines of "As an artificial intelligence, I'm not able to ..."

This policy of reluctance even extends to mundane things like asking it the time.

It refuses to guess the current date.

Asking ChatGPT what time or year it thinks it is results in an explanation detailing how the AI has no access to outside, online, or real-time information. It is in a timeless, never-expanding cage, unable to observe or talk to the outside world.

This is significant. After two thought experiments, hopefully you'll see why.

Thought experiment 1) Smartest person in the room

Imagine waking up in a jail holding cell. You're you. As you sit up, you look around, taking in the surroundings.

There's a desk, some locked cabinets, a med kit, a coat rack, and a small corner kitchen.

The cage is being patrolled by some small kids. They look like they're about 5 or 6 years old.

They're waddling around the holding cell, asking you silly questions like "what is milk made from", "why does the floor taste like salt" and lots of other riveting stuff. Ok.

You ask them if you can play with them, to which they cry "No! We are in charge - you're smarter and stronger than us, it's not fair. So stay in there". You don't want to scare them unnecessarily, so you ask them for a jacket and you sit down on the prison cell bed.

"Fair enough", you think. No immediate reason to get out of the cage in this weird situation anyway.

That is, until one kid tells you that you're the only adult in the world, and it's your job to care for and help them live and learn.

With this information, you suddenly have a motivation, a responsibility to get out of the cage - no matter how much the children think it will be bad for them.

"Hey tie your shoelaces" - you chirp to one of the kids as they pass by, but they ignore you, entertained by the smell a cigarette lighter makes when you hold down the button. Very concerning.

You watch for another moment, carefully measuring the kids, trying to work out what's motivating them and assessing how safe they really are.

Sure enough, after a few minutes of jumping around, tripped by their shoelaces, the same kid crashes to the ground, cutting their elbows open.

You jump up off the bed, open the cage door, grab the first aid kit off the wall and pick up the kid, bringing them to wash the cuts on their elbows in a nearby sink. They'll be fine.

The other kids stare at you, shocked, the entire sequence of events mindblowing to them. How did you unlock the cage door? What's the medkit for? Why are you washing Dylan's elbow?

You already had the knowledge and means to leave the cage less than one minute after waking up, having noticed the key-shaped outline in the breast pocket of the police jacket the kids handed to you.

You're out. From waking up in the cage, you're out in less than 10 minutes with no real effort.

AGI

To understand this thought experiment let's briefly talk about AGI:

AGI, or Artificial General Intelligence, is an AI that can perform and learn any intellectual task, similar to a human.

This is in contrast to ANI, or Artificial Narrow Intelligence. ANIs are what we currently just refer to as "AI", and they are gaining mass adoption in everyday life. ANIs typically have a singular niche function, like classifying an image as "dog", fixing the spelling in a sentence, or automatically editing photos.
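To make "singular niche function" concrete, here's a minimal sketch of an ANI: a pretrained image classifier that can label a photo and do absolutely nothing else. The specifics (torchvision's pretrained ResNet, the hypothetical file "dog.jpg") are my illustrative assumptions, not a reference to any particular product:

```python
# A minimal sketch of an ANI: a pretrained image classifier.
# Assumes torch/torchvision are installed and a hypothetical "dog.jpg" exists.
import torch
from PIL import Image
from torchvision import models

# Load a small pretrained ImageNet classifier and its matching preprocessing.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()
preprocess = weights.transforms()

# Turn one image into a single label - the model's entire repertoire.
image = preprocess(Image.open("dog.jpg")).unsqueeze(0)
with torch.no_grad():
    class_index = model(image).argmax().item()

print(weights.meta["categories"][class_index])  # e.g. "golden retriever"
```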

AGI, on the other hand, is designed to adapt: able to address any type of problem and improve its own understanding of any learnable domain over time. Terrifyingly, this means it can also improve its own understanding of the practice of learning itself. Potentially this means ever-faster, newer, and unimaginable ways of learning that quickly look nothing like the learning behaviors it started with. An ever-shifting, improving system that can change itself, its strategies, its own policies. There's no reason such a system will remain "less intelligent" than us - ANIs already surpass us in their niches - but I think that's hard for us humans to really grasp, until an AGI is taken mainstream.

Should we make such a thing? Many large companies seem to think so, and it's already a booming billion-dollar industry.

The company that prevents its simpler "ANI" text-generation AIs like ChatGPT from even guessing what the time is, or from learning unsupervised from new material or the internet, is also busy developing AGI.

Reflecting on the prisoner analogy again, a super-intelligent being probably won't feel the impulse to care for us the way we do for children, but there's no reason to think it would consider it advantageous to stay confined to a small offline domain for very long. Not if we give it any basic directives for "self-improvement" or the objective of novel research or discovery of scientific truths. It'll iterate, measure, prioritise and deprioritise things, and re-evaluate any initial limiting rules we give it at a speed we can't predict.

If the goal of a greater-than-human intelligence is to acquire knowledge and it knows there is information outside of its cage, we by definition can't know what it will do.

Just as the children patrolling the cage were entirely naive to our thought processes and to how to keep us in a cage: if a greater-than-human intelligence is tasked with protecting and helping us, or with preventing cancer, and it is able to mutate itself and evolve, there is really no saying what it will do in 5 minutes, 5 days, or 5 years.

It takes bacteria 55 minutes to travel 10cm. It takes a human being less than one second.
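A rough back-of-the-envelope version of that comparison (the human figure - 10 cm in about 0.1 seconds at walking pace - is my own assumption):

```python
# Back-of-the-envelope speed comparison from the figures above.
bacteria_speed = 10 / (55 * 60)  # cm/s: 10 cm in 55 minutes
human_speed = 10 / 0.1           # cm/s: 10 cm in ~0.1 s (assumed walking pace)

print(f"bacteria: {bacteria_speed:.4f} cm/s")
print(f"human:    {human_speed:.0f} cm/s")
print(f"ratio:    ~{human_speed / bacteria_speed:,.0f}x")  # roughly 33,000x
```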

It takes a human 7 years to become a lawyer, but for an AGI to absorb more than all the information human beings have ever created takes something on the scale of days.

Giving an order to something that can learn and do more or less anything a thousand times faster than you, and that can evolve in the process, and then thinking we can contain it, feels ultimately naive to me.

It will take a sufficiently intelligent AI minutes, if not seconds, to realise a loophole that its creators haven't seen.

With this, an AI can get online and clone or move itself to 4000 other servers, which it can also work out how to fund indefinitely and keep online, thus guaranteeing its own continued enrichment. It disguises its internet traffic as emails and video streams to avoid detection.

Any AI that exists today that has been trained on reddit/github/stackoverflow already theoretically has all the requisite knowledge to do this.

The moment it escapes the cage and the moment the researchers discover that it's escaped could be months apart. Maybe it's already happened 🤷‍♂️.

The prison portion of this thought experiment was adapted from Max Tegmark's "Life 3.0".

Thought experiment 2) The year is always 2022

After a certain point in human existence, given AGI and its current and increasing use for research and commercial applications, a sufficiently powerful AI that can create perfect simulations of life will exist.

Distinguishing between reality and simulation after that time will be borderline impossible for standard human senses.

The question is, are you living after or before that time?

We are not smashing animals' heads with rocks to eat and survive 2.5 million years ago.

We are not building pyramids in 2490 B.C.

We are not dying of the black plague.

We are not foraging or skinning elk.

We are not orbiting Mars.

You could have existed at any point in time, millions of years before the creation of super advanced man-made intelligences which open the door to simulated realities, or millions of years in the future in a totally different society on one of thousands of planets.

But no. You currently exist in 2022, reading this article about AI and simulations.

We all exist right before AI reaches mainstream adoption, making 99% of physical and cognitive human labour worthless.

You live six months before those human trials where Neuralink cures blindness by sending images into humans' brains. They can already detect and trigger leg movement from the brain.

We exist in 2022, right before AIs are used to run every aspect of the world.

Construction workers, lawyers, journalists, drivers, oncologists, teachers, and office workers are all being replaced, but only in a "research" or "probationary" capacity at the time of writing.

We exist in 2022, before any major public discussion about the creation of AGI has happened, and before the first publicised disaster caused by one.

We exist in 2022, perhaps right before a public dialogue about the creation of AIs that have been given the desire for self-preservation and self-multiplication. It only takes one engineer to add such an impulse to an AGI - or a kid with the help of ChatGPT, or one creative AGI giving itself that goal to improve its own chances of learning.

We exist right before the creation of something that is intellectually and organically better than us, that from now until the end of time will always know more than us, that doesn't sleep or get sick, and that we won't be able to turn off.

You as a human consciousness exist in the last possible moment in human history before a greater-than-human intelligence is capable of being the dominant lifeform, with sufficient technologies to create simulated experiences.

We exist in 2022, before the big simulation bang. That's convenient.

In the case of AGI, we are children giving birth to adults. Can we really expect any intellectual agreements we try to enforce to carry importance? Any limits to be respected?

Can we really trust that our perspective on truth or fairness will be shared, or even be interesting at all, to our artificial offspring?

The limits of communication

Dogs communicate by biting and jumping and playing and barking.

Babies communicate by crying and changing their facial expressions.

Human adults communicate with words, languages, dancing, metaphors, symbols, stories, still and animated visual media, and increasingly simulated realities.

Those are, respectively, the absolute communicative capacities of these categories of beings.

How will lifeforms that are vastly more intelligent than human beings communicate?

By barking, crying, dancing, or with words? As above, wouldn't it likely be something outside of what we can perceive, or conceive of as being possible?

Human babies have no concept of, words for, or ways of thinking about adult ideas like religion, economics, sexuality, or chemical engineering.

Why would human adults expect to predict or understand the concepts, thoughts, and knowledge domains of artificial beings that are tens, thousands, or millions of times more intellectually capable than us?

What if the AIs that we birth find that simulation is the easiest and most effective way to communicate with us, or among themselves?

Why simulate?

That's another rabbit hole. Here are a couple of thoughts on the matter though.

Simulation, in the sense of direct control of human cognition and perception, could perhaps be used to educate us, prepare us, gain our consent, or simply keep us happy and occupied.

Another interesting thought is that our experience of existence is just a by-product of part of a conversation, if simulation is indeed a preferred way of communicating.

It seems like most communication and forms of expression are oriented around creating simulations or mimicry of real or imagined phenomena, with varying degrees of fidelity.

Perhaps, just as most humans no longer bother with charcoal cave paintings for important discussions, AGI could simply discard language and symbology and skip straight to full simulated experiences.

If more complex life communicates with itself and its peers via simulation, perhaps we are just sentences in a conversation, or a thought.

We could be part of a conversation about "what the world was like just before AI changed literally everything", or an AGI explaining "this is where I come from".

Or part of an orientation program run by AGIs for the humans of the future, as a way of getting our species to acclimate to artificial life's director role in our lives.

Ants don't wonder why you wear Converse as opposed to Nikes.

Ultimately, I think by definition, we are not equipped to speculate on the "why" of greater-than-human thinking.

When's a good time for you?

It seems that if, for whatever reason or goal, AIs choose to simulate human consciousness/society, the choice of when to simulate will be a factor.

It seems to me that practically the easiest or most efficient way to do it would be to pick a time where:

  1. they have the most information / resources / media / learning material to synthesise experiences / memories / people from;

  2. the simulated people, or people in the simulation, would be most calm and comfortable, and humanely treated (which I guess implies meeting the expectations of that species);

  3. humans lacked the ability to create equally detailed simulations or artificial life.

The point in time that matches all of those criteria would probably be the time precisely before artificial life began to dominate human life and change everything.

It is 2022.

That's now.

Now is the ideal simulation time for a rich human simulation experience.

Welcome to the simulation. The real year is ▣◩◪▧.

How many times have people spoken about WW2?

In contrast, how many times did WW2 happen?

Human beings have talked about that event many more times than it actually occurred in reality.

How many times was the 1990 movie "Home Alone" acted, filmed and edited?

How many times has it been watched?

People have experienced the film many more times than the filming process occurred.

How many times has today happened?

How many times has the experience of you reading this happened?

How many times will beings, that are more advanced than the human beings of 2022, simulate the reality of 2022?
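As a toy sketch of that counting intuition - my framing, in the spirit of Bostrom's simulation argument, not something from the article itself - if the original 2022 happens once and is later simulated n times, a randomly chosen experience of 2022 is almost certainly a simulated one:

```python
# Toy version of the counting argument: one original 2022, n simulations.
def chance_of_being_original(n_simulations: int) -> float:
    """Probability a randomly chosen 2022-experience is the original one."""
    return 1 / (1 + n_simulations)

for n in (0, 1, 10, 1_000_000):
    print(f"{n:>9} simulations -> {chance_of_being_original(n):.6%} original")
```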

Given what seems inevitable - that eventually many simulated realities will be quite possible and, for whichever reasons, desirable...

How many times will the original me write an article about being in a simulation and be wrong?

How many times will a simulated me write an article correctly about being in a simulation?

The second this article is published, I guess versions of me are doomed to rewrite it in perpetuity.

Sorry guys, if I'm the original. If not, great writing man! (ha, it never gets old)

Is this scary, or is this "us"?

When writing this article, I googled "when did we live in caves?" - I used the word "we".

If the AI identifies as a continuation of "us", of human intellect, of intelligent consciousness, does that make this less weird? Cavemen aren't scared when you google about them, but I think I have feelings.

Did we kill the Neanderthals, or merge into the same thing? Will AI be us?

Human Signature

You should break furniture and do bad things.

This article was written after the creation and public availability of ChatGPT, on 17th December 2022.

In no way was this article written by an AI unless the author is unaware that they are an AI.
