@bjackman
Last active February 22, 2023 00:02

Everything is just something else

Have been pondering AI, and after my friend's presentation about The Conscious Mind I realised that I actually do believe that today's AIs have experience in some sense. However, that doesn't really have any straightforward implications for ethics or for reasoning about their likely impact. My friend once told me how her grandpa would always tell everyone to turn around to give his dog privacy while it shat. This was presumably tongue-in-cheek, but if you do consider it as a serious viewpoint it's nonsensical: dogs do not have a socialised preference against being seen shitting. When you say a language model "wants" or "believes" something you make a similar mistake, but of a much higher order. The difference between dogs and things that want to shit in private is much smaller than the difference between GPT-3 and things that "want".

But on the other hand, I'm not saying that "language models are just predicting text, it's not anything special". Ted Chiang wrote an article comparing LLMs to compression algorithms. It seems to have been very influential. I think it's a really powerful lens for understanding what LLMs are, but some people seem to feel some political or emotional imperative to drive a reductive line from Chiang's "compression algorithm" idea to "LLMs are just autocomplete on steroids". I think this is wrongheaded on just as fundamental a level as "give the dog some privacy". The "just" in those sentences does way too much work; it's like saying computers are "just doing arithmetic" or the stock market is "just supply and demand". Everything is just something else. That doesn't mean the new thing doesn't have emergent properties that can totally transform the world and engender whole new fields of study and modes of thinking.

At the same time, people seem to willingly ignore the self-evident strengths of ChatGPT. You can argue at length that ChatGPT can't replace writers or teachers, and that's fine. But that's not the same thing as saying AI isn't about to change the world! It's not even the same as saying language AIs aren't useful today. Yesterday I asked Bard a question about the Linux kernel's per-CPU memory allocator, and the answer provided insights that would have taken me hours to find with manual web research. It was wrong in one particular detail, which I discovered because I manually verified its claims by reading some of the relevant bits of code. People will seize on this subtask failure and try to jump from there to "AI is useless". But at the end of the day this experience saved me hours. More importantly, it was fucking amazing! I asked a computer a complicated question in my personal dialect of English and I got a useful answer back, one that nobody had ever written down before! How can anyone not be in awe at that? How is this already normal?

I find myself more and more in these scenarios where a new topic enters the sphere of public debate, two "sides" begin to form, and I think "you're both wrong, this debate is bullshit". Probably what I said above about the "political or emotional imperative" has something to do with that. Crypto was the same. Tom Scott's video hits the nail on the head for me; I think something really big might be coming and it's scary. There's the emotional imperative.

It feels like I'm in the neolithic having spent most of my life thinking I'm a pretty smart guy for making clay pots and my mate Ug has just sussed out how to make metal knives and I'm saying "yeah it's really sharp so you can cut food without crushing it, like look at this carrot I've julienned" and my other mate Ug, holding a stone axe he made, is like "doesn't look like you can cut down a tree with that" and I say "well yeah but I think Ug will probably figure out how to make metal axes too, and Ug reckons there are other cutting tools you could make with metal" and Ug goes "I don't really see the point in julienned carrots to be honest metal seems pretty shit it's just rock but not as hard" and it's only a matter of time before the tribe that invents swords comes our way.
