@zackmdavis
Created November 11, 2022 04:56
Eigenrobot vs. extropy
(Whisper) zmd@CaptiveFacility Whisper % whisper ep22_final.mp3 --model tiny
100%|█████████████████████████████████████| 72.1M/72.1M [00:03<00:00, 23.9MiB/s]
/Users/zmd/.local/share/virtualenvs/Whisper-JcksVwm9/lib/python3.9/site-packages/whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
warnings.warn("FP16 is not supported on CPU; using FP32 instead")
Detecting language using up to the first 30 seconds. Use `--language` to specify the language
Detected language: English
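
(For reference, a minimal sketch of the same transcription done through Whisper's Python API instead of the CLI, assuming the openai-whisper package, the "tiny" model, and the file name from the run above; not part of the original session output.)

    # Minimal sketch: same transcription as the CLI run above, via the Python API.
    import whisper

    model = whisper.load_model("tiny")           # matches `--model tiny`
    result = model.transcribe("ep22_final.mp3")  # language auto-detected, as in the CLI run
    print(result["text"])
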
[00:00.000 --> 00:05.000] Welcome to Robot Friends, the podcast that actively harms its audience.
[00:05.000 --> 00:08.000] Episode 22, Eigenrobot vs. Extropy.
[00:16.000 --> 00:19.000] JD Pressman, what's your deal?
[00:20.000 --> 00:23.000] What is my deal? Okay, well, um,
[00:23.000 --> 00:26.000] Question? Well, you have notes.
[00:26.000 --> 00:28.000] Yeah, I have notes, right?
[00:28.000 --> 00:33.000] We could, I should maybe frame this podcast as a situation where I'm talking to somebody who has done
[00:33.000 --> 00:38.000] substantial preparation for it, and I've skimmed over some of that preparation.
[00:38.000 --> 00:45.000] So, in a sense, this is a case of a robot with few opinions talking to a man who perhaps has quite a few of them,
[00:45.000 --> 00:47.000] to which you are about to be introduced.
[00:47.000 --> 00:50.000] Oh, my, well, I...
[00:50.000 --> 00:54.000] So, basically, I just kind of wanted to talk about...
[00:54.000 --> 00:58.000] Okay, let me frame it like this. You have the sequences, right?
[00:58.000 --> 01:02.000] Eliezer Yudkowsky writes them about 12 years ago at this point.
[01:02.000 --> 01:07.000] So, the last Sequences post was on April 27, 2009,
[01:07.000 --> 01:09.000] so in about two years, it'll be the 12-year anniversary...
[01:09.000 --> 01:13.000] ah, two months, it'll be the 12-year anniversary.
[01:13.000 --> 01:15.000] Uh, and...
[01:15.000 --> 01:19.000] I feel like when people, you know... like, LessWrong is currently in the discourse,
[01:19.000 --> 01:21.000] you have the whole New York Times thing,
[01:21.000 --> 01:25.000] you have the post rat discourse, which just happens like every two weeks.
[01:25.000 --> 01:26.000] Wasn't that the joke that...
[01:26.000 --> 01:27.000] That was made, that it just...
[01:27.000 --> 01:29.000] It comes up every two weeks now.
[01:29.000 --> 01:30.000] Oh, it never stops.
[01:30.000 --> 01:31.000] It never stops.
[01:31.000 --> 01:31.000] Yeah.
[01:31.000 --> 01:33.000] It never stops. Is it ever resolved?
[01:33.000 --> 01:35.000] It never stops, it never really gets resolved.
[01:35.000 --> 01:39.000] And I feel like when people are talking about it,
[01:39.000 --> 01:43.000] there's very little acknowledgement of...
[01:43.000 --> 01:47.000] Where... like, what less wrong was even supposed to be in the first place,
[01:47.000 --> 01:49.000] where it came from...
[01:49.000 --> 01:53.000] let alone even an evaluation of how well it did it.
[01:53.000 --> 01:54.000] It's almost like...
[01:54.000 --> 01:59.000] You have this weird tangential discussion where people are stuck at trying to define words
[01:59.000 --> 02:03.000] to talk about something, instead of just going out and talking about what actually exists
[02:03.000 --> 02:04.000] and what actually happened.
[02:04.000 --> 02:05.000] Yeah, yeah.
[02:05.000 --> 02:07.000] That's a really weird place, I think, for the discourse to be.
[02:07.000 --> 02:12.000] So, I guess a lot of what I'm looking at when I come onto this podcast is
[02:12.000 --> 02:17.000] I happen to be very knowledgeable about the history, especially.
[02:17.000 --> 02:24.000] And also, I think that I've spent a lot of time kind of mastering this memeplex,
[02:24.000 --> 02:26.000] as you might put it.
[02:26.000 --> 02:29.000] You know, you said it, that rationality in the EY sense...
[02:29.000 --> 02:32.000] I think you said in your podcast with Yashkaf...
[02:32.000 --> 02:34.000] It's a system and you master it.
[02:34.000 --> 02:37.000] And then you become a postrat, yeah.
[02:37.000 --> 02:38.000] But...
[02:38.000 --> 02:39.000] Maybe.
[02:39.000 --> 02:40.000] Yeah, maybe.
[02:40.000 --> 02:41.000] But...
[02:41.000 --> 02:44.000] And so, I think I've spent a lot of time with that system
[02:44.000 --> 02:48.000] and I have opinions, and so we're going to talk about them.
[02:48.000 --> 02:49.000] Yeah.
[02:49.000 --> 02:53.000] I think the first thing that's useful to talk about in terms of, like, what's under discussed
[02:53.000 --> 02:56.000] is that when Eliezer Yudkowsky...
[02:56.000 --> 02:59.000] Okay, he writes the sequences.
[02:59.000 --> 03:01.000] It's basically...
[03:01.000 --> 03:05.000] I guess the first piece of context would be what they even are in the sense that
[03:05.000 --> 03:08.000] everyone always acts like he sat down and he wrote this giant book.
[03:08.000 --> 03:09.000] But that's not what happened.
[03:09.000 --> 03:13.000] He wrote a daily blog post every day
[03:13.000 --> 03:16.000] for two years or something.
[03:16.000 --> 03:17.000] Yeah.
[03:17.000 --> 03:19.000] You have to think about the constraints under which he's writing.
[03:19.000 --> 03:22.000] Not everything he's writing is supposed to be some kind of immortal gospel.
[03:22.000 --> 03:24.000] He literally wrote it that day.
[03:24.000 --> 03:26.000] It's almost like he's got a typewriter.
[03:26.000 --> 03:28.000] He just writes a first draft, throws it out.
[03:28.000 --> 03:31.000] And that's on the internet, and that's his post for the day.
[03:31.000 --> 03:32.000] Yes.
[03:32.000 --> 03:37.000] But nothing he says should be taken like it's this carefully thought over...
[03:37.000 --> 03:41.000] You know, gospel that he spent all this time crafting it because it just isn't.
[03:41.000 --> 03:43.000] Yeah, that's actually...
[03:43.000 --> 03:46.000] That's actually L. Ron Hubbard's method of writing.
[03:46.000 --> 03:48.000] It was L. Ron Hubbard's method of writing.
[03:48.000 --> 03:52.000] Did you know that L. Ron Hubbard actually has the world record for the most books ever written?
[03:52.000 --> 03:53.000] Yep.
[03:53.000 --> 03:55.000] First draft, last draft, get it out the door.
[03:55.000 --> 03:56.000] Yeah, for...
[03:56.000 --> 03:57.000] Yep.
[03:57.000 --> 03:58.000] And he would write on...
[03:58.000 --> 04:00.000] I believe it was butcher paper.
[04:00.000 --> 04:02.000] So he doesn't even use high quality paper.
[04:02.000 --> 04:04.000] He just has this typewriter and he's just slamming at the typewriter...
[04:04.000 --> 04:05.000] Yeah.
[04:05.000 --> 04:08.000] So that's basically how Yudkowsky wrote the sequences.
[04:08.000 --> 04:10.000] And I feel like this is never acknowledged in conversation.
[04:10.000 --> 04:12.000] Nobody ever talks about it.
[04:12.000 --> 04:15.000] It's just, you know, taken like, oh, he wrote this big long book.
[04:15.000 --> 04:18.000] So a bunch of big long thought must have gone into every word, and it's like, no.
[04:18.000 --> 04:20.000] Absolutely not.
[04:20.000 --> 04:23.000] Would you say that the sequences were a shit post?
[04:23.000 --> 04:30.000] I don't know if I'd go that far, but I think that certain parts of the sequences are like bordering on shit posts.
[04:30.000 --> 04:35.000] Like, for example, there's this one post that I always found personally very cringe
[04:35.000 --> 04:37.000] where he discusses going to a dinner party.
[04:37.000 --> 04:41.000] And he's having this argument with someone and he deploys Aumann's agreement
[04:41.000 --> 04:45.000] theorem on them to make the argument that they can't agree to disagree and that they have to pick a position.
[04:45.000 --> 04:47.000] And it's just the way he writes it is just...
[04:47.000 --> 04:49.000] It's very...
[04:49.000 --> 04:54.000] Oh, you know, oh, it's New Atheist, you know, 2000s New Atheist.
[04:54.000 --> 04:56.000] Like, it's just extremely cringe.
[04:56.000 --> 04:57.000] Yeah.
[04:57.000 --> 04:57.000] Yeah.
[04:57.000 --> 04:58.000] Let's go into...
[04:58.000 --> 05:01.000] So what I was going to say though is that he writes these posts.
[05:01.000 --> 05:04.000] He writes them for about, again, I believe it was like two years.
[05:04.000 --> 05:11.000] And then at the end, he actually did give people instructions on what he expected to come of these posts.
[05:11.000 --> 05:13.000] You know, it's not like he just went, welp, I'm done now.
[05:13.000 --> 05:17.000] And what's funny is that he wrote them in quite some detail, really.
[05:17.000 --> 05:19.000] And then nobody ever references these.
[05:19.000 --> 05:20.000] They never talk about them.
[05:20.000 --> 05:21.000] They never...
[05:21.000 --> 05:24.000] It's never brought up in conversation about what less wrong is or what it should be.
[05:24.000 --> 05:26.000] It's almost like they just got memory-holed.
[05:26.000 --> 05:28.000] So I'm going to talk about them.
[05:28.000 --> 05:29.000] Okay.
[05:29.000 --> 05:35.000] So just before you get into that, which I definitely want you to do, this feels a little bit
[05:35.000 --> 05:40.000] to me like the branch of scholarship that started investigating the Gospels.
[05:40.000 --> 05:46.000] You know, hundreds or more than 1,000 years after the life of Jesus, where, you know,
[05:46.000 --> 05:51.000] suddenly there is this biblical scholarship where they started putting together drafts
[05:51.000 --> 05:54.000] and, you know, very old copies of the Gospels.
[05:54.000 --> 05:59.000] And it seems like what you're about to do is like drop some dead sea scrolls.
[05:59.000 --> 06:01.000] Absolutely. That is exactly the whole point.
[06:01.000 --> 06:03.000] I'm going to drop some lore on you.
[06:03.000 --> 06:05.000] So file needs.
[06:05.000 --> 06:06.000] No.
[06:06.000 --> 06:07.000] No. Okay.
[06:07.000 --> 06:10.000] So I love that genre of scholarship.
[06:10.000 --> 06:14.000] I think it's very valuable to do.
[06:14.000 --> 06:20.000] But so basically the first one I wanted to talk about was a post called The End of Sequences.
[06:20.000 --> 06:24.000] You can find it still on less wrong. And it was published on April, as I said earlier,
[06:24.000 --> 06:26.000] April 27, 2009.
[06:26.000 --> 06:31.000] And so in this post, he's kind of talking about basically saying,
[06:31.000 --> 06:34.000] I've finished writing all these posts.
[06:34.000 --> 06:35.000] I've finished writing.
[06:35.000 --> 06:39.000] And now here's some kind of... here's what I'm expecting of you people.
[06:39.000 --> 06:41.000] Here's what's going to happen now.
[06:41.000 --> 06:45.000] And he essentially says that, you know, I've been writing the site and I need to not run the site.
[06:45.000 --> 06:49.000] Because as he put it, you can only devote your whole life to one thing at a time.
[06:49.000 --> 06:50.000] Yeah.
[06:50.000 --> 06:53.000] And, you know, he's been devoting it to this rationality thing for a few years.
[06:53.000 --> 06:57.000] And, but that's not the most important thing he can be doing.
[06:57.000 --> 07:00.000] So he needs to come back to AI or whatever.
[07:00.000 --> 07:02.000] Eliezer Yudkowsky does.
[07:02.000 --> 07:04.000] Funny, like, sidestep.
[07:04.000 --> 07:05.000] Oh, yeah.
[07:05.000 --> 07:06.000] Yeah.
[07:06.000 --> 07:10.000] So it's funny that you bring that up because he actually says exactly in this post.
[07:10.000 --> 07:13.000] So he's talking about a hypothetical rationality book.
[07:13.000 --> 07:14.000] He'll write because he says, Oh, yeah.
[07:14.000 --> 07:20.000] The sequences are like 600 blog posts and all the knowledge is dispersed across the
[07:20.000 --> 07:23.000] thing such that only major procrastinators or the seriously dedicated can get through it.
[07:23.000 --> 07:25.000] So I'm going to turn it into a book.
[07:25.000 --> 07:29.000] Maybe, probably, it's coming out real soon now.
[07:29.000 --> 07:30.000] Uh-huh.
[07:30.000 --> 07:34.000] And he says later in this post, if the rationality book is written and sold and takes off,
[07:34.000 --> 07:37.000] I may well vanish entirely off the face of the earth.
[07:37.000 --> 07:40.000] All purposes of publicity have already been served.
[07:40.000 --> 07:42.000] This is the optimal and desirable outcome.
[07:42.000 --> 07:44.000] It means I am allowed to specialize narrowly.
[07:44.000 --> 07:47.000] So the funny part is, he's going to vanish off the earth.
[07:47.000 --> 07:48.000] He's going to ascend.
[07:48.000 --> 07:49.000] Yeah.
[07:49.000 --> 07:53.000] I'm just going to get stuck on this metaphor.
[07:53.000 --> 07:56.000] Oh, no, it's absolutely fine.
[07:56.000 --> 08:01.000] No, but so, and I think that to a certain extent that is exactly what he did.
[08:01.000 --> 08:04.000] Like, but we'll get, I mean, we'll get into that later.
[08:04.000 --> 08:06.000] But I just think that, you know, just noting it right now.
[08:06.000 --> 08:09.000] I think that that is exactly what happened, is that LessWrong took off.
[08:09.000 --> 08:12.000] Probably way beyond what he expected it to.
[08:12.000 --> 08:14.000] And then he just kind of thought to himself,
[08:14.000 --> 08:17.000] Oh, well, I won kind of at least for this part of it.
[08:17.000 --> 08:20.000] So I'm just going to go do, you know, I'm not even going to.
[08:20.000 --> 08:22.000] Because I don't know if you've noticed this.
[08:22.000 --> 08:24.000] But in terms of like public appearances and stuff,
[08:24.000 --> 08:26.000] EY's like profile has dropped significantly.
[08:26.000 --> 08:29.000] Like, I think his only real public thing left at this point
[08:29.000 --> 08:32.000] is, like, the very occasional blog post and his Twitter.
[08:32.000 --> 08:36.000] But I don't want to get, like, too into EY, because we're talking about past EY.
[08:36.000 --> 08:40.000] We're talking about EY from 2009.
[08:40.000 --> 08:41.000] Yeah, yeah.
[08:41.000 --> 08:42.000] Yeah, no.
[08:42.000 --> 08:45.000] Right. So, so in this post, right.
[08:45.000 --> 08:48.000] So he's leaving is essentially what he says.
[08:48.000 --> 08:53.000] And then he tells people what he's expecting them to do.
[08:53.000 --> 08:58.000] And first thing he says is that he expects that people are going to get very enthusiastic
[08:58.000 --> 09:07.000] about his ideas and then that's going to become dangerous because any enthusiasm about ideas taken too far becomes dangerous.
[09:07.000 --> 09:10.000] And he says he's really worried that he'll turn his focus away
[09:10.000 --> 09:15.000] and then find out that someone has picked up the ideas and run with them and gotten it all wrong.
[09:15.000 --> 09:17.000] I have no comment.
[09:17.000 --> 09:19.000] Yeah.
[09:19.000 --> 09:21.000] Okay.
[09:21.000 --> 09:30.000] And, you know, he continues, in those ways I have thought to anticipate, at least, I have placed a blocking go stone, and you have been warned.
[09:30.000 --> 09:31.000] All right.
[09:31.000 --> 09:32.000] A blocking what?
[09:32.000 --> 09:33.000] A go stone.
[09:33.000 --> 09:34.000] So in Go...
[09:34.000 --> 09:35.000] Right.
[09:35.000 --> 09:37.000] Oh, yeah, yeah, yeah.
[09:37.000 --> 09:41.000] Anyway, okay. So, but here's, and then he gets specific. Like, what does he want you to do?
[09:41.000 --> 09:49.000] He says that basically the sequences were a wager of sorts that you are going to actually do something with the knowledge that you have.
[09:49.000 --> 09:53.000] That you are not going to just have this passive possession of truth as he phrases it.
[09:53.000 --> 09:57.000] But that you were going to go out and do things. And so here's some things that he lists.
[09:57.000 --> 09:58.000] Seasteading.
[09:58.000 --> 10:00.000] The Methuselah Foundation.
[10:00.000 --> 10:01.000] GiveWell, or cryonics.
[10:01.000 --> 10:04.000] And I think that's a really important because those are really specific.
[10:04.000 --> 10:09.000] And those are almost given us like a benchmark for us to ask like, okay, how well did we do?
[10:09.000 --> 10:10.000] Yeah.
[10:10.000 --> 10:10.000] Okay.
[10:10.000 --> 10:15.000] And you know, he's talking about, you know, effective altruism.
[10:15.000 --> 10:17.000] I believe he's talking about the Singularity Institute.
[10:17.000 --> 10:25.000] And so that really gives you kind of an idea of specifically what he's kind of angling at in 2009, just as he's finishing up the sequences.
[10:25.000 --> 10:31.000] Like those, that's really specific. So it's very much this kind of a silicon valley style.
[10:31.000 --> 10:35.000] You know, I want you to try and make a new nation on the sea, maybe.
[10:35.000 --> 10:40.000] I want you to try and end death, like, you know, with life extension, the kind of Aubrey de Grey sort of deal.
[10:40.000 --> 10:45.000] You know, I want you to... so there's GiveWell. So he's explicitly calling out effective altruism:
[10:45.000 --> 10:51.000] I want you to become effective altruists. And cryonics: I want you to, you know, make cryonics mainstream.
[10:51.000 --> 10:56.000] I mean, he does not say that explicitly, but that's kind of the subtext, right?
[10:56.000 --> 10:57.000] Uh-huh.
[10:57.000 --> 10:58.000] Okay.
[10:58.000 --> 10:59.000] Yeah.
[10:59.000 --> 11:02.000] So did he assign probabilities to these?
[11:02.000 --> 11:03.000] He did not.
[11:03.000 --> 11:08.000] It's funny. You mentioned that. So one thing about the whole assigning probabilities thing is that
[11:08.000 --> 11:15.000] funny enough, I don't think subjective probability became kind of the tool that it is right now until later.
[11:15.000 --> 11:16.000] Uh-huh.
[11:16.000 --> 11:24.000] Especially, um, one of the things that happened that was semi-big early on was Philip Tetlock coming out with his book Super-
[11:24.000 --> 11:31.000] forecasting and showing off his Good Judgment Project, where essentially he showed that you can do subjective probability
[11:31.000 --> 11:36.000] forecasting, and it will actually predict the future, that this method does work
[11:36.000 --> 11:44.000] and that it is useful in a real geopolitical kind of intelligence context to figure out what is going to happen.
[11:44.000 --> 11:50.000] And I think that that massively raised the profile of subjective probability as like a thinking tool.
[11:50.000 --> 11:54.000] Because before that, it's funny because, you know, EY talks a lot about Bayes' law.
[11:54.000 --> 11:56.000] And so this has become, like, a meme, right?
[11:56.000 --> 11:57.000] Yeah.
[11:57.000 --> 12:02.000] And then he doesn't, he never really follows up on it, right?
[12:02.000 --> 12:05.000] He never says like, here's how you use Bayes Law.
[12:05.000 --> 12:14.000] He never says... you know, Bayes' law is only given as, like, kind of a theoretical model of what it looks like to update and do, like, a certain kind of thinking.
[12:14.000 --> 12:19.000] So essentially just kind of gesturing at a theoretical prosthetic epistemology.
[12:19.000 --> 12:22.000] I don't think he ever really intended that you were going to sit down and actually do it.
[12:22.000 --> 12:27.000] And this is where it gets weird because he writes like this fiction where people do that.
[12:27.000 --> 12:33.000] But it's weird because he writes fiction where people explicitly use Bayes Law casually.
[12:33.000 --> 12:37.000] And then never actually tells his readers how to do that.
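
(For concreteness, since the episode never spells it out either: the kind of "using Bayes' law" being gestured at here is just the posterior calculation sketched below. This is an illustrative example with made-up numbers, not anything taken from the Sequences or from the episode.)

    # Illustrative Bayes update: P(H | E) from a prior and two likelihoods.
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / p_e

    # A 1% prior, evidence seen 90% of the time if true and 5% of the time if false:
    print(posterior(0.01, 0.90, 0.05))  # ~0.154, i.e. the 1% belief updates to about 15%
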
[12:37.000 --> 12:38.000] Yeah.
[12:38.000 --> 12:41.000] I mean, I found the Bayes Law thing pretty tiresome honestly.
[12:41.000 --> 12:44.000] I think it's completely tiresome and so I don't want to talk about it any further.
[12:44.000 --> 12:45.000] Yeah.
[12:45.000 --> 12:47.000] Well, I do just a little bit.
[12:47.000 --> 12:47.000] All right.
[12:47.000 --> 12:48.000] Sure.
[12:48.000 --> 12:50.000] Because I feel like I get to do this.
[12:50.000 --> 12:54.000] I feel like the way that rationalists talk about Bayes' law is kind of, like,
[12:54.000 --> 13:02.000] Gladwellian, you know, it's like, here's this basic tool that you can use to do statistical inference in some way or another.
[13:02.000 --> 13:08.000] And like turning it into something that's much like much more profound than it actually is.
[13:08.000 --> 13:10.000] I mean, okay, so maybe it's profound in some sense.
[13:10.000 --> 13:14.000] But it's like, yeah, okay, you can update your priors, you know, in this particular way.
[13:14.000 --> 13:18.000] And maybe your pre-existing beliefs matter in interpreting new outcomes and so on.
[13:18.000 --> 13:25.000] But I don't know, it feels a little bit to me like if people were really excited when they learned about
[13:25.000 --> 13:34.000] differential calculus and, like, you know, wow, you can use differential calculus to do all this cool stuff, but it's like, yeah, that's, you know, it's a useful tool.
[13:34.000 --> 13:42.000] You can use it to, you know, do all sorts of things, but I feel like there's some kind of a, like, Gladwellian fervor about it.
[13:42.000 --> 13:44.000] Yeah, no, no, I totally see what you mean.
[13:44.000 --> 13:46.000] I do have some comments on that actually.
[13:46.000 --> 13:50.000] One of them is that I do think that.
[13:50.000 --> 13:59.000] So a lot of Eliezer's focus on Bayes' law comes down to him being very interested in statistics and artificial intelligence.
[13:59.000 --> 14:04.000] So it comes out of the fact that EY is an artificial intelligence researcher.
[14:04.000 --> 14:07.000] And I know he, I think he actually had a recent Twitter post
[14:07.000 --> 14:12.000] where he said, I have never called myself an artificial intelligence researcher, but, like, that's what it is, right?
[14:12.000 --> 14:15.000] But it comes to me as...
[14:15.000 --> 14:21.000] Another way in which, like, I have no comment.
[14:21.000 --> 14:25.000] Okay, but no, so.
[14:25.000 --> 14:30.000] There's... okay, before we go on, I just want to have everyone, like, stop this podcast right now.
[14:30.000 --> 14:37.000] No offense to you, JD. Like, this is fantastic. I'm having a lot of fun, but the podcasts that everyone should be listening to are the two,
[14:37.000 --> 14:47.000] the two L. Ron Hubbard episodes of the Dead Authors Podcast, which are perhaps, like, the best content ever produced in an audio medium.
[14:47.000 --> 14:51.000] Dead Authors Podcast, L. Ron Hubbard, it's a two-parter.
[14:51.000 --> 14:54.000] The, the man is a genius, the guy who's playing L. Ron Hubbard.
[14:54.000 --> 14:58.000] Anyway, continue, please. Oh, yeah, well, okay. So I mean, if you're going there.
[14:58.000 --> 15:04.000] Oh, wow, I'm trying, now, now you've got me because now I have to, I'm trying to remember the name of this book.
[15:04.000 --> 15:10.000] Um, it was, uh, not now. Well, we'll move on. We'll move on. I can, I can, I can put it in the front.
[15:10.000 --> 15:15.000] So, but, okay, so Bayes law, Bayes law.
[15:15.000 --> 15:23.000] And so, Eliezer Yudkowsky, though, um, the way he sees Bayes' law, he's done this work in statistics, at least theoretically.
[15:23.000 --> 15:30.000] And he sees it as, it is a law. It is a law of thinking. This Bayes' law, you know, it's, it's Bayes' law.
[15:30.000 --> 15:44.000] It's not Bayes' theorem, it's Bayes' law. It's a law of thinking. And so in his mind, I think that it becomes something that is more than just a theorem, because it is almost this firm point.
[15:44.000 --> 15:51.000] And I'm, I'm not... no, I'm just thinking, like Samuel Johnson, exactly what I could do.
[15:51.000 --> 15:57.000] What sort of thinking process I could demonstrate on air to say, I refute it thus.
[15:57.000 --> 16:02.000] Oh, well, actually, the other thing I was going to say is that when you brought up differential calculus, you realize,
[16:02.000 --> 16:09.000] Leibniz actually tried to do that, right, that Leibniz, uh, all his work on mathematics was supposed to lead to a
[16:09.000 --> 16:16.000] language of philosophy that was rigorously precise, that would... you know, he said that in the future philosophers won't debate.
[16:16.000 --> 16:24.000] They will say, let us calculate, and they will sit down and they will use, um, these tools of, of symbolic manipulation to come to
[16:24.000 --> 16:30.000] perfectly provable, rigorous arguments on all matters of human morality, uh, sociology, like, that was his vision.
[16:30.000 --> 16:36.000] That was Leibniz's vision. And so when he was working on calculus, that was meant to be a step towards that.
[16:36.000 --> 16:41.000] Okay. We should loop back to that, maybe, because when you talk about Leibniz and philosophy,
[16:41.000 --> 16:45.000] I'm starting to think a little bit about Newton and alchemy, which, you mentioned, around transparency.
[16:45.000 --> 16:49.000] Oh, no, no, no, no, no, okay. Okay. We're definitely getting way off track.
[16:49.000 --> 16:50.000] Way off track.
[16:50.000 --> 16:51.000] Yeah, no.
[16:51.000 --> 16:52.000] I'm sorry.
[16:52.000 --> 16:53.000] No.
[16:53.000 --> 16:54.000] It's fine.
[16:54.000 --> 16:55.000] Okay.
[16:55.000 --> 16:56.000] Let's go back to the main branch.
[16:56.000 --> 16:57.000] All right.
[16:57.000 --> 16:58.000] Sure.
[16:58.000 --> 16:59.000] Well, I mean, I like it.
[16:59.000 --> 17:00.000] And it's so you brought up.
[17:00.000 --> 17:03.240] And so I do think there's a big... to just sum up the Bayes thing,
[17:03.240 --> 17:09.000] I do think there is a big motte-and-bailey kind of thing going on with Bayes, where people early
[17:09.000 --> 17:11.000] on were very enthusiastic about it.
[17:11.000 --> 17:15.000] And mostly because Eliezer Yudkowsky was very enthusiastic about it. I don't feel like very
[17:15.000 --> 17:17.000] many of them really understood it all that well.
[17:17.000 --> 17:18.000] Yeah.
[17:18.000 --> 17:20.240] And it's kind of a cargo cult thing.
[17:20.240 --> 17:24.600] And then over time, it became less and less important.
[17:24.600 --> 17:29.800] In fact, in the more recent CFAR handbooks, I don't think the words Bayes' law even appear,
[17:29.800 --> 17:31.800] or Bayes' rule, or Bayes.
[17:31.800 --> 17:32.800] I think I searched
[17:32.800 --> 17:34.480] them at one point, yeah, I don't think it even appears.
[17:34.480 --> 17:41.320] So it's no longer something that at least the MIRI-CFAR branch really endorses.
[17:41.320 --> 17:42.320] I'm not sure.
[17:42.320 --> 17:43.320] I mean, they do, but you get what I mean.
[17:43.320 --> 17:47.960] It's no longer meant to be thought of as, like, a practical technique.
[17:47.960 --> 17:52.200] And I feel like there is like a silent repudiation that happened, but there was never like
[17:52.200 --> 17:54.480] a public recantation.
[17:54.480 --> 17:58.040] And so there's a weird, like, motte-and-bailey thing going on where it's like, you say Bayes'
[17:58.040 --> 18:01.040] law, it's like, don't bring up Bayes' law.
[18:01.040 --> 18:02.040] Interesting.
[18:02.040 --> 18:03.040] You get what I mean.
[18:03.040 --> 18:04.040] No.
[18:04.040 --> 18:05.040] Yeah.
[18:05.040 --> 18:07.280] And I think you can see this in public discourse when people talk about it.
[18:07.280 --> 18:08.280] But yeah.
[18:08.280 --> 18:11.280] So now we can keep being keep going.
[18:11.280 --> 18:12.280] Okay.
[18:12.280 --> 18:14.480] And so EY writes this.
[18:14.480 --> 18:18.000] And that kind of gives you an idea of what he's interested in in terms of like why is he
[18:18.000 --> 18:19.000] writing the sequences?
[18:19.000 --> 18:20.000] What does he want?
[18:20.000 --> 18:23.360] He wants people to work on AI risk. What else might they work on besides AI risk: effective
[18:23.360 --> 18:29.120] altruism, ending death, making a new nation, cryonics, which is really just part of ending
[18:29.120 --> 18:30.120] death.
[18:30.120 --> 18:31.120] Okay.
[18:31.120 --> 18:36.080] So then he comes back and this is in 2011, so April 20 of 2011.
[18:36.080 --> 18:40.480] So really almost about two years later, like almost to the same day.
[18:40.480 --> 18:46.080] And two years later, he is writing this document called the Epistle to the New York
[18:46.080 --> 18:47.080] Less Wrongians.
[18:47.080 --> 18:50.280] And this is actually another like kind of, wow, look at that.
[18:50.280 --> 18:51.280] Yeah.
[18:51.280 --> 18:52.280] Okay.
[18:52.280 --> 18:57.560] This is a long form, exposition of exactly what he wants people to do.
[18:57.560 --> 19:00.720] It's, like, even more long form than, like, you know, The End of Sequences,
[19:00.720 --> 19:02.600] where he gives some brief instructions.
[19:02.600 --> 19:06.360] And then in the Epistle to the New York Less Wrongians, he just writes this full essay where
[19:06.360 --> 19:09.200] he's explained like, this is exactly what I want you to do.
[19:09.200 --> 19:12.000] These are the failure modes that I expect you to encounter.
[19:12.000 --> 19:16.600] This is how you're going to screw it up, like he's like it's, and this never gets brought
[19:16.600 --> 19:17.600] up ever.
[19:17.600 --> 19:22.040] I have never heard someone reference this in any of these conversations.
[19:22.040 --> 19:25.520] Maybe you don't get to be Jesus and Paul at the same time.
[19:25.520 --> 19:31.360] Maybe, maybe, but okay, so, and even, he even gives a mission statement.
[19:31.360 --> 19:34.920] He even gives a mission statement here, so let's go.
[19:34.920 --> 19:37.640] He says, stay on track toward what, you ask?
[19:37.640 --> 19:40.400] So, okay, I guess I should give some context here.
[19:40.400 --> 19:44.360] So two years later, the sequences have ended, people have founded, I think, really the first
[19:44.360 --> 19:50.520] kind of rationalist community, scare quotes, in New York, and EY is addressing them specifically
[19:50.520 --> 19:51.520] as an audience.
[19:51.520 --> 19:54.800] And someone said, you should put this on the internet for other people to read too.
[19:54.800 --> 19:56.160] And so he did.
[19:56.160 --> 20:00.800] And so he's telling them basically, I visited you, you're great, you're doing exactly
[20:00.800 --> 20:03.320] what I want and more.
[20:03.320 --> 20:06.560] I think you're doing good, and I think that you're about to run into a bunch of issues.
[20:06.560 --> 20:09.080] And so I'm going to tell you what's going to happen to you and how it's going to screw
[20:09.080 --> 20:10.080] you up.
[20:10.080 --> 20:11.080] Okay.
[20:11.080 --> 20:14.600] And what, and what you should be doing, like what I think you should be doing to advance.
[20:14.600 --> 20:16.360] So what, what were the specifics?
[20:16.360 --> 20:18.040] Yeah, well, so let's talk about it.
[20:18.040 --> 20:21.240] So he says, stay on track toward what, you ask?
[20:21.240 --> 20:24.320] And my best shot at describing the vision is as follows.
[20:24.320 --> 20:26.840] Through rationality, we should become awesome,
[20:26.840 --> 20:32.040] and invent and test systematic methods for making people awesome, and plot to optimize everything
[20:32.040 --> 20:33.040] in sight.
[20:33.040 --> 20:35.640] And the more fun we have, the more people will want to join us.
[20:35.640 --> 20:39.520] And he puts in parentheses, that last part is something I only realized was really important
[20:39.520 --> 20:41.120] after visiting New York.
[20:41.120 --> 20:42.120] And so I don't want to go over it.
[20:42.120 --> 20:43.120] Like this is a long essay.
[20:43.120 --> 20:48.280] And I really, really encourage anyone listening to this to just read it, you can search for it.
[20:48.280 --> 20:49.520] It's still on less wrong.
[20:49.520 --> 20:54.920] It's, you know, the Epistle to the New York Less Wrongians, just read this yourself.
[20:54.920 --> 20:57.040] It's, it's pretty good.
[20:57.040 --> 21:01.800] And I think it actually represents Yudkowsky at kind of his most joyous moment, you know,
[21:01.800 --> 21:07.880] and contextually, I believe he was quite a ways into... no, he was, I think he had just finished...
[21:07.880 --> 21:11.080] like, okay, I'm remembering, HPMOR in 2011.
[21:11.080 --> 21:15.360] So I know that he had finished at least the Azkaban arc of HPMOR, so really the most
[21:15.360 --> 21:17.040] important arc of the story.
[21:17.040 --> 21:18.040] And he's way into it.
[21:18.040 --> 21:22.800] And the story's becoming popular and the sequences are becoming popular, like, he's seeing
[21:22.800 --> 21:24.760] his vision becoming realized.
[21:24.760 --> 21:27.560] This is kind of EY at his most joyous moment.
[21:27.560 --> 21:28.560] Okay.
[21:28.560 --> 21:32.920] And this is probably his hedonic high water mark, and so it includes some of his, I would
[21:32.920 --> 21:38.640] say, more fun, like serious, but fun prose, and it's just a joy to read this.
[21:38.640 --> 21:43.400] And so I encourage anyone who is interested in this subject to go track that down.
[21:43.400 --> 21:48.400] But so one question that I have, you mentioned the, the things that he's directing them
[21:48.400 --> 21:50.880] to do, but what does he identify as the pitfalls?
[21:50.880 --> 21:51.880] Yeah, absolutely.
[21:51.880 --> 21:55.600] So let's talk about the pitfalls, because he really goes in more into detail in the pitfalls
[21:55.600 --> 21:56.600] in this essay.
[21:56.600 --> 22:00.600] So in the first, but real quick, in the last one, he said the pitfall was that you
[22:00.600 --> 22:02.640] read all this and you didn't do anything.
[22:02.640 --> 22:06.080] If I have not inspired you to do something, you know, really do something, not just put
[22:06.080 --> 22:09.600] a link in your signature to the sequences, I have failed.
[22:09.600 --> 22:11.160] That's basically what he says.
[22:11.160 --> 22:16.400] So then in 2011, two years later, he comes back and he gives more info on the pitfalls.
[22:16.400 --> 22:20.920] He thinks that some of the pitfalls will include things like, you're going to, you know,
[22:20.920 --> 22:24.920] as you get better and you become really shiny people are going to want to join you who
[22:24.920 --> 22:28.440] are not really into this whole rationality thing.
[22:28.440 --> 22:32.760] They are going to see that you are shiny and that you have something good going on, but
[22:32.760 --> 22:34.800] they're going to be coming from a different mindset.
[22:34.800 --> 22:37.680] The first people who join were in this for the rationality.
[22:37.680 --> 22:40.560] The people who, you know, if you're doing your job and everyone is awesome and they're
[22:40.560 --> 22:48.400] becoming awesome and optimized and transhuman gods, right, that they're, you know, that people
[22:48.400 --> 22:53.560] will want to join and they will not be joining as people who are interested in rationality.
[22:53.560 --> 22:58.640] But will join as people who want your success. And to deal with this alignment problem,
[22:58.640 --> 23:02.840] you're going to have to kind of push on them that, no, the rationality is important.
[23:02.840 --> 23:09.240] We promise it's not just a weird, you know, like a, like a, like a hang, you know, hang
[23:09.240 --> 23:10.240] you know, or something.
[23:10.240 --> 23:11.240] I don't even know what you would say.
[23:11.240 --> 23:15.760] So I mean, it's like an extra, like kind of like how sometimes things evolve and then
[23:15.760 --> 23:16.760] they don't.
[23:16.760 --> 23:18.760] Like a, a mandrel, right?
[23:18.760 --> 23:19.760] Yeah.
[23:19.760 --> 23:19.760] Yeah.
[23:19.760 --> 23:20.760] Like a spandrel.
[23:20.760 --> 23:21.760] Exactly.
[23:21.760 --> 23:22.760] I was wondering if that was the word.
[23:22.760 --> 23:26.280] Or are you thinking of the one where it's like, no, I'm saying the spandrel.
[23:26.280 --> 23:29.760] So, okay.
[23:29.760 --> 23:33.600] And so that was kind of one worry he had. Another worry he had was that you're going to have
[23:33.600 --> 23:41.000] people who you take pity on and you have the urge to fix, and they are not fixable.
[23:41.000 --> 23:45.760] So he's actually discussing, right at the outset here in April 2011,
[23:45.760 --> 23:51.760] the whole unconditional tolerance of weirdos problem, and EY's opinion on it is: don't.
[23:51.760 --> 23:52.760] Throw them out.
[23:52.760 --> 23:57.480] He actually says more or less this, and he actually even goes so far as to say that
[23:57.480 --> 24:01.400] what you need to do is you need to have, like, a set period of time for people who are...
[24:01.400 --> 24:04.200] I love the way he phrases it, he says, you know, I've heard of all of these people who are making
[24:04.200 --> 24:07.360] so much progress, and then you ask them a month later and they haven't actually done anything.
[24:07.360 --> 24:10.840] They're not, you know, but they're making so much progress and he says, you know, if
[24:10.840 --> 24:14.280] he says, all my mental techniques were developed in the course of trying to do something.
[24:14.280 --> 24:18.080] So if you think you're making all this improvement and you're not doing anything, that's
[24:18.080 --> 24:19.520] a really big red flag.
[24:19.520 --> 24:20.520] Yeah.
[24:20.520 --> 24:27.000] And he continues that, you know, you should have like a set period like four months and
[24:27.000 --> 24:32.160] if someone isn't making real tangible, doing things progress in four months, you should
[24:32.160 --> 24:35.160] have a committee that kicks them out and he says that it needs to be a committee because
[24:35.160 --> 24:40.000] if it's a person, the person will take pity and have the whole decision forced on them.
[24:40.000 --> 24:45.000] He says, but of course, you know, you need to have no pity and no mercy, and the only way
[24:45.000 --> 24:48.800] to really do that, besides assigning the task to Lord Voldemort, is to have it done by
[24:48.800 --> 24:49.800] committee.
[24:49.800 --> 24:50.800] Interesting.
[24:50.800 --> 24:54.360] Okay, sorry, a little bit of dry humor there, I guess.
[24:54.360 --> 24:58.680] I personally thought that was, yeah, no, no, no, no, no, no, no, I was thinking about
[24:58.680 --> 25:06.400] the problem myself and I guess I think honestly like, well, go on, I don't need to drop
[25:06.400 --> 25:07.400] in my opinions of this.
[25:07.400 --> 25:08.400] No, it's fine.
[25:08.400 --> 25:10.680] I mean, you can, no, you can, you can go on, go on.
[25:10.680 --> 25:13.680] Well, I don't know, I'm just speculating about it.
[25:13.680 --> 25:20.640] It seems like the sort of problem where the cost of having, like, really effective gate-
[25:20.640 --> 25:27.360] keeping up front and directly is much lower than the cost of actually, like, expelling
[25:27.360 --> 25:31.440] somebody from a community once, once they're inside, even if it's something that's good
[25:31.440 --> 25:32.440] for the community.
[25:32.440 --> 25:37.760] I mean, like, you know, you look at, say, sex pests, for example, like, you know, people
[25:37.760 --> 25:43.200] turning into broken stairs, or, you know, maybe they were always broken stairs, but just
[25:43.200 --> 25:47.000] empirically, it's very hard to kick people out of communities for doing something like
[25:47.000 --> 25:54.720] this until it becomes, you know, like really transparently awful, but it's like people see
[25:54.720 --> 26:00.960] this coming in our backs and I mean, that's an extreme case, but I mean, if you're trying
[26:00.960 --> 26:09.640] to do something and you're a mission-focused community, it, it seems like it can be really
[26:09.640 --> 26:15.040] hard to, you know, kick out deadwood. Firms, you know, I think are a better metaphor
[26:15.040 --> 26:18.920] here, and it's really hard for firms to fire people, and nobody at a firm actually wants
[26:18.920 --> 26:22.880] to fire anybody, you know, and it's probably one of the hardest things that managers
[26:22.880 --> 26:25.360] do if somebody's just not pulling their weight.
[26:25.360 --> 26:28.360] And yeah, I don't know.
[26:28.360 --> 26:29.360] So I know absolutely.
[26:29.360 --> 26:31.120] So I do have some thoughts on this.
[26:31.120 --> 26:35.240] One of them is that I do, I absolutely agree that having gatekeeping norms out front
[26:35.240 --> 26:38.440] is much more effective than trying to kick people out.
[26:38.440 --> 26:43.200] And part of that is because, okay, so I always think about this in the context of online
[26:43.200 --> 26:47.600] moderation because that's my usual background, but let's say that you have someone who
[26:47.600 --> 26:52.000] shows up to a group on day one and you don't like them and you kick them out.
[26:52.000 --> 26:54.840] Okay, what was the cost of that person?
[26:54.840 --> 26:56.680] About like maybe an hour of their time, right?
[26:56.680 --> 27:00.600] They don't have any really connections yet, socially, they don't, blah, blah, blah.
[27:00.600 --> 27:06.320] Okay, let's say that you're kind of wishy-washy about it and you only, you only kick people
[27:06.320 --> 27:11.080] out when they become intolerable and so you have someone who kind of seems off at first
[27:11.080 --> 27:15.320] and then, you know, as time goes on, they just get worse and worse and worse until eventually
[27:15.320 --> 27:19.680] you kick them out and it's been six months and now this person has like some friends in
[27:19.680 --> 27:23.960] the group, you know, it maybe it's become one of their primary social groups, you know,
[27:23.960 --> 27:26.240] what's the cost of that person of kicking them out?
[27:26.240 --> 27:32.120] It's a lot greater and so I think it's much kinder and frankly, not just more efficient,
[27:32.120 --> 27:37.600] but even just kinder, even to the person involved, to fire early and often, so to speak.
[27:37.600 --> 27:41.080] That's actually the word that Eliezer uses in this letter to the New York
[27:41.080 --> 27:44.840] community: you should fire them. And you know, if someone's a sex pest, you should
[27:44.840 --> 27:45.840] fire them.
[27:45.840 --> 27:49.920] You know, he doesn't use that phrase obviously, but like, that's essentially what he's
[27:49.920 --> 27:50.920] saying.
[27:50.920 --> 27:55.240] Yeah, and I mean, you know, I'm just a, I'm just a humble game theorist, but it seems
[27:55.240 --> 28:00.360] to me that the way that you want to structure this contract is like, if you're concerned
[28:00.360 --> 28:06.600] about somebody showing up and not making things, simply make making things a precondition
[28:06.600 --> 28:11.680] for entry, you know, I mean, that solves the problem directly and elegantly, like, go
[28:11.680 --> 28:15.280] create something, if it's good, we'll review it and we'll admit you or not.
[28:15.280 --> 28:21.920] No, no, absolutely, absolutely, um, so, but those were kind of the pitfalls he's discussed.
[28:21.920 --> 28:25.440] I mean, he discusses some other pitfalls, but this is a really kind of a long letter and
[28:25.440 --> 28:30.320] it's, like I said, I don't really think I can do it justice just by just talking about
[28:30.320 --> 28:31.560] it on a podcast.
[28:31.560 --> 28:32.560] It really should be read.
[28:32.560 --> 28:33.560] Yeah.
[28:33.560 --> 28:34.560] Okay.
[28:34.560 --> 28:35.560] I'll put a link to it in the comments.
[28:35.560 --> 28:36.560] Yeah.
[28:36.560 --> 28:43.160] And then so that's kind of, that and so that's kind of where less wrong started, right?
[28:43.160 --> 28:46.560] So I'd say, you know, you have 2009 as kind of the first founding period, when he ends
[28:46.560 --> 28:47.560] the sequences.
[28:47.560 --> 28:51.920] And then 2011 is maybe the second founding period, because he actually almost founded it twice,
[28:51.920 --> 28:59.280] with HPMOR, which brought in a whole new and different readership, which, as a, as a
[28:59.280 --> 29:03.320] very, like, as a side point, just real quick before I move on, just a quick lore time-out:
[29:03.320 --> 29:06.960] I kind of have this, almost, I don't want to call it a conspiracy theory, but kind of this,
[29:06.960 --> 29:13.520] ah, let's call it a crank hypothesis, that HPMOR in many ways was both the seed
[29:13.520 --> 29:18.960] of, like, LessWrong's, like, rise and downfall, because it brought in a lot of people
[29:18.960 --> 29:25.160] who were filtered on really liking this elaborate style of fiction rather than doing things.
[29:25.160 --> 29:31.880] So if you read really early, less wrong, people who are on there are, like, math Olympiads,
[29:31.880 --> 29:32.880] they're startup founders.
[29:32.880 --> 29:38.120] You know, it's this ver, it's a much smaller, but much more tightly focused, kind of,
[29:38.120 --> 29:41.120] like, really silicon valley kind of group.
[29:41.120 --> 29:46.360] And, you know, after H.P.M.O.R, of course, you are speaking to a much wider audience and
[29:46.360 --> 29:47.840] filtering on a very different band.
[29:47.840 --> 29:53.400] I know that I myself kind of came to LessWrong through TV Tropes, like, that was just
[29:53.400 --> 29:55.120] how I happened to find it.
[29:55.120 --> 29:58.720] And so, you know, if you're recruiting off TV tropes, what kind of person are you filtering
[29:58.720 --> 29:59.720] for?
[29:59.720 --> 30:07.200] And I think that means, yeah, something, something to think about.
[30:07.200 --> 30:16.480] So yeah, so, I mean, that's kind of interesting as a matter of, like, I mean, just
[30:16.480 --> 30:21.800] to use an example of a firm as something that's often very good at getting things done.
[30:21.800 --> 30:28.560] You know, I don't think that most firms would be more effective if, I mean, just, like,
[30:28.560 --> 30:33.320] say Google, you know, like, say Google threw open its doors and said, anybody can be a Google
[30:33.320 --> 30:34.320] employee.
[30:34.320 --> 30:35.320] We're not going to pay you anything, say.
[30:35.320 --> 30:39.360] But if you want to, like, affiliate yourself with Google and come in, like, do projects
[30:39.360 --> 30:44.920] with us, I do not believe that would increase the productivity of Google.
[30:44.920 --> 30:50.440] No, I don't, I don't imagine it would, which, you know, itself is, you know, when you
[30:50.440 --> 30:53.720] say it like that, you're really kind of getting at a bit of the, one of the core kind of
[30:53.720 --> 30:54.720] issues, right?
[30:54.720 --> 30:59.120] Is, you know, how much evangelism should you be doing versus, you know, how much purity
[30:59.120 --> 31:02.080] should you be trying to maintain in kind of the core group, right?
[31:02.080 --> 31:03.080] Yeah.
[31:03.080 --> 31:07.760] I think that's a issue for any, I remember your podcast with, I forget their name, but
[31:07.760 --> 31:08.760] the clubhouse podcast.
[31:08.760 --> 31:11.200] Oh, yes, with IV-estrics.
[31:11.200 --> 31:12.760] Yeah, okay, IV-astrics.
[31:12.760 --> 31:13.760] Yeah.
[31:13.760 --> 31:19.080] And, you know, I believe she was talking about the, you know, kind of the transition
[31:19.080 --> 31:22.060] that it went through where you had all these people coming in who did not share the
[31:22.060 --> 31:25.160] culture, and they would say things like, why are there all these nerds on here?
[31:25.160 --> 31:32.160] It's like, well, because it's a nerdy, it's a nerdy, you know, subculture, it's, yeah.
[31:32.160 --> 31:35.640] And so that, that, you know, managing that is a whole topic.
[31:35.640 --> 31:39.760] We could go on about that for like hours, really, it's, it's a, it's a, it's a rabbit
[31:39.760 --> 31:40.760] hole.
[31:40.760 --> 31:41.760] Yeah.
[31:41.760 --> 31:43.560] And I, I mean, I think, I want to be clear on my side.
[31:43.560 --> 31:51.560] I think that some organizations should be sort of ecumenical and some organizations should
[31:51.560 --> 31:56.280] be gatekept, and, you know, which, which should be the case depends on what you're trying
[31:56.280 --> 31:57.280] to get the thing to do.
[31:57.280 --> 32:03.120] It's not a, I don't view it as a matter of, you know, some, some kind of deontology, but
[32:03.120 --> 32:10.280] strictly utilitarianism, you know, like, if you're trying to, say, create a church that
[32:10.280 --> 32:14.000] saves all men's souls, maybe you should invite everybody into the church regardless of their
[32:14.000 --> 32:15.520] nation or affiliation.
[32:15.520 --> 32:19.960] And if you're trying to build a rocket ship, you probably don't want a lot of members
[32:19.960 --> 32:21.960] of the public.
[32:21.960 --> 32:22.960] It's interesting.
[32:22.960 --> 32:26.680] You bring that up, the idea of like, you know, if you're trying to make a church
[32:26.680 --> 32:30.360] to save all men's souls, you know, you should probably invite everyone, because I do
[32:30.360 --> 32:35.480] think that that's something that is interesting in the context of, now, I guess we can
[32:35.480 --> 32:43.360] talk about that right, just right about right now, because, you know, a lot of the criticism
[32:43.360 --> 32:50.000] of less wrong is that, oh, like, this is a cult or like, oh, this is, I don't know how to
[32:50.000 --> 32:51.000] put it, but you get what I mean.
[32:51.000 --> 32:55.000] Like, well, we were just talking earlier about L. Ron Hubbard, we were making jokes about how,
[32:55.000 --> 33:00.760] you know, that, you know, it's very... I feel so bad, because I don't, I feel like, I feel
[33:00.760 --> 33:05.280] like this is a mean thing to say about EY, but it's also... it's okay.
[33:05.280 --> 33:12.120] I mean, like, people, I'd end up poking fun at people who are sort of weird, maybe kind
[33:12.120 --> 33:19.800] of visionary, and I, I mean, I, I live in fear of the day that somebody does this about
[33:19.800 --> 33:25.000] me, you know, I mean, like, there was that, there was that in group vote, and I was genuinely
[33:25.000 --> 33:31.160] kind of hoping I lost just so that like, I wouldn't be put anywhere close to a sort
[33:31.160 --> 33:32.160] of a position.
[33:32.160 --> 33:36.960] Funny, I, I remember, I think it was a, a cursy was talking about how, as you level up
[33:36.960 --> 33:42.400] on Twitter, your reward is more and more persistent kinds of jerkwads who, like, attack
[33:42.400 --> 33:43.400] you.
[33:43.400 --> 33:44.400] Oh, there's some of that, yeah.
[33:44.400 --> 33:45.400] Yeah.
[33:45.400 --> 33:46.400] Yeah.
[33:46.400 --> 33:47.400] No, I, yeah.
[33:47.400 --> 33:48.400] No, totally.
[33:48.400 --> 33:54.480] I mean, like, I'm, I'm actually not concerned about jerk-wads so much as I am about, like,
[33:54.480 --> 34:00.440] the possibility that, like, somebody would see me as more than a shit poster, and especially
[34:00.440 --> 34:03.560] if I started acting like somebody who is more than a shit poster, you know what I mean?
[34:03.560 --> 34:04.560] Oh, no, totally.
[34:04.560 --> 34:09.280] I... EY himself has this problem. He, so when he's writing the sequences, right?
[34:09.280 --> 34:13.160] He's writing about things like catgirls and volcano lairs, and part of the point of
[34:13.160 --> 34:17.240] that in his own words was that he doesn't want people to take him too seriously, because
[34:17.240 --> 34:21.880] if people take him too seriously, they're going to start to worship him, and, you know,
[34:21.880 --> 34:25.960] and they'll think that everything he says is gospel, and then they'll, you know, take
[34:25.960 --> 34:29.720] his daily blog posts, which he was just, like, writing on the digital equivalent
[34:29.720 --> 34:34.520] of butcher paper, to be these, like, amazing grand theses. Oh, right, people actually did that.
[34:34.520 --> 34:35.520] Oops.
[34:35.520 --> 34:36.520] Yeah.
[34:36.520 --> 34:37.520] Yeah, right.
[34:37.520 --> 34:40.320] I mean, like, it's, it's a problem, and I'm glad he was thinking about it.
[34:40.320 --> 34:44.720] I, I don't know that I would have used his particular approach, but, you know, I,
[34:44.720 --> 34:48.800] I think he, yeah, he was, he was definitely thinking about it. He, he comments on it
[34:48.800 --> 34:53.920] several times at length. It's, it's really funny, because there's a sense in
[34:53.920 --> 34:58.480] which his persona is, it's what I like to call guru writing, right, like,
[34:58.480 --> 35:03.040] Paul Graham, for example, or Steve Pavlina, kind of these almost, like, self-helpish authors
[35:03.040 --> 35:08.200] who have this, you know, I am, like, the master, and if you want to do well at things,
[35:08.200 --> 35:09.520] you're going to, like, listen to what I say.
[35:09.520 --> 35:13.920] He kind of has that persona to him, but at the same time, there is this, like, side where
[35:13.920 --> 35:17.760] he'll say things like, hey, listen, like, please, don't, please, please don't, like,
[35:17.760 --> 35:22.600] deify me, right, like, please... and then they do anyway, because it's
[35:22.600 --> 35:23.600] unavoidable.
[35:23.600 --> 35:30.320] But, speaking of which, uh, so let's talk kind of about the history, in terms of ideas,
[35:30.320 --> 35:37.760] like, where LessWrong came from. Okay, yeah, so, I, I, before that,
[35:37.760 --> 35:43.760] I want to say one thing. Of, of the things that you mentioned, so, seasteading, like,
[35:43.760 --> 35:50.520] AI safety, yeah, life extension and cryonics, I would say they're one for four.
[35:50.520 --> 35:51.520] I would say they're one for four.
[35:51.520 --> 35:54.160] Yeah, I would agree with that. We're going to come back to that. We're going to actually
[35:54.160 --> 35:58.640] go into some detail on that, but I would agree overall just briefly, they're, I'd say
[35:58.640 --> 35:59.920] they're one for four.
[35:59.920 --> 36:04.240] I, I'm curious whether we actually have selected the same one, but continue.
[36:04.240 --> 36:06.880] Oh, interesting. Well, we'll, we'll talk about that.
[36:06.880 --> 36:14.400] Uh, yeah, but so in terms of the history of ideas, part of it is that, so David Chapman,
[36:14.400 --> 36:19.040] for example, I think we were talking about him earlier. I forget what we did talk about.
[36:19.040 --> 36:22.240] You mentioned a female. Yeah, I, I mentioned, yeah, I definitely mentioned him. I'm just trying
[36:22.240 --> 36:25.360] to figure out if we talked about earlier in the podcast or not. It doesn't matter. I'm just going
[36:25.360 --> 36:30.800] to, everyone, everyone listening to this knows who he is. Anyway, I'd be very surprised
[36:30.800 --> 36:35.680] if people didn't, but as a very brief summary: he's basically someone who comes on and he talks
[36:35.680 --> 36:39.840] about rationality, scare quotes, and then he'll just talk about less wrong rationality,
[36:39.840 --> 36:45.920] like it's the same philosophical tradition as, say, Bertrand Russell and, and Kurt
[36:45.920 --> 36:50.160] Gödel and, like, you know, oh, this is the same thing. I really don't think it is. I think it's
[36:50.160 --> 36:55.760] related, though. And so let me, let me put it like this. So I would say that less wrong got
[36:55.760 --> 36:58.240] started, because you were talking about this with Yashkaf as well, how LessWrong got started. I would
[36:58.240 --> 37:02.240] say LessWrong got started when this kind of autodidact genius decided he'd figured out the
[37:02.240 --> 37:08.080] key insights to saving the world. So he sits down and writes this dense 800-page book about how to
[37:08.080 --> 37:13.360] totally restructure your thinking based on a humanized version of analytic philosophy, and it's very
[37:13.360 --> 37:17.840] comprehensive to the point of including a long section of it, that nobody reads, on quantum mechanics,
[37:18.480 --> 37:23.920] then he founded this non-profit in New York... That's the title, Dianetics?
[37:24.720 --> 37:29.680] No, no, no, shush, don't, don't... he founded this non-profit in New York, dedicated to teaching people how to
[37:29.680 --> 37:32.480] think better and avert the apocalypse, and his name was Alfred Korzybski.
[37:33.440 --> 37:38.480] Oh, shit. Okay, that's a, wow. Yeah, you ruined the joke. Thank you. Did I, no, I was, I was
[37:38.480 --> 37:44.000] getting, yeah, I was moving, I broke that down. I was like, yeah, you ruined it. It's all right.
[37:44.000 --> 37:47.600] I didn't think I did. How many people have done this? I, I thought, sure.
[37:47.600 --> 37:51.200] Oh, actually, you know what, how about this? I'm going to say that again, and you're going to
[37:51.200 --> 37:54.720] edit it out, and we're going to do it correctly this time, all right? You're going to pretend like
[37:54.720 --> 38:00.720] your surprise. You got it. So there's this guy named Crosibsky. Actually, what a beat.
[38:01.760 --> 38:08.240] Okay, there's this guy named Crosibsky, yeah, and Crosibsky is this Polish noble man,
[38:08.240 --> 38:14.320] and he's born in, I think, like, the late 19th century, because he, he participates in World War
[38:14.320 --> 38:19.600] 1. So, and he's like a young man when he does it. So he must have been born in about, like, the late
[38:19.600 --> 38:26.880] 19th century. And Crosibsky goes through his life, and he's this Polish noble man. He's really kind of
[38:26.880 --> 38:31.200] a wild youngster. You know, at one point in a party, he has, like, this argument with someone,
[38:31.200 --> 38:34.960] and he waits for him to walk away, and he pulls out his pistol and shoots, like, the drinking
[38:34.960 --> 38:40.000] glass out of their hand, holy shit. Yeah, like the, yeah, like this really interesting kind of
[38:40.000 --> 38:45.680] dude, and he participates in World War 1. And so the thing about World War 1 is that,
[38:46.480 --> 38:50.720] World War 1 is about when people start thinking for the first time about existential risk,
[38:51.600 --> 38:58.240] because the scale of the war was so vast and just so almost unexpected by most people,
[38:58.240 --> 39:03.280] but some thinkers did predict it. There is, for example, the book is war now impossible by
[39:03.280 --> 39:08.160] I think his name was Ian Block, or John Block. And, you know, he asks, like, you know,
[39:08.160 --> 39:13.600] is the, because there is this big argument about Cam, the original map history. Yeah, exactly.
[39:13.600 --> 39:20.640] And so there's this big argument about Cam, World War 1 happen. And people made a lot of
[39:20.640 --> 39:25.600] arguments that are very reminiscent of the arguments used today against the possibility of nuclear war.
[39:25.600 --> 39:30.960] For example, war would be too costly. People would never do it. It's stuff like that.
[39:30.960 --> 39:35.840] Yeah, but World War 1 did happen. And when it happened, and actually, I'd like to do another
[39:35.840 --> 39:40.720] shout out as, as far as we're showing out podcasts, there's a hardcore history series on World
[39:40.720 --> 39:48.720] War 1 with Dan Carlin that really gets into this, like the exact mechanics of World War 1 and why it
[39:48.720 --> 39:54.880] happened and what, you know, why it couldn't be stopped. It's gut wrenching, because as you listen
[39:54.880 --> 39:59.600] to it, you realize that he is in a parable way describing how the world would hypothetically
[39:59.600 --> 40:06.080] end if we started a nuclear war. Yeah, I just want to say also, I mean, earlier, I was talking
[40:06.080 --> 40:10.080] about the dead authors podcast with L. Ron Hubbard, which is ingenious, but also if you're listening
[40:10.080 --> 40:15.200] to this, and you have a listen to Dan Carlin, just go listen to Dan Carlin. And like, come on,
[40:15.760 --> 40:20.480] but continue, not, not again. No, no, totally. Dan Carlin is an excellent podcaster, really,
[40:20.480 --> 40:25.600] but puts a ton of effort into his show, you know, tons of research really well-sighted,
[40:25.600 --> 40:33.680] excellent narration. Great upon me. Okay, but so continuing. So Corsibski participates in
[40:33.680 --> 40:37.600] World War 1, and this is when people start thinking about existential risk, because the war
[40:38.240 --> 40:44.880] just prompted the question, you know, during the war after the war, you know, people were genuinely
[40:44.880 --> 40:52.320] asking for maybe the first time, are we just going to end the world? And with Calvary? Well,
[40:52.320 --> 40:59.520] so Alfred Corsibski himself is, he participates, and he is haunted by it, because all these
[40:59.520 --> 41:05.040] people die. And what he's really worried about is that all these people died for nothing,
[41:05.040 --> 41:10.400] that everyone, you know, these people go off and they kill each other, and then nothing happens
[41:10.400 --> 41:16.640] that will prevent the next war. So a lot of thinking in the 1920s and immediately after World War
[41:16.640 --> 41:22.000] 2, basically, or World War 1, was how do we prevent World War 2? How do we prevent another World War?
[41:22.560 --> 41:27.680] Yeah. And Corsibski kind of contributed to a lot of the intellectual thinking around this
[41:27.680 --> 41:32.720] subject. You know, he was part of, there was a scene, there was an intellectual scene about
[41:32.720 --> 41:42.320] how are we going to stop World War 2? And Corsibski was a part of it. And he spends a lot of time
[41:42.320 --> 41:47.520] thing about this. And at some point, I believe it's something like he's laying in bed and he's about
[41:47.520 --> 41:56.400] to fall asleep when the answer just hits him. And it's basically that he realizes humans are these
[41:56.400 --> 42:02.480] culture machines. You know, they take culture from their parents and they play it out. And then
[42:02.480 --> 42:06.720] they just give it to their children who in turn play it out and give it to their children. And he has
[42:06.720 --> 42:12.000] this idea of it over time, humanity is accumulating knowledge and insight and wisdom and technology.
[42:12.640 --> 42:18.080] And he has this kind of thought that the technical progress is accelerating exponentially. But
[42:18.080 --> 42:23.680] the social progress is accelerating linearly. If that, if not sub-linearly, there's almost no change.
[42:24.880 --> 42:32.000] The institutions that people are using to control their societies and their justice and all this
[42:32.000 --> 42:37.680] stuff have barely changed in the medieval ages. And yet, you know, you can go from a monarchy
[42:37.680 --> 42:40.800] to democracy or whatever. But that is how much of the change of that really, right?
[42:41.360 --> 42:44.960] Yeah, yeah, barely changed since the dawn of agriculture. Absolutely.
[42:45.520 --> 42:50.960] I was actually thinking, this is an aside, but I think in some ways we're progressing specifically,
[42:51.680 --> 42:56.640] I don't see a lot of use of religion as a social technology, which it is.
[42:57.440 --> 43:01.200] I agree. I completely agree in fact. We can talk about that. Yeah. When we get to the
[43:01.200 --> 43:08.160] post transaction, we're going to take into that. Yeah. Okay. Yeah. No, totally. But, okay. So,
[43:08.160 --> 43:14.160] Corsidzky is thinking about this. And he's realizing that, okay, if technological progress accelerates
[43:14.160 --> 43:20.640] exponentially and social progress accelerates, not at all, maybe even regresses, you know, the
[43:20.640 --> 43:26.800] world will end because we will become so powerful and like these, you know, we're going to become
[43:26.800 --> 43:34.080] like super powerful, but also really, really socially stupid in comparison to our awesome technological
[43:34.080 --> 43:39.600] powers. And to him, this was the cause of World War I. And I think that's a really sober,
[43:39.600 --> 43:47.120] frankly, quite good analysis. And, you know, it wasn't just Corsidzky thinking about this. You know,
[43:47.120 --> 43:55.120] this was a general topic in futureology. For example, HG Wells is 1936 film. What is to come? I
[43:55.120 --> 44:01.280] believe is the title. He, I actually watched it last night, I think. I haven't
[44:01.280 --> 44:06.400] last night or the night before. He's, it actually discusses how in the far-off distant year of
[44:06.400 --> 44:14.080] 1966, England will be a post-apocalyptic wasteland after World War II, which in 1936 had not
[44:14.080 --> 44:20.400] happened yet. Yeah. And so his, his, like, you know, he's writing that, you know, in 1966, England
[44:20.400 --> 44:25.600] will be this post-apocalyptic wasteland because the war won't end, but the technological progress
[44:25.600 --> 44:31.760] will increase. And people will use gas bonds and all this other stuff. And you know, it will be a
[44:31.760 --> 44:37.840] World War. So all civilizations will be destroyed from the total war that will ensue. And humanity
[44:37.840 --> 44:43.920] will regress to a medieval standard of living. And this was, you know, a kind of, you know, this
[44:43.920 --> 44:50.480] was kind of the first version of X-risk as a, as a, as a subject. And quasi-pski deliberately
[44:50.480 --> 45:00.800] developed his philosophy in response as a solution to X-risk. So yeah, okay, one, that's good.
[45:00.800 --> 45:07.600] Two, I'm curious how, I'm curious about the possibility that, in fact, the destruction of
[45:07.600 --> 45:18.080] World War I was not just physical, but deeply social. And in fact, like, technological
[45:18.880 --> 45:24.160] capacity has continued to increase. Whereas there's been like a study degradation of,
[45:24.160 --> 45:29.040] of the sort of institutional and social capital that you'd want to have. I would agree with that.
[45:29.040 --> 45:34.000] And I think that at one of probably been very obvious to Corsibski, because Corsibski was a
[45:34.000 --> 45:43.760] Polish noble man. So he lived in this kind of high society spot where, you know, you have the
[45:43.760 --> 45:49.280] old, you know, the remnants of the feudal system, which was for all its faults and problems stable.
[45:49.840 --> 45:54.560] Yeah. And he lives in this, and he's going, and you see, after I remember, when you're, you have
[45:54.560 --> 46:00.560] a nobility class, right? All the people who are interesting are being deliberately filtered by
[46:00.560 --> 46:05.360] various societal mechanisms to all being the same room. And so he experienced that. And I'm sure
[46:05.360 --> 46:13.760] of it that was probably a huge stimulus to this genre of thinking. But what's also really
[46:13.760 --> 46:20.400] interesting about Corsibski is, okay, he writes this book, right? But what's really interesting
[46:20.400 --> 46:24.880] about it is that, oh, actually, I'm going to make a start. So he has the thesis, you know, man
[46:24.880 --> 46:31.360] is this, he calls it a time binder. You know, he has this thesis that plants bind energy, you know,
[46:31.360 --> 46:37.600] sunlight, and animals are bind space, you know, they control space, you know, geographic, but that
[46:37.600 --> 46:43.440] humans bind time. That to him is the thing that distinguishes man from the other animals in the
[46:43.440 --> 46:50.800] animal kingdom is that humans have this ability to recall and act on the past, not just as a
[46:50.800 --> 46:58.400] system one kind of, you know, subconscious, nervous thing. But as this deliberate act of recall
[46:58.400 --> 47:03.760] and use of previous situations or information. And that's to him is like the thing that distinguishes
[47:03.760 --> 47:10.320] humanity. Yeah. And so he titled his book in 1921 book, it is now public domain. You can go read it.
[47:11.200 --> 47:19.200] He titles it, the manhood of humanity. And as he's writing it, he's, he's kind of collaborating with
[47:19.200 --> 47:24.480] these analytic philosophers. And so Khorzibsky, even since a young adulthood, had been following
[47:24.480 --> 47:29.280] the revolutions and physics like quantum mechanics and the development in mathematics, like for
[47:29.280 --> 47:34.640] example, the Principian Mathematica, Whitehead and Russell. Yep. And he was really interested in all this.
[47:34.640 --> 47:41.760] And one mathematician, nobody became very close to was Cassius J. Kaiser. And his relationship
[47:41.760 --> 47:47.440] to Kaiser is a little bit like E. Y's relationship with James, except as personal. So imagine if E. Y
[47:47.440 --> 47:55.680] actually knew Edwin James. Okay. Interesting. Who's James? I don't know. He's the, he's the Bayes guy.
[47:55.680 --> 48:01.680] He's the guy who takes Bayesian statistics and turns it into this whole intellectual edifice.
[48:01.680 --> 48:06.880] E. Y. Sites him over and over and over. He actually writes an entire post called Fowls and
[48:06.880 --> 48:13.120] your old vampires where he sights E. T. James as, you know, the most, what's that? The meme like
[48:13.120 --> 48:17.760] Albert Einstein was the most advanced being and you will, he will bow to him like he's a little bit
[48:17.760 --> 48:23.760] of that energy to it. Like he's really, really, he's really interested in E. T. James as kind of this
[48:23.760 --> 48:29.760] model of how analytic philosophy can become a practical tool for people to use in a humanistic
[48:29.760 --> 48:35.680] sense. And that's the relationship that Khorzibsky has to Cassius J. Kaiser because Kaiser is
[48:35.680 --> 48:41.520] thinking about mathematics in this almost heretical way. He's thinking about this idea that people
[48:41.520 --> 48:46.560] have principles and downstream of their principles will be what they do in the world. And so they
[48:46.560 --> 48:52.560] better get their principles right mathematically or philosophically. And so Kaiser is doing this kind of
[48:52.560 --> 49:01.360] humanistic, sorry, what? No, but yeah, he's doing this kind of humanistic mathematics and
[49:02.080 --> 49:07.280] Khorzibsky is all over it. And he incorporates it into his book, Manhood of Finanity. And Kaiser
[49:07.280 --> 49:13.040] actually gives a comment at the end of one of his books, I think it was on Fate and something,
[49:13.040 --> 49:16.800] where he actually has a chapter dedicated to Khorzibsky, so they're really close to each other.
[49:20.800 --> 49:28.000] So yeah. And so that's kind of part one of the Khorzibsky story. So Khorzibsky publishes this book
[49:28.960 --> 49:34.880] and it's pretty funny. So he publishes it and he writes in the end of the book. Okay, so now
[49:34.880 --> 49:40.640] would I have laid out my thesis of how we can have, of what is wrong with the world?
[49:40.640 --> 49:44.080] Now we're going to fix it. I'm going to explain the solution to you. And I will come out with a new
[49:44.080 --> 49:51.600] book in a year or two. The book actually took him 10 years to write. Classic. And so his, the next book
[49:51.600 --> 49:56.800] was called Science and Sandy and it came out 10 years later, so I think in 1931, 1932.
[49:57.440 --> 50:01.760] And science and Sandy is basically the sequences if they have been written in the 1930s.
[50:01.760 --> 50:10.080] Interesting. Okay. It, like it down to the dense jargon, down to people kind of making fun of it
[50:10.080 --> 50:14.960] for being a cult, down to, and one of the things that's really funny is that because it's the
[50:14.960 --> 50:19.600] 1930s, Khorzibsky has all this trouble finding a publisher because there is no other book like,
[50:19.600 --> 50:24.240] especially at this time, Science and Sandy. It's totally no. And even for Manhood of Humanity,
[50:24.240 --> 50:29.280] it was kind of a little bit of a cranky hypothesis. So to actually publish it,
[50:29.280 --> 50:35.520] there's a funny story where a guy that he's trying to get to publish the book has a dream,
[50:35.520 --> 50:39.760] where a deceased relative tells him that he needs to pay attention because someone is about to
[50:39.760 --> 50:45.040] come to him with an epoch making book on time and he needs to publish it. And so the guy has this
[50:45.040 --> 50:49.120] dream and he goes to Khorzibsky and he says, I'd like to publish your book right now.
[50:50.880 --> 50:56.080] Wow. Okay. Fortune it. Very fortunate. Especially because, you know,
[50:56.080 --> 51:00.400] if all the things that they could say that the book will be about an epoch making book on time,
[51:00.400 --> 51:07.280] that is Manhood of Humanity. Yes. Yeah. Okay. So he gets this book out. Yeah, two books.
[51:07.280 --> 51:11.680] So Manhood of Humanity then science and Sandy. And it's around the science and Sandy point that
[51:11.680 --> 51:16.400] he gives his philosophy, his new movement is philosophy, a name, general semantics.
[51:17.680 --> 51:25.040] And so what Khorzibsky is on about, mostly, is this idea that people do things with language
[51:25.040 --> 51:30.560] that are bad. And they do things with the way they evaluate things that are bad.
[51:31.360 --> 51:36.560] Okay. And one of the real, one of the biggest things he thinks is that the Aristotle paradigm
[51:36.560 --> 51:43.520] of identity is wrong and bad and that it's like crippling people philosophically. So one of his most
[51:43.520 --> 51:47.760] famous statements is A is not A. Another one is the map is not the territory.
[51:48.400 --> 51:52.400] Oh, interesting. Okay. Yeah. That's a, that's a Khorzibsky quote.
[51:52.400 --> 51:58.320] He famous, you know, like that's his most famous enduring contribution of philosophy as Khorzibsky
[51:58.320 --> 52:03.360] is the man who said the map is not the territory. Yeah. Yeah. Interesting.
[52:04.560 --> 52:07.760] You can tell I haven't read the sequences or really much of anything.
[52:09.680 --> 52:13.440] It's all right. It's all right. That's great. No, no. Yeah.
[52:14.160 --> 52:19.360] So, okay. So, so this is the background on Khorzibsky. And I know you're, you're weaving
[52:19.360 --> 52:23.760] this as part of a larger narrative. So, so how does this relate back, apart from its sort of a
[52:23.760 --> 52:29.120] clear antecedent of maybe some of what L. Ron Hubbard was trying to do or, you know, more direct
[52:29.120 --> 52:32.960] through what L. E. As you was trying to do? Well, it's funny you mentioned that because Hubbard
[52:32.960 --> 52:40.960] actually comes after Khorzibsky, right? So Hubbard is like 40s, 50s I believe. And he is kind of taking
[52:40.960 --> 52:48.240] a lot of general semantics. And then a Scott actually put, you actually says this and one of his
[52:48.240 --> 52:54.320] post extreme rationality. It's not that great. He says something like Hubbard takes the material
[52:54.320 --> 52:59.840] basis of general semantics, strips it out, replaces it with some flimflam and sells it as like
[52:59.840 --> 53:05.360] snake oil to people and that Scientology. Yeah. That was, that was Scott's evaluation of it. Don't
[53:05.360 --> 53:19.520] assume it. Scientologists listening to this, we love L. E. We love L. Ron Hubbard, don't sue us. You
[53:19.520 --> 53:26.240] right. And so, yeah, there's the other, there's a, I could go into like the, like, I actually
[53:26.240 --> 53:31.600] have an essay on my, on one of my websites wrestling on.com where I talk about like the
[53:31.600 --> 53:36.800] almost the intellectual lineage starting from Alchemy going down through a, uh, L. Ron Hubbard
[53:36.800 --> 53:43.200] in the science fiction fandom to Eliza Eukowski. Yeah. I real quick, just to clarify, I do not say
[53:43.200 --> 53:48.320] that L. Ron Hubbard is an antecedent of Eliza Eukowski. Okay, like don't, please do not go around
[53:48.320 --> 53:52.080] quoting me on this and saying that because it's not actually what I think and I will be there in the
[53:52.080 --> 54:03.600] moment just for fun. Okay. Yeah. Oh gosh. All right. He's blocked me. It's fine. I'm imagining
[54:03.600 --> 54:09.120] Eliza Eukowski listening to this podcast now that that would be, I'm so sorry Eliza, I'm, I'm
[54:09.120 --> 54:12.800] not making you, I'm making you, I'm making you legible now because it's, it's been enough
[54:12.800 --> 54:18.320] time and people need to know. So he's, he's always been more legible than maybe even. Yeah,
[54:18.320 --> 54:22.240] but I'm going to make him like even more legible. In fact, I was actually about to get to the next
[54:22.240 --> 54:26.000] thing as we could talk about like, you want to understand like Eliza Eukowski, like I'm going to just
[54:26.000 --> 54:31.200] explain like exactly like where you lie to Eukowski comes from in terms of stuff. I was just starting
[54:31.200 --> 54:37.840] with back because I think I think I sort of do, but also I mean, maybe that, I mean, okay, so we've been
[54:37.840 --> 54:44.160] being rude, but that might be possibly a little bit genuinely rude. I need to think about that a bit,
[54:44.160 --> 54:49.040] but so understand them in what sense. Well, okay, I don't mean like in a psycho analysis way,
[54:49.040 --> 54:53.280] I just literally mean, how do you, how do you become Eliza Eukowski? You know, if you're singing
[54:53.280 --> 54:57.840] around in 1995, because I think this is the question I had when I first read the sequences,
[54:57.840 --> 55:02.000] I was about 14, I was reading the sequences for the first time I asked, how could this person
[55:02.000 --> 55:06.320] possibly exist? He knows so much and he's so smart and, you know, I had like, you know, like that
[55:06.320 --> 55:09.840] really done, like when you're young and you're stupid and you read like your first decent offer
[55:09.840 --> 55:13.920] and you get like really overattached them, just a little bit of that, like that kind of, yeah, like that,
[55:13.920 --> 55:17.360] like a lot of people have that kind of crush with Ayn Rand, you know, that kind of thing.
[55:17.360 --> 55:21.760] Oh, sure, yeah. Yeah, and so I was kind of like sitting there like, how could this person possibly
[55:21.760 --> 55:26.160] exist? And so part of my research was figuring out, you know, if you're just saying around in like
[55:26.160 --> 55:31.840] 1992 or whatever, how do you become Eliza Eukowski or 1989? Yeah. So I do know that. I wouldn't
[55:31.840 --> 55:36.560] mind saying it. I don't think it would be rude. Yeah, sure, sure. I mean, like, sort of like,
[55:36.560 --> 55:41.200] I don't want to psychoanalyze him. I think that is rude. And I'd also think that it's not.
[55:41.200 --> 55:45.120] I don't think he's the kind of person. I don't think there's a problem with him. I think he's saying
[55:45.120 --> 55:53.520] in the world is crazy. So fair. Okay. But real quick, just to, I wanted to taper off the
[55:53.520 --> 55:57.920] Crosibsky bit. And so what I think is interesting about the Crosibsky thing is that
[55:57.920 --> 56:02.240] Eliza Eukowski might be thinking, oh, so Eliza Eukowski's like plagiarizing Crosibsky, right?
[56:02.240 --> 56:08.320] No, or at least probably not. This is where it's interesting. Eliza Eukowski claims he is never
[56:08.320 --> 56:13.840] read, actually read, a work by Crosibsky. Or at least he claimed that in like a 2011, 2012 blog post.
[56:13.840 --> 56:18.560] Okay. I had done a fun. Yeah. When he was discussing a, when he was discussing rational
[56:18.560 --> 56:22.320] inspection, he said, oh, by the way, I just learned that the map is not the territory,
[56:22.320 --> 56:26.640] because he actually learned about general semantics through Hayakawa, who was kind of Crosibsky's
[56:26.640 --> 56:32.000] apprentice. So there is a direct intellectual lineage there. But Eliza Eukowski, I'm sure he was
[56:32.000 --> 56:36.000] listening as would be very shocked to learn that there was this guy who basically tried to do
[56:36.000 --> 56:41.040] everything he did to save the world. I don't, I think he would be very shocked in it to learn that.
[56:42.320 --> 56:48.320] So how did Crosibsky fail? Like what things in particular do you think led to his failure?
[56:48.320 --> 56:52.960] And do you think that the rationalists are in some way like replicating those errors or avoiding
[56:52.960 --> 56:58.080] them? Or? Oh, that's a good question. So I think that a lot of just how Crosibsky failed
[56:58.080 --> 57:04.320] was part of just being too early, too prescient. A lot of the intellectual tools that so I actually
[57:04.320 --> 57:09.040] went and bought some books, a general semantics books. I bought the art of awareness by Samuel
[57:09.040 --> 57:13.440] J. Boys, which is kind of the post-rationality of general semantics. Yes, general semantics
[57:13.440 --> 57:17.920] had a post-rationality phase. Okay. Like this is literally almost the same subculture, but
[57:17.920 --> 57:23.040] transplanted back 50, you know, 50, 100 years, whatever. Oh, they probably dressed better than we do.
[57:23.040 --> 57:30.000] Oh, probably. And so I robot art of awareness, which made me so angry that I actually fruit at
[57:30.000 --> 57:34.160] the wall because there's a passage. At some point where he says that, you know, the basic thing to
[57:34.160 --> 57:38.560] understand is that there is no way to know the world outside of ourselves. And I said, you can predict
[57:38.560 --> 57:43.200] things. That point I just got so mad. Oh, sure. Yeah. Like it was like, no, but I think that it
[57:43.200 --> 57:48.400] really does get across, though, how much more powerful allows you Cowsky's epistemology actually is
[57:48.400 --> 57:54.800] in terms of, like, like, in terms of both coherence and in terms of practical use. I do think that
[57:54.800 --> 58:00.400] the less wrong scene is much more well developed than general semantics ever was. Yeah. I mean,
[58:00.400 --> 58:06.080] like, was that popular outside of Poland? Yeah. Absolutely. It was of semi-pop around the United
[58:06.080 --> 58:10.320] States. There were some colleges that caught the taught some courses in general semantics. It was
[58:10.320 --> 58:15.440] not, it achieved moderate, pretty moderate. I'd say moderate success. Far below what
[58:15.440 --> 58:20.480] Cowsky wanted of course, but not, not like, you know, not like a total failure, not like the
[58:20.480 --> 58:24.400] totally obscure thing. Like it did. It got memory hold, right? Because you've never, I'm sure you've
[58:24.400 --> 58:29.120] probably haven't heard of it until now, but never in my life. But I mean, it was established that
[58:29.120 --> 58:34.160] I'm illiterate. So, no, no, it's fine. I only heard like, murmurs about. Then I just went and
[58:34.160 --> 58:40.720] started looking at like, oh my gosh, you know, the different I dug. It was like, what? So,
[58:40.720 --> 58:44.560] yeah, I was going to say so like, how do they fail? I think a lot of it, part of it was just
[58:44.560 --> 58:49.760] being too early. You know, a lot of the tools that make less wrong good just weren't developed yet,
[58:49.760 --> 58:59.360] theoretically. Yeah. I think part of it was that, that I, this is hard to get across, but so,
[58:59.360 --> 59:05.360] okay, one thing is that Corsidsky, his frame around rationality. He did not call it rationality,
[59:05.360 --> 59:10.320] right? Cause it general semantics and his frame is that everyone is a little insane and general
[59:10.320 --> 59:15.520] semantics is about becoming more sane. So, he thinks of sanity in this very broad way,
[59:15.520 --> 59:19.280] actually fairly similar to the way we're starting to think about mental health now, where people are
[59:19.280 --> 59:25.040] thinking about mental health beyond just like literal diagnosable pathology, like there's subtle
[59:25.040 --> 59:29.840] gradations of mental health. Yeah. Corsidsky, one of the things that kind of torrentious reputation
[59:29.840 --> 59:36.240] was he famously believed that schizophrenia was not like a neurochemical issue. He believed it was
[59:36.240 --> 59:42.800] caused memetically that people wouldn't dip into a meanplex that made them schizophrenic. And I think
[59:42.800 --> 59:48.720] that actually, that idea is like certain aspects of that idea are in fact coming back. Obviously,
[59:48.720 --> 59:54.720] we now know that most cases of schizophrenia are neurochemical in origin, but I think there is
[59:54.720 --> 01:00:00.160] some of this, there is something to the, I think he had the seed of something, but he just happened to be
[01:00:00.160 --> 01:00:08.000] wrong and some of the bets he made intellectually. What do you think would happen if in 10 years,
[01:00:08.000 --> 01:00:14.160] everybody who is a major poster on Twitter had developed like full bloat schizophrenia?
[01:00:14.160 --> 01:00:22.320] No, we're not going. No, like, cocks. Okay. Okay. Okay. I mean, if you, if you, I mean, if you mean
[01:00:22.320 --> 01:00:28.080] seriously, I mean, like I think it really depends. I think that most people embedded institutions
[01:00:28.080 --> 01:00:33.120] would be, would or not posters on Twitter. I don't think they're crazy. I don't like it a lot of
[01:00:33.120 --> 01:00:41.360] the, I do think that a lot of the memetic, I think a lot of the new energy, the new momentum in
[01:00:41.360 --> 01:00:45.600] society would fall off and it would be very bad for people. It would be a very bad thing to have
[01:00:45.600 --> 01:00:50.880] happened. So I hope that Twitter does not cost schizophrenia. That would be terrible. But honestly,
[01:00:50.880 --> 01:00:56.400] like, you know, Karzibski was talking about infohazards and meme and memetic diseases and stuff
[01:00:56.400 --> 01:01:02.240] before there were words to describe it. Yeah. Okay. Like, he's really pressing in here.
[01:01:04.960 --> 01:01:08.800] And I think a lot, and like I said, Harvest being too pressing, a part of it was that he,
[01:01:09.760 --> 01:01:15.600] so at the time, when Karzibski was doing things, he made a gamble on the future prestige of
[01:01:15.600 --> 01:01:20.640] psychology. Now, I don't know if you've read a bunch of like 1950s or 1940s aerosciafine novels,
[01:01:20.640 --> 01:01:26.560] but there is this general belief in society at that time that psychiatry and psychology were going
[01:01:26.560 --> 01:01:30.720] to completely solve the problems of the human soul, that they were going to become a written note,
[01:01:30.720 --> 01:01:36.480] that there was going to become a rigorous science of the human mind and that this was going to
[01:01:36.480 --> 01:01:40.800] become absolutely revolutionary, that they would become the most powerful people in society.
[01:01:40.800 --> 01:01:47.120] So in an, an, an, an organization. And yeah, absolutely, like an Isaac Asmos foundation,
[01:01:47.120 --> 01:01:54.000] and actually in all of Isaac Asmos novels, he takes it as a prediction that psychologists will
[01:01:54.000 --> 01:01:59.440] become the most powerful class in society. That's what was expected by a lot of forward-finking
[01:01:59.440 --> 01:02:05.040] people. And I think Karzibski kind of saw this, too. And even there's actually even a
[01:02:05.040 --> 01:02:12.320] general semantics equivalent of HPMOR, which inspired HPMOR, a EY based HPMOR,
[01:02:12.320 --> 01:02:20.480] at least stylistically, partially off of the book, the world of Noel A, I forget the offer's
[01:02:20.480 --> 01:02:24.800] name at the moment, I haven't been brain-fired and very sorry. But it's the world of Noel A,
[01:02:24.800 --> 01:02:29.360] you can look it up, it's a very famous sci-fi book. And the, the premise, though, of the world of
[01:02:29.360 --> 01:02:34.400] Noel A, is that the foundation of world government becomes the Institute of General semantics,
[01:02:34.400 --> 01:02:38.640] that the Institute of General semantics actually grows to the point where it is the world government.
[01:02:38.640 --> 01:02:46.640] Uh, so there was a lot of very high expectations placed around the social status that psychologists
[01:02:46.640 --> 01:02:53.680] and psychiatrists would wield in the future. And a lot of general semantics as a movement was
[01:02:54.720 --> 01:03:00.320] basically imagine if the less wrong people had all like structure their entire movement around
[01:03:00.320 --> 01:03:05.040] the idea that Bitcoin will succeed and then it failed. Uh-huh. And that would be kind of the law
[01:03:05.040 --> 01:03:09.040] what happened to General semantics, I think, is that they structure the movement around the
[01:03:09.040 --> 01:03:12.960] idea that therapists and psychiatrists and then these people are going to become extremely powerful.
[01:03:12.960 --> 01:03:18.640] Oh, and then they did it. Yeah, I mean, maybe that's good, though. I mean,
[01:03:18.640 --> 01:03:23.040] can you imagine a world where psychology was was actually effective in some sense?
[01:03:23.040 --> 01:03:29.440] Oh, yeah, and I have no, I have no, no comment. I mean, like, okay, but actually since you bring it up,
[01:03:29.440 --> 01:03:34.560] I mean, less wrong kind of is, you can almost think of it as, as reform psychology in a certain
[01:03:34.560 --> 01:03:39.280] sense. Like, because because when you think about a lot of what they're talking about is, hey,
[01:03:39.280 --> 01:03:43.680] we're going to try the thing that, you know, we're going to try and actually understand how humans
[01:03:43.680 --> 01:03:47.200] do things. And we're going to talk about game theory. And we're going to talk about evolutionary,
[01:03:48.000 --> 01:03:54.640] you know, evolutionary psychology and signaling and all this other stuff that does
[01:03:54.640 --> 01:04:00.720] dig deeper into why humans do things, but a lot of classics I go analytic text do. I think that
[01:04:00.720 --> 01:04:05.760] the theories that are available now are better than like what was available when you had
[01:04:05.760 --> 01:04:10.560] Freud and Jung in those ever thinkers. I'm not saying they're worthless just that they were doing
[01:04:10.560 --> 01:04:14.800] a lot of empirical work. And I think that there wasn't, you know, they had theories, but the theories
[01:04:14.800 --> 01:04:18.640] weren't very good. They were kind of just a scaffolding for the empirical observations.
[01:04:18.640 --> 01:04:24.400] Yeah, you know, but wait a second. So I mean, you were saying that like, if you talk about trying to
[01:04:24.400 --> 01:04:30.400] figure out why people do things, or how people act. And I don't see any of that and those
[01:04:30.400 --> 01:04:35.360] four major points that Ellie is right identified as of great interest to rationalists.
[01:04:35.360 --> 01:04:39.760] Yeah, well, I don't think, yeah. So that's actually, oh, thank you. You're using me on myself.
[01:04:39.760 --> 01:04:43.440] Mike, I have already improved the discourse within one podcast. No, okay.
[01:04:44.640 --> 01:04:48.800] No, no, thank you. Thank you. You're bringing that up. Um, so this is an interesting point.
[01:04:48.800 --> 01:04:53.040] And I agree with you. I think that I'm talking more about the net of the movement than I think
[01:04:53.040 --> 01:04:56.800] okay. The level object level was the things I was talking about. I think there was a net of
[01:04:56.800 --> 01:05:01.840] thing, because EY says, right, and I think it's the epistle to New York Lessonians. He says,
[01:05:02.720 --> 01:05:07.360] rationality is the master hack that tells you all the other life hacks to use.
[01:05:07.360 --> 01:05:16.400] That's very, that's great. Yeah, that is. And actually, yeah, okay, go on.
[01:05:16.960 --> 01:05:20.160] Yeah, okay. So that's kind of, so yeah, does that kind of answer your question about
[01:05:20.160 --> 01:05:26.480] Crossebsky? I think we can kind of move on from, from him. Yeah, I'm actually, I mean, so maybe I
[01:05:26.480 --> 01:05:32.320] have something that you want to talk about next. I'm actually starting to view, I'm sort of starting
[01:05:32.320 --> 01:05:39.360] to see rationality. Maybe this is just an argument about virtue. And maybe a casky view is
[01:05:39.360 --> 01:05:45.760] rationality in some sense as, as the only virtue in the same way that Socrates viewed wisdom.
[01:05:45.760 --> 01:05:52.480] Everything is downstream of it. Yeah, I could, you know what, I'm going to be honest with you,
[01:05:52.480 --> 01:05:58.080] that is a really great point. And I really wish I could, I could do it. I don't feel equipped
[01:05:58.080 --> 01:06:03.280] just by conversation with you. I just don't, this is me admitting my own inadequacy. I feel
[01:06:03.280 --> 01:06:06.960] like that's a really good point. And I'm like, it's coming out left field for me. I'm not
[01:06:06.960 --> 01:06:10.400] prepared at all for it. And I don't actually have any immediate thoughts that come to mind,
[01:06:10.400 --> 01:06:16.240] that I think would be worth sharing. That's fair. I, um, I do think it's a good point. My entire
[01:06:16.240 --> 01:06:23.120] existence is coming up with things that let field of drawing them at people. And, um, no, I respect, I respect,
[01:06:23.120 --> 01:06:28.800] you know, I respect your unwillingness to say things you are not sure whether they're true or not.
[01:06:28.800 --> 01:06:35.040] No, I, I know my limitations. I do a certain extent anyway. So, um, okay, so what did you have
[01:06:35.040 --> 01:06:40.240] next then? Yeah, I was going to time that. So, um, okay, so that's a lot, that's Corzib's ski story. And
[01:06:41.680 --> 01:06:49.440] decades pass, general semantics, Peter's out. And over time, more theoretical things are invented.
[01:06:49.440 --> 01:06:55.840] So, for example, I believe that Jay, uh, Jayne's Bayesian stuff. I don't remember exactly when he's
[01:06:55.840 --> 01:07:00.640] writing that, but it's somewhere in like that like that 60s, 80s, I don't even remember any more gosh.
[01:07:00.640 --> 01:07:04.800] Yeah. But I believe that Bayesian statistical revolution, so it's big happens in the 60s.
[01:07:04.800 --> 01:07:09.920] I'm just, this is very approximate. I do not know my history well here, so I can be completely off.
[01:07:10.480 --> 01:07:16.240] But it's funny. It's in the latter half of the 20th century. It was not available to Corzib's
[01:07:16.240 --> 01:07:21.680] ski who died in 1950 to my 20th century. So, Corzib's ski would not have had any of the
[01:07:21.680 --> 01:07:25.040] theoretical tools developed in the latter half of the 20th century available to him.
[01:07:25.760 --> 01:07:29.600] Yeah. And so what happens to general semantics after Corzib's ski dies is it kind of,
[01:07:29.600 --> 01:07:33.760] I guess it just kind of Peters out, right? I feel like when I, for example, when I read their,
[01:07:33.760 --> 01:07:40.800] uh, they have a book for normal people called, uh, drive yourself same. And I bought it and you can buy
[01:07:40.800 --> 01:07:45.200] it on Amazon and I read it. And it's really is very kind of rationalistic, except like,
[01:07:45.200 --> 01:07:49.680] proto-rationalist. It's not as good. There's like some little bits that are that they've got
[01:07:49.680 --> 01:07:53.200] that are like, you know, new to me, but most of it is like, yeah, I've seen this, but I have a better
[01:07:53.200 --> 01:08:00.240] version. Okay. And so I feel like a lot of it was just that the theory wasn't quite there.
[01:08:00.240 --> 01:08:06.080] And, you know, the way you structure the movement, right, as we talked about, like if you're
[01:08:06.080 --> 01:08:10.320] structuring it in a certain way, and then that expectation collapses, that's going to be problematic
[01:08:10.320 --> 01:08:15.760] for you, uh, socially and, you know, orbit organizationally. Many such cases, the worst thing you
[01:08:15.760 --> 01:08:21.600] could do if you're a millennian is to make actual predictions. I don't know, I think for
[01:08:21.600 --> 01:08:26.240] a lot of science fiction people who made predictions in the first half of the 20th century are
[01:08:26.240 --> 01:08:31.360] underrated, but that's a difference. That's true. Actually, we can talk about that right now,
[01:08:31.360 --> 01:08:36.560] because what we're going to talk about is, okay, decades pass, general semantics, Peter's out,
[01:08:37.200 --> 01:08:41.440] you know, it has like some success, and it does, you know, parts of it reach into the culture,
[01:08:41.440 --> 01:08:45.680] but it just doesn't have enduring success. It kind of gets memory-hold, it kind of gets lost.
[01:08:45.680 --> 01:08:50.480] And part of that is that Alfred Corzibski did not interact well with traditional academia.
[01:08:50.480 --> 01:08:58.160] Academia is a institution for preserving ideas and knowledge, and Corzibski was a little
[01:08:58.160 --> 01:09:03.680] hostile to it a lot, like Eliza Eukowski. And part of what that meant is that especially in the
[01:09:03.680 --> 01:09:08.240] first half of the 20th century is that you're not getting access to academic publishers. You're not
[01:09:08.240 --> 01:09:15.680] getting access to prestigious publishing houses where librarians will buy your book and preserve it
[01:09:15.680 --> 01:09:20.000] and make sure that it's available to scholars when they go to the card catalog to see what
[01:09:20.000 --> 01:09:25.200] exists on a subject. You're not using standard academic terms, and that means that ideas you
[01:09:25.200 --> 01:09:33.120] develop don't get referenced in this long chain of ideas. And so the knowledge is largely lost over
[01:09:33.120 --> 01:09:40.000] the ensuing decades. And so kind of what happens later, though, is that analytic philosophy
[01:09:40.000 --> 01:09:46.720] also undergoes this kind of revolution of source where people realize that you cannot, you know,
[01:09:46.720 --> 01:09:53.200] if it likenesses language is not possible, you know, his rigorous language that will let you prove
[01:09:53.200 --> 01:09:57.760] any philosophical argument as rigorously correct is not possible. Okay. So essentially what people
[01:09:57.760 --> 01:10:04.080] realize, you know, that you can prove that it's not possible. And this is it's own kind of problem
[01:10:04.080 --> 01:10:09.200] because, you know, a Hilbert, for example, is very upset by it. Yeah. But, but people, and I think
[01:10:09.200 --> 01:10:18.000] that a like, for example, so to bring up Chapman again, I think that there is a tendency to talk about
[01:10:18.000 --> 01:10:22.800] this like, and that was the end of analytic philosophy. Even if that's not necessarily what you're
[01:10:22.800 --> 01:10:28.560] saying, it's like the subtext almost like, and then analytic philosophy was killed, and everything
[01:10:28.560 --> 01:10:33.440] after that is just a cargo cult of thing, but that's not really true. Like, that's like,
[01:10:33.440 --> 01:10:38.480] at least that's the subtext, I think, that it gets expounded. Do you get what I'm talking about,
[01:10:38.480 --> 01:10:44.080] like, especially when you talk to, do you know what I'm talking about? So, if I could try to
[01:10:44.080 --> 01:10:51.680] refer to it, so I think the argument that you're making is that after, after a number of these
[01:10:51.680 --> 01:10:57.520] difficulties that developed and trying to, I'm not a mathematician, but develop and I'm really
[01:10:57.520 --> 01:11:05.280] not, I'm very sorry. No, no, it's fine. Like, some of the failures of, you know, some of the efforts
[01:11:05.280 --> 01:11:12.560] to really rigorously ground mathematics in a series of axioms that were taking place in the late
[01:11:12.560 --> 01:11:19.600] 19th century, started following a part, going to things like Russell and Giddle, like, after those
[01:11:19.600 --> 01:11:24.240] fellow part, maybe it was all over for analytic philosophy and everything after that has been
[01:11:25.280 --> 01:11:29.600] sort of less useful. Does that sound right? Yeah, I think that's kind of, though, I don't,
[01:11:29.600 --> 01:11:34.640] I don't know if people, I think some people make that argument explicitly. Yeah, I'm not,
[01:11:34.640 --> 01:11:38.720] I don't know if I want to say Chapman is one of them. I think he skirts with the idea almost.
[01:11:40.160 --> 01:11:45.120] Like, well, like, he'll say, okay, he'll say, for example, that rationality is a system,
[01:11:45.120 --> 01:11:51.520] but you can learn. It has its looses and it has its limitations. But a lot of, like, there is a
[01:11:51.520 --> 01:11:55.680] way that he is trying to put the stake in the heart of analytic philosophy, though, which is basically
[01:11:56.560 --> 01:12:00.640] you know, rationality, vis-a-vis analytic philosophy, whatever you want to call it,
[01:12:00.640 --> 01:12:06.400] cannot be the whole world view. It's really the core of the, because what people wanted out of the
[01:12:06.400 --> 01:12:10.800] Hilbert program and stuff was that you would have this complete scientific world view.
[01:12:11.840 --> 01:12:16.480] You know, it's almost, again, I'm not a super great scholar at this part of the philosophy
[01:12:16.480 --> 01:12:22.320] of history of ideas, but I think it was Hegel or something who, like, his Stanford of,
[01:12:22.320 --> 01:12:26.720] of Stanford, and Cyclopee of philosophy, and she says something like, he had promised people
[01:12:26.720 --> 01:12:33.360] a royal road, and then the analytic philosophy was partially a way to try and, and traverse it.
[01:12:34.000 --> 01:12:40.560] Yeah. And so the royal road did not quite work out. And so I think that there is a definitely
[01:12:40.560 --> 01:12:46.400] a statement in Chapman, at least, that rationality and maybe to maybe even generally analytic
[01:12:46.400 --> 01:12:50.960] philosophy cannot be the whole world view. You cannot have this perfectly rational,
[01:12:50.960 --> 01:12:54.880] scientific world view, but is dreamed up by these early 20th century mathematicians.
[01:12:54.880 --> 01:12:59.600] They themselves proved it was impossible. And a lot of these people are, like, cargo cultists
[01:12:59.600 --> 01:13:04.240] who are running around acting like it's still possible when it clearly isn't. And, you know,
[01:13:04.240 --> 01:13:08.640] that, that, that idea just needs the stake put through it, and there's nothing of value there,
[01:13:08.640 --> 01:13:13.840] or at least. Okay. So I think that, I think you're making two separate sorts of
[01:13:13.840 --> 01:13:19.280] claims in that sense. And one of which is like, maybe you can't do this thing very rigorously
[01:13:19.280 --> 01:13:24.080] in the, in, like, a self-contained system in the way that people had a magic, originally.
[01:13:24.080 --> 01:13:30.000] And, but the second one is, is much stronger, specifically, and there's nothing of value there.
[01:13:30.000 --> 01:13:35.360] And I don't, I don't think most people who identify as post-rationalists would agree with that
[01:13:35.360 --> 01:13:41.280] second part. Like, just, like, I, I think, I, I usually feel like I'm being mought and bailed
[01:13:41.280 --> 01:13:45.280] on the issue, if we're being told. Oh, honest. Okay. Yeah. Like, if we're, like, I feel like,
[01:13:45.280 --> 01:13:50.240] if you, like, press them, they'll say, oh, but, of course, like, it's a system and it has its
[01:13:50.240 --> 01:13:55.120] uses and it's useful. But then, like, once, like, you're not putting them on the hot seat, it's
[01:13:55.120 --> 01:14:02.320] nerds. Yeah. Well, I mean, and that's, that's kind of a shorthand. I, I think I don't mean to
[01:14:02.320 --> 01:14:07.680] speak for everybody, but what I'm thinking about the older group of people, in particular, who
[01:14:07.680 --> 01:14:14.480] are involved in, in something that might be called post-rationality back in, say, like, 2015, 2016.
[01:14:14.480 --> 01:14:21.680] Most of them spent a lot of, I, I mean, like, you know, I'm, I'm a trained hard scientist. And
[01:14:21.680 --> 01:14:28.000] after that, I spent a lot of time working in economics, which is, as far as social science goes,
[01:14:28.000 --> 01:14:34.560] it's pretty irrational, actually. I mean, that we have axioms about human behavior and then we
[01:14:34.560 --> 01:14:40.880] just put them in different circumstances and see how they play out. And, um, I mean, I think in that
[01:14:40.880 --> 01:14:45.520] case, very, uh, economics is the thing that I know best. I think it is pretty close to a kind of
[01:14:45.520 --> 01:14:50.800] rationalism or just something in the rationalist tradition, if you were to be really muddy about
[01:14:50.800 --> 01:14:55.760] that term. And, um, yeah, yeah, that's the never thing, as I feel like a love when people are
[01:14:55.760 --> 01:14:59.520] time out rationality, they're almost talking about, like, six different things, and it would be
[01:14:59.520 --> 01:15:03.600] useful if we could split them up, but I don't want to dig into that particular direction right
[01:15:03.600 --> 01:15:09.360] now. Because the thing I was going to say is that I don't think analytic philosophy is dead. I do
[01:15:09.360 --> 01:15:15.120] think, uh, useful, and really important, uh, advances were made in the latter 20th century,
[01:15:15.120 --> 01:15:19.520] subjective probability and, like, the Bayesian senses one of them, and I think it allows you calisky.
[01:15:19.520 --> 01:15:23.440] He has a little bit of booster, isn't it, right? He's almost too enthusiastic, but I do think that
[01:15:24.880 --> 01:15:31.760] he's not wrong, just annoying. Almost. Yeah, I'd say there's a lot of that. There's a lot of that.
[01:15:31.760 --> 01:15:38.240] Yeah. Um, but I do think that advancement, you know, advances were made that
[01:15:38.240 --> 01:15:42.960] corzidsky would not have had access to, and so kind of going forward, going forward, you know,
[01:15:42.960 --> 01:15:49.040] Peter's off in time. Eventually, uh, you have this guy, Elijah Kasky, who's a young person,
[01:15:50.160 --> 01:15:54.720] and he wants to study AI and all this stuff. And so one question I have when I read the sequences
[01:15:54.720 --> 01:15:58.800] was, how do you become, you know, how do you become Elijah Kasky anyway? Well, he actually kind of
[01:15:58.800 --> 01:16:03.680] tells you in a roundabout way, not quite clearly, but if you like pay attention in between the lines,
[01:16:03.680 --> 01:16:06.560] so he starts out when he's young and he wants to be a physicist.
[01:16:06.560 --> 01:16:11.040] And one of the things that I've said before actually two Chapman on Twitter was I said, you know,
[01:16:11.040 --> 01:16:15.200] one of the things that I feel like you're leaving out when you discuss this is that, you know,
[01:16:15.200 --> 01:16:19.200] EY wanted to be a physicist. And from the perspective of a physicist, where things are almost like
[01:16:19.840 --> 01:16:23.440] almost deterministic, right? Because once you get small enough, they're not deterministic anymore,
[01:16:23.440 --> 01:16:27.440] which is fun. Actually, even the macroscale, they're not deterministic. Once you're big enough,
[01:16:27.440 --> 01:16:32.240] because the small things begin to influence it. But two in approximation deterministic, right,
[01:16:32.240 --> 01:16:35.440] like most of physics when you're thinking about it, you have to think in a deter, like especially
[01:16:35.440 --> 01:16:39.200] the macroscale, like classical mechanics, you're thinking of deterministic mode.
[01:16:39.920 --> 01:16:45.520] There is no uncertainty in how an object will move. But even in physics, well, I mean, like,
[01:16:45.520 --> 01:16:50.240] but you know, and then you quantify your uncertainty, right? So when you do a lab experiment,
[01:16:50.240 --> 01:16:55.440] for example, and you say, I got this value, you were expected to quantify exactly how uncertain you
[01:16:55.440 --> 01:17:03.040] expect to be, blah, blah. Yeah. And so, uh, what I was going to say is that, so from the perspective
[01:17:03.040 --> 01:17:08.000] of that, like if you're starting with that, subjective probability is already meta-rational.
[01:17:13.200 --> 01:17:18.640] Because because it is admitting that there is fundamental uncertainty that can only be
[01:17:18.640 --> 01:17:24.400] accounted for as a subjective reason or, you know, under uncertainty, that there is not this
[01:17:24.960 --> 01:17:29.440] God's eye view of the world you can take that will give you good answers to everything. It's not
[01:17:29.440 --> 01:17:36.160] possible. We're not going to get into meta-rationality versus post-rationality, are we?
[01:17:36.160 --> 01:17:40.720] No, no, no, we're not. I was just going to say that from the perspective of, you know,
[01:17:40.720 --> 01:17:45.040] like just a physicist, I would think that days is already meta-rational. So a lot of what
[01:17:45.040 --> 01:17:49.520] happened, like, like from, like, just within EY's frame, like a lot of what Chad would be talking
[01:17:49.520 --> 01:17:54.640] about would be like meta-rational almost, like, and then days itself has issues, and I think that
[01:17:54.640 --> 01:18:01.520] EY is aware of this. Yeah. He is. So if you actually read the sequences, there's a weird thing
[01:18:01.520 --> 01:18:08.640] that EY is doing where he has to talk about, he has to talk about the fact that certain things
[01:18:08.640 --> 01:18:13.280] are laws, right? So there's the laws of physics, there's days which is like this law of thought
[01:18:13.280 --> 01:18:17.920] at least in EY's conception. You know, I do not know enough to be able to, they tell you that
[01:18:17.920 --> 01:18:24.480] as a fact, but it seems plausible. And as he's doing this, he has to emphasize these things are laws.
[01:18:24.480 --> 01:18:28.960] But they're invaluable. At the same time, he also has to talk about uncertainty because you are
[01:18:28.960 --> 01:18:33.360] reasoning under uncertainty because if you're not, you're not even really admitting the subjective
[01:18:33.360 --> 01:18:40.160] probability thesis, right? So to have bays, you can't, you know, you know, if you say, like,
[01:18:40.160 --> 01:18:44.320] because this is a law whole argument, if he himself has in the sequences with a group called the
[01:18:44.320 --> 01:18:49.040] Freakownists who are people that say basically bays as pseudoscience, there should be no subjectivity
[01:18:49.040 --> 01:18:53.600] in mathematics. The entire idea that you can have like this prior, but you just pull out of your
[01:18:53.600 --> 01:19:00.640] butt and you do math with it isn't sane. Yeah. I mean, like, you know, I would identify as neither
[01:19:00.640 --> 01:19:06.080] a Freakownists nor a nor a Bayesian. Sure. I mean, like, you know, I think if you take a step back
[01:19:06.080 --> 01:19:14.320] and if you look at Bayesianism, like, I don't even think it's a matter of so much like. And if you
[01:19:14.320 --> 01:19:21.280] talk about prior selection, typically you would run, you know, you would run a Bayesian model
[01:19:21.280 --> 01:19:25.600] across a, like, perhaps a range of priors and see how much that influences it, right? And you
[01:19:25.600 --> 01:19:32.480] can actually, absolutely. Oh, sorry. And so like, it's not even, it's almost like kind of like a
[01:19:32.480 --> 01:19:36.720] rationality over over subjectivity, right? Or an objective thing over subjectivity, like,
[01:19:36.720 --> 01:19:41.280] yeah, I get what you're talking about. And that's why I was trying to say and is that within
[01:19:41.280 --> 01:19:46.560] bays already is the seeds of a subjective worldview because to even admit that, you know, bays
[01:19:46.560 --> 01:19:51.200] as a good idea, you have to admit that you can have a subjective perspective that is better than
[01:19:52.000 --> 01:19:58.880] this kind of pure objectivity that is advocated in Freakownism. Oh, I'm saying the reverse.
[01:19:58.880 --> 01:20:03.920] I'm saying that Bayes is a way of taking subjectivity and reframing an irrational perspective,
[01:20:03.920 --> 01:20:10.800] right? I think so. I'm just, you know, you can use Bayesian methods, right, to figure out a prior,
[01:20:10.800 --> 01:20:16.000] but it's also, the real world is messy and it doesn't quite. So now you need, like,
[01:20:16.000 --> 01:20:20.240] ever, because even in the sequence as itself, you why kind of acknowledges, like, you know,
[01:20:20.240 --> 01:20:23.280] where did prior's come from? And he says, like, don't ask that. It's kind of a joke.
[01:20:23.280 --> 01:20:27.600] Like he means it jokingly, like, you know, yeah, but you know, actually though, so like,
[01:20:29.280 --> 01:20:35.680] you know, if you look at, if you look at game theory with uncertainty, right, there's
[01:20:36.800 --> 01:20:40.400] there are certain sets of priors that are rational and there are certain sets of priors that are
[01:20:40.400 --> 01:20:46.880] not rational. And like, the solution depends on which priors people have. And that's,
[01:20:46.880 --> 01:20:50.720] that's actually kind of an interesting case here as well, because so we're going to, yeah,
[01:20:50.720 --> 01:20:53.840] we're going to get to that, like, right after this, because that's the next thing I want to talk
[01:20:53.840 --> 01:20:58.560] about, because how did less wrong kind of not succeed as much as it could have maybe?
[01:20:59.600 --> 01:21:04.080] So to fast forward real quick, because we're getting pretty late here. So yeah, fast forwarding
[01:21:04.080 --> 01:21:13.920] EY. He's this young guy. And he's studying physics. And then he gets into, so first he actually reads
[01:21:13.920 --> 01:21:18.960] a book. And the book is called Great Mambo Chicken by Ed Regis, Great Mambo Chicken and the Transhuman
[01:21:18.960 --> 01:21:24.000] Condition. And this is kind of the book that, at one point, he writes, this made me a transhumanist.
[01:21:24.000 --> 01:21:27.840] And then he kind of walked that statement back and said, no, no, Engines of Creation,
[01:21:27.840 --> 01:21:33.920] by, uh, by Drexler, made me a transhumanist. But I think that's really kind of just splitting
[01:21:33.920 --> 01:21:38.960] hairs, because Great Mambo Chicken is where he's introduced to the idea of nanotech in the Drexlerian
[01:21:38.960 --> 01:21:47.200] sense. So he reads this book. And this book is published in 1990. And he has this realization
[01:21:47.200 --> 01:21:51.200] as this young person studying physics. He says, no, I don't want to do physics. I want to do this.
[01:21:51.920 --> 01:21:57.520] You know, I want to do this transhumanism thing. And transhumanism itself is actually a fairly
[01:21:57.520 --> 01:22:04.160] recent ideology, ideology, movement, whatever you want to call it. Because there's lots of,
[01:22:04.160 --> 01:22:09.200] there's a long prehistory to it, but the actual notion of like transhumanism itself only really
[01:22:09.200 --> 01:22:14.720] appears in about, you know, the 80s and 90s in the way we would recognize it as a modern movement.
[01:22:15.600 --> 01:22:20.160] Yeah. Okay. And EY kind of is getting in almost on the ground floor of this. He becomes one of the
[01:22:20.160 --> 01:22:26.480] first big contributors on the extropians mailing list. And the funny thing about the
[01:22:26.480 --> 01:22:35.360] extropian mailing list is that EY gets mad at them over some drama and he splits the list.
[01:22:35.360 --> 01:22:40.480] Basically, he says, I'm forking the list. I'm going to make my own list SL4 where we're going to
[01:22:40.480 --> 01:22:44.720] talk about AI and the singularity because the extropians don't understand like the dangers of
[01:22:44.720 --> 01:22:49.280] technology because they're too gung-ho about it. But so when you think about like, how do you become
[01:22:49.280 --> 01:22:54.720] EY, right? Like going back to this? So he reads this book. And this book is actually excellent
[01:22:54.720 --> 01:23:00.800] in terms of it takes you straight through, you know, this is space travel and then this is nanotech
[01:23:00.800 --> 01:23:06.880] and then this is how nanotech combines with cryonics to defeat death. And then this is how you can
[01:23:06.880 --> 01:23:11.120] make artificial minds that will then take over the universe. It's like, really, really, like, the
[01:23:11.120 --> 01:23:18.320] exact kind of accelerationist viewpoint that is underlying EY's version of transhumanism is right here.
[01:23:18.320 --> 01:23:23.920] Like it's almost laid out. So in terms of if you had read this book in 1990, all you would really
[01:23:23.920 --> 01:23:29.760] have to do to become EY afterwards assuming you're already a precocious person with lots of IQ
[01:23:29.760 --> 01:23:35.760] points and stuff, is that you get on this mailing list, right? You're on the extropians mailing
[01:23:35.760 --> 01:23:41.040] list now, which is, like, the preeminent, you know, the transhumanist mailing list for a long time,
[01:23:41.040 --> 01:23:46.240] for some years. And then you get into AI, you know, you're studying AI because you want the singularity
[01:23:46.240 --> 01:23:52.240] to happen. And then through that, you get into, like, this Jaynes thing, and then you get into,
[01:23:52.240 --> 01:23:56.400] you're talking to Robin Hanson, you get into the biases literature, and you, you know,
[01:23:56.400 --> 01:24:01.120] at some point your worldview is going to collapse into something like Eliezer Yudkowsky's worldview
[01:24:01.120 --> 01:24:05.760] if you are pursuing that track and you take those decisions. And do you see, like, how you can,
[01:24:05.760 --> 01:24:09.440] just... because for me it was a huge mystery. I had no idea, like, how do you become this person? And then
[01:24:09.440 --> 01:24:13.920] it's like, once you actually research it, it's like, oh, it was not actually that many steps to become
[01:24:13.920 --> 01:24:20.800] Eliezer Yudkowsky in the basic sense. Yeah, no, that's, that's probably true. I mean, when I
[01:24:20.800 --> 01:24:24.320] when I look at my own intellectual history, or the, like, history of thoughts that I've had
[01:24:24.320 --> 01:24:29.520] about the world, I don't know that it's really particularly complicated. Although, I don't know,
[01:24:29.520 --> 01:24:34.320] I also don't really have a particularly unified worldview, so maybe it's harder than that. Sure.
[01:24:35.120 --> 01:24:40.720] And so one thing, one aspect of LessWrong and kind of this worldview that I think is understated
[01:24:40.720 --> 01:24:47.680] is the extent to which the LessWrong kind of viewpoint depends on transhumanism. So, like, if you
[01:24:47.680 --> 01:24:52.160] go back to what he's talking about, what should people be doing: defeat death, cryonics,
[01:24:52.160 --> 01:24:55.600] make your own nation, like seasteading, which is actually almost straight out of the
[01:24:55.600 --> 01:24:59.440] extropian playbook, because the extropians were known as these hardcore libertarians.
[01:25:00.320 --> 01:25:04.960] Part of why it's the word transhumanist and, you know, rationalist, and not the word
[01:25:04.960 --> 01:25:09.040] extropian, which was, like, what it was originally called... because Max More, the founder of
[01:25:09.040 --> 01:25:15.120] extropianism, was literally, like: the foundation of our movement will be rationality and reason.
[01:25:15.120 --> 01:25:19.360] That's like what he says, and he would have been all over this stuff. I'm sure people have always
[01:25:19.360 --> 01:25:22.560] been saying that. Oh, people have always been saying that I know, but I just meant that
[01:25:23.600 --> 01:25:28.080] you can, you can see the extropian influence in it, though, because they're hyper libertarians. And so
[01:25:28.080 --> 01:25:32.160] when you look at what he is telling, you know, Eliezer Yudkowsky is telling people to do in the
[01:25:32.160 --> 01:25:36.480] end-of-Sequences post, it's literally, like, just a straight
[01:25:36.480 --> 01:25:43.200] extropian platform, almost, except with AI risk. Because that's really how I personally view
[01:25:43.200 --> 01:25:50.000] Eliezer Yudkowsky's overall philosophy. I think of it as a series of patches that are applied
[01:25:50.000 --> 01:25:58.400] to Max More's philosophy, almost. Okay. Interesting. So I often call it, when I want to,
[01:25:58.400 --> 01:26:03.280] I call it Eliezer's extropy. It's basically what it is. It's like his set of patches to this
[01:26:03.280 --> 01:26:07.840] other, earlier viewpoint. So would you call it, like, an extropian heresy?
[01:26:07.840 --> 01:26:13.920] Oh, almost. Yeah. That is actually basically how he founded LessWrong, almost, is that. So the
[01:26:13.920 --> 01:26:18.560] specific story is I understand it. I haven't read the mailing list yet. I plan to do that at some
[01:26:18.560 --> 01:26:24.560] point, but, you know, there's so much to read. Yeah. But the gist of the argument as I understand it is that
[01:26:24.560 --> 01:26:28.400] he's on the extropian mailing list. And he's having this argument with people. And he actually
[01:26:28.400 --> 01:26:34.640] describes it in the Sequences, that they're arguing about nuclear deterrence and whether technology
[01:26:34.640 --> 01:26:40.720] will become dangerous. And one of the founding planks from Max More with extropianism is that he really
[01:26:40.720 --> 01:26:46.640] hates this precautionary principle kind of viewpoint. And he really wants people to, like, slam the
[01:26:46.640 --> 01:26:50.960] acceleration pedal on technology. You need to just be doing it as fast as humanly possible.
[01:26:50.960 --> 01:26:56.400] You know, get to the singularity as soon as you can. And EY is getting into these discussions.
[01:26:56.400 --> 01:27:01.520] And he's starting to have doubts. You know, the doubts start to creep in like wait.
[01:27:01.520 --> 01:27:09.360] Now, this is actually kind of dangerous. And he eventually decides that the extropians are not
[01:27:09.360 --> 01:27:15.200] taking the ideas seriously enough for him, and they're not, you know... because eventually, to his
[01:27:15.200 --> 01:27:19.200] mind, he realizes, after reading, I think, like, Vernor Vinge or something, he realizes, well,
[01:27:19.200 --> 01:27:22.800] the singularity thing is going to make this nanotech thing irrelevant anyway. So I don't really
[01:27:22.800 --> 01:27:27.920] need to think about that much. And so I'm going to found my own list where we just talked about
[01:27:27.920 --> 01:27:32.480] the singularity and we'll call that SL4. And this is before he really got into like the AI risk thing
[01:27:32.480 --> 01:27:37.360] because he hadn't quite fully internalized yet the whole alignment problem. So he was still feeling
[01:27:37.360 --> 01:27:41.120] like, okay, we're going to make AI. And we're going to do it as fast as possible, blah, blah.
[01:27:42.480 --> 01:27:48.400] And then at some point, of course, he realizes, oh, wait, this is going to destroy the world.
[01:27:48.960 --> 01:27:53.360] And that's about the point where he founds LessWrong. But in between there,
[01:27:53.360 --> 01:27:58.080] I think there's a really interesting document he writes called the shock levels, or the future
[01:27:58.080 --> 01:28:06.400] shock levels, essentially, or shock level... What does SL4 stand for, shock level? It does.
[01:28:06.400 --> 01:28:13.840] Yeah, it stands for shock level 4. And so he kind of writes this very short, it's not long,
[01:28:13.840 --> 01:28:20.560] a document where he describes as he sees it, the different almost sociological groups around,
[01:28:20.560 --> 01:28:25.120] it's very interesting as a historical document around science fiction and predicting the future.
[01:28:27.280 --> 01:28:33.280] And I actually adapted this idea a lot in my own thinking, where I've kind of, like, taken it
[01:28:33.280 --> 01:28:36.960] from a sociological definition to something more rigorous. Because I think that the shock levels
[01:28:36.960 --> 01:28:41.360] as he describes them, you know, there's shock level 1, which is, like, the future as described in
[01:28:41.360 --> 01:28:45.760] the 1950s, and there's shock level 2, which is basically a space opera like Star Trek, and then there's
[01:28:45.760 --> 01:28:50.960] shock level 3, which is Max More and his extropians, and then there's SL4, which is Yudkowsky with the
[01:28:50.960 --> 01:28:56.320] AI and the singularity. And does that, sort of, like, when he's talking about shock levels, is this
[01:28:56.320 --> 01:29:01.200] related to Future Shock? Yeah, it's absolutely a reference to Future Shock. And so he's thinking, that's like,
[01:29:01.200 --> 01:29:07.760] your shock level is how much you can accept the consequences of future technologies without being,
[01:29:07.760 --> 01:29:12.320] like, manic about it or being pessimistic, like, just, just soberly consider it like
[01:29:12.320 --> 01:29:17.040] it's normal part of reality. And so what I think is interesting about these is that I think
[01:29:17.040 --> 01:29:20.240] there is actually a logic, you know, he doesn't really go into the logic of them, he just kind of
[01:29:20.240 --> 01:29:24.560] talks about them like they're a thing and they're different, we can define them as like social groups
[01:29:25.120 --> 01:29:30.160] but I think there is a logic to it and the logic is something like the shock levels are
[01:29:31.840 --> 01:29:37.360] specifically the order in which you're going to have discussions about the future and like a well
[01:29:37.360 --> 01:29:44.080] educated sci-fi war gaming group, right? Like if you just all sit down, you start like actually
[01:29:44.080 --> 01:29:48.880] thinking about the implications of future technologies. You might start with atomic power, you know,
[01:29:48.880 --> 01:29:53.600] if we just actually invested in nuclear power, a 10 times improvement over current reactors, we could
[01:29:53.600 --> 01:29:58.560] desalinate enough water for Africans to use it like Americans, and 100 times less of this, and we know
[01:29:58.560 --> 01:30:04.240] that the underlying theoretical physics of reactors lets us make them probably that powerful, blah, blah,
[01:30:04.240 --> 01:30:08.160] blah blah. And at some point someone's going to say to you, well look if I have nuclear, sorry,
[01:30:08.160 --> 01:30:12.400] what? No, no, go ahead. And at some point someone's going to say to you, well, look, if I have
[01:30:12.400 --> 01:30:18.960] nuclear power, why don't I just make a space rocket, make a... I'm going to make a satellite around
[01:30:18.960 --> 01:30:23.840] the earth that reflects the sunlight, and then we're just going to concentrate that into a huge beam,
[01:30:23.840 --> 01:30:27.360] and then we're going to get tons of energy from the... you see where we're going here?
[01:30:27.360 --> 01:30:32.720] Yeah. You know, someone brings this idea up, suddenly atomic power doesn't seem so
[01:30:32.720 --> 01:30:36.560] great, you know, doesn't seem so fancy anymore. It's like, oh, well, you know, all these things
[01:30:36.560 --> 01:30:39.920] that we were talking about deep into the weeds of the implication of atomic power,
[01:30:39.920 --> 01:30:43.120] well, no, the first thing you do with, with functioning atomic power is you'd go to space and
[01:30:43.120 --> 01:30:46.880] then make better power and you'd start going to every planet. And so, okay, we don't even need to
[01:30:46.880 --> 01:30:52.240] think about all this because the, because the space travel is upstream of the, of the implications
[01:30:52.240 --> 01:30:56.960] of the atomic power society and the atomic power society is upstream of the implications of scarcity
[01:30:56.960 --> 01:31:02.800] politics. And then the nanotech and biotech are upstream of the consequences of the space travel
[01:31:02.800 --> 01:31:06.560] because the future will look nothing like Star Trek because the minute people have anything
[01:31:06.560 --> 01:31:11.200] like a Star Trek society, they will just start editing themselves and make themselves superintelligent
[01:31:11.200 --> 01:31:16.320] and blah, blah, blah. And then those people will take over the universe. But then even that is
[01:31:16.320 --> 01:31:21.840] is not how the future will go because actually we're going to invent AI before any of that matters.
[01:31:21.840 --> 01:31:25.360] And then, you know, and the first thing you would do if you were superintelligent was write
[01:31:25.360 --> 01:31:29.280] a superintelligent artificial intelligence, and then it takes over. Do you see how this works?
[01:31:29.280 --> 01:31:34.480] Yeah. And I think that you can actually like make it rigorous in that sense. Like,
[01:31:34.480 --> 01:31:38.480] there is a defined logic to it. It's not just a sociological concept at that point.
[01:31:39.680 --> 01:31:45.040] You know, it's about this idea of things that are upstream and downstream of other things in
[01:31:45.040 --> 01:31:50.240] terms of consequences and kind of when you can stop thinking about, you know, the bootstrapping chain
[01:31:50.240 --> 01:31:53.520] almost like, oh, we don't have to think about what we do after this point because at this point,
[01:31:53.520 --> 01:31:59.920] you just do this and a new game is started. So would you count this as the success of rationalists?
[01:32:01.680 --> 01:32:07.200] Um, no, but what I was going to say, though, I think that this is an aspect of, like, all the things
[01:32:07.200 --> 01:32:13.120] that EY is talking about that people just don't really get. Because there's this meme, right, where
[01:32:13.120 --> 01:32:19.680] it's a caption. It's like, there's a person in the center, and they're one of those, like, rage
[01:32:19.680 --> 01:32:25.360] face people. I don't, I don't know all their names, but it's captioned rationalist. He's in the
[01:32:25.360 --> 01:32:31.840] center. There's a little arrow and it goes to gender pill, they become a girl. It goes to
[01:32:32.960 --> 01:32:38.800] Hegel pill or something and they become a postrat, or whatever it is. And then in the one corner,
[01:32:38.800 --> 01:32:42.640] there's question mark, question mark, question mark, question mark, and it goes to Big Yud.
[01:32:44.720 --> 01:32:49.600] Yeah. And EY actually reacted to this. He said, you know, every time I see this image
[01:32:49.600 --> 01:32:55.200] on my timeline, I think to myself, I can never figure out what was in those question marks.
[01:32:58.000 --> 01:33:00.880] And I actually replied to him, I said, is it the transhumanism? Tell me.
[01:33:02.480 --> 01:33:08.640] Oh, sure. Yeah. Like, I think that those other things are what happens if you, if you
[01:33:08.640 --> 01:33:13.600] do take in all these ideas, right, like x-risks, the world's going to end, all this stuff, and then you
[01:33:13.600 --> 01:33:19.360] have no way forward. It's like, well, so here's the thing, is I think, like, EY managed to transmit
[01:33:19.360 --> 01:33:24.560] his idea of AI risk, but he didn't manage to transmit the underlying ideas that would get you to
[01:33:24.560 --> 01:33:28.720] conclude AI risk is important in the first place. Does that make sense? Yeah, I think so.
[01:33:29.360 --> 01:33:33.280] So people who are into AI risk are kind of cargo-culty. They don't really have, like,
[01:33:33.840 --> 01:33:38.320] this sense of, like, here's why you should be specifically focusing on this out of everything
[01:33:38.320 --> 01:33:43.360] else, aside from: it will happen really soon, and it's upstream of all this other stuff, blah, blah, blah.
[01:33:43.360 --> 01:33:47.920] But it doesn't, like... like, I feel like that intellectual journey that he's discussing with
[01:33:47.920 --> 01:33:52.320] these shock levels is actually a really important piece of the pie, or the puzzle, whatever,
[01:33:52.320 --> 01:33:58.160] that he doesn't really discuss in the Sequences, because in his mind, the Sequences are not about
[01:33:58.160 --> 01:34:05.280] transhumanism, they're about rationality. Yeah, it's sort of interesting. When you try to explain
[01:34:05.280 --> 01:34:11.440] the ideas that you have to somebody else, it's difficult to condense all of the thoughts and
[01:34:11.440 --> 01:34:15.520] experiences that you've had over the course of your life that have led you to reject other ideas
[01:34:15.520 --> 01:34:20.720] and embrace these. And that's actually one reason I tend not to try and convince anybody of anything.
[01:34:21.280 --> 01:34:25.840] I mean, like, I can point people toward things that I've read and that have made me think about things.
[01:34:25.840 --> 01:34:31.440] But I don't know that I could actually convey the full weight of everything that's gone into
[01:34:31.440 --> 01:34:40.800] any idea that I have. Even apart from that, I don't think I'm aware of it myself in many cases.
[01:34:42.800 --> 01:34:46.800] So I think I've saved myself a lot of time by, say, not writing a series of sequences about,
[01:34:48.000 --> 01:34:54.080] about eigenrobot thought, for example. Oh, no, sure. And I definitely agree. I think that
[01:34:54.080 --> 01:35:02.720] I think it is really difficult to convey your full experience, especially in words and especially
[01:35:02.720 --> 01:35:08.720] in, you know, you have like this limited bandwidth to discuss things with someone and there's this whole
[01:35:08.720 --> 01:35:13.840] life you've lived is behind the specific things you are telling them. Like there's there's whole
[01:35:13.840 --> 01:35:18.880] sea of things you could tell people. You're telling the specific set based on your life experience.
[01:35:18.880 --> 01:35:27.440] And I think there is this bias towards trying to discuss ideas as though they just pop up or
[01:35:28.240 --> 01:35:33.040] objectively like, oh, you know, of course you would be thinking about this thing. When in reality,
[01:35:33.040 --> 01:35:38.080] the actual sequence of steps to get there is, you know, almost subjective, like, it depends
[01:35:38.080 --> 01:35:44.640] on you going through a certain route through life. Yeah. And I definitely think there's an aspect of that
[01:35:44.640 --> 01:35:52.480] with us. So yeah, so we've got about, I think we've got like 20, 30 minutes left.
[01:35:53.120 --> 01:35:57.440] Yeah, I think my hard stop time is 630, but I should probably get off a bit before then. And we are
[01:35:57.440 --> 01:36:02.720] definitely going a bit long. Yeah, my apologies. So let's just get to the, yeah, so I think I've said
[01:36:02.720 --> 01:36:08.080] most of what I wanted to say on other stuff. Let's just get to the kind of the last part, right,
[01:36:08.080 --> 01:36:14.800] talking explicitly about how did LessWrong kind of do with all this. Yeah. And then with, you know,
[01:36:14.800 --> 01:36:19.120] postrat, if we have any time for that. Sure. I actually, I think we talked a lot quite a bit already
[01:36:19.120 --> 01:36:24.080] about postrat, right? Like, I think so. I think I said most of what kind of what I feel about it.
[01:36:24.080 --> 01:36:29.840] Yeah, I think that came across. So let's, yeah, let's just do the evaluation and then maybe tie it up
[01:36:29.840 --> 01:36:34.080] if you like. Yeah, sure. Absolutely. So in terms of evaluation,
[01:36:34.080 --> 01:36:40.000] I think we can go back kind of to some of what EY talked about. So you can just start,
[01:36:40.000 --> 01:36:44.400] like, let's just start with the things he listed, because he gave specific things. And now I don't
[01:36:44.400 --> 01:36:48.160] want to again be like, you know, if you list something, you must do it, right? Because
[01:36:48.160 --> 01:36:52.880] your goals can change, your priorities can change. But I do think there's a sense in which certain
[01:36:52.880 --> 01:36:59.040] things were not done and no equivalent thing was done. So the big one that always stands out to me,
[01:36:59.040 --> 01:37:08.720] and especially just totally dropped off the discourse is cryonics. Yeah. So I actually was looking
[01:37:08.720 --> 01:37:14.640] into, you know, what actually needs to happen for you to get froze, recently. And there's an
[01:37:14.640 --> 01:37:20.320] interesting statistic, that, you know, only about 400 people, I forget if that's worldwide
[01:37:20.320 --> 01:37:24.800] or the United States, but only about, like, 400 people have actually been cryogenically frozen.
[01:37:24.800 --> 01:37:33.120] And if you believe that cryonics is this route to defeating death, you know, death, death itself...
[01:37:33.760 --> 01:37:38.080] And your kind of, your attitude towards that is, oh, yeah, it's really cool that there's this tiny
[01:37:38.080 --> 01:37:43.280] minority of people who have access to cryonics and actually know to sign up for it. Yeah,
[01:37:43.280 --> 01:37:48.720] yeah, fun rationality. It's like, okay, you're really... you know what I mean? Like, it's...
[01:37:48.720 --> 01:37:55.680] you've missed the point. Yeah. You know, so when EY is an advocate for cryonics, I don't think
[01:37:55.680 --> 01:38:00.640] it's a, I don't think it's like a side thing, like, oh, yeah, you know, make sure you get froze,
[01:38:00.640 --> 01:38:06.000] kid. It's like, no, like, you can defeat death right now. He even mentions it again in the Epistle
[01:38:06.000 --> 01:38:11.520] to the New York Less Wrongians as a, you know, he reiterates it. So it's not like a tangential thing to
[01:38:11.520 --> 01:38:16.960] him. And I think that there's been almost no progress on the cryonics front. If anything, there's
[01:38:16.960 --> 01:38:22.240] been regression. The costs... you know, someone wrote this really nice series of posts
[01:38:22.240 --> 01:38:27.440] about cryonics, like, the state of cryonics in 2020. And one of the things they wrote about is that
[01:38:27.440 --> 01:38:31.360] if you look at the costs for cryonics orgs such as the Cryonics Institute and Alcor,
[01:38:31.920 --> 01:38:37.600] their costs, you know, inflation-adjusted or whatever, costs have gone up since, oh,
[01:38:37.600 --> 01:38:44.080] whenever, yeah, right? Like, you know, it's regressing. And so in terms of that, I think that you can just
[01:38:44.080 --> 01:38:51.360] count that as a total F minus minus. I mean, okay, maybe not a total F minus minus, but like an F.
[01:38:51.360 --> 01:38:56.080] Like, you know, don't, don't, don't call it an F minus, maybe an F, because they did get some people
[01:38:56.080 --> 01:39:00.960] to sign up for cryonics. It became a more memetically available thing, you know, we can talk about it.
[01:39:00.960 --> 01:39:09.040] But in terms of actually expanding the cryonics movement, bringing down costs so more people can do it,
[01:39:09.040 --> 01:39:16.400] I'd call it an F. Yeah, it's interesting. Like, as you're mentioning this,
[01:39:17.120 --> 01:39:22.800] I think the, I think, honestly, Robin Hanson has done more to bring this into public sight than,
[01:39:22.800 --> 01:39:28.000] than Eliezer, who, I didn't realize, was associated with it at all before this
[01:39:28.000 --> 01:39:34.480] podcast. Oh, that, oh, that's interesting. Yeah, Hanson is very publicly pro-cryonics.
[01:39:34.480 --> 01:39:38.640] Well, so I think that part of it is that, so if you read Great Mambo Chicken, right, Great
[01:39:38.640 --> 01:39:43.440] Mambo Chicken and the Transhuman Condition, it does a really good job at selling cryonics. It talks
[01:39:43.440 --> 01:39:48.880] about how there are all these theoretical physicists who believe it will work, how they actually
[01:39:48.880 --> 01:39:53.040] kind of have a social group, what they did to found these cryonics organizations,
[01:39:53.600 --> 01:39:58.640] and actually modern transhumanism at large, like, kind of comes out of the cryonics movement,
[01:39:58.640 --> 01:40:06.400] almost. Max More, for example, the founder of extropianism, is, I believe, still the CEO of Alcor.
[01:40:06.400 --> 01:40:12.160] He was, like, made CEO in 2011 or something, and the actual organizing for the first transhumanist
[01:40:12.160 --> 01:40:18.000] stuff came out of cryonics. And part of that is because cryonics is kind of this point that...
[01:40:18.800 --> 01:40:25.520] so remember that SL3, right, the nanotech and biotech? Yeah, yeah. If you believe the premises
[01:40:25.520 --> 01:40:31.200] behind those, that we will get nanotech, and we will have this ability, then cryonics becomes
[01:40:31.200 --> 01:40:34.960] a little bit, I don't want to say two plus two equals four, but it becomes a lot easier to swallow
[01:40:34.960 --> 01:40:42.640] as a concept. And because Eliezer Yudkowsky doesn't get across these prerequisite ideas
[01:40:42.640 --> 01:40:47.440] before the AI risk, you know, people just read the cryonics thing and went, oh, that's Eliezer Yudkowsky
[01:40:47.440 --> 01:40:54.000] being weird, ignore. Yeah, they have no foundation for it. It's like, well, you know, that, oh,
[01:40:54.000 --> 01:40:59.040] that's like a weird thing he holds, like a weird thing now. Yeah, I mean, like, if you were to ask me to
[01:40:59.040 --> 01:41:05.040] just, like, picture a reason as to why it failed, I think it's just pure aesthetics. Like, oh, I don't even,
[01:41:05.040 --> 01:41:09.040] I don't even disagree with you. I definitely think that there is a thing where, okay, let's just
[01:41:09.040 --> 01:41:13.600] start with the name, right, so rationality. Yeah. I actually said this to Chapman, I said, I think
[01:41:13.600 --> 01:41:18.480] EY calling it rationality may be one of the greatest mistakes in the history of philosophy.
[01:41:18.480 --> 01:41:25.680] That's it, okay, that that's actually a pretty impressive amount of failure to put into
[01:41:25.680 --> 01:41:30.320] something. Okay. No, but I don't know, I don't know if I might, I mean, maybe that's a little too
[01:41:30.320 --> 01:41:34.400] much, but really, like really think about it. It means that from a marketing standpoint,
[01:41:34.400 --> 01:41:37.920] you're undifferentiated, because there's a bazillion things called rationality.
[01:41:37.920 --> 01:41:42.080] Objectivism, in the Ayn Randian sense, is rationality. Yeah, for example.
[01:41:42.080 --> 01:41:48.400] Then Russell and Whitehead, that whole tradition is called rationality. The whole atheist, New
[01:41:48.400 --> 01:41:53.120] Atheist, skeptic movement calls itself rationality. You're just not going to be heard above
[01:41:53.120 --> 01:41:58.320] the noise if that is your name. I thought they were the brights. Oh, sure, oh, the brights. Oh,
[01:41:58.320 --> 01:42:04.240] gosh, don't bring that up. No. Oh, but you get what I mean though, right? Like, if that's your name,
[01:42:04.240 --> 01:42:08.000] that's your branding. And then there's another aspect to it, too, where it's, you know,
[01:42:08.000 --> 01:42:12.000] who is attracted by that name? You know, you could just go down all the reasons. All like,
[01:42:12.000 --> 01:42:15.840] I probably could come out with a dozen reasons, we could go on for a long time. But I do think
[01:42:15.840 --> 01:42:21.360] the name is kind of a microcosm of the larger aesthetics problem. You know, rationality doesn't
[01:42:21.360 --> 01:42:29.280] have a ton of art, for example. There isn't a bunch of... because, again, because, like, okay,
[01:42:29.280 --> 01:42:35.440] so for example, just to throw something out there, communism has a lot of art styles associated
[01:42:35.440 --> 01:42:40.480] with it. There's a lot of propaganda posters. There's all this stuff that goes into promoting
[01:42:40.480 --> 01:42:47.280] the ideology aesthetically. And there's a whole bunch of artists who are on board with the thing.
[01:42:48.240 --> 01:42:53.280] And, you know, like, one thing I always think about is imagery, for example. So if you just take
[01:42:53.280 --> 01:42:59.520] like a, okay, we don't have time for me to completely justify why I'm using this example. I'm just
[01:42:59.520 --> 01:43:06.480] going to use it. So I take a religion, like, say, Christianity. And you'll see that there's all this,
[01:43:06.480 --> 01:43:13.520] all... you have these abstract ideas, right? About, yeah, the Ascension, about the resurrection, about
[01:43:13.520 --> 01:43:18.640] Christ as a figure. And there's all this imagery around it. And that's where stained glass comes from,
[01:43:18.640 --> 01:43:23.920] is that they needed images to show people, you know, like medieval peasants and stuff. They needed
[01:43:23.920 --> 01:43:29.440] this imagery to show people to help them visualize the abstract ideas that Christianity is putting
[01:43:29.440 --> 01:43:34.240] forward. Oh, yeah. Well, and I mean, like, if you want to even talk about art movements, I mean,
[01:43:34.240 --> 01:43:41.120] you know, Gothic and Baroque. I mean, those were both very explicitly, like, Christian-associated,
[01:43:41.120 --> 01:43:45.440] right? Oh, sure. Sure. Well, I'm just saying, just from a perspective of: you want to get these
[01:43:45.440 --> 01:43:50.160] ideas across. You want people to... you want them to be sticky. And there really are, like,
[01:43:50.160 --> 01:43:54.480] they're, they're odd ideas, like, like cryonics. And so these are not intuitive things to people.
[01:43:54.480 --> 01:44:02.240] And so aestheticizing it, a spoonful of sugar helps the medicine go down. And I feel like
[01:44:02.240 --> 01:44:09.120] I can't even think of imagery for some of the concepts. I'm fine with that, though. I mean,
[01:44:09.120 --> 01:44:14.800] I think if you were to ask me right now what, like, what the movement, what most of the
[01:44:15.680 --> 01:44:19.760] aesthetic movement of rationality is, I can imagine it. And, and Moon might correct me after this
[01:44:19.760 --> 01:44:25.120] is over, in which case I apologize. But I think it's probably fanfiction, right? There's a tremendous amount
[01:44:25.120 --> 01:44:29.840] of creativity that goes into making, I mean, somewhat effective propaganda out of the
[01:44:29.840 --> 01:44:34.960] fanfiction. Oh, no, I agree with that. But I also think that that itself is kind of a,
[01:44:34.960 --> 01:44:38.800] that's kind of a double-edged sword, right? In that it brings people in, but it brings a certain
[01:44:38.800 --> 01:44:44.560] kind of person in. Yeah. Well, and, and, and I think that, like, when you're talking about the
[01:44:44.560 --> 01:44:49.600] unconditional tolerance of weirdos thing, isn't that kind of what fandom is? You know, that,
[01:44:49.600 --> 01:44:55.360] that's like the ethos of fandom. If you're finding that your culture... you start out and you've got,
[01:44:55.360 --> 01:44:59.600] like, math Olympians and stuff. And then over time, you're
[01:44:59.600 --> 01:45:05.600] turning towards this unconditional tolerance of weirdos. Maybe that's because, you know, 99%
[01:45:05.600 --> 01:45:11.440] of your intake funnel now is through fandom communities. Yeah. I'm not, not even trying to be
[01:45:11.440 --> 01:45:17.200] judgy, just as, like, an objective sociological statement. If you bring in a lot of people who have
[01:45:17.200 --> 01:45:22.720] a certain set of norms and expectations about what a space is for, are those people going to
[01:45:22.720 --> 01:45:28.560] change the norms? Absolutely. Yeah. And, I mean, I think there's an extent to which Eliezer
[01:45:28.560 --> 01:45:33.200] has, like, you know, observed and considered this problem. And there's, there's an Eliezer
[01:45:33.200 --> 01:45:38.560] post that I definitely have not read about evaporative cooling. And, you know, I mean,
[01:45:38.560 --> 01:45:44.320] so he was aware of this problem, although maybe not so aware that he didn't write HPMOR.
[01:45:44.320 --> 01:45:47.760] Yeah. Right. Exactly. Right. Like, you can say, like, oh, he's aware of this problem. And then he
[01:45:47.760 --> 01:45:54.240] produces, like, this 1,200-page Harry Potter fanfiction. Yeah. And I need to be clear. I'm not
[01:45:54.240 --> 01:45:59.440] trying to criticize rationalists about having, like, a lame art form. I mean, I would say that
[01:45:59.440 --> 01:46:05.760] the post-rationalist art form is the expanding brain meme. Oh, sure. Yeah. No, no, sure.
[01:46:05.760 --> 01:46:09.760] I'm just talking about, I'm just talking about, so like that. And so that's from like a
[01:46:09.760 --> 01:46:14.240] growth perspective. Like, so I think there's also, so like, one thing is like, why didn't
[01:46:14.240 --> 01:46:18.080] LessWrong become bigger? And I think, like, the aesthetic argument, right, is part of that.
[01:46:18.080 --> 01:46:21.040] And I think you can get into a lot of reasons for like, why it didn't become bigger.
[01:46:22.160 --> 01:46:28.000] Though, frankly, I think that in terms of how successful you would naively expect this to be,
[01:46:28.000 --> 01:46:32.720] if you just, like, explained the concept to someone in, say, 2008: oh, yeah, I'm going to write this
[01:46:32.720 --> 01:46:38.160] really big book about evolutionary psychology and Bayes' theorem and all this. And I'm going to
[01:46:38.160 --> 01:46:43.280] have, like, people read it. I'm going to put it up on the internet. And then it's going to become
[01:46:43.280 --> 01:46:48.480] kind of a silent canon for a lot of people in Silicon Valley. They'd look at you like you're nuts.
[01:46:50.400 --> 01:46:55.200] It's, you know what though? I would say that for a lot of people in Silicon Valley,
[01:46:55.200 --> 01:47:00.720] the canon is not the sequences. It's Slate Star Codex. Oh, oh, absolutely. But you get what I mean,
[01:47:00.720 --> 01:47:06.640] though, right? Like, yeah. Like, if you actually just said the, the basic concept to someone in 2008,
[01:47:06.640 --> 01:47:12.560] I think they'd be like, what are you talking about? This is nuts. This is insane. He did do it with HPMOR.
[01:47:12.560 --> 01:47:17.360] So I can't argue with success in that sense. I think that in terms of growth and, like,
[01:47:18.240 --> 01:47:23.280] you know, like, just catching on, I think that it seems likely to me that LessWrong has become
[01:47:23.280 --> 01:47:28.400] more influential than General Semantics ever was. So I think that's, so I think in that sense,
[01:47:28.400 --> 01:47:33.280] it's successful. I just mean that if we're talking about goals, right?
[01:47:34.960 --> 01:47:39.520] You know, like, what was the, what was the thing supposed to do? Well, C CD didn't work out,
[01:47:39.520 --> 01:47:44.080] but I think a lot of that is just for reasons. Like, that's a little slower in things. I don't
[01:47:44.080 --> 01:47:49.200] really blame it for that. To me, I think the really interesting one to look at is, for example,
[01:47:49.200 --> 01:47:53.920] AI risk. You know, because the big reason he writes the sequences in the first place is that he wants
[01:47:53.920 --> 01:47:58.800] people to focus on AI risk. Yeah. So I guess the question is, how did that work out?
[01:47:59.520 --> 01:48:06.160] I think it's been a complete success. I agree. I think that AI risk went from being, like, a sci-fi
[01:48:06.160 --> 01:48:13.600] plot concept to a very serious academic thing. Just overall. I was going to say that I haven't
[01:48:13.600 --> 01:48:20.560] seen a hostile AI since they started. Oh, that's, oh, that's interesting. I actually have,
[01:48:20.560 --> 01:48:26.080] but funny enough, it was, it was in Person of Interest, which actually has one of the only
[01:48:26.080 --> 01:48:32.080] halfway decent depictions of AI risk I've ever seen in fiction, where the guy is writing his AI.
[01:48:32.880 --> 01:48:38.800] And, you know, it changes some of its own code. And then he asks it, like, who wrote this?
[01:48:38.800 --> 01:48:42.000] And it says, you did. And then he just turns it off. He's like, I have to start over.
[01:48:43.040 --> 01:48:47.280] Yeah. It self-modified without, you know... it self-modified and lied to me. And then later
[01:48:47.280 --> 01:48:53.040] on, it's got to the point where it, you know, it overheats a server to try and trigger the
[01:48:53.040 --> 01:49:00.720] fire suppression system to kill him. Yeah. Sure. And, but the way it's depicted is, like, you know,
[01:49:00.720 --> 01:49:07.360] it's not, oh, it becomes powerful. And then it's very much, yeah, if this thing becomes, you know,
[01:49:08.720 --> 01:49:14.240] just a bit more than superhuman, it's going to, you know, be unleashed on the world and do
[01:49:14.240 --> 01:49:18.800] absolutely terrible things. He even says, you know, more or less, that AIs aren't born with morality.
[01:49:18.800 --> 01:49:25.840] They're born with objectives. It just does its objective. Like, it explains the basics of the problem
[01:49:25.840 --> 01:49:30.400] in a way that, you know, obviously it's still Hollywood. And I'm sure a lot of people would argue,
[01:49:30.400 --> 01:49:35.840] like, no, once it becomes, you know, intelligent, it'll hard take off. And that's unrealistic.
[01:49:35.840 --> 01:49:41.040] But honestly, I think that's way, way far and above in terms of quality, a way better depiction of
[01:49:41.040 --> 01:49:46.480] AI risk than, I don't know, like, like WarGames, right? You ever see WarGames?
[01:49:47.520 --> 01:49:54.640] Yeah, what about 2001? I've never seen that, but I'm sure it... I've never seen 2001. I don't watch
[01:49:54.640 --> 01:49:58.720] a lot of movies. I could totally imagine, though, that, whatever, I'm sure it's cringe.
[01:49:59.760 --> 01:50:05.120] It's... oh, oh, you mean HAL? You mean HAL 9000. Oh, oh, yeah. Um,
[01:50:05.120 --> 01:50:11.200] I think that that's, I think that the problem with HAL is that it's falling prey to the space opera
[01:50:11.200 --> 01:50:16.080] thing, where it's like, oh, yeah, we have human-level AI, but it doesn't actually bootstrap
[01:50:16.080 --> 01:50:21.760] to superintelligence or even, like, you know what I mean? It's like you have a normal space
[01:50:21.760 --> 01:50:26.640] opera. So, like, like, part of what makes the Person of Interest thing seem so good is that it's
[01:50:27.600 --> 01:50:32.800] very much not, there's nothing really incongruous about it. Like it's a thing that could really happen.
[01:50:32.800 --> 01:50:40.800] Yeah, interesting. Okay. Um, yeah. No, so I would, I agree with you on the AI risk.
[01:50:40.800 --> 01:50:45.200] Another one is that, you know, he says, okay, seasteading. I'm going to go ahead and give them
[01:50:45.200 --> 01:50:48.800] a pass on seasteading, because I think that that turned out to be harder than people imagined,
[01:50:48.800 --> 01:50:54.000] for reasons that maybe were, uh, foreseeable, but maybe not. So I'm going to
[01:50:54.000 --> 01:50:59.760] give a pass on it. The Methuselah Foundation, so that would be something. I believe that's,
[01:50:59.760 --> 01:51:05.120] that's a life extension org. Yeah. I don't even... when I clicked the link, it was gone. So I think
[01:51:05.120 --> 01:51:12.080] there, there... but so that one is, I'm kind of like, yeah, no, I don't think any
[01:51:12.880 --> 01:51:18.000] real, I don't think any real progress was made on life extension by the rationalists. Yeah,
[01:51:18.000 --> 01:51:21.680] you know, no, no, I'm going to go ahead and give it, same as cryonics, I'd give that an F.
[01:51:22.800 --> 01:51:27.520] So, okay. So at this point, I was talking, you know, I had that chat with QC
[01:51:27.520 --> 01:51:35.520] the other day. Yeah. And he mentioned what he did at some of the, I think, CFAR summer...
[01:51:35.520 --> 01:51:39.280] It was at MIRI. He said he worked at MIRI. He said he was at... wasn't it MIRI, or
[01:51:39.280 --> 01:51:44.480] were the summer camps MIRI? Oh, no, the summer camps, I actually don't know. I, I know CFAR
[01:51:44.480 --> 01:51:49.920] does a summer camp. I think MIRI might also do a summer camp. Okay. So, like, he talked about how
[01:51:49.920 --> 01:51:54.800] that went from, I think he said that he was teaching a course, maybe in Bayesian stats,
[01:51:54.800 --> 01:51:59.040] the first time that he did it. And then by the end of it, it was like, here's a course in how to
[01:51:59.040 --> 01:52:07.760] have fun. And, and, like, that actually feels kind of like a sort of post-rationalism.
[01:52:08.560 --> 01:52:14.240] Right? Absolutely. And I think that, in terms of, okay. So one thing I have here is
[01:52:14.240 --> 01:52:23.680] kind of Darcy Riley's 2014 post on postrationality. And she basically says that postrationality
[01:52:23.680 --> 01:52:29.440] is like, you know, rationalists think that their System 1 is bad and they hate it. And they
[01:52:29.440 --> 01:52:34.400] try to, like, lionize their System 2. But really, System 1 is useful and it's valuable.
[01:52:34.400 --> 01:52:39.440] And it's like, I don't really think that gets at it at all. I think that if you read the Sequences,
[01:52:39.440 --> 01:52:44.480] EY does get... he talks a lot about biases because those are the parts of System 1 that are bad.
[01:52:45.040 --> 01:52:49.680] But he also says, you know, if you don't have emotions, you wouldn't have values.
[01:52:49.680 --> 01:52:54.720] Emotions are your values. They are the most important part of your, your brain almost.
[01:52:55.280 --> 01:53:00.320] In terms of, like, your, your identity and stuff. Yeah. I don't know. I mean, I think,
[01:53:00.320 --> 01:53:05.120] I think there's an extent to which a lot of post-rationality... I mean, you know, if you want to
[01:53:05.120 --> 01:53:10.720] start an intellectual history of that, I think it's probably more influenced partly by things like
[01:53:11.600 --> 01:53:17.760] Iain McGilchrist, for example, or Gerd Gigerenzer, who talk a lot about bounded rationality and
[01:53:17.760 --> 01:53:28.480] about some specific mechanisms for, you know, just generating thought. And yeah, I think
[01:53:29.760 --> 01:53:33.200] I mean, I think there are people who are like, yeah, thinking is bad. And I've definitely
[01:53:33.200 --> 01:53:37.680] fallen into that bailey from time to time, because it's fun. It's a lot of fun to be illiterate,
[01:53:37.680 --> 01:53:45.360] I have to tell you. But I mean, if you were to ask me to actually express what I think, as opposed to
[01:53:45.360 --> 01:53:50.000] what happens to be fun to express at any point in time, which is torment, like, pure torment...
[01:53:51.120 --> 01:53:56.320] I think I would have a much more measured take on it, which is like, yeah, it's very important to
[01:53:56.320 --> 01:54:02.400] have this kind of an explicit and formalized system. And that's a tool that you can use in conjunction
[01:54:02.400 --> 01:54:08.320] with, you know, some kind of a faster and heuristic-based system. And the real strength is having
[01:54:08.320 --> 01:54:14.080] some amount of integration between those two things and being able to integrate them beautifully,
[01:54:14.080 --> 01:54:21.040] pivot back and forth as it becomes useful for some particular space. And does that make sense?
[01:54:21.040 --> 01:54:25.920] Yeah, absolutely. And I completely agree with that. And I think that I think basically everyone,
[01:54:25.920 --> 01:54:30.960] this is where it gets weird for me. I think basically everyone in the conversation agrees with
[01:54:30.960 --> 01:54:36.720] what you just articulated there, right? You obviously have a heuristic-based system. And you have,
[01:54:37.360 --> 01:54:41.040] and you have a... I don't want to say non-rational, because the thing is that there's always reasons
[01:54:41.040 --> 01:54:45.920] you are thinking things, in a certain sense, you know. It's not like they just pop up randomly,
[01:54:45.920 --> 01:54:51.680] you know, you have and hold hypotheses for specific reasons, even if those reasons aren't
[01:54:51.680 --> 01:54:57.280] something that, say, a Bertrand Russell would endorse. Yeah. But, but there is a logic to it.
[01:54:57.280 --> 01:55:04.400] And that I wonder about, actually. I am not sure that I have much logic that goes into, like,
[01:55:04.400 --> 01:55:08.560] the generation of hypotheses on my side. I have a thread about this somewhere. All right. All right.
[01:55:08.560 --> 01:55:15.760] When I get a hypothesis, it's like having a direct revelation. And, you know, sometimes it turns out
[01:55:15.760 --> 01:55:21.120] to be right, sometimes it turns out to be wrong. But I never get there by some kind of a systematic
[01:55:21.120 --> 01:55:26.960] process. It just pops into my head whole. Like, that's interesting. I usually, I usually experience it as
[01:55:28.160 --> 01:55:35.440] the system helps me to narrow it down to a particular space, but only so far. And after that,
[01:55:35.440 --> 01:55:40.080] you just kind of got to go digging, it's like looking, it's like exploring. Like, you've narrowed
[01:55:40.080 --> 01:55:45.520] down to a particular set of space that you're going to think about. And you're just, you're not...
[01:55:45.520 --> 01:55:50.800] you don't have a process beyond that point that can help you figure out the thing. You just have to
[01:55:50.800 --> 01:55:57.120] kind of look, almost. It's like, like I said, it's like exploring a space. And sometimes you find
[01:55:57.120 --> 01:56:02.000] what you're looking for. Sometimes you don't. Sometimes you find it quickly. Sometimes it takes a while.
[01:56:02.560 --> 01:56:09.440] Yeah. But that final, you know, that final portion can't be, it can't be systematized.
[01:56:09.440 --> 01:56:15.200] At least not by any, it is beyond my ability to systematize it. Yeah.
[01:56:15.200 --> 01:56:20.640] That's how I... did Moon go off at you about reading The Master and His Emissary?
[01:56:22.000 --> 01:56:28.160] I did not think she did. But what I was going to say though is that I think everyone in the
[01:56:28.160 --> 01:56:34.560] conversation agrees roughly with that, though, that, you know, there is a set of systematic techniques
[01:56:34.560 --> 01:56:41.760] you can use to aid your heuristic-based thinking. Most of what you do is heuristic-based
[01:56:41.760 --> 01:56:47.200] thinking on a daily basis. Yeah. So, you know, if you can find ways to improve that, that's
[01:56:47.200 --> 01:56:54.400] probably more useful than if you can, you know, moderately improve your, you know, your explicit reasoning.
[01:56:55.600 --> 01:57:00.400] One thing I would say is that, so there's Scott's crypto autopsy, right? Have you ever read that?
[01:57:01.600 --> 01:57:05.040] No, but he wrote... Okay. So, let me, let me briefly explain this then.
[01:57:05.040 --> 01:57:11.360] Maybe I should probably hop out of here pretty soon. I know you should. I know you should.
[01:57:13.360 --> 01:57:18.240] Yeah. All right, fine. The basic thing with it is that
[01:57:22.080 --> 01:57:27.360] he goes through and he basically says, okay, the first people to use Bitcoin were LessWrongers,
[01:57:27.360 --> 01:57:35.840] and, you know, gwern and these other people literally predicted that Bitcoin would become big.
[01:57:35.840 --> 01:57:39.680] You know, this is what he says. You know, the first mention of Bitcoin on
[01:57:39.680 --> 01:57:44.000] LessWrong, a post called Making Money with Bitcoin, was in early 2011, when it was worth 91
[01:57:44.000 --> 01:57:49.360] cents. Gwern predicted that it could someday be worth upwards of $10,000. He also quoted
[01:57:49.360 --> 01:57:54.320] Moldbug, who advised that if Bitcoin becomes the new global monetary system,
[01:57:54.320 --> 01:57:58.160] one bitcoin purchased today for 90 cents will make you a very wealthy individual. Even if
[01:57:58.160 --> 01:58:02.640] the probability of Bitcoin succeeding is epsilon, a million to one, it's still worthwhile for anyone
[01:58:02.640 --> 01:58:06.160] to buy at least a few bitcoins now. I would not put it at a million to one, though,
[01:58:06.160 --> 01:58:10.240] so I recommend that you go out and buy a few if you have the technical chops.
[01:58:10.240 --> 01:58:14.640] My financial advice is to not buy more than 10, which should be F-U money if Bitcoin wins.
[01:58:15.360 --> 01:58:18.480] And the thing is, is that at the time when this advice was given, if you had done that,
[01:58:18.480 --> 01:58:22.160] that, you know, is $10. If you had done that, you would now be a millionaire or whatever.
[01:58:22.160 --> 01:58:25.520] Or, well, okay, wait, no. Bitcoins are at 50k each.
[01:58:26.080 --> 01:58:29.440] Something like that. Okay, so yeah, so you'd have, like, a half a million dollars.
[01:58:31.200 --> 01:58:36.400] If all you had done was just, you know, just literally just listened to that and say,
[01:58:36.400 --> 01:58:40.480] seems reasonable, do it, hold onto your private key for a while. You know, don't hold it on
[01:58:40.480 --> 01:58:46.160] Mt. Gox, you've got to actually hold it locally. And then, you know, and then you just
[01:58:46.160 --> 01:58:52.240] hodled and you just didn't even think about it, besides, like, knowing where the key is, you would be
[01:58:52.240 --> 01:58:57.760] quite wealthy right now in relative terms, especially for a $10 initial cost.
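
(For concreteness, a minimal Python sketch of the arithmetic being described: the roughly 90-cent 2011 price, the ten-coin cap, and the ~$50k price at the time of this conversation are the figures quoted above; the million-to-one odds come from the Moldbug quote, and the payoff figure used in the expected-value line is a purely hypothetical stand-in.)

    # Sketch of the hodl arithmetic quoted above. The 2011 price (~$0.90),
    # the 10-coin cap, and the ~$50k present price are figures from the
    # conversation; the "payoff if Bitcoin wins" is a hypothetical assumption.
    price_2011 = 0.90        # USD per bitcoin, early 2011
    price_now = 50_000.00    # USD per bitcoin, roughly, at recording time
    coins = 10               # "do not buy more than 10"

    cost = coins * price_2011
    value_now = coins * price_now
    print(f"Initial outlay: ${cost:.2f}")             # about $9
    print(f"Value at ~$50k per coin: ${value_now:,.0f}")  # about $500,000

    # Moldbug-style expected-value framing: even at long odds, a tiny bet
    # on an enormous payoff can be worth it. Both numbers are illustrative.
    p_win = 1e-6                    # "a million to one"
    payoff_if_win = 100_000_000.0   # hypothetical payoff if Bitcoin "wins"
    print(f"Expected value of the bet: ${p_win * payoff_if_win - cost:,.2f}")
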
[01:58:59.360 --> 01:59:05.040] Yeah, and this, this is good. And I actually want to close on this just to, like, end with a compliment.
[01:59:05.040 --> 01:59:11.840] No, for sure. To make sure that I am not sending the wrong message: I think
[01:59:11.840 --> 01:59:18.400] I still see rationalists and post-rationalists as part of the same kind of amorphous group.
[01:59:18.400 --> 01:59:24.320] And I would like both to thrive. And, like, the better rationalists get, the better off we are,
[01:59:24.320 --> 01:59:28.800] and vice versa. And I'm, like, you know, even to the extent that there's any
[01:59:28.800 --> 01:59:34.640] gain from specialization here... And, yeah, I mean, we talked about those four goals that
[01:59:34.640 --> 01:59:41.840] Yudkowsky initially set out to, like, have people trying to work at. And I think that it's
[01:59:41.840 --> 01:59:46.000] completely rational to drop some of those goals if it turns out to be much harder to implement
[01:59:46.000 --> 01:59:51.680] them than, than you might expect, or if the perceived... No, yeah, I mean, these days... And, like,
[01:59:51.680 --> 01:59:56.720] you know, rationalists did, like, that Bitcoin stuff. Incredible. Great work, you know.
[01:59:56.720 --> 02:00:02.000] Well, actually, so that actually what Scott said was that very few people actually made money,
[02:00:02.000 --> 02:00:10.080] even though they had predicted it correctly. Oh, so it was a criticism. No... so his post was
[02:00:10.080 --> 02:00:15.760] a long criticism, essentially, of the sort that, you know, if you guys are so smart and you literally
[02:00:15.760 --> 02:00:22.880] predicted the best investment of the century, probably, why aren't you rich? Yeah. Did they not buy in?
[02:00:23.600 --> 02:00:27.600] And I think that there's a lot and unfortunately we're done here, but I think there is a rich
[02:00:27.600 --> 02:00:33.200] lot of stuff that you could analyze in there. And Scott does. And you can find this post on
[02:00:33.200 --> 02:00:38.080] LessWrong. It's, uh, I think it's the cryptocurrency autopsy or something. It's a Scott post.
[02:00:38.080 --> 02:00:43.600] You can find it just by searching for it. Yeah. It's an excellent thing that goes into a lot of it.
[02:00:43.600 --> 02:00:50.000] And in terms of postrat versus rat, um, like I said, I think it's, I think people agree
[02:00:50.000 --> 02:00:55.600] on the basic points there. I think that what's actually the argument is about something like
[02:00:55.600 --> 02:01:02.720] aesthetics and the way you should socially organize yourself and about how you should.
[02:01:03.440 --> 02:01:08.400] You get what I mean. I don't think it's actually an argument about ideas. I think that it's
[02:01:09.520 --> 02:01:14.320] because the ideas themselves are rarely brought up. And when they are, it's usually in like a
[02:01:14.320 --> 02:01:20.480] straw man way. Like, oh, yeah, you know, those rationalists don't think that you have feelings.
[02:01:20.480 --> 02:01:23.280] And when the post and when the rationalists are critiquing the post, they're like, oh, yeah,
[02:01:23.280 --> 02:01:27.360] those guys are like hippy-dippy. And so like, there's very little actual discourse around the
[02:01:27.360 --> 02:01:33.840] ideas themselves. I think that most of what is discussed is just these people's aesthetics are bad.
[02:01:33.840 --> 02:01:40.080] And I don't like that. In fact, nosilverv, uh, nosilverv, I don't know. Yeah. He's great.
[02:01:40.080 --> 02:01:43.920] Right. Well, yeah. Like, that's literally what he says. He just says, I have an aesthetic objection
[02:01:43.920 --> 02:01:50.720] to rationality. Yeah. And I think, yeah, I don't know. I mean, like, realistically,
[02:01:50.720 --> 02:01:56.560] we have separate parties, but we go to a lot of the same parties. And like rationalists do
[02:01:56.560 --> 02:02:02.640] acid and post rationalists actually study a hell of a lot of math. You know, it's like, yeah.
[02:02:02.640 --> 02:02:06.720] Yeah, that's kind of how I feel about it. It feels a little bit like a tempest in a teapot.
[02:02:06.720 --> 02:02:11.440] If anything, I think if there's any enmity there, it's mostly a matter of
[02:02:12.320 --> 02:02:17.600] these people are so similar that their differences become like, you know what I mean? You know,
[02:02:17.600 --> 02:02:22.480] like those, like in any movement where you have like the dozen splinter factions that all like are
[02:02:22.480 --> 02:02:27.760] nearly like at the point of like strangling each other over these extremely minor ideological
[02:02:27.760 --> 02:02:32.400] differences. Yeah. That's almost how I feel about the rat versus post rat beef. Like, it's
[02:02:32.400 --> 02:02:40.640] extremely, it's like a mix of mutual misunderstanding and then like actual real differences in
[02:02:40.640 --> 02:02:45.920] terms of aesthetics and social organization, stuff like that. I do think that mostly it's made
[02:02:45.920 --> 02:02:52.400] up. I think that in terms of the actual ideas, these people 99% agree with each other.
[02:02:52.400 --> 02:02:57.920] Yeah. And I mean, like, you know, to be real and focusing just on recent events,
[02:02:57.920 --> 02:03:01.280] I mean, post-rats were really pissed about what the New York Times did to Scott.
[02:03:01.280 --> 02:03:06.880] I think our reaction was somewhat different in execution than what came from the rationalist side,
[02:03:06.880 --> 02:03:12.480] but everybody was interested, everybody was, I think, really in kind of a reactive way quite protective
[02:03:12.480 --> 02:03:17.440] of, I mean, somebody that basically, I think, we see as one of our own, even if he doesn't exactly
[02:03:17.440 --> 02:03:25.280] look at it like that. No, absolutely. I mean, I would, I'd be surprised if you made like a post rat
[02:03:25.280 --> 02:03:31.840] canon and Scott wasn't in it. Oh, yeah, for sure. I mean, like he's, I mean, he's, he's like a
[02:03:31.840 --> 02:03:38.400] colleague, right? Like, it, I, just going by the questions on some of his surveys, I think he
[02:03:38.400 --> 02:03:44.720] falls quite a bit more into the hard rat tradition, but I mean, like, he's well, well read,
[02:03:44.720 --> 02:03:48.560] widely read and well loved by, by everyone in the post-rat sphere. And I think he's kind of
[02:03:48.560 --> 02:03:55.200] a floating signifier for the entire, like, mega-tribe. So we have about 10 minutes, and it's funny
[02:03:55.200 --> 02:04:01.040] you brought up Scott right now because, if you remember, I do actually have a story that is
[02:04:01.040 --> 02:04:09.040] related to Scott. Okay, yeah, yeah. And it's basically about this guy, let me find this here.
[02:04:11.520 --> 02:04:18.400] All right, so there's this guy, in, I think, about the, like, probably the 1920s,
[02:04:19.520 --> 02:04:29.520] maybe, his name was Boris Brasol, and Korzybski, Alfred Korzybski, met him in this scene he was in,
[02:04:29.520 --> 02:04:34.640] right, these people were trying to figure out how to prevent World War II. And Boris Brasol,
[02:04:34.640 --> 02:04:40.160] he met Korzybski, he seemed like a really cool guy to him. And Korzybski, he was just immediately
[02:04:40.160 --> 02:04:47.760] impressed with his intellect and his blah, blah, blah. And Boris Brasol also was a secret agent working
[02:04:47.760 --> 02:04:57.760] for the Russian government, and his mission was to provoke outrage over the communist
[02:04:57.760 --> 02:05:27.760] [unintelligible]
[02:05:27.760 --> 02:05:35.240] And so he was working on behalf of this conspiracy. And his basic
[02:05:35.240 --> 02:05:40.760] thing, he was working for the Whites. Or the Reds? The Whites. He was, okay. Okay.
[02:05:40.760 --> 02:05:45.760] Yeah. And so his goal was to basically convince people that communism was a Jewish
[02:05:45.760 --> 02:05:52.280] plot to take over the world. Oh. And so Korzybski met him and he introduced him to his
[02:05:52.280 --> 02:06:01.040] friend group, the New Machine. And this kind of. So the thing about Boris Brasol is that
[02:06:01.040 --> 02:06:08.840] he kind of caused the Holocaust in a roundabout way. He wrote, I believe he wrote the
[02:06:08.840 --> 02:06:16.000] Protocols of the Elders of Zion. And he said... Oh, shit. And he circulated it around to
[02:06:16.000 --> 02:06:20.720] all of these people. And he went to the state department and he started circulating
[02:06:20.720 --> 02:06:28.760] the ideas there. And, you know, he then was sponsored by Henry Ford. And Henry Ford's
[02:06:28.760 --> 02:06:32.840] work on anti-Semitism was, I believe, ultimately the inspiration for Hitler. So in
[02:06:32.840 --> 02:06:43.360] a roundabout way, this guy literally caused the Holocaust. And it's working. And so that's
[02:06:43.360 --> 02:06:47.920] an interesting way of phrasing it, his work on anti-Semitism. Like, you'd usually interpret
[02:06:47.920 --> 02:06:52.720] that as meaning, like, working against it. But no, he was, okay, furthering it. Yeah, go on.
[02:06:52.720 --> 02:06:59.560] Yeah, sure. I'm just, I'm being obviously, it's, I'm just trying to be historical here.
[02:06:59.560 --> 02:07:03.880] I'm not. Yeah. Yeah. No, no, for sure. No, no, no, no, no, no. I, I get what you mean. It
[02:07:03.880 --> 02:07:06.560] is kind of an odd phrase when it came out of your mouth. I was like, but that's the kind
[02:07:06.560 --> 02:07:15.680] of a, hmm. But still. So he's doing this. And very, you know, obviously, very, in, again,
[02:07:15.680 --> 02:07:21.960] objective terms, very successful at it. And this part of Korzybski's history kind of damaged
[02:07:21.960 --> 02:07:27.200] his reputation, at least, historically, because he did have a brief friendship with this
[02:07:27.200 --> 02:07:34.400] person. And he did introduce him to his friend group. And this person was a horrible, like,
[02:07:34.400 --> 02:07:42.120] he had absolutely horrible ramifications for how the 20th century played out. And so in Bruce
[02:07:42.120 --> 02:07:46.240] Kodish's biography, which, by the way, is excellent, and I recommend it. It's where most
[02:07:46.240 --> 02:07:50.960] of my knowledge of Korzybski comes from. It's just called Korzybski: A Biography.
[02:07:50.960 --> 02:07:56.360] Yeah. He actually spends a lot of time discussing Korzybski's relationship to anti-Semitism.
[02:07:56.360 --> 02:08:00.960] And I don't know this, but my inference is that Korzybski probably got a lot of
[02:08:00.960 --> 02:08:10.160] flak for this brief association he had with Boris Brasol. Was, was Brasol, I mean, like you
[02:08:10.160 --> 02:08:15.480] mentioned he was an agent. Was he outwardly and explicitly an anti-Semite himself or was
[02:08:15.480 --> 02:08:20.520] that something he did covertly? Yeah. So that's a little bit of the thing, is that Boris Brasol
[02:08:20.520 --> 02:08:25.040] did not put his name on the Protocols of the Elders of Zion. He wrote it anonymously. And he
[02:08:25.040 --> 02:08:29.800] did spread anti-Semitic ideas, but, and obviously, he was very successful at it. But I
[02:08:29.800 --> 02:08:36.000] don't know if Korzybski really fully understood. But what's actually interesting about
[02:08:36.000 --> 02:08:42.920] the story is that, you know, Korzybski himself briefly became an anti-Semite. And
[02:08:42.920 --> 02:08:48.760] then later on, he repudiated it completely. He said that was horrible. And more to the
[02:08:48.760 --> 02:08:54.640] point, he actually started to use anti-Semitism as his model of a memetic disease. So
[02:08:54.640 --> 02:08:59.640] to him, you know, if you look at an anti-Semite, they start out with like, you know, that
[02:08:59.640 --> 02:09:04.200] the Jews are so bad and they're awful. And then they descend. And I've actually seen this
[02:09:04.200 --> 02:09:07.400] happen. And I can totally see why he would think of it like a progressive disease. Because
[02:09:07.400 --> 02:09:12.240] I've seen this happen to people as they start out with like a little bit of anti-Semitism.
[02:09:12.240 --> 02:09:16.040] And then, you know, it's like, no, the Jews are controlling the banks and the world in
[02:09:16.040 --> 02:09:19.640] there. And it's just like building and building and building and like all of a sudden,
[02:09:19.640 --> 02:09:23.680] everything is about the Jews and the Jews are the worst thing to ever happen to humanity and
[02:09:23.680 --> 02:09:28.760] all this horrible stuff. And it's like it takes over their mind. It's, you know, it's, it's
[02:09:28.760 --> 02:09:36.480] the brainworms. It's anti-Semitic brainworms. And Korzybski actually modeled a lot of his philosophy
[02:09:36.480 --> 02:09:44.320] on combating this kind of crankery, whatever you want to call it, crankery, bigotry, you know,
[02:09:44.320 --> 02:09:52.080] this kind of brainworms. Uh-huh. It became a big part of his thing, not necessarily
[02:09:52.080 --> 02:09:57.960] anti-Semitism per se. But the generalized version of combating anti-Semitism of trying to
[02:09:57.960 --> 02:10:03.560] bring people to sanity in the sense that anti-Semitism is wrong. There is no Jewish conspiracy
[02:10:03.560 --> 02:10:11.000] to control the world. You know, this stuff is conspiracy theory. It is, it's crap, right? Yeah.
[02:10:11.000 --> 02:10:17.000] And so a lot of Korzybski's, and so like, I think there is kind of a parallel there in that,
[02:10:17.000 --> 02:10:21.040] you know, and as I put it in a, during the recent Scott drama on Twitter, I said something
[02:10:21.040 --> 02:10:29.200] like, well, look, Scott, Moldbug was going to become popular. Yeah, that's the, right?
[02:10:29.200 --> 02:10:33.760] That's the comparison I'm making, Moldbug to, to Boris Brasol. Oh, there you go. Yeah.
[02:10:33.760 --> 02:10:39.920] But, you know, Moldbug was going to become popular regardless of what Scott did. And Boris
[02:10:39.920 --> 02:10:44.400] Brasol was going to succeed regardless of whether Korzybski had happened to introduce him to this
[02:10:44.400 --> 02:10:51.760] friend group or not. But the interaction between the two is interesting in that Boris, you know,
[02:10:51.760 --> 02:10:57.360] Korzybski later, you know, his run-in with anti-Semitism was a key foundational thing that
[02:10:57.360 --> 02:11:05.200] helped him create his helpful philosophy, right? Yeah. His kind of vaccine to it, which is
[02:11:05.200 --> 02:11:10.000] actually how Drive Yourself Sane is advertised, kind of, you know, if you read this book,
[02:11:10.000 --> 02:11:19.360] it cures bigotry, is like one of the reviews on the back. And yeah, yeah, I don't know. I mean,
[02:11:19.360 --> 02:11:24.320] it's, I think it's a little bit complicated as a metaphor. Like, I mean, I do, but I also just
[02:11:24.320 --> 02:11:28.160] think that the, the parallel is really fascinating. Like the fact that there's a parallel,
[02:11:28.160 --> 02:11:32.560] it even, like, just like, wow, that's an interesting one. Yeah, no, it's super interesting.
[02:11:32.560 --> 02:11:36.880] And I do think that, I think there's a lot of people who do see Moldbug as a Boris Brasol
[02:11:36.880 --> 02:11:41.840] kind of figure. Yeah. You can, you can agree or disagree with that. I think that there's a lot of
[02:11:41.840 --> 02:11:47.840] people whose literal conception of Mencius Moldbug is that he is this person who is going to
[02:11:47.840 --> 02:11:54.800] cause absolutely great amounts of horror in the future by his contribution to the discourse.
[02:11:56.560 --> 02:12:01.200] Yeah, I have a lot of thoughts on that. Yeah, we're not going there because we're about done.
[02:12:01.200 --> 02:12:07.920] But yeah. So I guess what we'll do for the thought to close on is that, you know,
[02:12:07.920 --> 02:12:13.360] how Scott ended up engaging with Moldbug, he was never really a, he was never a neoreactionary.
[02:12:13.360 --> 02:12:20.960] His engagement was mostly, this is horrible, so I'm going to start by presenting my best,
[02:12:20.960 --> 02:12:25.600] you know, my best version of it, so that I'm arguing against something solid. And then
[02:12:25.600 --> 02:12:30.880] I'm going to systematically dismantle, dissect, and show the ways in which this is just, you know,
[02:12:30.880 --> 02:12:35.520] total crankery. And I think that, you know, and so, I actually made an edition of Slate Star
[02:12:35.520 --> 02:12:41.920] Codex called Slate Star Codex Abridged, which is not, it's not edited. It's only abridged in
[02:12:41.920 --> 02:12:48.720] the sense that I only use some of his essays. But in there, the first kind of sequence of posts
[02:12:48.720 --> 02:12:53.840] is about defending liberalism and where that starts is with his discourse on neo-reaction because
[02:12:54.640 --> 02:13:00.080] his whole point of engagement with neo-reaction was to say, look, no, the liberal system isn't
[02:13:00.080 --> 02:13:05.920] decadent and worn out to the point that we need a new monarch as ruler. You know, this is kind of cranky. And I'm
[02:13:05.920 --> 02:13:10.400] going to start by laying out like what the neo-reaction argument is. And I'm going to demolish it.
[02:13:10.400 --> 02:13:14.480] And I think if you actually read those posts in sequence, he does a really good job of that.
[02:13:14.480 --> 02:13:19.760] And it was a, oh, I think it was a net positive thing for him to do. And for the, you know,
[02:13:19.760 --> 02:13:26.800] the New York Times to come after him for it is, I mean, it's ludicrous. It's, I'd say it's been
[02:13:26.800 --> 02:13:33.360] vindictive. It's, it's nuts. I mean, like, I, okay, I had a brief thread earlier on
[02:13:34.720 --> 02:13:40.240] liberalism. And maybe, maybe just liberals, because, you know, ultimately, this is, this is an
[02:13:40.800 --> 02:13:48.720] idea that's embedded within, you know, human substrates. I, I think Scott is a liberal who has the
[02:13:48.720 --> 02:13:54.880] courage of his convictions. And the fact that other liberals or people who might perceive themselves
[02:13:54.880 --> 02:14:02.960] to be liberals are not explicitly defending him and taking up his torch on this is, I, I think it's
[02:14:02.960 --> 02:14:09.440] kind of rank cowardice. And I'm really disappointed by public figures who, my, my, especially the
[02:14:09.440 --> 02:14:15.040] old institutions that you might have identified as liberal, right? Like, say the ACLU, or even the New
[02:14:15.040 --> 02:14:19.120] York Times for some parts of its history, although I frankly don't think the New York Times really
[02:14:19.120 --> 02:14:26.880] deserves any kind of a positive record. And, um, I don't know, it's, it's, it's kind of a downer for me,
[02:14:26.880 --> 02:14:33.200] but Scott, Scott's a bright point. Yeah. I think we can agree on that. Scott is a bright point.
[02:14:33.200 --> 02:14:44.000] And I'm really glad that I'm glad he, he contributed. Yeah. Okay. I definitely have to go. Yeah,
[02:14:44.000 --> 02:14:48.880] that's, that's, that's the only episode. Yep. That is, that is the, that is the episode in conclusion.
[02:14:48.880 --> 02:14:55.280] Fuck, yes, Scott. Um, hey, man, it's, it has been an absolute pleasure having you on. Thanks for
[02:14:55.280 --> 02:15:17.840] doing all of that. No problem. Thank you. Bye.
(Whisper) zmd@CaptiveFacility Whisper %