@JD-P
Created January 4, 2024 16:49
MiniModel
minimodel
A self-contained, hyper-short post (I limit myself to 1024 characters, or 2048 if I absolutely need it) which is intended to transmit a complete but not necessarily comprehensive model of some phenomenon, skill, etc.
The MiniModel format fell out of three things:
1. My dissatisfaction with essays and blog posts.
2. My experimentation with microblogging as a way of getting my ideas out faster and more incrementally.
3. [Maia Pasek's published notes page](https://web.archive.org/web/20170821010721/https://squirrelinhell.github.io/).
MiniModels can be contrasted with a *note* or *zettel*, which is generally a derivative summary of some other document or concept. MiniModels are centrally original, synthesized, or enhanced concepts, even if they take inspiration from other sources. A MiniModel is the model as it exists in the author's head, not someone else's.
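As a rough illustration of the character budget above, here is a minimal sketch of a length check. The function and constant names are just labels for the 1024/2048 figures stated in the definition, not part of any existing tooling.

```python
# Minimal sketch of a MiniModel length check, assuming only the stated
# 1024-character target and 2048-character hard ceiling. Names are illustrative.

SOFT_LIMIT = 1024   # preferred maximum length
HARD_LIMIT = 2048   # absolute maximum, "if I absolutely need it"

def check_minimodel(text: str) -> str:
    """Classify a draft MiniModel against the character budget."""
    length = len(text)
    if length <= SOFT_LIMIT:
        return f"ok ({length}/{SOFT_LIMIT})"
    if length <= HARD_LIMIT:
        return f"over soft limit ({length}/{SOFT_LIMIT}), still under the hard cap"
    return f"too long ({length}/{HARD_LIMIT}), trim before posting"

if __name__ == "__main__":
    draft = "A self-contained, hyper-short post intended to transmit a complete model."
    print(check_minimodel(draft))
```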
---
[Inspiration] Pasek, M. (2017, August 21). [*SquirrelInHell's mind*](https://web.archive.org/web/20170821010721/https://squirrelinhell.github.io/). GitHub.
[Convergence] Demski, A. (2019, September 20). [*The zettelkasten method*](https://www.greaterwrong.com/posts/NfdHG6oHBJ8Qxc26s/the-zettelkasten-method-1). LessWrong.
[Convergence] Farnam Street Media Inc. (n.d.). [*Mental models: The best way to make intelligent decisions (109 models explained)*](https://fs.blog/mental-models/).
Four Extropian Virtues
four-extropian-virtues
The four basic things you **need** to develop to be good at Extropy.
1. **Agency**. Well-versed in the methods of piloting yourself to do things: building habits, not giving up at the first setback, strength, [maximizing](#hunger-cultivation), etc.
2. **Sanity**. [A clear view of the world](#map-and-territory), being very well in tune with yourself, a strong, well-constructed (i.e., not full of [ad-hoc garbage](#cuckoo-belief)) [identity](#keep-your-identity-small), [good epistemology](#expectation), etc.
3. **A love for the world and its inhabitants**. The belief that death is Bad, a fully developed secular moral system. Not limiting your scope of concern to immediate tribe-mates or friends & kin. Caring about people even when you have to fight them. Relentless determination and altruism for people you don't know. Taking consequentialism seriously if not literally.
4. **High future shock**. Necessary to realize that there are solutions to the problems we have, and things worth fighting for. It's not all hopeless, there are glorious things within our reach, etc.
---
[Note] This is my personal interpretation/summary of Eliezer's philosophy, but it's
not clear that he would agree with it. In fact, any frame involving 'religion'
he would double-dog disagree with. So this seems like the obvious place to point
out that what I talk about is *inspired* by Eliezer Yudkowsky, but is not *endorsed*
by him, or necessarily how he would talk about it.
[Inspiration] Yudkowsky, E. (2001, May 14). [*Future shock levels*](http://www.sl4.org/shocklevels.html). SL4 Mailing List.
[Inspiration] Yudkowsky, E. (2015). [*Rationality: From AI to zombies*](https://www.readthesequences.com/).
[Illustrative] Raemon. (2018, December 5). [*On rationalist solstice and epistemic caution*](https://www.greaterwrong.com/posts/CGkZEQdeBZZXbBT7o/on-rationalist-solstice-and-epistemic-caution). LessWrong. {The 'Principle Underpinnings' here seem to be groping towards similar ideas/concepts}
[See Also] [Eliezer's Extropy](#eliezers-extropy), [Map and Territory](#map-and-territory), [Agent](#agent), [Upstream & Downstream](#upstream-and-downstream), [High Future Shock](#high-future-shock), [Alignment](#alignment)
Existential Risk
existential-risk
A problem that will either destroy humanity or the potential for advanced human
civilization if left unchecked. Nuclear war is the best known X-Risk,
along with catastrophic climate change. While these two things exist in the public
mind independently, I don't think most people have the general category of an
X-Risk. Two clear examples aren't enough to feel the need to form a category around
them. During the 21st century, however, we will be adding several items, including 'artificial
superintelligence', 'bioengineered pandemic', and 'Bronze Age Collapse 2.0 but we
can't bootstrap civilization again because we ate all the fossil fuels'.
The 21st century is a suicide ritual, and at the end humanity kills itself. It’s
so easy to get caught up in the role, in the character you’re playing in this
story, that you forget there’s a real world full of real people who will really die.
Playing a role makes the situation acceptable; it’s a way of coming to a mutual
suicide pact with others.
---
[Inspiration] Fontaine, R. (2006, September 16). [*WHAT TO DO IF A NUCLEAR DISASTER IS IMMINENT!*](http://web.archive.org/web/20071017070221/http://survivaltopics.com/survival/what-to-do-if-a-nuclear-disaster-is-imminent/). Survival Topics. {It's no longer on the web, but the first time I really internalized
the concept of an *existential risk* was when I got idly curious about nuclear war
(perhaps after playing Fallout 3?) at age 12-13 and decided to look up how you would survive one.
The article I found was a quick prep guide by Ron Fontaine that emphasized the
survivability of a nuclear attack, but the lesson I took away from it was that
if WW3 happened right then, I would die. And I wasn't okay with that. Suddenly
nukes were as real to me as an angry grizzly bear on the trail. But my parents
*were* okay with it, totally fine in fact, acted like it was the most normal
thing in the world for everything you know and love to be brief orders away
from total destruction. I wasn't, and that was one of the first basic fundamental breaks
between my perspective and theirs.
Incidentally, I didn't remember it until now, but I find it interesting how this
survivalist site has the same "lots and lots of content about a subject you can
rabbit hole for weeks" nature as Eliezer Yudkowsky's Sequences. Given that I also got
hooked on TVTropes, that sort of thing was clearly appealing to me.}
[Inspiration] Yudkowsky, E. (2015). [*Rationality: From AI to zombies*](https://www.readthesequences.com/). Read The Sequences. {
I actually can't remember where I first heard the phrase 'existential risk', but since it appears several times in this book I'm sure I heard it there.}
[See Also] [Radical Truth](#radical-truth), [Upstream & Downstream](#upstream-and-downstream), [High Future Shock](#high-future-shock), [Eliezer's Extropy](#eliezers-extropy), [Dead Man Walking](#dead-man-walking), [Greek Tragedy](#greek-tragedy), [Singularity (Jhāna)](#singularity-jhana), [Raising The Sanity Waterline](#raising-the-sanity-waterline), [North Star Fallacy](#north-star-fallacy)
The Singularity
the-singularity
We live in interesting times. The culmination of thousands of years of civilization
has led us to a knife-edge decision between utopia and extinction.
On the one hand, the world is in intractable, fractal crisis. Most of that crisis
is of our own doing: Global Warming, Nuclear Warfare, AI Risk, portents of
ecological and social collapse.
On the other hand, we are now very close to a technological singularity: the point
where we will be able to think and act so quickly that we invent virtually
everything there is to invent in one sprint. Ironically enough, this possibility
arises from the same technological powers that endanger us.
It is a race now: whether we will first unlock that boundless spacefaring
future or meet ruin and die on Earth. Ultimately, this might be the only thing that
matters in the 21st century, perhaps the only thing that matters in any century up to
this point and forever after.
The outcome might be decided by a cosmic millimeter: mere hours, days, or months.
---
[Note] There's no rule that says a singularity has to have positive outcomes. In fact, [the outcomes are probably negative by default](https://www.slatestarcodexabridged.com/Book-Review-Age-Of-Em).
[See Also] [Singularity (Jhāna)](#singularity-jhana), [High Future Shock](#high-future-shock), [Eliezer's Extropy](#eliezers-extropy), [Existential Risk](#existential-risk), [Talebian Ruin](#talebian-ruin), [Dead Man Walking](#dead-man-walking)
Worldspider
worldspider