2024 reading list

Things I might read in 2024.

Now extended into 2025.



  • Antoine de Saint-Exupéry, Richard Howard (translator) - The Little Prince
  • Sam Hamill (translator) - Yellow River: Three Hundred Poems From the Chinese
  • Sayaka Murata, Ginny Tapley Takemori (translator) - Convenience Store Woman (via)
  • Jorge Luis Borges - Tlön, Uqbar, Orbis Tertius (in Labyrinths)/ printed (via)
  • Franz Kafka - The Metamorphosis (via)
  • William Olaf Stapledon - Star Maker/ audio, go to 12m35s to skip past the introduction spoilers

  • The Heart of Innovation: A Field Guide for Navigating to Authentic Demand/ audio (via)
  • Peter D. Kaufman - Poor Charlie's Almanack: The Wit and Wisdom of Charles T. Munger, Expanded Third Edition
  • Lia A. DiBello - Expertise in Business: Evolving with a Changing World (in The Oxford Handbook of Expertise) (via)
  • Joël Glenn Brenner - The Emperors of Chocolate: Inside the Secret World of Hershey and Mars
  • Elad Gil - High Growth Handbook/ audio
  • W. Edwards Deming - The New Economics for Industry, Government, Education/ audio
  • W. Edwards Deming - The New Economics for Industry, Government, Education/ the PDF or ebook
  • Henrik Karlsson - Escaping Flatland/ including the posts I SingleFile'd
  • the relevant-looking posts on benkuhn.net/posts
  • Commoncog Case Library Beta
  • Keith J. Cunningham - The Road Less Stupid: Advice from the Chairman of the Board/ audio
  • Keith J. Cunningham - The 4-Day MBA/ video
  • Cedric Chin's summary of 7 Powers
  • Akio Morita, Edwin M. Reingold, Mitsuko Shimomura - Made in Japan: Akio Morita and Sony
  • Nomad Investment Partnership Letters or redacted (via)
  • How to Lose Money in Derivatives: Examples From Hedge Funds and Bank Trading Departments
  • Brian Hayes - Infrastructure: A Guide to the Industrial Landscape
  • Accelerated Expertise (via)/ printed, "read Chapters 9-13 and skim everything else"
  • David J. Gerber - The Inventor's Dilemma (via Oxide and Friends)
  • Alex Komoroske - The Compendium / after I convert the Firebase export in code/websites/compendium-cards-data/db.json to a single HTML page
  • Rich Cohen - The Fish That Ate The Whale (via)
  • Bob Caspe - Entrepreneurial Action/ printed, skim for anything I don't know



Interactive fiction


unplanned notable things read


unplanned and abandoned

  • Ichiro Kishimi, Fumitake Koga - The Courage to Be Disliked/ audio
  • Matt Dinniman - Dungeon Crawler Carl/ audio
  • Charles Eisenstein - The More Beautiful World Our Hearts Know Is Possible/ audio
  • Geoff Smart - Who: The A Method for Hiring/ audio
  • Genki Kawamura - If Cats Disappeared from the World/ audio
  • Paul Stamets - Fantastic Fungi: How Mushrooms Can Heal, Shift Consciousness, and Save the Planet/ audio
  • Jefferson Fisher - The Next Conversation/ audio

ivan commented Jun 30, 2025

but if rapists and murderers have no free will, then neither do science frauds! they must also be "funished" - allowed to keep their jobs, keep publishing, maybe "heavenbanned" so they only see positive comments about their work

https://x.com/literalbanana/status/1938410363135135896

ivan commented Jun 30, 2025

If you hire a driver, or use a taxi, offer to pay the driver to take you to visit their mother. They will ordinarily jump at the chance. They fulfill their filial duty and you will get easy entry into a local’s home, and a very high chance to taste some home cooking. Mother, driver, and you leave happy. This trick rarely fails.

Say my good man! Why don't we visit your mother? We can secure a filial obligation for you and an authentic experience for me! By the way, is your sister getting married anytime soon? I have a small cash token to offer in this regard, I think it will be most welcome!

https://news.ycombinator.com/item?id=43066720

ivan commented Jun 30, 2025

One way to think of political radicalism is that young people can only feel scalars. So they join whatever thing makes them feel the most intense emotions. You need to become older before you can feel vectors.

https://x.com/brianluidog/status/1938806295571898598

ivan commented Jun 30, 2025

Life lesson I learned way too late: take significantly bigger swings than you think are possible.

https://x.com/MatthewBerman/status/1938633727200182568

ivan commented Jun 30, 2025

Are you interested in knowing more?

https://x.com/lemonade_grrrl/status/1938636436024299878

ivan commented Jun 30, 2025

I don't know what a "neural network" is beyond a composition of parameterized functions that you can (easily) differentiate w.r.t. the parameters.

In that case, you can use the imaginary part of the log of a random meromorphic function (i.e., a ratio of complex polynomials).

https://x.com/keenanisalive/status/1448011475231117315
also https://x.com/keenanisalive/status/1448036393012322313
via https://x.com/prathyvsh/status/1938495735382851702
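
To make the tweet concrete, here is a rough sketch of what such a "non-neural-network" could look like (my own illustration, not code from the thread): evaluate Im(log(p(z)/q(z))) for random complex polynomials p and q, where the coefficients play the role of the parameters you could differentiate with respect to.

```python
import numpy as np

# Sketch of the suggestion as I read it: the "function" is Im(log(p(z)/q(z)))
# for random complex polynomials p and q. The polynomial coefficients are the
# parameters, analogous to network weights.
rng = np.random.default_rng(0)
deg = 5
p_coeffs = rng.normal(size=deg + 1) + 1j * rng.normal(size=deg + 1)
q_coeffs = rng.normal(size=deg + 1) + 1j * rng.normal(size=deg + 1)

def f(x, y):
    """Evaluate Im(log(p(z)/q(z))) at z = x + iy."""
    z = x + 1j * y
    p = np.polyval(p_coeffs, z)
    q = np.polyval(q_coeffs, z)
    return np.imag(np.log(p / q))

# Sample the field on a grid, e.g. to plot its level sets.
xs, ys = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
field = f(xs, ys)
print(field.shape, field.min(), field.max())
```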

ivan commented Jul 1, 2025

Many people misunderstand the first rule of kings (including plenty of kings themselves). You must only give orders that will be obeyed. Break this rule, and you will not stay as king for long.

The Supreme Court showed that they still have the sense to stay as kings.

The rule that the craziest of your subordinates gets to act with the full weight and force of the king himself was on a very fast track to getting the entire authority openly defied. Far better to preserve your long-term authority by restricting its scope in the short term.

The big question, of course, is how do you know which orders will actually be followed? That is the large challenge of being king. You are often trying to forecast matters outside of equilibrium, and outside of events you've witnessed before.

The best way to see this tension is to look at the strongest example of a modern absolute monarch - the Pope. When he speaks ex cathedra, it is literally the word of God. What is the consequence of this power? That the Pope almost never speaks ex cathedra!

Start giving orders that God has changed his mind about abortion or whatever, and you might find yourself presiding over a revolt of your bishops, and a massively diminished church.

The requirement to know what orders will be obeyed holds no matter how absolute your theoretical authority is. Does the pope directly control the parishioners? Do the Supreme Court justices control the army? Even if they did, would the army necessarily obey them? It depends!

The reason being a king is so hard is that it requires a constant tension of deception. In public, you must act like your dignity and authority mean you will always be obeyed. But in private, you must keep a very keen eye on the possibility that you will not be obeyed.

Many people are not able to pull this off, and both versions of consistency have their dangers. Focus only on the latter, as if you're just the head of a popularity contest, and your weakness emboldens opposition which can topple you, like Louis XVI.

Focus only on the former, thinking that all of your dignity and power is permanent and real without you needing to monitor it, and you become blind to forces building up against you, like Charles X in France.

The conclusion I have come to is that ultimately all power is informal power. The rules and norms and history still matter, but they matter as Schelling points and coordination devices to guide the hands and choices of men.

https://x.com/shylockh/status/1938631570627379316

ivan commented Jul 2, 2025

These apps will look dated in a few years, don’t waste your time. You’re just having fun playing around making old shit that could have made you a lot of money 10 years ago but is now just a weekend project. That’s the way things go in tech. Starry-eyed dreamers will let their imagination run wild, but they’re the laggards, the industry is already thinking ahead.

The next generation of apps isn’t going to look like the previous gen. No beautiful UIs and fancy CSS. No UI at all.

Instead, everyone will have some kind of platform like Cursor, but instead of just coding, it’s for everything.

Subscribing to new services for your AI to use will be the equivalent of downloading apps from an AppStore to your phone.

Then you can just say things like “fuck this person! AI, give me an OSINT profile of this Redditor!” and since your AI has the osint app it compiles the info instantly and says “here, damn”. No need to open an app, just straight info into your brain as quickly as possible.

AI has clearly made us tired of googling endlessly for info on random websites, so why are we still opening up apps to do various tasks? Because we want to see pretty interfaces? Get real. It’s time for the UNIX philosophy to go mainstream. Start thinking of how your product can minimize time to satisfaction, graphical interfaces get in the way of satisfaction.

The only problem is we currently don’t have a single unifying platform like an iPhone or something to consolidate a user base, but it will come. Start planning for that day so you can launch new services on day 1. It will be a gold rush.

And in the end, a lot of people will find they struggle to come up with good AI app ideas, because 80% of their idea was just putting a pretty interface in front of something complex. That's how you know it was mostly a bad idea.

https://news.ycombinator.com/item?id=44378692

ivan commented Jul 2, 2025

In science, we might suppose that the more simple, elegant, and powerful a theory is, the more likely it is to be the right one – there are many ways to write down an equation to describe the oscillation of a spring, but we take Hooke’s law to be the “right” theory because it provides both remarkable simplicity and high predictive power. Similarly, we might suppose that if we have an algorithm that is simple, elegant, and explains similar essential functions as the human mind, then it is likely to be the right model of the mind’s computational processes. That is, if LLMs are trained with a simple algorithm and acquire functionality that resembles that of the mind, then their underlying algorithm should also resemble the algorithm by which the mind acquires its functionality. However, there is one very different alternative explanation: instead of acquiring its capabilities by observing the world in the same way as humans, LLMs might acquire their capabilities by observing the human mind and copying its function. Instead of implementing a learning process that can learn how the world works, they implement an incredibly indirect process for scanning human brains to construct a crude copy of human cognitive processes.

Of course, there are no people strapped to fMRI machines in data centers that train LLMs (that I know of). Instead of directly scanning real live brains, LLMs reconstruct the human mind through the shadow that it casts on the Internet. Most of the text data on the web is there because a person pressed buttons on a keyboard to type out that text, and those button presses were the result of mental processes that arose from underlying cognitive abilities: solving a math problem, making a joke, writing a news story. By acquiring compressed representations of this text, the LLM is essentially trying to reverse engineer the mental process that gave rise to it, and indirectly copying the corresponding cognitive ability. While the Human Connectome Project is busy reconstructing the human brain neuron by neuron, LLMs are trying to skip the neurons all together and reconstruct the mind from the shadow it casts on the Internet.

This explains why video prediction models that learn about the physical world have so far not yielded the same results as next-token prediction on language: while we might hope that models that learn from videos might acquire representations of the physical world in the same way that humans learn through experience, the LLMs have managed to skip this step and simply copy some aspects of human mental representations without having to figure out the learning algorithm that allowed humans to acquire those representations in the first place.

This is both exciting and disappointing. The good news is that we’ve been able to build the world’s most powerful brain scanner without even intending, and it actually works, simulating at least some fraction of human cognitive ability in an AI system that can answer questions, solve problems, and even write poems. The bad news is that these AI systems live in Plato’s Cave. The cave is the Internet, and the light shines from human intelligence, casting shadows of real-world interactions on the cave wall for the LLM to observe. In Plato’s allegory, leaving the cave and observing the world in daylight is necessary to understand the world as it really is. The shadows on the wall are only a small, distorted piece of reality, and crucially the observer in the cave doesn’t get to choose which shadows are presented to them. AI systems will not acquire the flexibility and adaptability of human intelligence until they can actually learn like humans do, shining brightly with their own light rather than observing a shadow from ours.

In practice, this means that we would expect LLM-like AI systems to be proficient in reproducing human-like cognitive skills, but relatively poor at actually acquiring new skills, representations, and abilities from experience in the real world – something that humans excel at. It also implies that implementing this kind of flexibility would require us to figure out something new: a way to autonomously acquire representations from physical experience, so that AI systems do not need to rely on brain scans mediated by text from the web.

https://sergeylevine.substack.com/p/language-models-in-platos-cave

ivan commented Jul 2, 2025

Can someone please explain why a significant proportion of American food reviews are filmed in the front seat of a vehicle?

Americans live in their cars. I couldn't believe it until I finally visited the US.

Everything is optimized for the car. Drive-throughs, parking, three-lane streets leading everywhere you wanna go.

They're human-car hybrids like centaurs.

https://x.com/georgecursor/status/1939078650156167466

ivan commented Jul 2, 2025

how does one find a cracked engineer

You probably ignored his dm

https://x.com/ThuleanFuturist/status/1939323969204596754

ivan commented Jul 2, 2025

Why did the CEO leave because some dev made a typo? That wasn't necessarily caused by some error in the strategy of the company...

This is one of those cultural transitions which is difficult for people on the other side to understand; it belongs to the forgotten era of personal honor. These days one would simply lie on TV, or hire a PR firm to do that for you, and of course put the blame on the lowest individual that can be found. Repeat when more mistakes are made, because low level employees are disposable accountability shields. (See, for example, the UK post office/Fujitsu Horizon scandal)

But it used to be the case that leaders were expected to take responsibility for the culture and systems underneath them, rather than just taking as much of a salary as the business will bear from it.

That is, if a low level employee makes a significant money-handling mistake on this scale, that's a systems failure. There should be checks and testing and a software development culture which makes this kind of error unlikely. This is what was lost with "move fast and break things". After all, it's only other people's money.

(edit: it seems not to have been an actual money-handling error, but a notification error. Still fairly serious in terms of angry customers)

https://news.ycombinator.com/item?id=44421085

ivan commented Jul 2, 2025

Everyone is possessed to some degree. But non-Normies transform their inputs in some major way, while Normies just conduct electricity

https://x.com/dystopiangf/status/1939790645134278973

ivan commented Jul 3, 2025

blackpill on cultivating a specific virtue is that its absence or lack in others, their blindness or apathy about its importance, can begin to infuriate you

https://x.com/z_nightwind/status/1934604471591453072

ivan commented Jul 3, 2025

one of the worst ways you can mess up people's productivity is regularly making them feel "oh no, if I touch this, something bad might happen!"

there's many flavors of this: the classic example in engineering is convoluted code with nonexistent/bad/too-slow-to-run-often tests, but being unclear/inconsistent about which decisions people on your team can make for themselves or not is just as bad

https://x.com/bschne/status/1940734572570071546

ivan commented Jul 4, 2025

I will never understand the mind of a politician. Lisa Murkowski is 68 years old. She’s been a senator for nearly 25 years. She had a chance today to make history. Who cares if they primaried her? Retire with dignity instead of fucking the whole country to hold onto your seat.

https://x.com/crunchyrugger/status/1940236397312713087

ivan commented Jul 5, 2025

You have no control over yourself or your life, and so you channel your frustration at your own ineptitude and failures on unsuspecting, innocent parties who are unable to fight back without losing their livelihoods

https://x.com/climatepaige/status/1940487868306411717

rufus I think god should kill you

https://x.com/tszzl/status/1940509688610935084

ivan commented Jul 5, 2025

> buy every game studio
> can’t make game

> partner with biggest ai company
> can’t make ai coding assistant

What the fuck?

[MSFT]

They specialize in generating shareholder value, you wouldn’t understand

https://x.com/Clever_Loon/status/1940984082613391831

ivan commented Jul 5, 2025

Samurai collected heads, with a ritual to beautify the severed heads of worthy rivals and put them on display.[49] The samurai applied various cruel punishments to criminals. The most common capital punishments up until the Meiji Restoration were (in order of severity): decapitation, decapitation with disgraceful exposure of the head post-death, crucifixion (e.g. for parricide), and death by burning with incendiaries.[44] Members of the samurai class had the privilege to perform hara-kiri (suicide by disembowelment).[44] If it was not lethal, then a friend or relation performed decapitation (kaishaku).[44] In 1597, Toyotomi Hideyoshi ordered the prosecution of the 26 Martyrs of Japan.[50] They were tortured, mutilated, paraded through villages and executed by crucifixion, tied to crosses on a hill and impaled by lances (spears).[51] In the 17th century, the Tokugawa Shogunate executed over 400 Christians (Martyrs of Japan) for being more loyal to their faith than to the Shogunate.[50] The capital punishments were beheading, crucifixion, death by burning and Ana-tsurushi (穴吊るし; lit. "hole hanging").

https://en.wikipedia.org/wiki/Bushido

ivan commented Jul 7, 2025

A self-serving bias is any cognitive or perceptual process that is distorted by the need to maintain and enhance self-esteem, or the tendency to perceive oneself in an overly favorable manner.[1] It is the belief that individuals tend to ascribe success to their own abilities and efforts, but ascribe failure to external factors.[2] When individuals reject the validity of negative feedback, focus on their strengths and achievements but overlook their faults and failures, or take more credit for their group's work than they give to other members, they are protecting their self-esteem from threat and injury. These cognitive and perceptual tendencies perpetuate illusions and error, but they also serve the self's need for esteem.[3] For example, a student who attributes earning a good grade on an exam to their own intelligence and preparation but attributes earning a poor grade to the teacher's poor teaching ability or unfair test questions might be exhibiting a self-serving bias. Studies have shown that similar attributions are made in various situations, such as the workplace,[4] interpersonal relationships,[5] sports,[6] and consumer decisions.[7]

https://en.wikipedia.org/wiki/Self-serving_bias

ivan commented Jul 8, 2025

The main problem seems to me to be related to the ancient problem of escape sequences, which has never really been solved. Don't mix code (instructions) and data in a single stream. If you do, sooner or later someone will find a way to make data look like code.

That "problem" remains unsolved because it's actually a fundamental aspect of reality. There is no natural separation between code and data. They are the same thing.

What we call code, and what we call data, is just a question of convenience. For example, when editing or copying WMF files, it's convenient to think of them as data (mix of raster and vector graphics) - however, at least in the original implementation, what those files were was a list of API calls to Windows GDI module.

Or, more straightforwardly, a file with code for an interpreted language is data when you're writing it, but is code when you feed it to eval(). SQL injections and buffer overruns are classic examples of what we thought was data being suddenly executed as code. And so on[0].

Most of the time, we roughly agree on the separation of what we treat as "data" and what we treat as "code"; we then end up building systems constrained in such a way as to enforce the separation[1]. But it's always the case that this separation is artificial; it's an arbitrary set of constraints that make a system less general-purpose, and it only exists within the domain of that system. Go one level of abstraction up, and the distinction disappears.

There is no separation of code and data on the wire - everything is a stream of bytes. There isn't one in electronics either - everything is signals going down the wires.

Humans don't have this separation either. And systems designed to mimic human generality - such as LLMs - by their very nature also cannot have it. You can introduce such distinction (or "separate channels", which is the same thing), but that is a constraint that reduces generality.

Even worse, what people really want with LLMs isn't "separation of code vs. data" - what they want is for LLM to be able to divine which part of the input the user would have wanted - retroactively - to be treated as trusted. It's unsolvable in general, and in terms of humans, a solution would require superhuman intelligence.

--

[0] - One of these days I'll compile a list of go-to examples, so I don't have to think of them each time I write a comment like this. One example I still need to pick will be one that shows how "data" gradually becomes "code" with no obvious switch-over point. I'm sure everyone here can think of some.

[1] - The field of "langsec" can be described as a systematized approach of designing in a code/data separation, in a way that prevents accidental or malicious misinterpretation of one as the other.

https://news.ycombinator.com/item?id=44502318
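
To make the SQL-injection point concrete, a small illustration (my own example, not from the thread): the same user-supplied string is inert data when bound as a parameter, but becomes code when concatenated into the query text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"

# Treated as data: the driver binds the whole string as a single value -> no rows match.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

# Treated as code: concatenation lets the quote characters rewrite the query -> every row matches.
unsafe = conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()

print(safe)    # []
print(unsafe)  # [('alice',)]
```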

ivan commented Jul 13, 2025

>this is bad
>why?
>because I don’t like it
>why?
>because it’s bad

https://x.com/ITARviolation/status/1943342065334579414

ivan commented Jul 13, 2025

I don't think the NPR reporter is deliberately spinning the story. I think a lot of people don't really believe that other people are really different from them. The reporter would never knowingly poison people for money, so it's not comprehensible to them that lots of people in the world just don't care whether they do or not. The only reason in their minds that people would do such a thing is economic desperation combined with ignorance; if those two factors are gone, they really believe the problem has been forever solved.

https://news.ycombinator.com/item?id=44535898

ivan commented Jul 14, 2025

de•ca•thect (dē′kə thekt′),

 v.t. 

  1. to withdraw one's feelings of attachment from (a person, idea, or object), as in anticipation of a future loss: He decathected from her in order to cope with her impending death.

https://www.wordreference.com/definition/de%E2%80%A2ca%E2%80%A2thect

ivan commented Jul 15, 2025

I would happily pay monthly for Firefox - but not to Mozilla Corporation. I will pay for developers, development support, and operations - not to pad the CEO's salary.

Yet we happily do that for everything else.

Either software developers have to figure out how to outcompete the CEO ghouls (without becoming CEO ghouls themselves), or we just have to accept that the CEO ghouls will take their cut. There's no version of this where you can pay for a service but also dictate how that money is spent.

I think that's because all those other things are products with an opaque structure, while Mozilla, and for example Wikipedia, are more transparent. Really highlights why some people don't open up, either themselves, their source code, or their organizational structure: it's just inviting endless criticism.

Adding to the point, donating to Mozilla (or Wikipedia) is optional, and paying for a product is not, legally. So if I'm buying clothing, it's whatever, I need my clothing, and the price is just the functional gateway to getting it. But in the case of a Mozilla donation, I'm trying to do something good in the world. And if I discover that it's wasted, then I'm not just getting nothing - I am worse off, because I supported a bad cause.

https://news.ycombinator.com/item?id=44549029

ivan commented Jul 16, 2025

I think trolley problems suffer from a different type of oversimplification.

Suppose in your system of ethics the correct action in this sort of situation depends on why the various different people got tied to the various bits of track, or on why ‘you’ ended up being in the situation where you get to control the direction of the trolley.

In that case, the trolley problem has abstracted away the information you need (and would normally have in the real world) to choose the right action.

(Or if you have a formulation which explicitly mentions the ‘mad philosopher’ and you take that bit seriously, then the question becomes an odd corner case rather than a simplifying thought experiment.)

https://www.greaterwrong.com/posts/h22n4nZQd9J2MEZxq/the-problem-with-trolley-problems#comment-vs4tJiG3DfuiEnr4j

ivan commented Jul 17, 2025

I never tire of reminding everyone that "conflict resolution" is no more than a euphemism for "breaking durability by dropping already committed and acknowledged data".

Either architect for no data overlap on writes across all the "actives" (in which case software like pgactive could be a good deal) or use a purely distributed database (like Yugabyte).

https://news.ycombinator.com/item?id=44586474
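
A minimal sketch of the first option, "no data overlap on writes" (my own illustration; pgactive itself is not shown): give every key exactly one home region, so each active node only accepts writes it owns and there is never a conflict to "resolve".

```python
from zlib import crc32

REGIONS = ["us-east", "eu-west", "ap-south"]  # hypothetical active nodes

def home_region(key: str) -> str:
    """Deterministically assign each key to a single writable region."""
    return REGIONS[crc32(key.encode()) % len(REGIONS)]

def write(region: str, key: str, value: str, store: dict) -> None:
    # Reject writes routed to a node that does not own the key, rather than
    # accepting them and dropping acknowledged data later.
    if home_region(key) != region:
        raise PermissionError(f"{key!r} is owned by {home_region(key)}, not {region}")
    store[key] = value

store = {}
write(home_region("user:42"), "user:42", "hello", store)
```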

ivan commented Jul 17, 2025

normies are usually pretty OK at arriving at pragmatically sound world models through their fuzzy trial and error language games,

but one thing that never ceases to blow my mind every time I come across it is that, after thousands of years of evolution, they still think that a successful opinion-generating process is meant to be correct every time, rather than producing a positive risk-adjusted track record

they genuinely cannot comprehend the Sharpe ratio, it's "did you have breakfast this morning" for 115 IQs

"you were wrong about this thing you were convicted in"

> yes, I'm not optimizing for never being wrong

"?????"

in normie world getting quote tweeted on some wrong prediction from 5 years ago is a cancellable offence

https://x.com/apralky/status/1945201262745600392
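
For reference, the metric the tweet is gesturing at (my own sketch, not from the thread): the Sharpe ratio scores a stream of calls by its risk-adjusted average, so a record can be good even when many individual calls were wrong.

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(loc=0.02, scale=0.1, size=250)  # hypothetical per-period returns
risk_free = 0.0

sharpe = (returns.mean() - risk_free) / returns.std(ddof=1)
hit_rate = (returns > 0).mean()

print(f"hit rate: {hit_rate:.0%}")  # well below 100%...
print(f"sharpe:   {sharpe:.2f}")    # ...while the risk-adjusted record is still positive
```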

ivan commented Jul 17, 2025

So sad, all the money in the world and somehow he believes he's a victim.

https://news.ycombinator.com/item?id=44572593

ivan commented Jul 17, 2025

The powerful tend to like the idea of less democratic governments / rigging the game (business) so they win. It's easy: they're not interested in competing in a market (ideas or business) if they can simply cuddle up to a despot and easily get theirs. So we see many line up to take their turn to bend the knee.

There's a weird idea among those on the right in the US where they see business people as somehow having some good insights as far as business overall (the market) for the country goes. But really many of those who gain power are very much not interested in competing / open markets / competition, quite the opposite. They got theirs and for many the inclination is to close the door (market) behind them.

https://news.ycombinator.com/item?id=44572593
