Mo Reads: Issue 1
I’m kicking off this newsletter with a mix of classic and recent reads, plus personal commentary.
Classics
Fact Posts: How and Why by Sarah Constantin (1,000 words, 4 mins)
Meditations on Moloch by Scott Alexander (14,000 words, 55 mins)
Against Interpretation by Matt Leifer (900 words, 4 mins)
Personhood: A Game for Two+ Players by Kevin Simler (5,000 words, 20 mins)
Blindsight by Peter Watts (100,000 words, 6.5 hours — SPOILER ALERT)
Recent
This is what peak culture looks like by Ryan Murphy (3,000 words, 12 mins)
How to lose a monopoly by Benedict Evans (2,300 words, 9 mins)
The unreasonable effectiveness of metaphor by Julie Moronuki (5,400 words, 22 mins)
Open Philanthropy report: How Much Computational Power It Takes to Match the Human Brain by Joseph Carlsmith (5,300 words, 21 mins)
If Materialism Is True, the United States Is Probably Conscious by Eric Schwitzgebel (11,000 words, 45 mins)
Classics
Fact Posts: How and Why by Sarah Constantin (1,000 words, 4 mins): one of my favorite thinking skills. Fact posts are Sarah’s answer to the problem of epistemic learned helplessness: while ideally people should believe valid arguments, in practice personal assessments of argument validity are easily swayed by persuasive arguers, so in a low-trust environment of persuasive arguers it’s rational not to even try to assess arguments, since you can’t tell true from false anyway and trying risks ending up a crackpot. Sarah says there’s another way: do the following exercise repeatedly. (1) Start with an empirical question, e.g. “why is college so expensive?” (2) Open up a doc and take notes. (3) Look for quantitative data from conventionally reliable sources, e.g. the WHO for global health. (4) Avoid opinion (including expert opinion), news, and whitepapers; focus on raw information. (5) Orient towards the unfamiliar/confusing and try to figure it out. (6) Do simple arithmetic to compare things to familiar reference points, e.g. “how does this risk compare to the risk of smoking?” (7) Rank things by scale and focus on the big things. (8) Publish your notes publicly for feedback on your reasoning and sources. (9) Write follow-ups if you change your mind, and treat embarrassment at the naivete of your early notes as evidence you’re actually learning. Fact posts are thus a low-effort way (hours vs years) of coming to independent, evidence-informed opinions. Internally, your “sense of the world” when you discover new facts later feels like “yeah, sounds about right” instead of “omg, what is going on??”. And success in answering the original question feels like realizing that it often doesn’t have one clear answer but a million different answers depending on finicky details of how you operationalize the question; that people arguing on opposite sides are talking past each other because they’re using subtly different statistics; and that you can reconstruct the picture from the stats relevant to the operationalization you care about.
Meditations on Moloch by Scott Alexander (14,000 words, 55 mins): my first introduction to the notion of perverse incentives behind persistent civilizational inadequacies, a.k.a. “why is the world so screwed up?”. Moloch is a parable about coordination failure in complex systems composed of many agents competing for scarce resources — everyone sacrifices something to gain an edge in zero-sum games, ending up with the same relative status but worse absolute status; moreover, no single agent can unilaterally escape the dynamic, so it’s a bad Nash equilibrium, a trap. “Moloch” is Scott’s personification of this dynamic, inspired by Allen Ginsberg’s poem Howl (itself a chillingly good read); the personification is his attempt to get readers to grok a rather abstract idea, exploiting the fact that we’re naturally better at story thinking than systems thinking. (If you want more mythological personifications of abstract systems phenomena, check out Ra on socially respectable banal evil, Azathoth on evolution, and this taxonomy of egregores.) Scott gives 14 examples, from the prisoner’s dilemma to the two-income trap, arms races, cancer, inefficient education, corporate welfare and more. In the academic literature, Moloch is related to positional goods and the tragedy of the commons. An excerpt:
Moloch is introduced as the answer to a question—C. S. Lewis’ question in Hierarchy Of Philosophers—what does it? Earth could be fair, and all men glad and wise. Instead we have prisons, smokestacks, asylums. What sphinx of cement and aluminum breaks open their skulls and eats up their imagination?
And Ginsberg answers: Moloch does it.
There’s a passage in the Principia Discordia where Malaclypse complains to the Goddess about the evils of human society. “Everyone is hurting each other, the planet is rampant with injustices, whole societies plunder groups of their own people, mothers imprison sons, children perish while brothers war.”
The Goddess answers: “What is the matter with that, if it’s what you want to do?”
Malaclypse: “But nobody wants it! Everybody hates it!”
Goddess: “Oh. Well, then stop.”
The implicit question is—if everyone hates the current system, who perpetuates it? And Ginsberg answers: “Moloch.” It’s powerful not because it’s correct—nobody literally thinks an ancient Carthaginian demon causes everything—but because thinking of the system as an agent throws into relief the degree to which the system isn’t an agent.
Against Interpretation by Matt Leifer (900 words, 4 mins): philosophy of science distinguishes epistemic claims (“what can we know?”) from ontic claims (“what things exist?”). A realist stance towards a scientific theory seeks both epistemic and ontic claims; an antirealist stance gives up on ontic claims, seeking only epistemic ones. Prior to the 20th century all scientific theories were intrinsically realist: science was about understanding the “fundamental nature of reality” via observation and falsification. From ~1900 onwards quantum theory, one of the two great pillars of modern physics, was gradually developed to overcome predictive shortcomings in prior (“classical”) physics. But despite being fantastically predictive, its abstract mathematical structure didn’t straightforwardly correspond to anything in reality, and (worse) it violated “obviously true” basic intuitions about the fundamental nature of reality — i.e. it wasn’t obviously realist. This gave rise to a cottage industry of attempts to explain how quantum theory “really corresponds” to reality, some trying to save realism, others rejecting it, and still others (including such heavyweights as quantum pioneer Paul Dirac) dismissing the whole enterprise as “pointless philosophizing”, a waste of time. Matt, who does quantum interpretations work, pushes back against this last group’s derision in this essay by arguing that interpretations help “provide alternative paths for the future development of physics, and possibly to take a few steps along these paths”. By analogy: pre-modern thermodynamics was formulated in terms of heat engines, so to apply it to a physical system you had to somehow reformulate that system in terms of heat engines (however unnaturally); the modern idea of entropy not only made the theory naturally and easily applicable to nearly all systems, but also enabled the discovery of statistical mechanics, all despite entropy being mathematically equivalent to the pre-modern heat engine formulations, i.e. “just interpretation work”. I buy Matt’s argument, although I am biased towards “big questions”.
Personhood: A Game for Two+ Players by Kevin Simler (5,000 words, 20 mins): “personhood” is best understood as a social fiction, an “abstraction specifying the contract for an idealized interaction partner”; it’s a label/status earned via proper behavior which obligates other persons to treat us nicely in return. It is arguably the most important interface ever developed: institutions and civilizations can exist because they are built to it. Those who don’t implement it, e.g. infants and the psychotic, aren’t treated like persons. Like all abstractions it leaks, e.g. during outbursts/breakdowns. Like all social phenomena it isn’t all-or-nothing but admits of degrees: the better you behave, the better you’ll be treated. Like all abstract specs, it doesn’t matter how it’s implemented, only how it behaves, so personhood isn’t reserved for humans; any creature/artifact that can present itself “person-wise” will be treated as such. Personhood confers benefits: the right to be taken seriously, to be treated politely, to autonomy, to be given reasons for things that affect you. Personhood also entails behaviors that ease interaction, i.e. “responsibilities”: identifiability, honesty, giving reasons for actions and accepting them often, having something to lose, autonomy (vs “just doing my job”), and proper use of social emotions (shame when personhood breaks down, anger only when violated and placatable with proper appeasements, remorse when actions harm others). Personhood is inherently recursive/circular, as it only makes sense w.r.t. other persons; this makes it a stable attractor in social-contract space, contagious and “sticky” in communities. Equivalently: personhood has network effects, since person-person relationships are more mutually beneficial than those involving nonpersons; this leads persons to reward person-like behavior and punish nonperson-like behavior, which is how socialization happens. Kevin’s piece cleared up a lot of confusion for me.
Blindsight by Peter Watts (100,000 words, 6.5 hours — SPOILER ALERT): a posthuman sci-fi novel ft. one of the most worldview-upending ideas I’ve ever read, the most alien aliens in fiction (Solaris included), the idea of communication via inflicting pain, a convincing critique of the life/nonlife and organism/environment distinctions, and a bevy of references to the scientific literature (Peter being an ex-biologist). The idea: consciousness is a cancer on cognition, evolutionarily arising from RL algorithms for habitat/mate selection and survival looping in on themselves, consuming disproportionate resources to do endless recursions and simulations, i.e. producing nothing but itself, handicapping cognition, contributing nothing to decision-making yet taking the credit. The aliens: “scramblers”, human-sized starfish-like things on the cusp of life/non-life, 30% neurons/cabling and capable of ultra-fast learning (“ten minutes from 1+1=2 to predicting 10-digit primes on demand”) but not self-aware, with no genes, evolving intra-organismally (at war with themselves at the tissue level, like a colony of tumors regulating each other), with part of their metabolic pathways externalized to their “parent” Rorschach (like cells in a body). Go read it!
Recent
This is what peak culture looks like by Ryan Murphy (3,000 words, 12 mins): people like to say that film, art, music and literature are getting worse. Ryan argues they’re wrong, on revealed-preference grounds: “if we can replicate a near approximation of a historical masterwork at a cost no higher than it was in the past, and we’re doing something else, that something else is probably better than the supposed historical masterwork”. Ryan also points out that near-universal cultural pessimism among experts is a constant going back in time, so chaining each generation’s claim together as true would lead to the reductio that culture was best when most people didn’t even consume much culture. Nostalgia and selection bias also distort perceptions of quality, and prose readability has improved dramatically (Ryan compares Malcolm Gladwell and Adam Smith). Gwern Branwen’s argument that we don’t need creativity for the sake of creation, given overwhelming media abundance, actually bolsters Ryan’s claim.
How to lose a monopoly by Benedict Evans (2,300 words, 9 mins): Ben looks at IBM dominating mainframes → Microsoft dominating PCs → Apple and Google dominating smartphones, and extracts 3 lessons: (1) talk of ‘monopoly’ (in tech or elsewhere) conflates 2 different things: a monopoly around your own product in its market, and whether that monopoly lets you control the broader industry; (2) IBM and Microsoft still dominate mainframes and PCs respectively, and are far richer than in their heydays, yet can no longer make anyone in tech do what they don’t want to do (Roger Lovatt’s definition of power): wealth != power; (3) there are 2 ways competitive moats (structural barriers to competition that prevent better products by others from winning) can stop working: directly via state intervention, e.g. antitrust trials, or indirectly via being made irrelevant, e.g. by a new innovation that solves the same underlying user needs in a very different way, like smartphones. Ben makes lots of other good remarks: even knowing in advance how the industry would shift might not help if pivoting in that direction is against company DNA (Microsoft again); monopoly reigns keep getting shorter, countering the common pro-regulation argument that tech monopolies are immortal; and traditional antitrust interventions might not work for software platforms, because inherent network effects mean a breakup wouldn’t stick.
The unreasonable effectiveness of metaphor by Julie Moronuki (5,400 words, 22 mins): in 1960 Nobel Prize–winning physicist Eugene Wigner wrote a now-classic article arguing that math is ‘miraculous’ in that it is often applicable far beyond the narrow contexts it was originally developed in, from the inverse-square law of gravity to Maxwell’s equations, proclaiming that this ‘had no rational explanation’ and ‘is a wonderful gift which we neither understand nor deserve’. This provoked decades of responses from leaders in a dizzying variety of disciplines, from computer science to data mining, molecular biology and economics. A sensible response I found a while back was Eric Raymond’s piece: math is “miraculously” applicable only if you forget (1) our ability to appropriately pick formalizations of informal models of natural phenomena, and (2) selection bias: not all phenomena are so easily amenable to formalization. Julie’s essay is a beautiful elaboration of Eric’s (1): her response to Wigner’s ‘miracle’ is “our brains are unreasonably good at metaphor, i.e. finding and formalizing the properties that matter”. Some tidbits: (1) math is neither discovered nor invented (a false dichotomy); rather, we discover properties of the real world, invent idealized formalizations of them, discover links between those, invent new math to organize/understand the links, and so on; (2) math (like language and money) is neither objective nor subjective but intersubjective, i.e. it doesn’t exist outside our minds, yet we can say things about it that are true/false irrespective of any one mind’s interpretation; (3) the core of cognition is analogy, the perception of a “common essence” between two things enabling categorization, this essence being context-dependent; metaphor is the linguistic expression of analogy.
Open Philanthropy report: How Much Computational Power It Takes to Match the Human Brain by Joseph Carlsmith (5,300 words, 21 mins): Open Phil is a group of affiliated orgs that research focus areas to find “moonshot” giving opportunities, make grants, follow the results and publish findings. One of these moonshots is avoiding global catastrophic risks from misaligned AI, by supporting technical research to reduce such risks and strategic/policy work to “improve society’s preparedness for major AI advances”. To inform AI risk grantmaking strategy, it’s useful to estimate when AI systems will be good enough, at competitive costs, to do the tasks comprising most full-time jobs worldwide, a proxy for “industrial revolution-level transformative”. One input to this is how much compute is needed to match the human brain’s capabilities. Joseph, a research analyst at OP, looked into this and concluded that (with caveats) a petaflop probably suffices; for context, IBM’s Roadrunner supercomputer broke the petaflop milestone back in 2008, and nowadays a single Nvidia DGX A100 system can do 5 petaflops. But this doesn’t mean transformative AI is imminent: the bottleneck isn’t the compute to run such systems but the work of creating and training them. (Those are covered in this quantitative model to forecast AI timelines by OP senior analyst Ajeya Cotra.) As with any Fermi calculation, the interesting part isn’t the answer but the methodology.
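To give a flavor of the methodology, here’s a toy Fermi estimate in the spirit of the report’s “mechanistic method” (multiply synapse count by firing rate by per-event cost). The specific numbers below are my own rough textbook figures, not Joseph’s actual inputs, so treat the result as illustrative only:

```python
# Toy brain-compute Fermi estimate: FLOP/s ~ synapses x firing rate x FLOPs per event.
# All three inputs are rough order-of-magnitude guesses, not the report's figures.

synapses = 1e14          # human brain has roughly 1e14-1e15 synapses
firing_rate_hz = 1.0     # average neuron firing rate, on the order of 0.1-2 Hz
flops_per_spike = 10     # assumed FLOPs to model one spike through one synapse

flops_needed = synapses * firing_rate_hz * flops_per_spike
print(f"{flops_needed:.0e} FLOP/s = {flops_needed / 1e15:.1f} petaflops")
```

With these particular guesses the product lands right at a petaflop, which is less interesting than noticing how each input could plausibly move the answer an order of magnitude in either direction.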
If Materialism Is True, the United States Is Probably Conscious by Eric Schwitzgebel (11,000 words, 45 mins): in philosophy, materialism is the idea that the fundamental “substance” of nature is matter, and that things like consciousness are byproducts of material processes like brain biochemistry; they can’t exist without matter. Eric argues that the US (indeed any country) has all the kinds of properties that materialists say are characteristic of conscious beings; they reject the conclusion only because they’re “morphologically prejudiced” against spatially distributed group entities (Eric calls this “contiguism”, by analogy to racism/sexism/speciesism). “It’s bizarre, so it can’t be true” is a bad rejection because the absurdity heuristic has failed repeatedly in the history of science, even on really weird ideas like quantum theory and evolution. His thought experiment for making spatially distributed conscious beings more palatable is a race of alien squids whose brains are distributed among nodes in their tentacles; the nodes are detachable, yet communicate with low enough latency that each squid acts like a coherent entity, not a group. Returning to the US, Eric notes that it “maintains homeostasis, distributes nutrition, maintains population health, defends against threats, educates future generations”, and even reproduces (via fission); if materialists saw these properties in an animal they’d say it’s conscious. My take is simple: human language/concepts (like “consciousness”) were never meant to be pushed this far; it’s an intriguing reframing for sure, but I don’t know that it “buys” me any new insights.