Mo Reads: Issue 9
Aliens, art, biology, enlightenment, ethics, fiction, geopolitics, longtermism, math, reasoning, software
Past issues are in the archive. A few more quotes than usual this time, to give a taste of the reads.
Links:
Universal love, said the cactus person by Scott Alexander (2,800 words, 11 min)
Hell by Wayne Barlowe (30 paintings)
Xenopsychology by Robert Freitas (6,000 words, 24 min)
On triage in acting to reduce suffering by Brian Tomasik (650 words, 2.5 min)
Hyperproductive development by Jessica Kerr (1,300 words, 5 min)
Long now-style content by Gwern Branwen (2,000 words, 8 min)
Sequence vs cluster thinking by Holden Karnofsky (8,300 words, 33 min)
Evolution as alien god by Eliezer Yudkowsky (2,500 words, 10 min)
Backpropagation by Chris Olah (1,600 words, 6 min)
Rethink what you know about Xi’s Belt and Road by Tanner Greer (1,600 words, 6 min)
Universal love, said the cactus person by Scott Alexander (2,800 words, 11 min): a classic. N,N-Dimethyltryptamine, or DMT for short, is a chemical substance occurring in many plants and animals that’s used as a recreational psychedelic drug. Some users prefer it to LSD or psilocybin mushrooms for its rapid onset, intense effects, and short duration (hence its 1960s nickname, the “business trip”). DMT trips include “profound time-dilation, visual, auditory, tactile, and proprioceptive distortions and hallucinations, and other experiences that, by most firsthand accounts, defy verbal or visual description… like perceiving hyperbolic geometry or seeing Escher-like impossible objects”. On occasion users also report meeting “intelligent entities”; some seem so real that users insist they’re not mere hallucinations but “real superhuman beings”. This led Marko Rodriguez to publish a paper proposing a simple test of whether they’re real: ask the DMT entities to factor large numbers you’re sure you can’t factor yourself. That inspired Scott to write a short story in which the first-person protagonist meets two DMT entities (a cactus person and a big green bat) and asks them to factor a 100-digit semiprime. Unfortunately the entities aren’t cooperative, because their way of thinking is entirely orthogonal to scientific thinking, and hilarity ensues. What elevates this story from ‘fun weird short’ to ‘classic’ is the second half, where the big green bat tries to make the protagonist understand the idea of GETTING OUT OF THE CAR; this idea has been associated with the subtype of enlightenment involving cognitive defusion.
I saw the big green bat bat a green big eye. Suddenly I knew I had gone too far. The big green bat started to turn around what was neither its x, y, or z axis, slowly rotating to reveal what was undoubtedly the biggest, greenest bat that I had ever seen, a bat bigger and greener than which it was impossible to conceive. And the bat said to me:
“Sir. Imagine you are in the driver’s seat of a car. You have been sitting there so long that you have forgotten that it is the seat of a car, forgotten how to get out of the seat, forgotten the existence of your own legs, indeed forgotten that you are a being at all separate from the car. You control the car with skill and precision, driving it wherever you wish to go, manipulating the headlights and the windshield wipers and the stereo and the air conditioning, and you pronounce yourself a great master. But there are paths you cannot travel, because there are no roads to them, and you long to run through the forest, or swim in the river, or climb the high mountains. A line of prophets who have come before you tell you that the secret to these forbidden mysteries is an ancient and terrible skill called GETTING OUT OF THE CAR, and you resolve to learn this skill. You try every button on the dashboard, but none of them is the button for GETTING OUT OF THE CAR. You drive all of the highways and byways of the earth, but you cannot reach GETTING OUT OF THE CAR, for it is not a place on a highway. The prophets tell you GETTING OUT OF THE CAR is something fundamentally different than anything you have done thus far, but to you this means ever sillier extremities: driving backwards, driving with the headlights on in the glare of noon, driving into ditches on purpose, but none of these reveal the secret of GETTING OUT OF THE CAR. The prophets tell you it is easy; indeed, it is the easiest thing you have ever done. You have traveled the Pan-American Highway from the boreal pole to the Darien Gap, you have crossed Route 66 in the dead heat of summer, you have outrun cop cars at 160 mph and survived, and GETTING OUT OF THE CAR is easier than any of them, the easiest thing you can imagine, closer to you than the veins in your head, but still the secret is obscure to you.”
A herd of bison came in to listen, and voles and squirrels and ermine and great tusked deer gathered round to hear as the bat continued his sermon.
“And finally you drive to the top of the highest peak and you find a sage, and you ask him what series of buttons on the dashboard you have to press to get out of the car. And he tells you that it’s not about pressing buttons on the dashboard and you just need to GET OUT OF THE CAR. And you say okay, fine, but what series of buttons will lead to you getting out of the car, and he says no, really, you need to stop thinking about dashboard buttons and GET OUT OF THE CAR. And you tell him maybe if the sage helps you change your oil or rotates your tires or something then it will improve your driving to the point where getting out of the car will be a cinch after that, and he tells you it has nothing to do with how rotated your tires are and you just need to GET OUT OF THE CAR, and so you call him a moron and drive away.”
Hell by Wayne Barlowe (30 paintings): Wayne is a concept artist and SF&F writer specializing in esoteric landscapes and creatures, perhaps one of the most accomplished alive; he was del Toro’s creature designer for the Hellboy movie series, the head creature designer for Pacific Rim, and worked on Avatar and Aquaman. I first discovered him on the internet 10+ years ago through his striking depictions of Hell.
Xenopsychology by Robert Freitas (6,000 words, 24 min): my first introduction to 2 ideas: (1) there was smart discussion “out there” on aliens (vs the humans-in-funny-suits typical of SF fare); (2) there’s a simple yet interesting way to generalize ‘IQ’ so it applies to all intelligent entities, not just humans. For (1), Robert’s musings on extraterrestrial intelligence are inspired by observing nonhuman terrestrial ones, in particular chordate nervous systems (spinal cord-oriented, neurally centralized; vertebrates, e.g. mammals) vs ganglionic ones (each body segment its own ‘body + brain’, coordinating via a latticework of nerves crisscrossing the body; invertebrates, i.e. 97% of all animal species — highly efficient for responding to stimuli, but organism size is limited because the latticework ‘strangles’ other organs past a size threshold), and wondering how aliens with high ganglionic intelligence might behave. He also speculates about alien sociobiology, emotions (consider how female octopuses fast to death guarding their unhatched eggs), and logic (inspired by Gödel and quantum logic). For (2), Freitas proposes a “sentience quotient” or SQ: (the log base 10 of) information-processing efficiency as a rough proxy for general intelligence, proportional to information-processing rate and inversely proportional to ‘brain’ mass. The stupidest brain imaginable is lower-bounded by the mass and age of the observable universe having processed only one bit: SQ -70. The smartest is upper-bounded by quantum mechanics, at SQ +50. We humans sit at +13, all neuron-based intelligences cluster within a few points of that, and plants come in around -2. Freitas notes:
The possible existence of ultrahuman SQ levels may affect our ability, and the desirability, of communicating with extraterrestrial beings. Sometimes it is rhetorically asked what we could possibly have to say to a dog or to an insect, if such could speak, that would be of interest to both parties? From our perspective of Sentience Quotients, we can see that the problem is actually far, far worse than this, more akin to asking people to discuss Shakespeare with trees or rocks. It may be that there is a minimum SQ "communication gap," an intellectual distance beyond which no two entities can meaningfully converse.
At present, human scientists are attempting to communicate outside our species to primates and cetaceans, and in a limited way to a few other vertebrates. This is inordinately difficult, and yet it represents a gap of at most a few SQ points. The farthest we can reach is our "communication" with vegetation when we plant, water, or fertilize it, but it is evident that messages transmitted across an SQ gap of 10 points or more cannot be very meaningful.
What, then, could an SQ +50 Superbeing possibly have to say to us?
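The SQ formula is simple enough to compute directly. A minimal sketch, with order-of-magnitude inputs chosen to land near the figures the essay quotes (they are illustrative, not Freitas's exact numbers):

```python
import math

def sentience_quotient(bits_per_second, brain_mass_kg):
    """Freitas's SQ: log base 10 of information-processing rate per unit 'brain' mass."""
    return math.log10(bits_per_second / brain_mass_kg)

# Illustrative inputs (rough orders of magnitude, not Freitas's exact figures):
human = sentience_quotient(1.5e13, 1.5)     # ~10^13 bits/s in a ~1.5 kg brain -> about +13
universe = sentience_quotient(1e-17, 1e53)  # one bit over ~10^17 s, ~10^53 kg -> about -70
```

Because the scale is logarithmic, the 10-point "communication gap" Freitas mentions corresponds to a ten-billion-fold difference in processing efficiency.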
On triage in acting to reduce suffering by Brian Tomasik (650 words, 2.5 min): Brian reframes impact-oriented prioritization of actions to reduce suffering as not “cold and calculating” but “warm and calculating” — in other words, triage is the opposite of harsh; it is, Brian claims, “the highest form of mercy and compassion”. But since we’re not machine-like consistent executors, we should balance triage as an ideal to strive for against staying within our limits to avoid burnout, so as to maximize impact over our lifetimes (I’m partial to the axiology, morality, law triad on this). Because the ideal is unattainable:
Imagine a military doctor who comes across a battlefield laden with hundreds of injured soldiers in severe pain. The doctor calls for assistance, but the additional medical units will not arrive for thirty minutes. However, the doctor happens to have with him a bag of pain medicine that he can use to palliate the suffering around him.
Would it be acceptable for him to treat five of the soldiers and then stop to read a comic book, arguing that he has produced some positive change and he needn't spend all of his effort helping others?
Similarly, would we countenance his decision to spend most of his limited supply of pain killer on the mildly injured patients nearest to him, even though many of those a bit farther away are in absolute agony?
I believe that the answers ought to be "no." Rather, triage—giving greatest medical attention to those who can be helped most in the least amount of time—represents the ethical imperative under these circumstances.
Yet how are other situations any different? In choosing how to spend one's time, what to do with one's money, what to pursue in one's career, and how to devote one's life, we are making the same choice as the doctor wondering whether to treat suffering patients or read a comic book; the only difference is that the consequences of the latter option are not so immediate and tangible.
Those who say, "I realize that this undertaking will not relieve as much suffering as possible, but at least I'm doing something," are in effectively the same position as the doctor who treats only those mildly injured patients nearest to him, because he is "at least doing something."
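Mechanically, the triage rule in the quote (give the greatest attention to those who can be helped most in the least time) is a greedy sort by benefit per unit time. A toy sketch, with hypothetical patients and made-up numbers, purely to illustrate the ordering:

```python
# Hypothetical battlefield cases: relief is benefit delivered, minutes is treatment time.
patients = [
    {"name": "A", "relief": 9, "minutes": 3},   # severe pain, quick to treat
    {"name": "B", "relief": 2, "minutes": 1},   # mild pain, but nearest at hand
    {"name": "C", "relief": 8, "minutes": 10},  # severe pain, slow to treat
]

# Triage: highest relief-per-minute first (ratios here: A=3.0, B=2.0, C=0.8).
by_triage = sorted(patients, key=lambda p: p["relief"] / p["minutes"], reverse=True)
print([p["name"] for p in by_triage])  # ['A', 'B', 'C']
```

The doctor who treats "the mildly injured patients nearest to him" is, in these terms, sorting by proximity instead of by this ratio.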
Hyperproductive development by Jessica Kerr (1,300 words, 5 min): the myth of the 10x developer is someone who gets 10x more work done than average. While such people do exist, the more common case is 10x impact. There are many reasons for this, from experience guiding design work, to a focus on simplicity (whose time savings compound over time), to building team infrastructure that makes others’ work better, to the ability to just finish (rarer than you think). Here Jessica focuses on another common aspect that resonates with my work experience: a moderately talented developer can be very productive if they know the system intimately because they built it, whereas a similarly capable dev who didn’t build it will find it much harder to reach the same level of intimacy and productivity. Braitenberg calls this the “Law of Downhill Invention, Uphill Analysis”: complex systems are easier to build than to figure out after they’re working. Jessica has great advice both for the dev who builds systems solo like this and for those who work with them. To the former: realize that nobody else experiences your system the way you do, so be kind to them (“You are a synthetic biologist and can alter the DNA of the system from within; they are xenosurgeons and have to cut in through the skin and try not to damage unfamiliar organs”). To the latter: consider a few options: (1) for small systems, don’t change them — this really is the fastest way to develop software; (2) ask for unit tests, to learn how the system works and to document intended behavior; (3) pair program: the fastest way to transfer understanding; (4) ask for a README for others; (5) ask to slow down.
Long now-style content by Gwern Branwen (2,000 words, 8 min): the Long Now is the idea that “slower/better” thinking is preferable to today’s pervasive “faster/cheaper” mindset. Gwern asks: “What sort of writing could you create if you worked on it (be it ever so rarely) for the next 60 years?” His entire personal website is built with this in mind. But what about content? Blog posts, including (sadly) this newsletter, are what Gwern calls “the triumph of the hare over the tortoise… read by a few people on a weekday in 2004 and never again”; yet “the best blogs always seem to be building something: they are rough drafts—works in progress”. This suggests the “perpetual drafts” approach, as with software: “never start. Merely have perpetual drafts, which one tweaks from time to time. And the rest takes care of itself”. Why? “Knowing your site will survive for decades to come gives you the mental wherewithal to tackle long-term tasks like gathering information for years, and such persistence can be useful—if one holds onto every glimmer of genius for years, then even the dullest person may look a bit like a genius himself”. (I think about that a lot.) Another idea: “a truly Long Now approach would be to make them be improved by time—make them more valuable the more time passes… (like) adding long-term predictions.”
Sequence vs cluster thinking by Holden Karnofsky (8,300 words, 33 min): how do you make decisions under uncertainty when a lot is at stake? This cuts to the heart of good thinking. One concrete setting: you’re a funder with limited money looking to do as much good as you can. This leads to hard comparisons: should you give to insecticide-treated bednets in sub-Saharan Africa, or to research on high-risk, high-impact approaches to combating climate change, or to animal welfare efforts, or…? Broadly speaking there are two kinds of thought processes. (1) Sequence thinking makes a decision based on a single model: break the decision down into key questions, take a best guess on each, and accept the conclusion they imply. It makes assumptions and beliefs transparent. (2) Cluster thinking approaches a decision from multiple perspectives, sees what decision each implies, then weighs the perspectives’ conclusions against each other (rather than combining the perspectives into a unified model) to reach a final decision. It “sandboxes” high-uncertainty perspectives so they can’t dominate the final decision, which therefore regresses to the mean. Neither is “objectively better”; instead, Holden notes that sequence thinking is better for “idea generation, brainstorming, reflection, and discussion” whereas cluster thinking is better at “tending to reach good conclusions about which action should be taken”.
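The contrast can be caricatured in a few lines of code. This is a hypothetical toy with made-up numbers, not Holden's own formalism:

```python
# Sequence thinking: one explicit chained model; accept whatever it implies.
def sequence_score(p_success, impact_if_success, cost):
    """Expected impact per dollar from a single chain of best guesses."""
    return p_success * impact_if_success / cost

# Cluster thinking: weigh each perspective's verdict against the others,
# rather than merging the perspectives into one unified model.
def cluster_score(perspectives):
    """perspectives: list of (verdict in [-1, 1], weight) pairs."""
    total_weight = sum(w for _, w in perspectives)
    return sum(v * w for v, w in perspectives) / total_weight

# A speculative intervention: a single expected-value model calls it a huge win...
ev = sequence_score(p_success=0.001, impact_if_success=10_000_000, cost=1_000)
# ev is about 10 units of impact per dollar, driven entirely by one uncertain guess.

# ...but cluster thinking "sandboxes" that one high-uncertainty perspective:
views = [
    (+1.0, 0.2),  # the explicit cost-effectiveness model says yes
    (-0.5, 0.4),  # expert opinion leans against
    (-0.5, 0.4),  # the track record of similar bets leans against
]
verdict = cluster_score(views)
# verdict is about -0.2: a mild "no", regressed toward the mean of the perspectives.
```

The point of the toy: one wildly uncertain parameter dominates the sequence score, while in the cluster score it is just one bounded vote among several.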
Evolution as alien god by Eliezer Yudkowsky (2,500 words, 10 min): my first intro to a few ideas: (1) how alien evolution is, when you stop anthropomorphizing and start paying attention to things like Ichneumon wasps, “whose paralyzing stings preserve its prey to be eaten alive by its larvae” (2) how there isn’t one “Evolution” but as many different evolutions as reproducing populations (3) that one of the implications of the simple idea that “genes that replicate more often become more frequent in the next generation” is that evolution has no foresight, it is blind, which explains backwards biological designs like the retina (4) how far you can stretch the idea of ‘alien’.
In a lot of ways, evolution is like unto theology. "Gods are ontologically distinct from creatures," said Damien Broderick, "or they're not worth the paper they're written on." And indeed, the Shaper of Life is not itself a creature. Evolution is bodiless, like the Judeo-Christian deity. Omnipresent in Nature, immanent in the fall of every leaf. Vast as a planet's surface. Billions of years old. Itself unmade, arising naturally from the structure of physics. Doesn't that all sound like something that might have been said about God?
And yet the Maker has no mind, as well as no body. In some ways, its handiwork is incredibly poor design by human standards. It is internally divided. Most of all, it isn't nice.
In a way, Darwin discovered God—a God that failed to match the preconceptions of theology, and so passed unheralded. If Darwin had discovered that life was created by an intelligent agent—a bodiless mind that loves us, and will smite us with lightning if we dare say otherwise—people would have said "My gosh! That's God!"
But instead Darwin discovered a strange alien God—not comfortably "ineffable", but really genuinely different from us. Evolution is not a God, but if it were, it wouldn't be Jehovah. It would be H. P. Lovecraft's Azathoth, the blind idiot God burbling chaotically at the center of everything, surrounded by the thin monotonous piping of flutes.
Backpropagation by Chris Olah (1,600 words, 6 min): a beautiful exposition of the key algorithm that makes training deep machine learning models feasible. Backprop is a technique for calculating derivatives quickly by going “in reverse” through a computational graph, starting at one output and tracking how every node affects it (hence its general name: “reverse-mode differentiation”), so we get the derivative of the output w.r.t. every single parameter all at once. Given that modern neural networks have easily millions of parameters, this is a huge speedup.
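Olah's post walks through the expression e = (a + b) * (b + 1) with a = 2, b = 1. A minimal reverse-mode sketch (my own toy code, not from the post) reproduces its derivatives in a single backward pass:

```python
# Minimal reverse-mode autodiff: each node records its parents and the local
# derivative of itself with respect to each parent; backward() then walks the
# graph from the output, accumulating d(output)/d(node) for every node at once.
class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # (parent_node, local_derivative) pairs
        self.grad = 0.0

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def backward(self):
        # Topological order so each node's grad is complete before it is passed on.
        order, seen = [], set()
        def visit(n):
            if id(n) not in seen:
                seen.add(id(n))
                for parent, _ in n.parents:
                    visit(parent)
                order.append(n)
        visit(self)
        self.grad = 1.0
        for n in reversed(order):
            for parent, local in n.parents:
                parent.grad += local * n.grad

# The post's worked example: e = (a + b) * (b + 1) with a = 2, b = 1.
a, b, one = Node(2.0), Node(1.0), Node(1.0)
e = (a + b) * (b + one)
e.backward()
print(a.grad, b.grad)  # 2.0 5.0 — de/da and de/db in one reverse pass
```

With millions of parameters, this one-pass-for-all-derivatives property is exactly the speedup the post describes: forward-mode would need one pass per input instead.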
Rethink what you know about Xi’s Belt and Road by Tanner Greer (1,600 words, 6 min): Tanner’s take is that Xi Jinping’s sprawling Belt and Road Initiative — a global infrastructure development strategy adopted in 2013 to invest in ~70 countries, building ports, skyscrapers, railroads, roads, airports, dams, and railroad tunnels to “enhance regional connectivity and embrace a brighter future”, with an estimated cost of $4-8 trillion — is not a power play but a marketing strategy, except that it backfired. Why marketing? Because many of the most prominent projects were financed or began construction pre-BRI, and investment and construction decisions are guided not by geopolitical importance but by profit (i.e. “debt-trap diplomacy” is a myth). What was Xi marketing, and why does Tanner claim it backfired? (1) Guidance of future investment and construction decisions towards his personal diplomatic priorities, e.g. the “six economic corridors”, although his marketing amounts to sloganeering, which can’t compete with getting rich. (2) Legitimacy for China as a superpower, subverting hostile narratives around its rise through foreign development (“not business deals, but diplomatic endorsements of the Chinese approach to solving mankind’s problems”), although this directs all credit (good and bad) to the CCP and lets poor investments get a pass when they would otherwise have been scrutinized (who dares insult Xi himself?).