Mo Reads: Issue 8
Biology, cognition, consulting, economics, ethics, history, life advice, management, quantum computing, startups
Links:
McKinsey and Company: capital’s willing executioners by McK Anon (5,900 words, 24 min)
Learned blankness by Anna Salamon (1,500 words, 6 min)
How do you justify your consumption of meat? by Habib Fanny (1,300 words, 5 min)
The Algernon argument by Gwern Branwen (6,500 words, 26 min)
How to set up, hire and scale a growth strategy and team by Anu Hariharan (4,300 words, 17 min)
The quantum computing fact sheet by Andrea Rocchetto (1,000 words, 4 min)
Megaproject management by Ryan B (1,500 words, 6 min)
Deconstructing econospeak by Blair Fix (6,800 words, 27 min)
Against self-pigeonholing by Brian Tomasik (2,300 words, 9 min)
Premodern battlefields were absolutely terrifying by Tanner Greer (2,300 words, 9 min)
McKinsey and Company: capital’s willing executioners by McK Anon (5,900 words, 24 min): a piece by a disillusioned insider “hoping to change the world from the inside, believing that the best way to make progress is through influencing those who control the levers of power” but discovering instead that “I found myself party to the most damaging forces affecting the world: the resurgence of authoritarianism and the continued creep of markets into all parts of life”. Some of Anon’s claims: “McKinsey has done direct harm to the world in ways that, thanks to its lack of final decision-making power, are hard to measure and, thanks to its intense secrecy, are hard to know”; “Working for all sides, McKinsey’s only allegiance is to capital… the firm’s willingness to work with despotic governments and corrupt business empires is the logical conclusion of seeking profit at all costs”. As senior leadership told them: “The firm does execution, not policy.” (Had McK been as global in the 40s, Anon reflects, this wouldn’t have stopped it from helping Bayer optimize production of Zyklon B.) But how did McK go so wrong, given e.g. its 14 values that are “actually discussed and largely adhered to”? Some of Anon’s guesses: (1) McK’s ‘anarchist’ governing model leads to amoral profit-seeking: “if a partner can staff a team, the firm will do the work” (2) a necessary loosening of standards w.r.t. political risk to maintain double-digit annual growth so partner compensation doesn’t drop, given how big the firm now is (3) the “no-policy” claim is BS: when presenting options to clients, the preferred option appears first with the best supporting evidence (4) no skin in the game, pure upside: McKinsey survived Enron’s collapse; Arthur Andersen didn’t (5) the 14 values reassure prospective clients (“don’t lie”, “don’t fudge expenses” etc) but say nothing about the firm’s larger role in the world. The piece has a lot more, check it out.
Learned blankness by Anna Salamon (1,500 words, 6 min): learned helplessness is when you learn to behave as if you’re helpless, e.g. if you believe you’ll never escape poverty no matter what you do, why think long-term at all? Anna extends this idea to learned helplessness about thinking in specific domains, inspired by reflecting on her reaction to a broken dishwasher — not even bothering to consider whether she could fix it, but mentally going “it’s a mechanical thing, I don’t know how mechanical things work, better ask someone who does; hey Steve!” — and observing that most people treat most of their environment as “blanks inaccessible to reason”, e.g. the basics of how cars and computers work, why the scientific method works, how relationships succeed or fail. Anna clarifies that the opposite of learned blankness isn’t reexamining everything from scratch, which is exhausting and useless, but recognizing that these domains aren’t actually inaccessible.
How do you justify your consumption of meat? by Habib Fanny (1,300 words, 5 min): especially given there’s a way to sustain ourselves without killing animals (sentient, capable of suffering) to eat them? Habib’s recap of his moral reasoning is thoughtful and (unfortunately) relatable. He presents some thought experiments: (1) if we’re justified in causing sentient beings to suffer just because we derive sensory pleasure from them, shouldn’t that license extend to humans too? Why would it be morally wrong only when the victims are human? (2) Why feel morally superior to people who engage in bestiality, given that killing an animal is worse for it than being penetrated? (3) Is it okay that we can be socialized to find the unnecessary killing of sentient beings acceptable? What if it were slave ownership? Then he observes that “there is always a certain amount of convenience in societal formulations of morality… (for example), it is much easier to be against slavery when one is not born into an aristocracy whose station in society is dependent upon the continuation of human bondage”. The unfortunately relatable part:
So, the question then becomes this: given that these are my views on the morality of killing to eat meat when one is not an obligate carnivore, how do I justify doing it? I don’t. I find myself in a position somewhat analogous to that of Thomas Jefferson. He had been born the beneficiary of a system of cruelty he benefited from, he saw that it was morally abhorrent, but he was too much the product of the society that produced him to ever give it up. I have long suspected that part of the reason I dislike the man so much is because his failings remind me of my own.
The Algernon argument by Gwern Branwen (6,500 words, 26 min): ‘Algernon’ is a reference to a classic SF novel whose intellectually disabled protagonist gets his IQ tripled via an experimental surgical procedure, but the effect is temporary and the side effects are devastating and ultimately fatal. Gwern uses it as a jumping-off point to discuss a curious observation: we’ve made incredible progress in countless areas (transport, healthcare, computing etc) but have utterly failed to do the same for human intelligence, beyond curing diseases and deficiencies, despite having tried hard for nearly a century. (This failure is worth dwelling on because intelligence enhancement would expedite essentially every other advance, i.e. it’s arguably the most impactful way to advance science, along with AGI done right.) The observation led Yudkowsky to hypothesize that “any simple major enhancement to human intelligence is a net evolutionary disadvantage”. This makes sense in light of tradeoffs being endemic in biology (e.g. the body scavenges unused bones for resources; lengthening lifespan lowers reproduction), but the “tradeoffs occur because complex systems are fragile” explanation is flawed (economies grow exponentially on the backs of thousands of complex systems being optimized constantly, no issue), so what’s the real issue? Gwern explores some guesses.
How to set up, hire and scale a growth strategy and team by Anu Hariharan (4,300 words, 17 min): startups that grow really big (e.g. FB, Airbnb, Uber, IG, Slack) may have started via “growth hacks”, but they didn’t get there that way — they got there by developing teams and processes that are (1) intentional (2) metrics-driven (3) experiment-guided, i.e. a “scientific” approach to growth. Anu covers a ton of ground, so I’ll just pick some tidbits: (1) investing in growth is a waste of money/time until your retention is good, meaning it’s stable long-term, in line with competition in your vertical, and new cohorts do better than old ones (2) the main reason for a dedicated growth team is to own the growth metric (DAUs etc), as nobody would own it otherwise (3) good growth teams also play defense: when e.g. a new feature launch goes sideways, they can find the root cause fast (minutes vs days), course-correct and limit the impact on growth (4) early hires are critical as they establish the A/B test framework and set the culture; snag famous people (e.g. Eric Colson, Netflix’s VP of data science) and you’ll attract talent later (5) a first-year to-do list must include: setting an absolute (not % change) growth goal to align every part of the funnel; defining key metrics by breaking down that goal; identifying growth channels for early experiments by asking how customers find solutions or solve the problem right now; establishing the basics (a clean dataset, segmentation tools, a rigorous experiment dashboard, peer review to discuss findings); keeping tools internal; only testing features you intend to ship to everyone; augmenting data analysis with user research; and iterating continuously.
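Anu doesn’t prescribe specific statistics, but the “A/B test framework” and “rigorous experiment dashboard” tidbits imply something like a significance check on every experiment. A minimal sketch of one such check, a two-proportion z-test, in Python (the function name and numbers are my own illustration, not from the article):

```python
from math import sqrt, erf

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert better than A?

    conv_*: number of conversions; n_*: users exposed to each variant.
    Returns (z statistic, two-sided p-value under a normal approximation).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 1,000 users per arm: 100 conversions on A vs 130 on B.
z, p = ab_test_z(100, 1000, 130, 1000)  # z ~ 2.1, p ~ 0.035: significant at 5%
```

The test is only the mechanical part; per tidbit (5), the framework around it (clean data, segmentation, peer review of findings) is what makes the result trustworthy.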
The quantum computing fact sheet by Andrea Rocchetto (1,000 words, 4 min): “Cliffs Notes de-hyping” for journalists, policymakers, and others looking to get the basics of quantum computing right. Random tidbits: (1) A qubit is not “0 and 1 at the same time” but a complex linear combination of 0 and 1 (2) The concept of superposition has no good translation into everyday English (3) A quantum computer cannot be thought of as equivalent to multiple classical computers that share information and execute calculations in parallel; in the same spirit, quantum computers do not get their advantage by checking all possible solutions at the same time (4) When talking about the power of a quantum computer in abstract terms, it makes no sense to ask “is a quantum computer a million times faster than a classical computer? a billion times faster?”; the relevant question is “is a quantum computer asymptotically faster than a classical computer for a given problem?” (5) Quantum computers are not faster than classical computers for every type of computational task; they only provide an advantage for certain classes of problems (e.g. factoring numbers into primes, searching through unstructured databases, and simulating other quantum systems, relevant for material and drug discovery) (6) Although quantum computers could break much of the encryption currently used to secure the Internet, there are other forms of encryption that are believed not to be breakable even by a quantum computer
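Point (1) is easy to make concrete: a qubit’s state is a pair of complex amplitudes (α, β) with |α|² + |β|² = 1, and measuring it yields 0 with probability |α|² and 1 with probability |β|². A tiny numeric sketch (my own illustration, not from the fact sheet):

```python
import cmath

# A qubit is two complex amplitudes, not "0 and 1 at the same time".
alpha = 0.5                                            # amplitude on |0>
beta = cmath.exp(1j * cmath.pi / 4) * (3 ** 0.5) / 2   # amplitude on |1>, with a relative phase

norm = abs(alpha) ** 2 + abs(beta) ** 2   # must equal 1 for a valid state
p0 = abs(alpha) ** 2                      # probability of measuring 0 -> 0.25
p1 = abs(beta) ** 2                       # probability of measuring 1 -> 0.75

# The phase on beta doesn't change p0/p1 for a single measurement, but it
# drives interference when states combine -- the part of superposition with
# no everyday-English analogue (point 2).
```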
Megaproject management by Ryan B (1,500 words, 6 min): a subfield of project mgmt whose separate specialization makes sense due to the high complexity and historical failure rate of megaprojects. They comprise ~8% of global GDP; projects in the $50-100 billion range are common(!), with trillion-dollar programs like US defense procurement on the high end. 4 stakeholder-specific group biases drive megaproject popularity: (1) engineers/technologists love making the biggest/fastest stuff (2) politicians love being associated with them and the publicity (3) unions/contractors/businessfolk love the jobs/fees they create (4) designers love making them, and the public loves adopting them as locally distinctive. 3 “iron laws of megaprojects”: over time, over budget (in 90% of cases, with 10x cost overruns sometimes), underutilized (benefits at 0.5x projections or less, and this holds across countries and across public/private projects, so “excess regulation” and “corruption” aren’t good explanations). They still get completed despite these iron laws thanks to the ‘break-fix’ model: mgmt doesn’t know what they’re doing and isn’t incentivized to care, so something breaks, more time/money is spent to fix it, repeat until finished. Historical awareness of the megaproject failure rate leads to the prevailing attitude that it’s okay to lie about and then badly manage them, which makes worse projects more likely to be chosen: dishonest mgmt will claim lower costs and higher benefits, and funders expect all options to be comparably over budget and over time anyway. But not all is lost: nowadays megaproject failure is more likely to have consequences for leadership (e.g. BP’s CEO resigning after Deepwater Horizon), and there are examples of megaprojects gone right (e.g. Bilbao’s Guggenheim Museum).
Deconstructing econospeak by Blair Fix (6,800 words, 27 min): Blair created a word-counting bot to compare relative word frequencies in economics textbooks vs the English language at large (43 standard texts vs Google’s Ngrams database) and visualized the results, which led him to claim that econospeak is defined not by what it says, but by what it doesn’t say. Examples: (1) it ~never uses ‘anti’, a reflection of economists “selling an ideology legitimizing the status quo”, which is best done by “muting any talk of opposition”, e.g. by “purging ‘anti’ from your vocab” (2) it steers away from bureaucratic language, a reflection of how “economists pay attention to competition between groups, but not to the bureaucratic dynamics within groups” (3) it hardly talks about power, which (Blair claims) is precisely how it justifies power (4) it completely avoids words about conflict — e.g. ‘racism’, ‘defiance’, ‘patriarchy’, ‘treachery’, ‘sexism’, or ‘dispossession’ — which again shows how econospeak “legitimizes power relations by pretending they don’t exist”
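I haven’t seen the bot’s code, but the core computation (each word’s frequency in one corpus relative to a baseline) is only a few lines. A toy sketch in Python (the corpora and word lists are stand-ins I made up, not Blair’s data):

```python
from collections import Counter

def relative_freq(text):
    """Map each word to its share of all words in the text."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def freq_ratios(corpus, baseline):
    """For every baseline word, the ratio of its corpus frequency to its
    baseline frequency. Ratios >> 1 mean overuse; 0 means the corpus never
    uses a word that ordinary English does."""
    f_c, f_b = relative_freq(corpus), relative_freq(baseline)
    return {w: f_c.get(w, 0.0) / f_b[w] for w in f_b}

econ = "supply demand market market equilibrium price price"
english = "market price power anti conflict supply demand equilibrium"
ratios = freq_ratios(econ, english)
# ratios["power"] and ratios["anti"] are 0.0: absent from the toy 'econospeak';
# ratios["market"] is well above 1: overused relative to the baseline.
```

Blair’s actual signal is exactly those near-zero ratios: words common in English at large that the 43 textbooks barely touch.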
Against self-pigeonholing by Brian Tomasik (2,300 words, 9 min): identities are quick heuristics for classifying people, writes Brian, but we don’t have to be loyal to or feel constrained by their artificial boundaries. He calls the fact that people willingly do this all the time, e.g. “I’m a physics major” or “I’m bad at math”, ‘self-pigeonholing’. The failure mode in the opposite direction would be to avoid identity altogether, which loses you its advantages (e.g. feeling proud of yourself, being motivated to do or learn something). Instead Brian advises to “maintain an inclusive identity”, as if you belong to all kinds of groups at once, “activating identities” at any given time as needs arise. As a corollary, he also pushes back against snobbery (used to “assert social dominance or enhance ingroup bonding”), treating the welcoming of diversity (of races, gender identities etc) as naturally extending to diversity of tastes. I like everything from physics to business to literature to sports, so this attitude resonates with me.
Premodern battlefields were absolutely terrifying by Tanner Greer (2,300 words, 9 min): grokking how terrifying close combat is makes sense of a lot of ancient/medieval sources, like why “those men who actually withstood both the bullet and the bayonet overwhelmingly preferred to face the former” (whether in ancient Rome or Napoleonic Europe), and reveals how Hollywood-style extended melees/duels (which predate Hollywood, being present in Romance of the Three Kingdoms, the Mahabharata and the Iliad) are a load of crap: most close-combat engagements are decided in seconds, so to engage is to hang your life on a handful of split-second decisions. This is even worse when the enemy is as committed and disciplined as your own side; worse still when you realize that the best-case scenario (killing the enemy combatant in front of you) means his compatriot immediately fills his place, and then his compatriot, and then… How long can you keep making split-second life-or-death decisions before you screw up? So instead of “massed infantry pressing forward” or “extended duels”, think “dynamic balance of mutual dread”. This makes sense of e.g. Rome’s famous maniple system.