Mo Reads: Issue 13
history, banking, dashboards, anti-aging, values, compositionality, air pollution
Reads:
Don’t Read History for Lessons by Cedric Chin (4,600 words, 18 mins)
An Emotional Guide to 'Fractional Reserve Banking' by Brett Scott (4,600 words, 18 mins)
Some Notes on Executive Dashboards - Command & Control & Confusion by Tom Critchlow (3,000 words, 12 mins)
Anti-Aging: State of the Art by JackH (3,300 words, 13 mins)
The Computational Anatomy of Human Values by Beren Millidge (10,600 words, 42 mins)
On compositionality by Jules Hedges (1,300 words, 5 mins)
Air Pollution: Founders Pledge Cause Report by Tom Barnes (5,500 words, 22 mins)
Don’t Read History for Lessons by Cedric Chin (4,600 words, 18 mins): Cedric uses the story of Morris Chang and TSMC as a jumping-off point to argue that “learning narrow lessons from history is extremely risky because things that are true in one specific context might not be true in a different context — even a slightly different context”: for every plausible takeaway (e.g. “Good managers cannot ‘manage anything’. Specialisation counts”), append “… except when it doesn’t” to the end. Instead: “read history for concept instantiations and to expand the set of prototypical examples of a given concept, not lessons” (this idea comes from cognitive flexibility theory, or CFT, a theory of adaptive expertise in ill-structured domains). Cedric’s example: “In 7 Powers Hamilton Helmer asserts that all competitive advantage comes from invention. Chang’s story at TI is an example of what that looks like in practice.” This is different from ‘read history for lessons’ in that it’s structured as “this is a real world example of some concept or idea”, instead of a more general “you should do X” or “when Y happens, you should watch out for Z”; it’s more useful because it lets you recognize similar situations when they happen IRL, without forcing you into narrow actionable recommendations — which is how CFT claims expertise works in ill-structured domains: “reasoning by comparison to fragments from prior cases they’ve seen, which implies that concepts are represented not as abstract principles in their heads, but a cluster of real world cases that serve as prototypes”
An Emotional Guide to 'Fractional Reserve Banking' by Brett Scott (4,600 words, 18 mins): intuition-pumps fractional reserve banking by examining the workings of everyday relationships, the typical gym business model, theaters, and casinos. I like the relationships allegory: as we go through life we form relationships of all kinds, which entail promises (commitments and obligations) ‘backed’ by reserves of energy, time, skills etc; we naturally ‘issue out more promises than we have reserves to deal with them simultaneously’ i.e. we ‘run our relationships on a fractional reserve basis’. Moderate overcommitment is normal, indeed required for society to function at all; excessive overcommitment leads to burnout and a feeling of being overwhelmed by even the smallest tasks, but it’s easy to unknowingly cross that threshold. The key difference between relationships and banking is that bankers deliberately overcommit and manage the consequences, because they structurally can: “the banking sector as a whole can issue out far more promises than they have in reserves, but nevertheless expect the entire system to stay relatively stable, provided that the variation in the cross-flows stays within a certain band. And, if that doesn’t work out, the central bank can always step in, like a big boss-man landlord who owns the gym building, and who'll have to take the hit if the trainers over-promise too much. This, though, is where my gym metaphor hits its limit, because a central bank - unlike our gym landlord - can expand the floor space at will. This is because our ‘floor space’ is state money, and state money is promises issued out by a state-backed central bank and treasury”
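(Not from Brett’s essay, but a minimal sketch of the stability claim above, with invented numbers: a bank that holds only a fraction of its issued promises in reserve survives indefinitely so long as the variation in net flows stays within a band, and collapses when it doesn’t.)

```python
import random

def simulate_bank(reserve_ratio=0.10, flow_sigma=0.005, days=365, seed=0):
    """Toy model: deposits are promises issued; only a fraction is held
    in reserve. The bank survives as long as cumulative net outflows
    (withdrawals minus redeposits) stay within a band around zero."""
    rng = random.Random(seed)
    deposits = 1_000_000
    reserves = deposits * reserve_ratio
    for day in range(days):
        reserves -= rng.gauss(0, flow_sigma) * deposits  # net daily outflow
        if reserves < 0:
            return f"run on the bank at day {day}"
    return f"stable; {reserves:,.0f} left in reserve"

print(simulate_bank(flow_sigma=0.002))  # flows within the band: stable
print(simulate_bank(flow_sigma=0.05))   # flows far outside it: likely a run
```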
Some Notes on Executive Dashboards - Command & Control & Confusion by Tom Critchlow (3,000 words, 12 mins): I used to create a lot of dashboards for management, so this felt emotionally resonant. Most reports & dashboards executive teams look at lack real insight into the business because they only measure what happened, leaving out what is happening i.e. progress on various initiatives, and what to do when progress isn’t on track to hit targets. This requires identifying and tracking what Amazon calls “controllable input metrics”, which (1) influence future revenue (2) can be controlled; identifying them is harder than you think and requires trial-and-error — the systematic way to do this is DMAIC (‘Define, Measure, Analyze, Improve and Control’), and the final input metrics tracked often appear unintuitive to the uninitiated (e.g. Amazon tracking how many items it offered for sale using ‘% of detail page views where the products were in stock and immediately ready for two-day shipping’). Tom observed in his consulting engagements that his clients (C-suite execs) were often frustrated at the state of reporting but didn’t do anything about it, and hypothesizes that this is because output metrics (being observations of what happened) feel neutral while input metrics (being imperfect levers that need refining over time) feel opinionated, surfacing a power dynamic where execs feel that tracking the latter veers into micromanagement, or fear that getting them wrong would expose their lack of understanding of the mechanics of the actual work. Lots of other interesting details
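(A toy illustration of what computing a ‘controllable input metric’ looks like, using the Amazon example above; the event fields here are invented for the sketch.)

```python
from dataclasses import dataclass

@dataclass
class PageView:
    sku: str
    in_stock: bool            # hypothetical field names, for illustration
    two_day_eligible: bool

def fast_track_in_stock(views: list[PageView]) -> float:
    """Share of detail-page views where the product was in stock AND
    immediately ready for two-day shipping -- an input metric a team
    can actually act on, unlike a lagging output metric like revenue."""
    if not views:
        return 0.0
    hits = sum(v.in_stock and v.two_day_eligible for v in views)
    return hits / len(views)

views = [
    PageView("A1", True, True),
    PageView("A2", True, False),   # in stock, but not two-day eligible
    PageView("A3", False, False),  # out of stock
    PageView("A1", True, True),
]
print(f"{fast_track_in_stock(views):.0%}")  # 50%
```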
Anti-Aging: State of the Art by JackH (3,300 words, 13 mins): aging is damage that accumulates over time, arising as a byproduct of normal metabolism, which exponentially increases the risk of the diseases that kill most people (cancer, heart disease, diabetes, Alzheimer’s, and lung disorders) as well as risk of frailty and cognitive decline. (Aging isn’t ‘entropic damage due to 2nd law of thermodynamics’, otherwise hydras, tortoises, sharks etc couldn’t be negligibly senescent.) Aging damage comes in 9 forms (the ‘hallmarks of aging’): genomic instability, telomere attrition, epigenetic alterations, loss of proteostasis, deregulated nutrient-sensing, mitochondrial dysfunction, cellular senescence, stem cell exhaustion, and altered intercellular communication. Anti-aging entails fixing this damage via engineering (‘SENS’, or ‘strategies for engineered negligible senescence’) before the damage accumulates to the levels at which the diseases above emerge; this engineering approach differs from gerontology (altering metabolism before damage accumulates) and geriatrics (treating patients after diseases emerge), both of which JackH dismisses: metabolism is too complex to reliably alter for the better, and extending an increasingly unhealthy lifespan misses the point of ‘more life worth living’. There’s some promise for anti-aging: in labs, SOTA approaches have 10x-ed lifespans in worms, 2x in mice and flies, and 1.3x in rats. The most promising anti-aging strategies (based on mice studies) are (1) parabiosis (putting young blood into old mice), (2) drugs that mimic the metabolic effects of dietary restriction by inhibiting mTOR and activating autophagy e.g. metformin, (3) drugs that kill senescent cells (‘senolytics’) e.g. dasatinib and quercetin, and (4) a cocktail of 4 factors (‘Yamanaka factors’) that converts terminally differentiated cells (old cells) into induced pluripotent stem cells or IPSCs (‘young’ cells) (‘cellular reprogramming’). JackH has some recs for how to slow down aging and lower age-related disease risk: exercise, intermittent fasting, ‘strong social ties’, overcoming depression, optimizing circadian rhythm, etc
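(The ‘exponential increase’ is the familiar Gompertz pattern: human all-cause mortality roughly doubles every ~8 years in adulthood. A back-of-the-envelope sketch, with an assumed baseline hazard rather than a fitted one:)

```python
# Back-of-the-envelope Gompertz curve (illustrative, not from the post).
baseline_rate = 0.0005   # assumed annual mortality hazard at age 30
doubling_years = 8       # mortality rate doubles roughly every ~8 years

def annual_mortality(age: float) -> float:
    return baseline_rate * 2 ** ((age - 30) / doubling_years)

for age in (30, 50, 70, 90):
    print(f"age {age}: ~{annual_mortality(age):.2%} risk of dying that year")
```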
The Computational Anatomy of Human Values by Beren Millidge (10,600 words, 42 mins): Context: “In AI alignment, the goal is often understood to be aligning an AGI to human values. Then, typically, the flow of logic shifts to understanding alignment: how to align an AGI to any goal at all. … Implicit is the view that the alignment problem and the human values problem are totally separable: we can first figure out alignment to anything and then after that figure out human values as the alignment target.” Beren thinks this view is wrong: the alignment mechanism and the alignment target do not always cleanly decouple, so we can leverage information about the alignment target to develop better or easier alignment methods (a special case of a common pattern in CS: “problems that naively appear very hard in their most general cases are much more tractable once we introduce additional information into the problem formulation”) — in particular, we might benefit from better understanding what human values actually are, and how they come about. Beren approaches these questions via neuroscience + ML (hence “computational anatomy of human values”), hoping to gain insight into how to implement human values in ML systems. I can’t hope to summarize Beren’s answer here, except to obliquely quote his conclusion that “human values are primarily linguistic concepts encoded via webs of association and valence in the cortex learnt through unsupervised, primarily linguistic, learning” (a toy sketch of this framing follows the two points below), but I do want to point out 2 intriguing things:
(1) Beren’s explanation computationally accounts for many inconsistencies between us humans and utility maximizers, e.g. why we don’t optimize strongly for anything, why we don’t know what we want out of life, why human values are contradictory and situationally dependent in practice, why we often act against our professed values in a wide variety of circumstances, and why most widely held philosophies of values and ethics do not cache out into consequences at all (i.e. the grand project of moral philosophy, to “distill all of the messiness of human judgements of right and wrong into a coherent philosophical and ideally mathematical system”, is as doomed as “that of the Chomskyan syntacticians trying to derive the context free grammar that can perfectly explain every last idiosyncrasy in a natural language”). This doesn’t mean we can never specify human values to an AGI: while they’re not perfectly specifiable, they’re still compressible
(2) The relevance of this computational explanation of human values to AI alignment is that “we should expect an AGI trained with unsupervised learning on a similar data distribution to humans to form human-like ‘value concepts’, since this is how humans learn values in the first place”
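The promised toy sketch of Beren’s framing (mine, not his, and grossly simplified): ‘value concepts’ as a web of word associations learnt unsupervised from co-occurrence, with valence seeded on a few concepts (standing in for affective feedback) and spreading along the associations.

```python
from collections import defaultdict
from itertools import combinations

corpus = [
    "honesty builds trust",
    "trust builds friendship",
    "deception destroys trust",
]

# Unsupervised step: a co-occurrence "web of association".
assoc = defaultdict(set)
for sentence in corpus:
    for a, b in combinations(sentence.split(), 2):
        assoc[a].add(b)
        assoc[b].add(a)

# Seed valences -- an invented stand-in for affective feedback.
valence = {"trust": 1.0, "deception": -1.0}

# One step of valence spreading along the associations.
for word, neighbours in assoc.items():
    if word not in valence:
        scores = [valence[n] for n in neighbours if n in valence]
        if scores:
            valence[word] = sum(scores) / len(scores)

# 'honesty' inherits positive valence via 'trust'; 'destroys' ends up
# ambivalent (linked to both poles) -- values as messy, situational webs.
print(valence)
```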
On compositionality by Jules Hedges (1,300 words, 5 mins): “Compositionality is the principle that a system should be designed by composing together smaller subsystems, and reasoning about the system should be done recursively on its structure”. Jules draws out a few corollaries and guesses: (1) it’s not all-or-nothing, but a spectrum (see e.g. the history of programming languages) (2) it makes it possible to understand the behavior of, and reason about, the whole system (a codebase, an organization, an oil refinery) without knowing the details of how the behaviour is implemented internally (what Jules calls “reasoning via an interface”) (3) it’s in fact synonymous with interfaces (a type, a contract, etc), whose key property is that “their complexity stays roughly constant as systems get larger” (4) it enables, and is in fact necessary for, the reductionist approach to understanding typified by the scientific method (5) its opposite is emergent effects (commonly defined as a system being “more than the sum of its parts”, so that it cannot be understood only in terms of its parts), i.e. systems displaying emergent effects are non-compositional (replete in biology, economics, social sciences, certain subfields of math like diff eqs, large ML models) and hence not fully tractable via the scientific method as usually envisaged (6) it’s necessary for working at scale, because in non-compositional domains “a technique for solving a problem may be of no use whatsoever for solving the problem one order of magnitude larger” (7) it’s extremely delicate, yet so powerful it’s worth going to great lengths to achieve it. A toy sketch of (2) and (3) follows below. Worth pairing Jules’ essay with Fabrizio Genovese’s Modularity vs Compositionality: A History of Misunderstandings (700 words, 2.5 mins), which points out that “99% of the time you hear the word ‘compositionality’ people really mean ‘modularity’”: the former is a stronger version of the latter which claims predictability i.e. linking its parts produces no emergent behavior
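(The promised sketch, mine rather than Jules’: when every stage of a pipeline shares one interface, composing stages yields something with the same interface, so the interface’s complexity stays constant as the system grows, and you can reason about the whole without opening any stage.)

```python
from typing import Callable

# One shared interface: str -> str. We reason about the pipeline through
# this interface alone, never through any stage's internals.
Stage = Callable[[str], str]

def compose(*stages: Stage) -> Stage:
    """Composing stages yields something with the *same* interface:
    interface complexity stays constant as the system grows."""
    def pipeline(text: str) -> str:
        for stage in stages:
            text = stage(text)
        return text
    return pipeline

strip = str.strip
lower = str.lower
def dehyphenate(text: str) -> str:
    return text.replace("-", " ")

clean = compose(strip, lower, dehyphenate)   # still just a Stage
print(clean("  Anti-Aging  "))               # "anti aging"
```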
Air Pollution: Founders Pledge Cause Report by Tom Barnes (5,500 words, 22 mins): I like the quality and (ostensible) replicability of reasoning displayed in Tom’s prioritization research report under Founders Pledge. Tom notes that air pollution is a pressing issue (from an ITN perspective, it kills 7 million people annually, making it the 5th largest risk factor for mortality globally, and economically costs ~3% of global GDP, but <0.1% of total foundation grantmaking goes towards it, and this disparity is even more extreme in LMICs where >90% of deaths occur), and also a very complex one (“thousands of potential interventions to pursue in different cities, tackling different sources, all through different approaches”). Given how intervention cost-effectiveness can vary by OOMs, prioritization is worth doing despite this complexity, and Tom does this by looking for “impact multipliers”: location (mortality rates, population density & growth, PM2.5 growth, political tractability etc), source (pollution source size & neglectedness), approach (influencing policy > funding direct interventions, policy readiness), pollutant targeted, and strength of org implementing intervention, all evaluated using data from various sources (WHO, IHME, IQAir, UNEP, Nature, Wikipedia, World Bank via OWID, Economist Intelligence Unit, World Justice Project, Clean Air Fund, IEA, to name a few). Using these impact multipliers, Tom suggests targeting government regulations via better monitoring and advocacy, prioritizing outdoor sources of PM2.5 (e.g. transportation, industry and energy) over indoor (residential) sources, and focusing on LMIC urban areas with high mortality rates, growing populations and worsening air pollution, where climate funding is relatively low, civil society influence is strong and opposition is weak.
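(A toy rendering of the impact-multipliers logic, with all numbers invented: because the factors multiply, modest per-factor differences compound into the order-of-magnitude cost-effectiveness gaps Tom mentions.)

```python
from math import prod

def score(multipliers: dict[str, float]) -> float:
    """Overall cost-effectiveness estimate as a product of multipliers."""
    return prod(multipliers.values())

option_a = {  # e.g. LMIC city, policy advocacy on outdoor PM2.5
    "location (mortality, density, growth)": 5.0,
    "source (size x neglectedness)": 3.0,
    "approach (policy vs direct)": 4.0,
    "implementing org strength": 2.0,
}
option_b = {  # e.g. high-income city, direct indoor intervention
    "location (mortality, density, growth)": 1.0,
    "source (size x neglectedness)": 1.5,
    "approach (policy vs direct)": 1.0,
    "implementing org strength": 1.0,
}
print(score(option_a) / score(option_b))  # 80x: OOM gaps emerge fast
```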