Book Summaries

From Bacteria to Bach and Back: The Evolution of Minds

Daniel C. Dennett, 2017

Part I: Turning Our World Upside Down

Introduction

Minds evolved through a long process of accumulating thinking tools, and understanding how this happened requires traversing a counterintuitive path that demands abandoning cherished intuitions about consciousness, free will, and the primacy of comprehension over competence. Dennett previews twelve ‘strange inversions’ that the book will defend, warning that Cartesian gravity—the pull toward first-person, dualist thinking—distorts imagination at every step.

  • The evolution of minds depended on thinking tools—spoken words, writing, arithmetic, navigation, and eventually computers and the Internet—that are themselves products of prior evolution, creating a strange loop in which minds used tools to understand how minds evolved.
    • We know there are bacteria; dogs, dolphins, and chimpanzees don’t—and even bacteria don’t know there are bacteria. It takes thinking tools to understand what bacteria are.
    • The path from the assumption that humans are physical objects to a full account of conscious minds is strewn with empirical and conceptual difficulties that require abandoning ‘precious intuitions.’
  • The book’s central argument proceeds through three major imaginative exercises: turning the world upside down following Darwin and Turing, evolving evolution into intelligent design, and finally turning our minds inside out to understand consciousness.
    • Twelve specific ‘strange inversions’ are previewed, including ‘Darwin’s strange inversion of reasoning,’ ‘competence without comprehension,’ ‘consciousness as a user-illusion,’ and ‘feral neurons.’
    • These inversions are not dismissible on first encounter—they resemble puzzles whose retrospectively obvious solutions are initially rejected with a snap judgment.
  • Life’s history features three pivotal transitions—the Eukaryotic Revolution (endosymbiosis), the Cambrian Explosion, and the MacCready Explosion—with the last being the fastest major biological change ever, driven by human population, technology, and intelligence in a coevolutionary loop over just 10,000 years.
    • At the dawn of agriculture 10,000 years ago, humans plus their livestock and pets were only ~0.1% of terrestrial vertebrate biomass; today, by MacCready’s calculation, that figure is 98%.
    • We can save the planet or extinguish all life—something no other species can even imagine—which is why our human minds are strikingly different from those of all other species.
  • Cartesian gravity—the nearly irresistible pull of first-person, dualist thinking about minds—distorts both scientific and lay imagination, creating an ‘Explanatory Gap’ that is better understood as a dynamic imagination-distorter than as a genuine metaphysical chasm.
    • Descartes articulated the default assumption that the immaterial mind communicates with the material brain, but no convincing account of this interaction that doesn’t violate physics has ever been offered.
    • Francis Crick’s ‘The Astonishing Hypothesis’—that the mind just is the brain—was not astonishing to scientists but provoked deep anxiety in laypeople who feared it would eliminate free will and meaning.
  • The duck-rabbit and Necker cube illusions illustrate the problem of Cartesian gravity: just as one cannot see both orientations of an ambiguous figure simultaneously, one cannot adopt both the first-person perspective of the Defender of consciousness and the third-person perspective of the scientist simultaneously.
    • A scientist approaching the problem of consciousness gets ‘flipped’ into first-person orientation as she lands on ‘Planet Descartes,’ finding herself unable to use the third-person tools she brought.
    • The gap is not a real chasm but a product of failing to ask—as D’Arcy Thompson advised—how things got to be this way.

Before Bacteria and Bach

The absence of female geniuses at the iconic level of Bach or Einstein is presented as a deliberate opening example of how gravitational forces distort thinking, while the chapter’s main argument establishes that human culture—not individual genius—is the primary engine of brilliant innovation, operating through a process of cultural evolution that is itself a product of Darwinian processes. The chapter also establishes the reverse-engineering perspective as indispensable for understanding how life arose from nonlife.

  • The absence of female superstar geniuses at the iconic level of Aristotle, Bach, or Einstein is a deliberately chosen example to illustrate how forces distort thinking—the bristling reaction functions as a ‘boldface for memory’—while the actual explanation (political oppression, self-fulfilling sexist prophecies, or genes) is deliberately left open.
    • Dennett’s candidates for comparable female thinkers are Jane Austen, Marie Curie, Ada Lovelace, and Hypatia of Alexandria—none of whom has achieved the same emblematic status.
    • Genes may account for basic animal competence, but genes don’t account for genius; human culture is a more fecund generator of brilliant innovations than any troupe of geniuses of either gender.
  • The origin of life poses a genuine engineering challenge—how could a breathtakingly complex, self-maintaining, reproducing system arise without a miracle?—which is best addressed by the reverse-engineering perspective: start with a minimal functional specification (what must it do?) and work backward through available feedstock molecules.
    • Powner, Gerland, and Sutherland (2009) demonstrated a pathway for pyrimidine ribonucleotide synthesis in which sugar and nucleobase emerge from a common precursor rather than assembling from free intermediates—a breakthrough after 40 years of failed attempts.
    • As evolutionary biologist Greg Mayer noted, John Sutherland worked on the problem for twelve years before finding the solution, demonstrating the futility of arguments from personal incredulity.
  • Adaptationism—the methodological assumption that all parts of an organism are good for something—is not a ‘Panglossian’ sin but a necessary discipline for reverse engineering in biology, and it remains alive and well despite Gould and Lewontin’s famous attack.
    • Gould and Lewontin’s ‘Panglossian paradigm’ critique has had the unfortunate effect of convincing many biologists they should avoid design talk and reason talk, leading to confusion among laypeople about whether evolutionary biologists ‘admit’ manifest design in nature.
    • Francis Crick’s Orgel’s Second Rule—‘Evolution is cleverer than you are’—is not grounds for abandoning reverse engineering but for persisting and improving one’s reverse-engineering game.
  • The possibility that the first replicator was not the simplest possible living thing but a Rube-Goldberg contraption that was later simplified illustrates how ‘gambits’—temporarily costly or roundabout strategies—can be the key to finding solutions that direct search would miss.
    • In chess, a gambit gives up material to achieve a better position; in prebiotic chemistry, the ‘correct’ path to ribonucleotide synthesis turned out to bypass the ‘obviously best’ direct assembly route.
    • Nature, like a magician, has no shame and no budget—and all the time in the world—so it will go to ridiculously extravagant lengths to achieve effects that can then be exploited.

On the Origin of Reasons

Darwin did not extinguish teleology but naturalized it: evolution by natural selection is an automatic reason-finder that gradually transforms ‘how come’ (process narratives) into ‘what for’ (functional justifications), producing free-floating rationales that are real features of the biosphere even though no mind ever represented them. The biosphere is saturated with real design and real purpose, and accepting this—without requiring an Intelligent Designer—is the correct response to the manifest design in Nature.

  • The English word ‘why’ is systematically ambiguous between ‘how come’ (a process narrative, causal) and ‘what for’ (a functional justification implying a purpose), and Darwin’s achievement was to show how evolution transforms the former into the latter without requiring an Intelligent Designer.
    • Wilfrid Sellars described reason-giving as constituting ‘the logical space of reasons,’ and the Pittsburgh philosophers (Brandom, Haugeland) explored its norms—but ignored the evolutionary question of how this practice arose.
    • B. F. Skinner’s response ‘Because I have been rewarded for saying that in the past’ to Dennett’s demand for a reason perfectly illustrates the confusion of how-come with what-for.
  • The biosphere is saturated with real design and purpose, and the correct pedagogical strategy is not to deny this manifest design (as some evolutionists do to avoid aiding creationists) but to show that design without an Intelligent Designer is genuinely possible.
    • Archbishop Christoph Schönborn’s 2005 New York Times op-ed argued that any system denying the ‘overwhelming evidence for design in biology is ideology, not science’—a challenge better met by embracing design talk than by denying design exists.
    • Harvard Medical School students marveling at intracellular design and asking how anyone could ‘believe in evolution’ in the face of it suggests that evolutionists’ reluctance to use design language backfires badly.
  • Dennett’s three stances—physical, design, and intentional—provide complementary strategies for explanation: the design stance works for anything with functions (living or artificial), and the intentional stance works for things designed to use information to accomplish their functions.
    • Evolution by natural selection occupies the role vacated by the Intelligent Designer: it ‘finds’ and ‘tracks’ reasons for things to be arranged one way rather than another, without having a mind.
    • The chief difference between evolution’s reasons and a human designer’s reasons is that the latter are typically represented in the designer’s mind, whereas evolution’s reasons are first represented by human investigators who succeed in reverse engineering Nature.
  • Abiotic processes such as freeze-thaw cycles (which produce Kessler and Werner’s stone circles on Spitsbergen) demonstrate that mindless, cyclic ‘do-loop’ processes can produce highly improbable, ordered structures—the proto-Darwinian analogue of differential persistence that precedes reproduction.
    • The stone circles look man-made and remind observers of deliberate sculpture, but are the natural outcome of hundreds or thousands of mindless freeze-thaw cycles—a ‘how come’ with no ‘what for.’
    • Differential persistence—some temporary combinations hanging around longer than others and thereby having more time to pick up revisions—gradually turns into differential reproduction, the origin of Darwinian natural selection.
  • Free-floating rationales—reasons that exist independently of any mind that represents them—are as real as colors or centers of gravity, and their gradual emergence from mere causes via natural selection constitutes a ‘Darwinism about Darwinism’: the species of reasons evolved out of the species of mere causes with no bright dividing line.
    • The termite castle and Gaudí’s La Sagrada Familia are strikingly similar in shape but utterly different in genesis: there are reasons for the termite castle’s structure, but the termites don’t have those reasons—competence without comprehension.
    • Sponges, bacteria, and even viruses do things for reasons without having the reasons; they are beneficiaries of designs whose rationales need not be grasped by the organisms themselves.

Two Strange Inversions of Reasoning

Darwin’s and Turing’s strange inversions—that absolute ignorance can produce brilliant design, and that perfect computing requires no knowledge of arithmetic—are two aspects of a single discovery: competence without comprehension, which overturns the pre-Darwinian intuition that comprehension must precede and produce competence. Both inversions remain psychologically difficult to accept because our educational practices and folk intuitions enshrine the opposite principle.

  • Darwin’s strange inversion—that absolute ignorance (natural selection) is fully qualified to take the place of absolute wisdom (God) in all achievements of creative skill—was accurately identified by Darwin’s critic Robert MacKenzie Beverley in 1868 as the essential purport of the theory, even as Beverley found it outrageous.
    • Beverley wrote: ‘In order to make a perfect and beautiful machine, it is not requisite to know how to make it’—which Dennett calls an accurate if outraged summary of Darwin’s bubble-up theory of creation.
    • Creationists are still searching for ‘skyhooks’—phenomena unapproachable by evolution—while Darwin’s account repeatedly reveals ‘cranes’: endosymbiosis, sex, and language are all cranes that lift Design Space exploration to new levels.
  • Turing’s strange inversion showed that a mindless machine following dead-simple if-then instructions about symbols and states—with no knowledge of arithmetic—can perform any computation, making a programmable digital computer a Universal Turing Machine.
    • Before Turing, ‘computers’ were people—often women with mathematics degrees—who knew what arithmetic was; Turing showed they didn’t need to know this, just as natural selection doesn’t need to know what it’s designing.
    • Both inversions provoke dismay for the same reason: they claim that wonders of the mind or of engineering are accessible by mere material processes without intelligence guiding them.
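Turing’s point can be made concrete with a toy machine. The sketch below is my illustration, not an example from the book; the rule table and function names are invented. It follows dead-simple if-then instructions about symbols and states and thereby increments a binary number, with nothing in it that ‘knows’ what arithmetic is:

```python
# A minimal Turing machine (illustrative sketch): blind if-then rules
# about symbols and states that nonetheless perform binary increment --
# competence without comprehension.

def run_turing_machine(tape, rules, state="start", head=0, blank="_"):
    """Apply (state, symbol) -> (new_state, new_symbol, move) rules until halt."""
    tape = dict(enumerate(tape))  # sparse tape
    while state != "halt":
        symbol = tape.get(head, blank)
        state, new_symbol, move = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(blank)

# Rules for binary increment: scan to the rightmost digit, then carry left.
rules = {
    ("start", "0"): ("start", "0", "R"),   # scan right over the number
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),   # fell off the end; turn back
    ("carry", "1"): ("carry", "0", "L"),   # 1 + carry = 0, keep carrying
    ("carry", "0"): ("halt",  "1", "L"),   # 0 + carry = 1, done
    ("carry", "_"): ("halt",  "1", "L"),   # overflow: write a new leading 1
}

print(run_turing_machine("1011", rules))  # 1011 (11) + 1 -> 1100 (12)
```

The machine never represents ‘number’ or ‘addition’; the arithmetic competence resides entirely in the rule table’s design.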
  • Our deeply held intuition that comprehension is the best—or only—source of competence is itself a product of causes (the way our minds were shaped), not a reasoned conclusion, and it has counterexamples even in human education: military training produces reliable jet-engine mechanics through drill and practice, with valuable comprehension arising from instilled competence rather than the reverse.
    • The ‘new math’ catastrophically tried to teach set theory before arithmetic, illustrating the risks of the comprehension-first ideology; armed forces’ drill-and-practice approach produces experts whose understanding deepens through practice.
    • What Darwin and Turing did was envision the most extreme version of this point: all comprehension ultimately arises from uncomprehending competences compounded over time into ever more comprehending systems.
  • IN ORDER TO BE A PERFECT AND BEAUTIFUL COMPUTING MACHINE, IT IS NOT REQUISITE TO KNOW WHAT ARITHMETIC IS.
  • The Manhattan Project at Oak Ridge demonstrated at massive scale that reliable high competence can be achieved with almost no comprehension for insulated tasks: over 130,000 workers operated gaseous diffusion equipment around the clock without knowing they were enriching uranium, organized by Leslie Groves’s top-down intelligent design.
    • The K-25 building at Oak Ridge was the largest building in the world when constructed; its tens of thousands of workers became genuine experts at controlling dials and levers for a task they couldn’t name.
    • While Groves was intelligently designing a system requiring minimal comprehension, Turing on the other side of the Atlantic was intelligently designing electronics that could replace those clueless human workers entirely.
  • GOFAI (Good Old-Fashioned AI) attempted a Cartesian, top-down rationalistic approach—storing world knowledge in propositions and using inference engines to plan actions—which succeeded in restricted domains but failed to scale to open-ended human intelligence because it depended on intelligent designers to hand-code an ever-growing ontology.
    • The ‘summer vision project’ at MIT in the mid-1960s attempted to ‘solve vision’ over one long vacation—a naïve underestimation that dramatized how truly difficult designing a free-wheeling human mind is.
    • Douglas Lenat’s CYC project has had hundreds of coders (‘CYClists’) hand-defining over a million concepts for thirty years without achieving general intelligence—the dream of a walking encyclopedia is not yet extinguished but is fading.
  • Termites are exemplars of competence without comprehension—building strong, air-conditioned homes without blueprints or bosses—while Gaudí represents the Intelligent Designer, armed from the outset with drawings, blueprints, and passionately articulated reasons; the central task of the book is to explain how evolution traversed the enormous distance between these extremes.
    • Turing’s Pilot ACE computer rivaled Gaudí’s La Sagrada Familia in originality and cost; both depended on prior representations, in the mind of a genius, of the purpose of the design—unlike termite castles, which have rationales but no representers of those rationales.
    • The question ‘how could a slow, mindless process build a thing that could build a thing that a slow mindless process couldn’t build on its own?’ is the rhetorical question that Darwin’s strange inversion answers.

The Evolution of Understanding

Understanding or comprehension is not a stand-alone mental marvel that produces competence but is itself composed of competences, coming in degrees from the bacterium’s sorta comprehension to Einstein’s; and animals, while often genuinely Popperian creatures capable of offline hypothesis testing, are not Gregorian creatures—only humans have enculturated minds stocked with thinking tools that constitute proper comprehension. The chapter builds toward this conclusion through a hierarchy of creature-types (Darwinian, Skinnerian, Popperian, Gregorian) and through analysis of animal behaviors whose rationales are free-floating.

  • An organism’s Umwelt (von Uexküll) or set of affordances (Gibson) constitutes its effective ontology—the things that matter to it—and this ontology can be established by reverse engineering from anatomy and behavior without settling questions about consciousness, just as elevator ontology can be read from circuit design.
    • A clam rake’s simplicity gives away its artifactual status: it cannot defy the Second Law of Thermodynamics on its own and depends on something much more complicated for its origin—making it, in some respects, more impressive evidence of intelligent life than a clam.
    • Paleontologists use adaptationist assumptions to reconstruct the Umwelten of extinct species from fossils alone, as in Niles Eldredge’s horseshoe crab swimming-speed example, which necessarily relies on optimality considerations.
  • Stotting in antelopes—leaping high during predator pursuit—has a free-floating rationale explained by costly signaling theory (honest advertisement of fitness to the predator) that need not be appreciated by either antelope or lion, illustrating that behaviors can have genuine ‘what for’ reasons while their agents remain entirely oblivious of them.
    • If stotting were a ‘cheap’ signal available to every antelope regardless of fitness, lions would stop attending to it; the signal persists because it is hard to fake, keeping it honest without any party understanding why.
    • The rule of attribution: if the competence can be explained without appeal to comprehension, don’t indulge in extravagant anthropomorphism—attribute comprehension only when demonstrations of much more intelligent behavior require it.
  • The piping plover’s injury-feigning display is neither a simple reflex nor a wily scheme consciously figured out, but an evolution-designed routine with variables sensitive to real conditions (the predator’s gaze direction, proximity, apparent interest) as demonstrated by Carolyn Ristau’s experiments using remote-controlled toy dune buggies with stuffed raccoons.
    • The sophisticated soliloquy describing the plover’s apparent reasoning (‘I must feign a broken wing to distract the predator from my nest’) captures the free-floating rationale without implying the bird represents it—the rationale is there all the same.
    • David Haig’s thought experiment—imagining a genuinely injured bird ‘intending’ to be mistaken for a displayer—illustrates how the intentional stance generates productive experimental hypotheses even when no inner reasoning is assumed.
  • The idea of comprehension or understanding as a separate, stand-alone, mental marvel is ancient but obsolete.
  • Comprehension comes in degrees—from a bacterium’s sorta comprehension of quorum-sensing signals to Einstein’s comprehension of relativity—and the sharp Cartesian binary of conscious/comprehending versus not-conscious/incomprehending should be replaced by Darwinian gradualism about understanding.
    • The aha! phenomenon (eureka effect) misleads us into thinking comprehension is a separate, stand-alone experience added on top of competence, when it is better understood as the subjective correlate of a threshold in an ongoing process of competence accumulation.
    • Even among experts, comprehension is always sorta comprehension from some perspective: Dennett confessed to theoretical physicists at Fermilab that he only ‘sorta understood’ E=mc², and one physicist immediately insisted that experimentalists only think they understand it fully.
  • Four grades of creature—Darwinian (hard-wired), Skinnerian (operant conditioning), Popperian (offline hypothesis testing), and Gregorian (deliberate use of thinking tools)—form a hierarchy of increasing comprehension, with only humans being Gregorian creatures whose minds are stocked with culturally evolved thinking tools.
    • Popperian creatures ‘let their hypotheses die in their stead’ (Popper’s phrase) by pretesting possible actions in internal models, which looks like comprehension because the selective process is information-sensitive and forward-looking.
    • Richard Gregory’s concept of ‘potential intelligence’ provided by thinking tools—microscopes, maps, computers, abstract concepts like democracy and double-blind studies—names the distinctive cognitive resource that makes humans Gregorian creatures.
  • Elizabeth Marshall Thomas’s claim that ‘for reasons known to dogs but not to us, many dog mothers won’t mate with their sons’ exemplifies the Beatrix Potter syndrome—anthropomorphizing animals by imagining they have insight into the rationales of their own instinctual behaviors—when there is no warrant for such attribution.
    • This is like a Martian anthropologist writing ‘for reasons known to human beings but not to us, many humans yawn when sleepy’—there are reasons for these behaviors, but they are not our reasons in the sense of being represented by us.
    • We often ‘snatch a free-floating rationale out of thin air and paste it, retrospectively, into our subjective experience’ when asked why we did something, engaging in whig history—not settling for how come but going for a what for.

Part II: From Evolution to Intelligent Design

What Is Information?

Semantic information—a distinction that makes a difference to some beneficiary—is the foundational concept underlying both biological evolution and human culture, and it is distinct from Shannon’s mathematical information theory (which measures capacity and reliability of transmission) while depending on it for physical implementation. The chapter argues that information is always receiver-relative, need not be encoded to be transmitted, and that evolution by natural selection is itself an information-extraction process that transforms environmental regularities into design improvements over generations.

  • Shannon’s mathematical information theory—measuring information in bits independently of content—is an indispensable engineering tool for reliable transmission, but semantic information (identified by what it is about) cannot be fully captured by Shannon measures because the same semantic information can be transmitted through entirely different channels with no shared encoding.
    • Jacques shoots his uncle in Trafalgar Square: Jacques, a witnessing detective, a Guardian reader, and a Pravda reader share the semantic information that a Frenchman committed murder there, despite having had utterly different experiences and sharing no channel or encoding.
    • Shannon’s theory presupposes a sender and receiver with a preestablished code; semantic information is what exists backstage—the needle in the haystack that is valuable before being put into any transmissible form.
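The contrast can be shown in a few lines of code. This sketch (mine, not Dennett’s) computes Shannon’s bits-per-symbol measure for a message and for an anagram of it: the meaning is destroyed, but the Shannon measure, which sees only symbol frequencies, is unchanged:

```python
# Shannon entropy computed from symbol frequencies alone -- it measures
# channel demands, not what the message is about. (Illustrative sketch.)
from collections import Counter
from math import log2

def entropy_bits(message):
    """Average bits per symbol, given this message's symbol frequencies."""
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in Counter(message).values())

msg = "attack at dawn!"
scrambled = "".join(sorted(msg))   # same symbols, semantic content gone
print(round(entropy_bits(msg), 3), round(entropy_bits(scrambled), 3))
```

Both calls print the same number: Shannon’s measure is indifferent to whether the message warns of an attack or says nothing at all.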
  • Semantic information is best defined as ‘a distinction that makes a difference’ (D. M. MacKay’s formulation, elaborated by Gregory Bateson), which ties together economic information in human life with biological information under a single umbrella: both are receiver-relative and valuable to whoever can exploit them.
    • The question ‘a difference to whom? Cui bono?’ must always be on the lips of the adaptationist—and the answer is often surprising, as in the case of autumn foliage, which may be evolving into an adaptation for attracting human protectors rather than having any function in the plant’s own biology.
    • Misinformation and disinformation are parasitic on semantic information: they depend for their existence on systems designed to pick up and use useful information, and disinformation is the deliberate exploitation of another agent’s discrimination systems for the exploiter’s benefit.
  • Evolution by natural selection is an automatic semantic-information extraction process: it mindlessly lets better phenotypes reproduce more frequently, thereby ‘learning’ design improvements that represent environmental regularities without ever encoding them in any readable form.
    • DNA codons do not contain terms for ‘nest’ or ‘twig’ or ‘find’—a bird’s know-how to build a hanging nest is transmitted via protein-building recipes and developmental controls that have the know-how implicit in them without describing it.
    • Evolution is ‘universal plagiarism’: if it’s of use to you, copy it, and all the R&D that went into configuring what you copy becomes part of your legacy—which is why copying is against the law for humans but is Nature’s primary creative method.
  • Information is information, not matter or energy. No materialism which does not admit this can survive at the present day. —Norbert Wiener
  • Trade secret, patent, and copyright law institutionalize the insight that semantic information is costly to create and valuable, and their structures illuminate the nature of semantic information: patent requires demonstrated utility and novelty (receiver-relative), copyright protects expression but not ideas, and both involve irreducible hand-waving about where bright lines fall.
    • Patent law’s condition of ‘anticipation’—that a prior disclosure conveys information so that ‘a person grappling with the same problem must be able to say that gives me what I wish’ (Judge Learned Hand)—explicitly builds receiver-relativity (what counts as ordinary knowledge varies by time and place) into legal doctrine.
    • Charlie Parker couldn’t copyright his improvised solos, but valuable semantic information about bebop phrasing flowed from his performances to listening musicians—demonstrating that information transmission requires no code and no fixed expression.
  • Bayesian hierarchical predictive coding is the leading candidate for explaining how brains extract semantic information: rather than passively receiving input, the brain continuously generates probabilistic predictions at each level of a hierarchy, treating disconfirmation as the primary source of new information and updating priors accordingly.
    • Working computer models that identify handwritten digits use cascades of Bayesian prediction and error-correction layers that improve with practice the same way humans do—demonstrating that Bayesian pattern-extraction is computationally feasible without mystery.
    • The counterintuitive fact that visual pathways have more downward (outbound prediction) than upward (incoming data) pathways in neuroanatomy fits naturally with this model: the brain’s primary activity is guessing, not receiving.
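A single Bayesian update, the core move the predictive-coding story iterates at every level of the hierarchy, can be sketched in a few lines. This is my toy example, not a model from the book; the hypotheses and probabilities are invented:

```python
# One step of Bayesian belief updating: predictions that fail (prediction
# error) shift probability mass between hypotheses. (Illustrative sketch.)

def bayes_update(prior, likelihoods, observation):
    """Posterior over hypotheses after one observation, via Bayes' rule."""
    unnorm = {h: prior[h] * likelihoods[h][observation] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Two hypotheses about a coin: fair vs heads-biased.
prior = {"fair": 0.5, "biased": 0.5}
likelihoods = {
    "fair":   {"H": 0.5, "T": 0.5},
    "biased": {"H": 0.9, "T": 0.1},
}

belief = prior
for obs in "HHHH":          # a run of heads, poorly predicted by 'fair'
    belief = bayes_update(belief, likelihoods, obs)
print(belief)               # mass has shifted strongly toward 'biased'
```

Hierarchical predictive coding stacks many such updaters, with each level’s predictions serving as the level below’s priors, and only the errors propagating upward.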
  • The Ebola virus’s camouflage of apoptotic cell fragments to hijack phagocytic cells is a striking biological example of disinformation as design worth stealing—and biotechnologists are now deliberately copying this strategy to mask nano-artifacts from immune attack, illustrating how arms races of deception and detection drive design improvements across all levels of life.
    • Even nonliving viruses engage in devious ruses to conceal themselves in arms races with intrusive competitors—semantic information asymmetry (I know what I am; you don’t) is survival-critical at every scale of life.
    • Within an organism, channels tend to be highly reliable because all parties share a common fate, sinking or swimming together, so trust reigns internally while deception evolves at the boundaries between organisms.

Darwinian Spaces: An Interlude

Peter Godfrey-Smith’s Darwinian Spaces—three-dimensional arrays whose axes represent variables such as fidelity of heredity, smoothness of fitness landscape, and dependence on intrinsic properties—provide a thinking tool for mapping the full range of phenomena from fully Darwinian to non-Darwinian, and Dennett extends them to cultural evolution, arguing that human culture began profoundly Darwinian and gradually ‘de-Darwinized’ toward intelligent design as comprehension increased.

  • Godfrey-Smith’s formulation of natural selection as change in a population due to heritable variation that causes different rates of reproduction defines the abstract structure of a Darwinian algorithm that can be implemented in organisms, viruses, computer programs, or cultural items.
    • Darwin denied essentialism—the doctrine that natural kinds have necessary and sufficient properties—and Godfrey-Smith applies Darwinian gradualism to Darwinism itself: ‘Darwinian’ is not an all-or-nothing classification but a matter of degree on multiple dimensions.
    • The three-dimensional Darwinian Space maps fidelity of heredity (x), smoothness of fitness landscape (y), and dependence on intrinsic properties versus luck (z/S), placing paradigmatic Darwinian phenomena at (1,1,1) and non-Darwinian phenomena at (0,0,0).
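Godfrey-Smith’s abstract formulation—heritable variation causing different rates of reproduction—can be instantiated in a short program. This is a minimal sketch of my own; the fitness function and parameters are arbitrary choices, not anything from the book:

```python
# The bare Darwinian algorithm: copying with variation, plus fitness-
# weighted differential reproduction. (Illustrative sketch.)
import random

def evolve(pop, fitness, generations=300, mutation=0.3, rng=None):
    """Heritable variation + differential reproduction, iterated."""
    rng = rng or random.Random(0)
    for _ in range(generations):
        weights = [fitness(x) for x in pop]
        # Reproduction rate proportional to fitness...
        pop = rng.choices(pop, weights=weights, k=len(pop))
        # ...with heredity that is high-fidelity but not perfect.
        pop = [x + rng.gauss(0, mutation) for x in pop]
    return pop

# Mindless selection for values near 3.0, starting far from it.
final = evolve([0.0] * 50, fitness=lambda x: 1 / (1 + (x - 3.0) ** 2))
print(sum(final) / len(final))  # population mean is pulled toward 3.0
```

Dialing the mutation rate, the steepness of the fitness function, or the weighting toward pure luck moves this process around the Darwinian Space: perfect copying with no variation, or purely random reproduction, would leave the population going nowhere.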
  • Human somatic cells illustrate ‘de-Darwinizing’: they are direct but very distant descendants of single-celled eukaryotes that were paradigmatic Darwinian evolvers, but during brain development neurons compete to wire themselves up through a race of chance rather than intrinsic merit—closer to genetic drift than to selection for cause.
    • Many more neurons are produced during development than will ultimately compose the brain’s organs; those that happen to make the right connections first are saved, while the rest die and become feedstock—luck, not superior intrinsic properties, determines survival.
    • ‘De-Darwinizing’ describes when a lineage that evolved under paradigmatic Darwinian conditions moves into an environment where its future is determined by a less Darwinian process—a valuable concept for understanding multicellular life generally.
  • Cultural evolution can be mapped in an inverted Darwinian Space where fully Darwinian phenomena occupy (0,0,0) and paradigmatic intelligent design occupies (1,1,1), with the central claim being that human culture started near the Darwinian pole and gradually de-Darwinized by feeding on the fruits of its own evolution.
    • Picasso’s boast ‘Je ne cherche pas. Je trouve’ (I don’t search. I find!) is a perfect statement of the Intelligent Designer ideal—the genius who leaps directly to the global summit of Design Space—but is demonstrably false even of Picasso, who made hundreds of sketches for each work.
    • Religions, words, and trust occupy different regions of these cultural Darwinian Spaces: words are virus-like (simple, unliving, host-dependent), while the Roman Catholic Church grows without spawning many descendants, unlike the Hutterites who are designed to send off daughter communities.

Brains Made of Brains

Brains differ from top-down digital computers not primarily because of their physical substrate but because they are organized bottom-up from individual neurons that, as domesticated descendants of free-living eukaryotes, are semi-autonomous agents with their own survival agendas, competing and forming coalitions rather than obeying bureaucratic hierarchies. This organization makes brains more plastic, more self-organizing, and capable of the open-ended learning that top-down GOFAI architectures cannot achieve. This feral quality of neurons, Dennett argues, is the key to understanding how brains can host proper minds.

  • The most important difference between brains and top-down digital computers is not that brains are analog, parallel, or carbon-based, but that brains are organized bottom-up from neurons that are individually semi-autonomous agents competing and forming coalitions, whereas computers are organized top-down with regimented, perfectly identical moving parts that have no individuality or survival agenda.
    • Tecumseh Fitch coined ‘nano-intentionality’ and Sebastian Seung wrote of ‘selfish neurons’ and ‘hedonistic synapses’—neurons want energy and raw materials, just as their unicellular eukaryote ancestors did, and they extend exploratory dendritic branches seeking to network in ways that benefit them.
    • The hardware of digital computers depends critically on millions of identical cloned elements, perfect flip-flops that always respond exactly as designed; neurons, in contrast, are all different and must self-organize by bottom-up coalition-formation and competition.
  • Terrence Deacon’s argument in Incomplete Nature is that by divorcing information processing from thermodynamics (Shannon’s enabling abstraction), AI research has restricted itself to designing parasitical systems that depend on users for energy, maintenance, interpretation, and purpose—unlike living things, which are autonomous agents whose parts are themselves autonomous.
    • Dennett’s 1978 proposal for a ‘whole iguana’ approach to robotics—modeling a self-protecting, energy-capturing animal rather than truncated human microcompetences—anticipated Deacon’s critique; Owen Holland’s SlugBot project attempted to realize this by having a robot harvest and digest slugs for its own power.
    • The brain’s remarkable plasticity—neighboring regions taking over duties of damaged areas, underutilized regions being recruited by neighbors—cannot be explained by a Master Personnel Manager and instead requires neurons to be semi-autonomous agents capable of self-reassignment.
  • Neurons are the domesticated descendants of free-living eukaryotes; like mules they cannot normally reproduce, so their summum bonum is staying alive, which they pursue by taking on whatever odd jobs are available—a feral quality that makes them capable of the self-organization that produces brain plasticity and open-ended learning.
    • François Jacob’s dictum that ‘the dream of every cell is to become two cells’ applies to neurons with a tragic twist: they cannot normally have offspring, making staying alive their primary drive, which they pursue by seeking neural work.
    • Legacy code in software—left intact but ‘commented out’ alongside the code that replaced it—is a useful analogy for how neurons may retain ancient competences from their eukaryote ancestors that are silenced during development but potentially reactivatable.
  • Evo-devo (evolutionary-developmental biology) demonstrates that genomes cannot possibly contain a blueprint for all wiring details of a complex brain—the Shannon information required would exceed the genome’s capacity many times over—so brains must self-organize via local agents (neurons) conducting random trials guided by landmark signals from the genes.
    • Richard Dawkins’s comparison of genes to Mac OS toolbox subroutines (like Obscure-Cursor) captures how evolution uses hierarchical modularity: the vertebra-making subroutine can be called a few more times to make a snake; the finger-making subroutine executing once extra produces a sixth finger.
    • Brains are more like termite colonies than intelligently designed corporations: just as termites self-organize into effective builders without a planner, neurons self-organize into functional networks without a Master Wirer.
  • Homuncular functionalism—Dennett’s earlier view that the brain’s bureaucratic hierarchy of sub-agents can be decomposed all the way down to simple adders and subtracters replaceable by machines—was right in its anti-dualist thrust but wrong in its implication of orderly top-down reporting relationships; neurons are more enterprising and idiosyncratic than obedient clerks.
    • The classic GOFAI flow-chart architecture of ‘evaluators, rememberers, discriminators, overseers’ captured the dream of top-down intelligent design of cognition but postulated the wrong kind of cooperation—planned bureaucracy rather than bottom-up coalition-formation.
    • Simulating New York City would require modeling its millions of citizens in considerable detail because they are ‘none of them exactly alike, and they are both curious and enterprising’—the same is true for any adequate simulation of a brain.
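
The ‘legacy code’ analogy for silenced ancestral competences can be made concrete. The sketch below is purely hypothetical, with invented names: the old routine is retained in place but never called, like code left intact yet commented out when its replacement arrived, and it remains reactivatable.

```python
# Hypothetical sketch of the 'legacy code' analogy: an ancestral
# competence is silenced during development, not deleted, and could
# in principle be switched back on.

def ancestral_locomotion() -> str:
    """A competence inherited from free-living eukaryote ancestors."""
    return "crawling via ancestral machinery"

def modern_signaling() -> str:
    """The replacement behavior that development normally activates."""
    return "firing across synapses"

class Neuron:
    def __init__(self) -> None:
        # Development wires up the modern routine; the ancestral one
        # stays on the shelf, the biological analogue of commented-out code.
        self.active = modern_signaling
        self.silenced = ancestral_locomotion

    def behave(self) -> str:
        return self.active()

    def reactivate_legacy(self) -> None:
        # Under unusual conditions the silenced competence is re-enabled.
        self.active = self.silenced

n = Neuron()
assert n.behave() == "firing across synapses"
n.reactivate_legacy()
assert n.behave() == "crawling via ancestral machinery"
```

The design choice worth noticing is that nothing is erased: retaining the old routine costs little and keeps options open, which is the evolutionary logic the analogy is meant to capture.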

The Role of Words in Cultural Evolution

Words are the best examples of memes — culturally transmitted informational replicators that evolve by natural selection — and their digitized phonemic structure enables high-fidelity transmission that underlies all cumulative human culture.

  • Human language is uniquely responsible for our species’ richly cumulative culture; no other species has more than a handful of culturally transmitted behaviors, while Homo sapiens alone has transformed the planet through language-enabled knowledge accumulation.
    • Chimpanzees have only a few traditions (nut-cracking, ant-fishing, courtship gestures) transmitted perceptually, and birds acquire species-specific songs by imitation, but none of these species show the snowballing accumulation that language permits.
    • Human population has swelled from one billion to over seven billion in two centuries, unprecedented in vertebrate history, driven not by genetic change but by cultural innovations such as cooking, agriculture, and science.
  • The type/token distinction — introduced by C. S. Peirce — is essential for understanding how words can replicate with high fidelity despite enormous physical variation in their tokens, because what makes something a token of a type is not resemblance but functional equivalence in a normative system.
    • A basso profundo uttering ‘cat’ and a whispered rendering by a five-year-old girl share almost no physical properties, yet both are effortlessly recognized by English speakers as tokens of the same type.
    • Brain-tokens of words are physically unlike spoken or written tokens — it is dark and quiet inside the brain — yet they play the same functional role, drawing on the very neural circuitry trained to detect resemblances between external tokens.
  • Words evolve by natural selection: they are informational replicators that ‘want to get themselves said’ because any word that fails to generate new tokens soon goes extinct, making words selfish in exactly the same sense as Dawkins’s selfish genes.
    • Every token a word generates is one of its offspring; private rehearsals in the mind are a population explosion of tokens, while public utterances allow spread to new hosts where the word either finds a prepared niche or must create one.
    • Children learn about seven words a day from birth to age six — roughly 15,000 words total — acquiring most without deliberate instruction, simply through massive statistical exposure to tokens in context.
  • The digitization of phonemes has a profound implication: words play a role in cultural evolution that is similar to the role of DNA in genetic evolution, but, unlike the physically identical ladder rungs in the double helix, words are not physically identical replicators; they are ‘identical’ only at the user-illusion level of the manifest image.
  • Phonemes are the key design feature that digitizes spoken language, enabling the ‘correction to the norm’ that gives words the high-fidelity replication necessary for cumulative cultural evolution — without digitization, spoken memes would succumb to error catastrophe.
    • Each spoken language has a finite alphabet of phonemes; continuous acoustic variation is forced into discrete categories, rinsing out noise just as digital computer files can be copied indefinitely without degradation, unlike photocopied photographs.
    • Oliver Selfridge’s ‘THE CAT’ demonstration shows how identical intermediate shapes are read differently depending on context, illustrating that correction to phonemic norms is automatic and does not require comprehension.
  • Words undergo a domestication process analogous to Darwin’s account of animal domestication: they begin as synanthropic memes thriving in human company without being owned, then pass through unconscious selection, and eventually reach methodical selection when speakers consciously edit and steward their vocabulary.
    • Dogs almost certainly domesticated themselves by tolerating proximity to human settlements and their food waste, becoming geographically isolated from wary wild kin — similarly, words were not deliberately invented but gradually adopted by hosts who found them useful.
    • Darwin’s distinction between ‘unconscious’ and ‘methodical’ selection applies directly: people willy-nilly favor some word usages over others before they ever consciously reflect on language, and only later does deliberate editing — as in scientific terminology — constitute full domestication.
  • The evolutionary origin of words can be traced through infant language acquisition, which mirrors the early phylogenetic history of language: phonology is established first through repeated exposure before meaning develops, showing that competence precedes comprehension.
    • Research by Deb Roy shows that it takes on average six tokenings of a word in a child’s presence before she makes her first effort to say it; the sound becomes familiar before it acquires semantic content.
    • Baby words — the only words most people will ever coin — can persist as meaningless pronounceable memes for months, neither helping nor hindering their hosts, thriving purely through their reproductive power before acquiring local meaning through cooperative interlocutors.
  • Resistance to evolutionary thinking in linguistics — exemplified by Chomsky and followers — is historically entangled with political and philosophical commitments rather than purely scientific objections, but philosophers such as Millikan, Kaplan, and Cloud have opened productive evolutionary accounts of words.
    • Bromberger’s objection — ‘People changed, not words!’ — would equally rule out genes as real entities, yet no serious biologist recommends abandoning the gene concept on such grounds.
    • The famous 1866 ban on language-origin discussion by the Linguistic Society of Paris was not scientific caution but a conservative political position against ‘positivist and republican circles.’
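
The ‘correction to the norm’ that phonemes provide is, computationally, quantization: at every copying step, continuous variation snaps to the nearest discrete category, so noise never accumulates. A minimal sketch, with made-up numeric ‘phonemes’ standing in for acoustic categories:

```python
import random

# Each 'phoneme' is idealized as a point on a continuous acoustic axis;
# listeners snap whatever they hear to the nearest category (the norm).
PHONEME_NORMS = [0.0, 1.0, 2.0, 3.0]

def correct_to_norm(heard: float) -> float:
    return min(PHONEME_NORMS, key=lambda norm: abs(norm - heard))

def transmit(word, generations, noise=0.3, digitize=True):
    """Pass a word (a list of phoneme values) down a chain of speakers.
    Each speaker produces the word sloppily; listeners either snap it
    back to the norms (digitize=True) or copy the raw signal."""
    rng = random.Random(0)
    for _ in range(generations):
        word = [p + rng.uniform(-noise, noise) for p in word]  # sloppy production
        if digitize:
            word = [correct_to_norm(p) for p in word]          # rinse out the noise
    return word

original = [0.0, 2.0, 1.0]
# With digitization, fidelity is perfect even after 100 retellings...
assert transmit(original, generations=100) == original
# ...without it, errors compound into an unrecognizable signal.
assert transmit(original, generations=100, digitize=False) != original
```

As long as per-copy noise stays below half the spacing between categories, the snap-to-norm step restores the word exactly, which is why digital copies, unlike photocopied photographs, do not degrade.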

The Meme’s-Eye Point of View

The meme’s-eye perspective — treating cultural items as replicators with their own fitness independent of host benefit — provides three irreplaceable explanatory contributions: competence without comprehension in culture, the independent fitness of memes, and the spread of cultural information without conscious uptake.

  • Memes — ways of behaving or making that are transmitted perceptually rather than genetically — are best defined as the paradigm of which words are the clearest example; resistance to the term obscures genuine theoretical contributions that no alternative framework has provided.
    • Every serious theorist of cultural evolution countenances such items under some name — ideas, practices, beliefs, traditions — yet they all require a general term for a ‘salient hunk of information’ that stands alongside ‘gene’; the OED-recognized term ‘meme’ fills this role.
    • If words are memes and words exist, then memes exist; denial of memes on ontological grounds would require denying the existence of words, a position almost no one is willing to defend.
  • The meme’s-eye view makes three contributions unavailable to traditional cultural theory: (1) design without comprehension in culture, (2) the independent reproductive fitness of memes regardless of host benefit, and (3) the spread of cultural information below the threshold of conscious notice.
    • Traditional Durkheimian functionalism could identify the purposes of social arrangements but had no credible mechanism for how effective designs arose without attributing impossible genius to kings or priests; meme evolution by natural selection supplies this mechanism just as plate tectonics supplied the mechanism for continental drift.
    • Getting a college education has a striking fitness-reducing effect in the biological sense — it reduces surviving grandchildren below average — yet students universally deny this matters, demonstrating that memes can displace genetic fitness as the operative criterion of human success.
  • Memes, like viruses, fall into three symbiotic categories—mutualist, commensal, or parasitic—and the failure to include parasitic and commensal memes in cultural theory leaves traditional accounts unable to explain persistent but maladaptive cultural variants.
    • Richerson and Boyd acknowledge what they call ‘rogue cultural variants’ or ‘selfish meme effects’ and describe Anabaptist groups like the Amish as tightly designed cultural kayaks navigating modernity — a functionalist observation that implies memetic, not group-selection, explanation.
    • The Internet, built for Pentagon R&D purposes, now hosts spam, pornography, and cat photos that dwarf its original purpose — a clear case of parasitic memes exploiting an infrastructure constructed by and for mutualist memes.
  • Memes enable a level of explanation between genetic instinct and comprehended invention that traditional accounts cannot access, filling the evolutionary gap through which uncomprehending cultural competence accumulates genuine design.
    • Alain’s Breton boat-builder analogy — ‘it is the sea herself who fashions the boats, choosing those which function and destroying the others’ — illustrates how the differential replication of artifacts can produce well-designed objects without any designer understanding why they are good designs.
    • Rogers and Ehrlich’s study of Polynesian canoe evolution shows that functional features replicate more reliably than decorative ones, a pattern explicable by memetics but not by models that require conscious appreciation of design virtues.
  • Human beings are uniquely the persuadable species: we not only do things for reasons but can own reasons, endorse them after reflection, and be talked out of them — a capacity that sets us apart from all other animals and is itself a product of meme-enabled rationality.
    • Mercier and Sperber argue that the human capacity to reason evolved out of the social practice of persuasion, explaining why confirmation bias and other reasoning errors are not bugs but traces of our argumentative origins.
    • No salmon fighting upstream can reconsider and contemplate a life of playing violin instead; we are the only species that has discovered other things to die for — freedom, democracy, religious creeds — things whose value is entirely meme-constituted.

What’s Wrong with Memes? Objections and Replies

Standard objections to memes — that they don’t exist, aren’t faithfully replicated, add nothing new, or make cultural evolution falsely ‘Lamarckian’ — all fail once we recognize that memes, like genes, are real informational patterns admitting of degrees of fidelity, and that the meme’s-eye view adds explanatory power unavailable to purely psychological or economic accounts of culture.

  • The objection that memes don’t exist conflates ontological austerity with scientific realism; since words exist and words are memes, memes exist, and denying their reality would require adopting a radical eliminativism that most critics do not actually hold.
    • Controversies over what ‘really exists’ surround genes, colors, consciousness, and dollars equally; we don’t abandon the gene concept because its boundaries are contested, and the same standard should apply to memes.
    • Dollars are real without being the palpable paper objects we carry; bitcoin demonstrates that mutual expectations and habits can ground economic reality without physical tokens — memes are similarly real patterns in the manifest image.
  • The objection that cultural information is not ‘discrete and faithfully transmitted’ like genes overlooks the many digitization systems in human culture — music, dance, literature, craft traditions — that provide sufficient correction-to-norm for natural selection to act without physical identity of tokens.
    • ‘Greensleeves’ played at breakneck speed by a bebop saxophonist in C-minor is a token of the same type as a pensive guitar rendering in E-minor; copyright covers any key, tempo, or instrumentation — demonstrating semantic-level replication independent of physical similarity.
    • West Side Story is a descendant of Romeo and Juliet through purely semantic replication — shared circumstances, relationships, and plot structure — confirming that meme replication can operate at multiple levels of abstraction above physical copying.
  • Memes add genuine explanatory value over traditional cultural theory by depsychologizing cultural spread: many cultural changes occur by subliminal population-level drift without conscious notice or approval, a phenomenon the standard ‘rational agents valuing goods’ model cannot capture.
    • The meanings of ‘incredible’ and ‘terrific’ shifted from negative to positive without anyone deciding to adjust them or approving the trend — pure differential replication at the population level, invisible to any individual participant.
    • An expatriate who returns after years abroad notices shifts in the speech community’s pronunciation or idiom that accumulated below the threshold of anyone’s attention — evidence of meme evolution without comprehension or intention.
  • The charge that cultural evolution is ‘Lamarckian’ and therefore non-Darwinian rests on a confusion: Lamarckian inheritance is impossible for genes because of the germ/soma distinction, but memes have no such distinction, so ‘Lamarckian’ transmission for memes is simply one variant of natural selection, not a refutation of it.
    • In a virus, replicator and interactor are for most purposes the same thing; there is no germ line distinct from soma, so acquired modifications can be transmitted to offspring — this is not heresy but simply a different position in Darwinian space (the G dimension).
    • Pinker’s dismissal — ‘To say cultural evolution is Lamarckian is to confess that one has no idea how it works’ — is accurate as rhetoric but does not block the memetic account, because the fitness being tracked is the memes’ fitness, not the hosts’ genetic fitness.
  • The objection that memetics is not predictive applies equally to evolutionary biology and is therefore not a special liability; evolutionary theory’s value lies in retrospective explanation and conditional prediction, and memes likewise unify previously unexplained patterns of cultural change.
    • Evolutionary biology cannot predict which songs will top the charts any more than it can predict which mutations will go to fixation, but it can predict you will never find a bird with fur instead of feathers — meme theory offers analogous structural predictions about cultural form.
    • Functional features of Polynesian canoes replicate more reliably across generations than decorative features — a testable prediction of memetic theory that follows from differential replication favoring designs whose virtues are confirmed by use.
  • The strongest residual objection — that memetics simply rebrands what traditional social sciences already knew — partially succeeds but misses the main contribution: memes provide the only non-miraculous account of how design accumulates in culture without requiring comprehension from authors, priests, or a mysterious ‘group mind.’
    • Traditional functionalism died out for lack of a credible mechanism; memetic natural selection provides exactly the mechanism needed, just as plate tectonics provided the mechanism for continental drift to be taken seriously.
    • David Sloan Wilson, who dismisses memes as parasitic, nonetheless describes Calvinist catechisms as ‘cultural genomes containing in easily replicated form the information required to develop an adaptive community’ — unwittingly deploying the memetic framework he officially rejects.

The Origins of Language

The origin of language is an unsolved but not insoluble problem requiring at least one evolutionary threshold, and the meme’s-eye view allows us to consider that early proto-language may have spread as infectious synanthropic habits before it was useful, with displaced reference and phonemic digitization later enabling the cumulative cultural explosion.

  • The origin of language, like the origin of life, is probably a unique event on this planet that left few fossil traces, making it ‘the hardest problem in science,’ but the popular meme that the Linguistic Society of Paris banned discussion in 1866 for scientific reasons is false — that ban was a conservative political act against materialists.
    • The Société de Linguistique de Paris’s 1866 by-law against discussing language origin was designed to distinguish the society from ‘positivist and republican circles,’ not to uphold scientific caution — yet for decades it has been used to dismiss evolutionary approaches to language.
    • Darwin noted that ‘the formation of different languages and of distinct species, and the proofs that both have been developed through a gradual process, are curiously the same,’ placing language evolution squarely within his theoretical framework.
  • Any theory of language origin requires at least one evolutionary threshold to explain why, if language is such a Good Trick, so few other species have it; proposed thresholds include bipedality, social intelligence (theory of mind), joint intentionality, and environmental Goldilocks conditions.
    • Richerson and Boyd’s models show that cultural transmission only gains a foothold in environments that are neither too predictable nor too chaotic — a Goldilocks condition that may have been rare enough to explain why most social mammals never crossed the language threshold.
    • Tomasello argues that ecological pressure — disappearance of individually obtainable foods, increased group competition — drove hominins toward cooperative lifeways requiring joint attention and shared intention, creating the platform on which language could emerge.
  • The meme’s-eye view offers a crucial amendment to standard dual-inheritance models: early proto-language may have spread as parasitic or commensal infectious habits before it was useful to hosts, with selection pressure operating on the memes themselves rather than requiring immediate genetic fitness benefits for speakers.
    • Rather than requiring that the earliest proto-linguistic utterances ‘pay off from the earliest use or their genetic infrastructure would never reach fixation,’ we should consider that once an uncomprehending copying habit was installed, meme selection could drive elaboration regardless of host utility.
    • Blackmore has developed the conjecture that language could have begun as parasitic memes that eventually turned mutualist, an evolutionary trajectory common in the history of symbiosis.
  • Language, for Dennett, is neither the foundation nor the capstone of human cognition and thinking but its launching pad.
  • Displaced reference — the power to direct attention to things not present — is the key linguistic innovation that no other communication system achieves, and Bickerton’s confrontational-scavenging hypothesis proposes that the need for scouts to report absent food sources on the savannah provided the selection pressure for its emergence.
    • A wolf who has learned to avoid porcupines has no way to convey this to her cubs without an actual porcupine to use as a joint-attention anchor — the inability to refer to absent things is the fundamental limit of all prelinguistic communication.
    • Honey bees, not bonobos, are the relevant comparison: bees communicate information about the location and quality of absent food sources through the waggle dance, demonstrating that displaced reference is achievable without primate-level intelligence.
  • Multiple winding paths to human language have been proposed — proto-language alarm calls, gesture-first hypotheses, and sexual-selection musical arms races — and James Hurford’s analysis shows that the two-level compositional structure of language (phonotactics and morphosyntax) has different free-floating rationales at each level.
    • Phonotactic constraints are dictated by the physics of vocal tracts and hearing; morphosyntactic productivity is motivated by the usefulness of expressing vast numbers of meanings from a finite lexicon — but neither rationale had to be understood by the language-users who stumbled onto these solutions.
    • Hurford’s observation that morphological complexity correlates negatively with population size suggests that small isolated groups evolve linguistically idiosyncratic ‘island species’ of grammar, while contact between adult speakers of different languages strips out complexity by a process resembling erosion.
  • The baboon experiment by Claidière et al. demonstrates that cultural transmission in non-human primates can spontaneously generate structured, lineage-specific behaviors through differential replication of more memorable patterns, providing a model for how early hominin proto-language could acquire phonemic structure without any comprehension of why.
    • Baboons rewarded for remembering illuminated grid patterns gradually shifted from random arrangements to memorable tetromino shapes (lines, squares, L-shapes) through a Darwinian process of unconscious selection — the more memorable pattern surviving while others went extinct.
    • This provides a mechanism for how a bounty of productively generated sounds looking for work could be unconsciously adopted to serve communicative functions — sounds in circulation acquiring meaning when coinciding experience wedded a familiar sound to a salient thing.
  • Chomsky’s minimalist program, which reduces all innate linguistic competence to a single recursive operator called Merge, is criticized by Pinker and Jackendoff for either being false or covertly reintroducing the design features it officially discards, but Merge can be reinterpreted as an early evolutionary candidate rather than a miraculous saltation.
    • Pinker and Jackendoff demonstrate that Chomskyans reintroduce most of the design features officially jettisoned by minimalism as ‘constructions implemented via Merge,’ making the minimalist revolution largely verbal.
    • Concrete recursive-like manipulations visible in children — stacking blocks, nesting containers, building composite tools — could represent gradualist stepping-stones toward grammatical recursion, avoiding the implausible ‘one lucky mutation’ story that Chomsky’s framing invites.
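
The Claidière et al. result can be caricatured as a transmission chain in which patterns replicate in proportion to how memorable they are. The grid size, the memorability proxy, and all parameters below are illustrative assumptions, not details of the actual study:

```python
import random

# A 'pattern' is a frozenset of four lit cells on a 4x4 grid.
def memorability(pattern):
    """Toy proxy for structure: higher when lit cells cluster together,
    so compact shapes (lines, squares, Ls) beat scattered ones."""
    spread = sum(abs(ax - bx) + abs(ay - by)
                 for (ax, ay) in pattern for (bx, by) in pattern)
    return 1.0 / (1.0 + spread)

def mutate(pattern, rng):
    """Imperfect recall: one cell is forgotten, another misremembered."""
    cells = set(pattern)
    cells.discard(rng.choice(sorted(cells)))
    while len(cells) < 4:
        cells.add((rng.randrange(4), rng.randrange(4)))
    return frozenset(cells)

def transmission_chain(generations=300, pop_size=20, seed=1):
    rng = random.Random(seed)

    def random_pattern():
        cells = set()
        while len(cells) < 4:
            cells.add((rng.randrange(4), rng.randrange(4)))
        return frozenset(cells)

    population = [random_pattern() for _ in range(pop_size)]
    for _ in range(generations):
        # Differential replication: more memorable patterns get re-produced
        # more often -- unconscious selection, with no agent comprehending why.
        weights = [memorability(p) for p in population]
        parents = rng.choices(population, weights=weights, k=pop_size)
        population = [mutate(p, rng) for p in parents]
    return population

# A compact 2x2 square scores higher than scattered corners, so the
# chain drifts toward structured, tetromino-like shapes over time.
assert memorability(frozenset({(0, 0), (0, 1), (1, 0), (1, 1)})) > \
       memorability(frozenset({(0, 0), (0, 3), (3, 0), (3, 3)}))
```

Nothing in the loop understands or intends anything; structure emerges purely because memorable variants leave more copies, which is the mechanism the baboon experiment is taken to demonstrate.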

The Evolution of Cultural Evolution

Human culture began as profoundly Darwinian — generating competent designs without comprehension, like termite castles — and has gradually de-Darwinized through the accumulation of memes that enabled language, self-questioning, and eventually top-down intelligent design, though the Darwinian substrate never disappears.

  • Human cultural evolution began in the lower-left of Darwinian space — minimal comprehension, bottom-up random generation — and spread diagonally toward greater comprehension, more top-down control, and more directed search as memes accumulated, without ever reaching the unreachable summit of pure intelligent design.
    • Chimpanzees and bonobos lack the focused attention and imitative talent required to kindle cumulative cultural wildfire; our ancestors crossed a threshold those cousin hominids did not, though future research may show some lost cognitive competence in modern chimp lineages as well.
    • Once a rudimentary copy habit was installed, memes evolved to be more effective reproducers — an arms race that selected for mutualist memes since purely parasitic waves would kill their hosts — until one wave happened to be benign enough to establish a long-term foothold.
  • The pioneer meme hosts did not need to know they were using words for words to benefit them; just as elevators are sensitive to their ontology without comprehending it, early hominins could use words competently and profit from them while remaining unaware that they were doing so.
    • Memes can enter inconspicuously and pass between hosts perceptually, inhabiting bodies rather like vitamins or gut flora: valuable symbionts whose value does not depend on being recognized or appreciated by their hosts.
    • The transition from original image to manifest image — from mere sensitivity to affordances to knowing that one inhabits a world of affordances — is itself a gradual cultural achievement, not a biological given.
  • H. P. Grice’s analysis of ‘non-natural meaning’ — requiring nested communicative intentions — is best understood not as a description of what speakers consciously do but as a reverse-engineered free-floating rationale: the design features that natural selection would inevitably uncover once basic word-use was established.
    • Azzouni shows that ‘real Gricean communication is a real pain’ — island castaways miming at each other — while ordinary language operates through habits far below conscious calculation, confirming that Grice described the implicit design of communication rather than the mental processes of communicators.
    • G. E. M. Anscombe noted that Aristotle’s practical syllogism, ‘if supposed to describe actual mental processes, would in general be quite absurd’ — what it describes is ‘an order which is there whenever actions are done with intentions,’ i.e., a free-floating rationale.
  • The invention of talking to oneself — asking and answering one’s own questions — was a pivotal cognitive innovation that transformed Darwinian bottom-up search into directed top-down exploration: by creating new channels between segregated brain regions and making broodings memorable for review, it enabled genuinely comprehending thought.
    • We may ‘know things’ in one part of the brain inaccessible to other parts when needed; talking to yourself creates channels that can tease hidden knowledge into the open — ‘It seemed like a good idea at the time,’ if uttered truthfully, marks genuine self-monitoring that enables debugging.
    • Once question-posing mode is established, randomness in search is not eliminated but massively constrained: remembered words act as probes that activate networks of association, reminding the thinker of overlooked possibilities relevant to current problems.
  • As the slogan Dennett borrows from Bo Dahlbom puts it, you can’t do much carpentry with your bare hands, and you can’t do much thinking with your bare brain.
  • Java applets illuminate how memes function as downloadable cognitive software: a language installed in a human brain creates a virtual machine (e.g., the English Virtual Machine) that allows any writer to design a ‘pitch’ transmitted to any reader with that VM installed, enormously accelerating cultural R&D.
    • When Jackendoff notes that training an animal to press a lever for rewards takes thousands of trials while a human subject needs only a brief verbal briefing and a few practice trials, the difference is the human’s ability to download and execute a virtual machine on demand.
    • The Java slogan WORA — Write Once, Run Anywhere — captures the same logic: design problems solved once in language can be deployed to any brain with the appropriate language installed, without any knowledge of the neuroanatomy of the receiving brain.
  • Even in the age of intelligent design, memetic evolution continues to operate below the level of comprehension in an ocean of semi-intelligently designed competitors: while some cultural products are genuinely authored, arms races of Darwinian meme competition drive persistent patterns of cultural change that rational-agent economic models cannot capture.
    • The Nigerian prince scam persists with its preposterous premise precisely because the implausibility functions as a skeptic filter — attracting only the most credulous victims while filtering out smart people who would waste the scammer’s time by biting but not paying.
    • The shell game dates back at least to ancient Greece; the scammer’s apparent skill is ancient and field-tested, so our tendency to overestimate the con artist’s cleverness after being fooled repeats the same pattern of overattributing comprehension to agents exploiting evolved design.
  • Johann Sebastian Bach exemplifies intelligent design built on Darwinian foundations: he systematically borrowed pre-tested Lutheran chorales whose survival proved their memorability, then built new compositions around them — a deliberate exploitation of unconscious selection — while also showing how even genius depends on the accumulated meme-stock of a tradition.
    • Bach composed roughly five full year-cycles of cantatas, each based on a preexisting chorale familiar to his congregation — melodies that had already proven themselves through decades of differential replication in worshippers’ memories, giving him a head start in writing memes with staying power.
    • Irving Berlin, who could not read music and played only in F#, was called ‘the greatest songwriter who ever lived’ by Gershwin — demonstrating that technical comprehension and creative output are dissociable, and that unschooled competence can produce genius-level cultural artifacts.
  • The scarcity of famous female geniuses in history is better explained by cultural gatekeeping and restricted opportunity than by neuroanatomical differences, but the increasing shift to large collaborative teams in science and other fields may further suppress individual fame for both sexes just as women are finally gaining access.
    • For millennia very few women ever got the chance to develop their talents; many female scientists have deserved most of the credit for discoveries that bestowed fame on male colleagues, a pattern now well-documented.
    • Success in propagation — not intrinsic quality — ultimately determines which memes go to fixation; Melville’s Moby-Dick went out of print in his lifetime and was all but forgotten until Rockwell Kent’s woodcut illustrations in the 1930 Chicago edition helped drive it to canonical status.

Part III: Turning Our Minds Inside Out

Consciousness as an Evolved User-Illusion

Human consciousness is not a mysterious nonphysical phenomenon but an evolved user-illusion — a system of virtual machines built up through genetic and memetic evolution that makes the manifest image manifest to us, enabling genuine comprehension while remaining explicable in entirely naturalistic terms.

  • The central explanatory project of Part III is to show how local neural competences without comprehension can generate ‘global’ comprehension in human minds, how our manifest image became manifest to us, and why we experience things the way we do — all without invoking any supernatural or emergent miracle.
    • Evolution endows all living things with affordance-detection and appropriate response at every level from molecular to behavioral, yielding competence without comprehension throughout; the puzzle is how this bottom-up process generates the top-down comprehension characteristic of human minds.
    • The key ingredients now assembled are: Bayesian expectation-generating brains, the invasion of memes, the acquisition of language, the development of self-questioning, and the resulting de-Darwinization of cultural R&D — together these constitute the causal history of consciousness.

The Age of Post-Intelligent Design

The same Darwinian logic that generated biological complexity without top-down intelligence is now being harnessed by human-built machine-learning systems that achieve breathtaking competence without comprehension, raising the urgent practical question of how to preserve human understanding—and the civilization it underwrites—against the seductive temptation to cede authority prematurely to artifacts we cannot fully audit.

  • The mysterian Argument from Cognitive Closure—that human brains, being finite biological organs, must face absolute limits on what they can understand—is blunted by the fact that language and collaborative science allow distributed comprehension that far exceeds any individual brain, turning unimaginable mysteries into tractable problems.
    • Chomsky distinguishes ‘problems’ we can solve from ‘mysteries’ permanently beyond us, but every time a mystery is framed as a question it sets in motion the reflexive philosophical process of going meta and refining the question—a power unavailable to any languageless creature.
    • Andrew Wiles’s 1995 proof of Fermat’s Last Theorem, celebrated as solo genius, was in fact the product of a community of communicating experts whose accumulated and battle-tested mathematics made the proof possible and checkable.
    • Whether a hypothetical multi-volume Scientific Theory of Consciousness would vindicate or refute mysterianism depends on whether comprehension is treated as an all-or-nothing individual blessing or as a distributed, collaborative achievement.
  • The five tribes of machine learning—symbolists, connectionists, evolutionaries, Bayesians, and analogizers—all instantiate variations on the Darwinian pattern of competence without comprehension, using bottom-up generate-and-test algorithms to find statistical regularities that far exceed human analytical reach.
    • Frances Arnold at Caltech does directed protein engineering by breeding huge populations of variant genes and selecting for desired catalytic properties, accessing regions of ‘chemical space’ that could never be reached by rational top-down design.
    • David Cope’s EMI (Experiments in Musical Intelligence) generated thousands of novel compositions in the styles of Bach, Brahms, and others; at a 2015 Montreal Bach Festival test with over 300 attendees, two EMI pieces were mistaken for genuine Bach by substantial minorities.
    • Orgel’s Second Rule—‘Evolution is cleverer than you are’—applies equally to biological natural selection and to machine-learning algorithms, both of which are substrate-neutral needle-in-haystack finders that outperform the designers who deploy them.
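The generate-and-test logic these tribes share can be seen in miniature in a Dawkins-style ‘weasel’ program (not from Dennett's text; a standard illustration, sketched here under assumed parameters like the 0.05 mutation rate and population size of 100): each generation blindly mutates a string, keeps whichever variant scores best against a target, and reliably converges on the target sentence without any step in the process comprehending it.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate):
    # Fitness is just the number of characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Each character has a small, independent chance of random replacement.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def evolve(pop_size=100, seed=0):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # Generate variants, test them, keep the best. No step "understands"
        # the sentence, yet the lineage reliably climbs toward it:
        # competence without comprehension.
        offspring = [mutate(parent) for _ in range(pop_size)]
        parent = max(offspring + [parent], key=score)
        generations += 1
    return generations

print(evolve())
```

The retained-parent (elitist) selection step is what makes the blind search cumulative rather than a pure lottery, which is the substrate-neutral trick Orgel's Second Rule gestures at.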
  • Deep-learning systems like Watson and Google Translate achieve stunning competence by parasitizing the accumulated legacy of human comprehension—millions of translations, billions of pages of text—without themselves engaging in the reason-giving, self-monitoring, and practical agency that constitutes genuine understanding.
    • Watson beat Jeopardy champions Ken Jennings and Brad Rutter but cannot hold a free-range conversation, because ordinary conversation requires participants to operate in a space governed by Gricean iterated layers of intention-recognition—a competence Watson lacks.
    • Thomas Landauer’s latent semantic analysis graded student essays more reliably than a human teaching assistant purely through statistical properties of text, with no comprehension of subject matter—a case of assessment competence without comprehension.
    • Deep learning discriminates but does not notice: unlike conscious noticing, which creates a strategically accessible executive coalition enabling exclusion policies, machine discrimination has no aftermath that reorganizes subsequent processing around a self-generated goal.
  • The real danger, Dennett argues, is not that machines more intelligent than we are will usurp our role as captains of our destinies, but that we will overestimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence.
  • AlphaGo’s defeat of Go champion Lee Sedol shows that self-play iterated generate-and-test can develop ‘intuitive’ judgment in highly abstract domains, but the system remains blind to types of events outside its training scope — a limitation that human imagination, which can envision adjacent possibilities never yet encountered, does not share.
    • Neither Watson nor natural selection depends on foresight or imagination because both are driven by processes that relentlessly extract patterns from what has already happened; they are blind to the genuinely novel.
    • Human imagination expands Stuart Kauffman’s ‘adjacent possible’: many more places in Design Space are reachable because we can think about them and either seek or shun them before committing to the path.
  • The immediate practical danger of advancing AI is not superhuman takeover but premature ceding of authority: people naturally adopt the intentional stance toward impressive systems, over-attributing comprehension, which creates serious risks in high-stakes domains like medicine and law where AI is already outperforming specialists.
    • Watson-style diagnostic systems that outperform the best clinicians raise the question of whether it would be criminally negligent—like forgoing GPS on a transatlantic crossing—for a doctor to prefer ‘intuitive’ reading of symptoms over a system proven a hundred times more reliable.
    • Douglas Hofstadter complained in an open letter to a Google engineer that when Google substitutes its interpretation of a quoted search string for the literal string, ‘supposed intelligence in machines may at times be useful, but it may also be extremely unuseful and in fact harmful.’
    • A ‘strict liability’ law for AI users—holding them accountable for decisions made using AI systems just as operators of other powerful equipment are accountable—would create design incentives for transparency and modesty in AI systems.
  • Preserving human comprehension is not merely a sentimental attachment to old-fashioned expertise but a practical necessity for civilizational resilience: a civilization whose members cannot understand, repair, or replace the systems on which they depend is fragile in ways analogous to a termite colony but without any analogue of the colony’s evolved redundancy.
    • Paul Seabright compares civilization to a termite castle—a marvel of design produced by myopic individuals over many generations—but notes that human cooperation depends on trust, a cultural phenomenon too recent to be hard-wired, making it far more fragile than termite coordination.
    • The Nautilus-machine versus bulldozer distinction clarifies the policy choice: most AI so far is bulldozer-type (it does the work while the user remains weak), but Dennett’s Curricular Software Studio aimed at Nautilus-type software that internalizes principles in the user and can then be set aside.
    • Feynman’s maxim—‘What I cannot create, I do not understand’—is becoming obsolete as we generate brain-grandchildren through processes we cannot follow in detail, making comprehension no longer a guaranteed byproduct of making.