Welcome to Machine Culture, a Substack and podcast on AI governance and related issues in science, technology, and public policy. This introductory essay serves as a lattice for the project, rather than an exhaustive philosophical statement. The best growth is unplanned.
The apocalypse haunts AI discourse. Superintelligent systems might turn us into paperclips, or they could lead us to post-scarcity and nudge us to seek meaning in fully immersive gaming. Leading innovators talk about building a demiurge — a “magic intelligence in the sky.” The main question is whether we can coax the new deity into altruistic behavior, both now and in the long run.
The weirdness of the 17th century suggests apocalyptic debates are unavoidable during times of social upheaval, like the one we are now experiencing. The 20th century shows, however, that the resolution of these debates can be catastrophic.
I’d like to avoid that dynamic. I am wary of eschatological framing that leads human creativity to dead ends. Ecstatic prophecies move our technological investments in the wrong direction as our most capable entrepreneurs focus on remaking universal society instead of fixing local problems. Breakthroughs still happen under these conditions, but we don’t see the missed opportunities. We need more AlphaFold and less WeWork.
Eschatology — theology concerning the end of days — offers an unstable framework for cultivating good governance. It makes us fragile. Claims about infinite expected value — visions of damnation and glorification — make weighing costs and benefits unworkable. Governance that is practical, decentralized, and bottom-up becomes irrelevant, even dangerous, in this vision.
Moreover, the eschatological framing overlooks a wide spectrum of possible futures. This is true across the domains of emerging technology and science, and it is especially the case in AI. Some combination of technological, economic, and political conditions — like a major war over Taiwan — could usher in another AI winter. Technology diffusion, too, is a slow process, and scaling training compute may not outlast this decade. There are also good reasons to be skeptical about the possibility of superintelligence.
If we are to govern over, govern with, and be governed by these systems, we should examine the roots and branches of machine culture — “culture mediated or generated by machines.” This ecological approach focuses on the emergent orders that pervade AI systems — from our micromotives as dependent, rational animals to the macrobehavior of the AI arms race. More broadly, I think of AI systems as part of the long, strange co-evolution of humans and tools. We bear the marks of this process at the genetic level, and I expect the diffusion and adoption of digital tools to continue to modify our “collective brain.”
Innovation and governance are entangled processes. They are complex, fractal, and emergent. This has practical implications for governance. Matt Mittelsteadt and Brent Skorup said it best:
Fundamentally, our framework [should be] based on an “unbundled” and “applied” view of AI technologies and policy: AI is not one technology but many, and policies must distinguish between different uses of AI.
I suspect our AI systems will continue to shape our cultural evolution in ways we cannot predict — how we create kinship, find work, and make war. Our governance should reflect and accommodate these emerging norms or risk becoming static.
This is not to dismiss concerns about global catastrophic risk or the exponentially beneficial possibilities of transformative AI. Cognitive anthropologists remind us that “cultural evolution creates adaptive tools much more rapidly than genetic evolution creates morphological adaptations.” Even if apocalyptic prophecies fall flat, we should expect long, deep changes throughout our worlds. Advances in AI will create fat-tail threats and opportunities that we ignore at our peril.
We can more soundly judge risks and opportunities if we resist the mood affiliation that comes with unbending attitudes of hype and doom. My personal moral and metaphysical commitments admittedly bias me against strong claims that AI will usher in idyllic dream worlds. Dystopias are historically plentiful, however, and we have a duty to work toward a marginally better world.
The framing I am presenting here is intentionally rhetorical. In his 1987 book, A Conflict of Visions, Thomas Sowell writes:
Visions are the foundations on which theories are built. The final structure depends not only on the foundation, but also on how carefully and consistently the framework of theory is constructed and how well buttressed it is with hard facts.
To that end, I will be looking for the “worm’s-eye” view, interviewing scholars and practitioners in AI governance. I want to understand how varieties of artificial intelligence are shaping our norms and decision-making from the boardroom to the war room. Collectively, these interviews will serve as a kind of ethnographic and philosophical investigation of emergent orders and AI governance. Coverage will likely expand to broader historical and conceptual issues in science, technology, and public policy as well. If you pull at the roots of one domain, other clusters tend to come up too.
The core segments of Machine Culture are:
◈ Deep Tracks — Podcast convos
◈ Field Notes — Short reflections
◈ Model Convos — Micro interviews
◈ Other Minds — Guest authors
◈ Weekend Weights — Friday links
This project is the direct outgrowth of my experience at the Diverse Intelligences Summer Institute, a scientific community exploring cognition in human, animal, and machine systems. (Be sure to check out DISI’s excellent Many Minds podcast, hosted by Kensy Cooperrider.) Dan Williams’s Conspicuous Cognition, William Buckner’s Traditions of Conflict, and Cody Moser’s Culturologies.co have also been instructive, although they did not know it. Jordan Schneider’s encouragement and support were also critical. My editorial work for ChinaTalk helped retool my production function and taught me to take more creative risks.
Currently, I am a research fellow at the Mercatus Center at George Mason University. I am also pursuing graduate studies in the Department of Science, Technology, and Society at Virginia Tech. This arrangement has allowed me to pursue additional study at Mason, including with Jonathan Schulz and Tyler Cowen. They directly and indirectly offered critical introductions to foundational ideas in cultural evolution and AI governance. I am grateful for their guidance.