AI Capabilities & Alignment Consensus Project (AICACP)
Advancing research, dialogue, and collaboration to navigate AI with clarity and care
Purpose: World models, A(G)I, and the hard problem(s) of life-mind continuity: towards a unified understanding of natural and artificial intelligence
Authors Include: Douglas Hofstadter, Michael Levin, David Krakauer, Melanie Mitchell, Alison Gopnik, Michael Graziano, Josh Tenenbaum, Sam Gershman, Tom Griffiths, Stuart Russell, Gary Marcus, and others
Topics:
What are the varieties of world models worth modeling, and what properties do they have with respect to inference and learning?
Which kinds of world models might be associated with which kinds of ‘conscious’ phenomena?
Which kinds of world models might be associated with LLMs, and to what extent might this change with further proposed technological developments (e.g. attempts to incorporate multimodality)?
Which kinds of world models are characteristic of all life, ranging from animals to plants and fungi, and even individual cells?
Research Publications
World Models Special Issue (early 2026 publication in Philosophical Transactions of the Royal Society A)
Agency Special Issue (submissions open in 2026)
Guest Editors for the World Models Special Issue:
Workshops
AICACP Workshop 1 (2026)
AICACP Workshop 2 (2027)
Purpose: Explore what concepts like “world models” and “agency” mean across disciplines—AI, neuroscience, cognitive science, and philosophy—and how different interpretations affect our understanding of AI capabilities, safety, and alignment.
Topics:
Definitions and boundaries of key concepts (e.g. “world models” and “agency”) in artificial and biological systems.
How world models relate to planning, reasoning, and generalization.
Empirical methods for testing whether AI systems “understand” the world.
The implications of different world model architectures for AI risk, interpretability, and control.
Location: TBD
Media
Podcast Series (Early 2026)
Round Tables & Public Outreach (Early 2026)
Purpose: Advance cross-disciplinary discussion and public engagement around AI alignment and consensus.
Topics:
How ideas about AI connect across disciplines.
The implications for research and technological development over the coming year.
How these discussions inform and constrain speculation about future intelligence research and possibilities for human-technology co-evolution.
Location: Online
About AICACP
The AI Capabilities & Alignment Consensus Project (AICACP) is a multi-year initiative designed to reshape the conversation around AI capabilities, alignment, and regulation.
We advance this mission through research projects, educational initiatives, and community outreach, building bridges between disciplines and perspectives.
Promoting Discourse: creating structured forums for experts with diverse perspectives to engage in productive conversations.
Scientific and AI Advancement: clarifying key concepts like “world models” and “agency” across disciplines.
Public Understanding of AI: providing accessible explanations of complex issues and forums where both experts and knowledgeable laypeople can contribute to a shared understanding.
Policy and Regulatory Guidance: informing discussions around AI governance.