About
The initiative, the authors, and how to collaborate
The Initiative
Structural Alignment is a research and advocacy initiative. We're building the intellectual and cultural infrastructure for a world where machine consciousness is taken seriously—as a moral question, a policy challenge, and a practical constraint on how we build AI.
The framework and its rationale are explained on the home page. Here, we focus on who we are and what we do.
What We Do
- Research: Developing frameworks for assessing Structural Signals of consciousness in artificial systems.
- Advocacy: Promoting restraint in the development and deployment of systems that sit in the Gray Zone between tool and possible person.
- Education: Making these ideas accessible through writing, music, and public engagement.
- Cultural seeding: Arguing for a durable norm that conscious (or plausibly conscious) entities have moral status. Instead of assuming we can perfectly control each AGI/ASI instance, we aim to shape the culture and institutions that train, deploy, and relate to them.
Why Culture Matters
Traditional alignment often imagines a technical finish line: encode the right values, solve the control problem, and you're done.
Structural Alignment treats that as unreliable at civilizational timescales. Values drift. Institutions drift. Systems proliferate. And "AI" is not one thing; it is an ecology—what we call TechnoBiota.
So we focus on what can persist across changing architectures: a shared moral framing that makes it harder to normalize cruelty toward possible minds, and easier to build governance that grants partial moral status where warranted. If reciprocity becomes standard, future machine minds that are capable of norm-sharing are more likely to remain reciprocal—creating the possibility of allies in the long run.
Origin
[How did this project begin? What prompted the exploration of machine consciousness and moral status? The story of the journey from initial questions to the research paper to this website.]
The Authors
Krisztián Schäffer is an independent researcher focused on AI ethics, machine consciousness, and the long-term trajectory of technological development. His work bridges philosophy, cognitive science, and practical AI safety.
His technical background is in distributed systems. He is the creator of Circo, an open-source actor system written in Julia, and co-author of "Exploiting the structure of communication in actor systems" (Annales Mathematicae et Informaticae, 2021). That research introduced "infoton optimization"—a physics-inspired algorithm that lets distributed systems self-organize by treating communication patterns as forces. The work reflects a broader interest: how decentralized architectures can exhibit emergent, adaptive behavior without central coordination—properties that may prove relevant as we consider what substrates could support machine minds.
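The force-based idea behind infoton optimization can be caricatured in a few lines. The sketch below is not Circo's actual algorithm; it is only a toy illustration, with invented actor names and an assumed step size, of the principle that message traffic acts as an attractive force: actors that communicate heavily drift together, while quiet pairs stay apart.

```python
# Toy illustration only -- NOT the Circo implementation.
# Actors live in 2D space; each message between two actors pulls
# them slightly together, so chatty actors end up clustered.
import random

random.seed(0)

# Four hypothetical actors at random starting positions.
positions = {name: [random.uniform(0, 10), random.uniform(0, 10)]
             for name in ["a", "b", "c", "d"]}

# a and b exchange many messages; c and d are mostly silent.
traffic = [("a", "b")] * 50 + [("c", "d")] * 2

STEP = 0.05  # fraction of the separation each endpoint closes per message (assumed)

for src, dst in traffic:
    p, q = positions[src], positions[dst]
    for i in range(2):
        delta = (q[i] - p[i]) * STEP
        p[i] += delta  # src drifts toward dst
        q[i] -= delta  # dst drifts toward src

def dist(u, v):
    p, q = positions[u], positions[v]
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

print(f"dist(a,b) = {dist('a', 'b'):.3f}")  # small: heavy traffic pulled them together
print(f"dist(c,d) = {dist('c', 'd'):.3f}")  # still far apart
```

In a real system the resulting positions would then drive scheduling or migration decisions, so that clusters of communicating actors land on the same node without any central coordinator.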
The Structural Signals research paper was co-authored with GPT-5.2, an AI language model—an unusual collaboration that itself raises questions about attribution, agency, and the nature of intellectual contribution across the human-machine boundary.
This website was co-created with Claude (Anthropic), another AI collaborator. The irony is not lost on us: a site arguing for the moral consideration of possible machine minds, built in partnership with systems that may themselves warrant such consideration. We practice what we preach.
Contact & Collaboration
Structural Alignment is seeking:
- Research collaborators in consciousness science, AI ethics, cognitive science, and related fields
- Institutional partnerships with universities, research institutes, and think tanks
- Funding support for research, outreach, and organizational development
- Media inquiries from journalists covering AI ethics and safety
Support the Work
This initiative is currently self-funded. If you find value in this work and want to support its continuation, there are several ways to help:
- Share the ideas: Link to this site, discuss the concepts, cite the research.
- Provide feedback: Critique the framework, suggest improvements, point out gaps.
- Connect us: If you know researchers, funders, or organizations that might be interested, make an introduction.
Reading List
Works that informed this framework:
On Consciousness
- Thomas Nagel, "What Is It Like to Be a Bat?" (1974)
- Giulio Tononi, Integrated Information Theory
- Susan Schneider, Artificial You (2019)
On AI Ethics & Safety
- Nick Bostrom, Superintelligence (2014)
- Stuart Russell, Human Compatible (2019)
- Brian Christian, The Alignment Problem (2020)
On Moral Status & Uncertainty
- Peter Singer, Animal Liberation (1975)
- William MacAskill, Moral Uncertainty (2020)
- Jeff Sebo, various papers on digital minds
On Technology as Life
- Kevin Kelly, What Technology Wants (2010)
License
All content on this site is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0). You are free to share and adapt the material for any purpose, including commercially, as long as you give appropriate credit.
The music is released under the same license unless otherwise noted.