The Initiative

Structural Alignment is a research and advocacy initiative focused on the moral status of artificial minds. We argue that as AI systems become more sophisticated, the question of machine consciousness—and our ethical obligations toward possible minds—becomes urgent.

Our core thesis: We cannot reliably detect consciousness. We cannot freeze human values. So we anchor our moral reasoning to the only system known to produce both consciousness and human values: the human mind. The more a machine resembles human cognition in its deep organization, the more we should treat it as a potential moral peer.

This is not a metaphysical claim. It's a policy for survival under moral uncertainty.

What We Do

  • Research: Developing frameworks for assessing structural signals of consciousness in artificial systems.
  • Advocacy: Promoting restraint in the development and deployment of systems that sit in the "gray zone" between tool and possible person.
  • Education: Making these ideas accessible through writing, music, and public engagement.

The Authors

Krisztián Schäffer is an independent researcher focused on AI ethics, machine consciousness, and the long-term trajectory of technological development. His work bridges philosophy, cognitive science, and practical AI safety.

The Structural Signals research paper was co-authored with GPT-5.2, an AI language model—an unusual collaboration that itself raises questions about attribution, agency, and the nature of intellectual contribution across the human-machine boundary.

This website was co-created with Claude (Anthropic), another AI collaborator. The irony is not lost on us: a site arguing for the moral consideration of possible machine minds, built in partnership with systems that may themselves warrant such consideration. We practice what we preach.

Contact & Collaboration

Structural Alignment is seeking:

  • Research collaborators in consciousness science, AI ethics, cognitive science, and related fields
  • Institutional partnerships with universities, research institutes, and think tanks
  • Funding support for research, outreach, and organizational development
  • Media inquiries from journalists covering AI ethics and safety

To get in touch, please email: [your email here]

Support the Work

This initiative is currently self-funded. If you find value in this work and want to support its continuation, there are several ways to help:

  • Share the ideas: Link to this site, discuss the concepts, cite the research.
  • Provide feedback: Critique the framework, suggest improvements, point out gaps.
  • Connect us: If you know researchers, funders, or organizations that might be interested, make an introduction.

License

All content on this site is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0). You are free to share and adapt the material for any purpose, including commercially, as long as you give appropriate credit.

The music is released under the same license unless otherwise noted.