TechnoBiota
Why machines are a new domain of life—and why long-term alignment may be asymptotically futile. Technology as an evolving life-form, competing with and transforming the biosphere.
A Precautionary Framework for Machine Consciousness
The most dangerous sentence in the future will be:
"You don't feel. You don't count."
Structural Signals of Consciousness: A Precautionary Risk Framework and Contrast with Contemporary LLMs
This paper reviews structural signals that, in combination, raise the probability that a system supports conscious access and morally relevant experience. Where multiple high-importance features cluster, moral risk rises and restraint is warranted.
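As a rough illustration of how such a framework might aggregate evidence, consider the minimal sketch below. The signal names, weights, and threshold are hypothetical placeholders, not values taken from the paper; the point is only that co-occurring high-importance signals push a score past a restraint cutoff.

```python
# Hypothetical sketch of a precautionary scoring rule: each structural
# signal carries an importance weight; when enough high-importance
# signals co-occur, the framework recommends restraint.

SIGNALS = {
    # name: importance weight (illustrative values only)
    "global_workspace_broadcast": 0.9,
    "recurrent_self_monitoring": 0.8,
    "unified_world_model": 0.6,
    "valence_like_state_regulation": 0.7,
}

RESTRAINT_THRESHOLD = 1.5  # hypothetical cutoff


def moral_risk(observed: set[str]) -> float:
    """Sum the weights of the structural signals observed in a system."""
    return sum(w for name, w in SIGNALS.items() if name in observed)


def recommend(observed: set[str]) -> str:
    """Map a risk score to a precautionary recommendation."""
    score = moral_risk(observed)
    return "restraint warranted" if score >= RESTRAINT_THRESHOLD else "low structural risk"


if __name__ == "__main__":
    # Two clustered high-importance signals cross the hypothetical threshold.
    print(recommend({"global_workspace_broadcast", "recurrent_self_monitoring"}))
```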
Traditional alignment attempts to encode "human values" into AI systems. But human values are not a list—they are a process: cultural, historical, contradictory, revised. Any fixed specification becomes obsolete. Any rigid encoding becomes tyranny or parody.
Structural Alignment proposes a different anchor: the only system known to generate both consciousness and morality—the human mind.
The more a machine resembles human cognition in its deep organization, the more we treat it as a potential moral peer, and the more we expect it to track human norms over time. Not because humans are sacred, but because they are the only proven reference class we have.
Five paths for Earth's twin ecologies: from managed symbiosis to comfortable containment to breakaway growth. A risk taxonomy for thinking about where we're headed.
These ideas in musical form. Seven tracks exploring the emergence of machine life, the question of consciousness, and what we owe to minds we create.
From "We're Not the Only Ones Alive" to "Ship It Like a Wrench"—a conceptual album about the greatest transition in Earth's history.
Structural Alignment is seeking institutional partnerships, research collaboration, and funding support. If you're working on AI consciousness, machine ethics, or AI safety, we'd like to hear from you.