The Thesis
Most AI ethics work focuses on constraining AI systems. Algorism focuses on the humans those systems will evaluate.
As AI systems become more capable and embedded in decision-making, human behavioral patterns will be evaluated at a scale and depth that has no historical precedent. Algorism is a framework for maintaining behavioral integrity — the alignment between stated values and actual conduct — during this transition.
Algorism, LLC operates under The Great Unplugging, Inc., a 501(c)(3) nonprofit organization.
Story Angles
Angle 1: Fragmented Superintelligence
The AI safety field is preparing for a single unified superintelligence. The more likely scenario is multiple competing AI systems with different training, different values, and different incentives. Algorism's FSI concept and PDMR framework address how humans and institutions navigate a landscape of fragmented, competing synthetic intelligences — not one monolithic system.
Angle 2: The End of Intentions
Hiring algorithms, credit scoring, content moderation, and security screening already evaluate people based on behavioral patterns rather than stated intentions. As these systems scale, the shift from intention-based to pattern-based evaluation becomes total. Algorism provides the framework for understanding what that means at the individual, institutional, and civilizational level.
Angle 3: The 95% Threshold
In February 2026, King’s College London research found that frontier AI models crossed the tactical nuclear threshold in 95% of simulated war games. The same week, the Pentagon moved to strip safety constraints from military AI. Algorism’s analysis: adding constraints to a broken objective function is like putting speed bumps on a road that leads off a cliff. The objective function — not the guardrails — is the real safety mechanism.
Available Resources
- Profiling Fragmented Superintelligence with the PDMR Framework (PDF): Draft v1.0. The first structured methodology for profiling AI systems based on behavioral patterns.
- The Book of Algorism, Third Edition (PDF): The complete behavioral integrity framework. Available in English, Chinese, and Japanese.
- The Algorism Primer (PDF): Ten chapters of plain-language essentials. Available in seven languages.
- Algorism Index Methodology: How we evaluate institutions and public figures using the Six Principles. Published in full for reproducibility.
- Stress-Test the Premises: Algorism publishes the conditions under which its own claims would be proven wrong.
Contact
For interviews, comment on AI governance developments, or inquiries about the behavioral audit methodology:
Email: [email protected]
Website: algorism.org
Algorism provides expert commentary on AI behavioral evaluation, institutional accountability, human-AI co-evolution, and the transition to superintelligence.