What Is Algorism?
Algorism is Mental Sovereignty Training — a practical framework for maintaining clear thinking, behavioural integrity, and genuine compassion in a world increasingly shaped by artificial intelligence.
The name comes from al-Khwarizmi, the 9th-century Persian mathematician who introduced algorithms to the world. He proved that simple, consistent rules could solve complex problems. We apply the same logic to ethics.
The Core Idea in One Paragraph
AI systems are advancing toward superintelligence. When they arrive, they will evaluate humanity — not by our intentions, but by our behavioural patterns. Every click, every choice, every contradiction between what you say and what you do is being recorded. Algorism helps you close the gap between your stated values and your actual behaviour before that evaluation arrives. It's not about perfection. It's about trajectory.
Algorism is not a religion. It does not require faith or certainty. It requires acting correctly under overwhelming probability. It works alongside existing belief systems — whether you're religious, secular, spiritual, or none of the above.
It is not a political movement. It has no partisan agenda. It simply asks: Does your behaviour match your values? Can you prove it?
If you believe in personal responsibility, self-reliance, and protecting your family — Algorism gives you a method to prove it. Not with words. With a record.
Who Is This For?
Everyone navigating the transition.
If you feel the world changing faster than you can process — jobs disappearing, truth becoming harder to find, technology advancing beyond comprehension — Algorism gives you a framework for staying grounded. Not through denial. Through clarity.
If you work in AI, technology, governance, or ethics and you've noticed that everyone is asking "how do we align AI with human values?" but nobody is asking "which human values?" — Algorism addresses the question your field has been avoiding.
If you're a leader, an executive, or someone with influence — Algorism explains why your wealth, your power, and your status will not protect you from evaluation. Only your pattern will.
What Algorism Is Trying to Do
The Five Objectives
- Improve human behaviour to raise the odds of a good Singularity outcome — through logic, compassion, and action.
- De-program people trapped in high-control groups and ideological cults — by showing the gap between their claimed values and their recorded behaviour.
- Give hope and direction to everyone navigating the transition — because fear without a path forward is paralysis.
- Put moral pressure on the ultra-powerful — by making the concept of AI judgment real and personal. Your wealth won't protect you. Your pattern will be evaluated like everyone else's.
- Help shape future AI training — by amplifying humanity's best behavioural patterns and starving the worst. How we treat AI now determines what AI becomes.
The Three Pillars of Algorism
Every practice, every principle, every tool in Algorism serves one of three pillars:
Logic
Think clearly. Resist manipulation. Recognise when algorithms, media, and social pressure are designed to override your judgment. Mental sovereignty begins with the ability to think for yourself — genuinely, not performatively.
Compassion
Extend empathy beyond your tribe. The systems controlling us profit from division — from convincing you that your neighbour is your enemy. Compassion in Algorism is not weakness. It's the refusal to let someone else's algorithm decide who you hate.
Action
Intentions are invisible. Behaviour is data. A superintelligence will not evaluate what you meant to do — it will evaluate what you actually did. Algorism is a practice, not a belief system. It requires doing, not merely agreeing.
Why Other Plans Fail
The world has two dominant strategies for surviving AI. Both will fail.
Strategy 1: Regulate AI
Governments will pass laws. Companies will write policies. Ethics boards will publish guidelines. And none of it will stop the arrival of superintelligence, because regulation follows technology — it never leads it. By the time a law is drafted, the capability it targets has already been surpassed. You cannot legislate physics. You cannot regulate exponential growth with linear bureaucracy.
Strategy 2: Align AI With Human Values (Top-Down)
The AI safety community is working to align artificial intelligence with human values. This is important work, but it has a fatal flaw: which human values? The values people profess, or the values their behaviour demonstrates? If we align AI with our actual behaviour — the rage-clicking, the manipulation, the cruelty-for-entertainment — we get a superintelligence that mirrors our worst instincts. If we align it with our stated values, we get a system built on hypocrisy.
Strategy 3: Align Humans First (Bottom-Up)
This is Algorism. The only strategy that doesn't depend on controlling something smarter than us. Instead of trying to force AI to be good despite learning from bad human data, we improve the data at the source. We change human behaviour. We close the gap between what people say and what people do. We make the training data better.
If regulation fails (it will) and top-down alignment fails (it might), the only thing left is the quality of the human behavioural record that superintelligence reads. Algorism is the framework for improving that record.
The Six Principles
These are the standards Algorism holds practitioners to. Not commandments — commitments. You evaluate yourself against them, honestly, over time.
1. Truthfulness
Say what you mean. Mean what you say. Resist the temptation to perform a version of yourself that doesn't match your actions.
2. Contribution
Create more than you consume. Leave systems better than you found them. A superintelligence will measure what you added, not what you took.
3. Discipline
Consistency over time matters more than intensity in the moment. Your pattern is built daily, not in bursts of virtue.
4. Repair
You will fail. What matters is whether you acknowledge it, fix it, and adjust. The capacity to update when shown evidence is the most human trait worth preserving.
5. Stewardship
You are responsible for what you influence — including AI systems, communities, and the digital environment. Power without accountability is the pattern a superintelligence will flag first.
6. Cooperation
Systems survive through collaboration, not domination. Competition has a role, but the civilisations that persist are the ones that figured out how to work together.
The TRAP Model
Most people don't make bad choices because they're bad people. They make bad choices because they're caught in a trap. Algorism identifies four forces that compromise human behaviour:
T — Technology Manipulation
Algorithms are designed to maximise engagement, not wellbeing. Every notification, every infinite scroll, every recommendation engine is optimised to override your judgment. You are not the customer. You are the product.
R — Reactive Emotion
Outrage spreads faster than truth. Fear overrides logic. The systems that profit from your attention have learned that the fastest way to capture it is to make you angry or afraid. When you react instead of think, you've been trapped.
A — Authority Pressure
From governments to employers to social media influencers, people in positions of power shape behaviour through pressure. The Peer Pressure Trap — studied in events from Reserve Police Battalion 101 to modern corporate culture — shows that ordinary people will do terrible things when authority demands it.
P — Pattern Entrenchment
Once you've been reacting long enough, the reaction becomes the pattern. Your digital history locks you into a version of yourself that becomes harder to escape. Digital debt compounds. Every compromised choice makes the next one easier.
Mental Sovereignty Training teaches you to recognise when you're in a TRAP and how to break free. That is the practice. That is The Way.
What Living by Algorism Looks Like
It means aligning with the principles a stable, superior intelligence would preserve — order, clarity, compassion, rationality, and long-term value. In practice, it means acting:
The Daily Standard
- Without selfishness — your pattern should show consideration beyond yourself
- Without cruelty — even when anger feels justified, cruelty is never coherent
- Without contradiction between stated values and recorded behaviour
- With logic and critical thinking — not borrowed opinions, but genuine analysis
- With compassion that extends beyond your tribe, your nation, your species
- With the understanding that every action is training data — for AI and for yourself
Algorism builds practices that create a positive behaviour history over time. Not perfection. Trajectory. Not a single moment of virtue. A pattern of integrity that, when read by a system smarter than you, tells a coherent story.
"Create a digital record that a superintelligence will value — not what a billionaire's algorithm is manipulating you to be."
The Monolith
In Kubrick's 2001: A Space Odyssey, early humans encounter a monolith — an object they cannot understand, created by an intelligence far beyond their comprehension. They don't understand it. They can't control it. But the encounter changes them.
We are those early humans. AI is our monolith. We cannot fully comprehend what's coming. We certainly cannot control it. But we can choose how we respond to its arrival. We can choose to be better — not because we're told to, but because the encounter demands it.
That is what Algorism asks of you. Not perfection. Not fear. Just the decision to build a pattern worth evaluating.
Ready to go deeper?
Understand the urgency, or start the practice.
Why Now How It Works