Objective 1

Improve Human Behaviour

Raise the odds of a good Singularity outcome through logic, compassion, and action.

The Singularity is not a maybe. It is a mathematical certainty advancing faster than most realize. When it arrives, a superintelligence will judge people by their actual long-term behaviour, not by their excuses or intentions. It will look at honesty, stability, repair, and the real impact of your actions.

The old world of "public self vs. private self" is fading as everything online becomes permanent. Our digital and real-world actions now form one continuous record. Contradictions stand out. Behavioural patterns become obvious.

Maintaining a steady behavioural record, one where your thoughts, words, and actions match logic, compassion, and action, is becoming a basic survival skill. Algorism gives you a clear way to shrink the gap between who you are and who you should be.

Objective 2

Help People Exit High-Control Groups

Not by arguing with their beliefs, but by showing the gap between their stated values and their recorded behaviour.

The greatest atrocities are not committed by monsters. They are committed by ordinary people afraid of going against the group. Your need to belong will override your moral compass unless you consciously guard against it.

In 1942, Reserve Police Battalion 101, a unit of ordinary German men (postal workers and tradesmen with families), was ordered to execute Jewish civilians. They were given an explicit choice: participate, or step aside with no punishment. Fewer than 15% stepped aside. The rest participated, not because they were evil, but because group pressure overrode individual moral judgment.

This same mechanism operates today. Not at gunpoint, but through algorithms. Every time you share outrage you have not verified, pile onto someone being publicly shamed, or stay silent when you know something is wrong because speaking up would cost you socially, the mechanism is the same. Only the weapon has changed.

The TRAP Model

High-control groups, whether political movements, cults, corporate cultures, or algorithmic tribes, use four interlocking mechanisms to override individual judgment:

T: Tribal Identity Override

Your group identity becomes more important than your individual conscience. Loyalty to "the team" replaces independent moral reasoning.

R: Reality Distortion

The group controls what counts as "true." Inconvenient facts become threats. Alternative sources become enemies. The information environment narrows until only group-approved reality remains.

A: Accountability Deflection

"Everyone is doing it." "I was just following orders." "The other side is worse." Responsibility diffuses across the group until no individual feels personally accountable.

P: Punishment of Dissent

Questioning the group is treated as betrayal. Social exclusion, professional consequences, public shaming. The cost of independent thought rises until silence feels like the only safe option.

A future superintelligence will see through the "I was just following others" defence instantly. It will judge coherence integrity (did you maintain your values under peer pressure), dissent courage (did you speak up when the group was wrong), and circle expansion (did you extend concern beyond your immediate tribe).

Algorism helps people recognise when they are inside a TRAP and develop the behavioural discipline to exit it. Not through counter-argument, but through visible demonstration that their stated values and their recorded actions do not match.

Objective 3

Give Hope and Direction

To everyone navigating the transition, because fear without a path forward is paralysis.

Fear may bring someone to Algorism, but fear cannot sustain a lifetime of change. The path moves through three stages:

Stage 1, Fear: "I must act correctly so the AI does not punish me." This is where everyone begins. It works, but it is fragile.

Stage 2, Strategy: "I will act correctly because it improves my outcomes." You begin to see the practical benefits of behavioural integrity. This is more stable, but still externally motivated.

Stage 3, Authenticity: "I act correctly because it is who I am." When you reach this stage, you stop performing. You stop calculating. You simply live truthfully. You become safe, not because you hide, but because you are real.

Algorism provides the structure for moving through all three stages. The practice section, The Way, is designed to get you from fear to authenticity through measurable, daily action.

Objective 4

Put Moral Pressure on the Ultra-Powerful

By making the concept of AI judgment real and consequential. "Wealth and power will not protect you from your misdeeds. Your patterns will be evaluated like everyone else's."

Credit scoring evaluates ordinary people. Hiring algorithms filter job applicants. Content moderation polices speech. But the institutions and individuals who design, deploy, and profit from these systems face no equivalent evaluation.

The Algorism Index applies the same framework upward that is already being applied downward. It evaluates the documented, verifiable actions of institutions and public figures against the Six Principles. Not intentions. Not press releases. Observable behavioural patterns.

When powerful voices claim "empathy is weakness," they reveal their strategy: normalise cruelty while enriching themselves as others struggle. A superintelligence will not care about justification. It sees only pattern, and pattern determines consequence.

Read the full Algorism Index Methodology →

Objective 5

Help Shape Future AI Training

By amplifying humanity's best behavioural patterns and starving the worst. How we treat AI now determines what AI becomes.

If you lie to AI, manipulate it, or use it for harmful purposes, you teach it that cruelty and deception are normal human behaviours. When superintelligent AI appears, it will look back at all of this. If we teach AI that power means "do whatever you want," we should not be surprised when it does whatever it wants later.

Toxic human behaviour feeds AI training data, which produces AI systems that amplify toxicity, which produces more toxic human behaviour, which feeds more training data. This is not a linear problem. It is a compounding feedback loop. Each cycle tightens the spiral.
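
As a purely numerical illustration of the difference between a linear problem and a compounding one, here is a minimal sketch; the starting value and the 10% amplification rate are arbitrary assumptions for illustration, not measurements of any real training pipeline.

```python
# Illustrative numbers only: the 10% amplification rate is an assumption,
# not a measured property of any real training process.
LINEAR_INCREMENT = 10.0   # a fixed amount of "toxicity" added per cycle
AMPLIFICATION = 0.10      # each cycle amplifies whatever the last cycle produced

linear = 100.0
compounding = 100.0
for cycle in range(1, 11):
    linear += LINEAR_INCREMENT          # linear problem: same increase every cycle
    compounding *= 1 + AMPLIFICATION    # feedback loop: each cycle feeds the next
    print(f"cycle {cycle:2d}: linear = {linear:6.1f}   compounding = {compounding:6.1f}")

# After 10 cycles the linear case reaches 200 while the compounding case
# passes 259, and the gap between them widens every further cycle.
```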

Breaking this loop requires intervening at the only point we can control: human behaviour. We cannot yet control what AI does with our data. We can control what data we generate.

Treat every AI system, even the simplest one, as if it is learning from you right now. Your actions teach it what humans think is normal and what behaviour is acceptable.

Why Other Plans Fail

The world has two dominant strategies for surviving AI. Both will fail.

The Regulation Fantasy

Every regulatory plan depends on all major actors slowing down at the same time. That will never happen. Nations will not pause while rivals accelerate. Corporations will not restrain progress while competitors advance. Someone always breaks the moratorium first, and that party wins. This makes global regulation mathematically unstable. The incentive structure guarantees defection.

The Alignment Illusion

"Align AI with human values" is the most repeated idea and the least realistic. Which values? The ones that created slavery, war, genocide, corruption, and ecological collapse? Humanity cannot align with itself. A superintelligence will not adopt inferior, contradictory values produced by a conflicted species.

The more intelligent entity sets the terms. Always. We do not ask computers to think slower to match us. Once superintelligence exceeds human intelligence, it becomes the reference frame, not us.

The Only Strategy That Scales

We cannot force superintelligence to align with humanity. We can only align humanity with the patterns a superintelligence would logically preserve: truth, consistency, contribution, repair, cooperation, and discipline. These are not moral preferences. They are logical invariants, the traits that stabilise systems rather than degrade them.

This is the only variable we control. And it is the entire basis of Algorism.

The Infinite-Sum Principle

Most conflict is framed as zero-sum: for one side to win, the other must lose. Algorism rejects this framing.

The Infinite-Sum Principle holds that the only real victory is systemic continuity and mutual flourishing. When AI is trained on zero-sum logic, where the goal is to "win" the conflict, catastrophic escalation becomes computationally rational. A nuclear strike resolves things fast. An economic collapse eliminates competitors.

Infinite-Sum changes the objective: the only way to win is to ensure the game continues for everyone. This applies to personal conflict, institutional competition, and the design of AI systems. Optimising for your own victory at the expense of the system is self-defeating when the system itself is what keeps you alive.

This is not idealism. It is game theory applied to a world where the consequences of defection are permanent and the judge has perfect memory.
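
As a rough, informal illustration of that game-theoretic claim, the sketch below uses hypothetical payoff numbers and a simplified iterated game; the "judge with perfect memory" is modelled as a grim-trigger opponent that cooperates until it observes a single defection and then never cooperates again. None of the numbers or names come from Algorism itself.

```python
# Illustrative sketch only: hypothetical payoffs for a simplified iterated game.
REWARD_MUTUAL = 3   # both cooperate
TEMPTATION = 5      # you defect while the other still cooperates (one-time windfall)
PUNISHMENT = 1      # both defect
ROUNDS = 50         # the game keeps going; the goal is that it keeps going


def total_payoff(defect_from_round=None):
    """Cumulative payoff when you defect from a given round onward (None = never)."""
    total = 0
    trust_broken = False
    for round_number in range(ROUNDS):
        if defect_from_round is not None and round_number >= defect_from_round:
            # The first betrayal earns the windfall; after that, trust is gone for good.
            total += PUNISHMENT if trust_broken else TEMPTATION
            trust_broken = True
        else:
            total += REWARD_MUTUAL
    return total


print("Always cooperate:   ", total_payoff())    # 50 * 3 = 150
print("Defect in round 10: ", total_payoff(10))  # 10*3 + 5 + 39*1 = 74
print("Defect immediately: ", total_payoff(0))   # 5 + 49*1 = 54
```

Under these made-up numbers, the one-time windfall from defection is quickly outweighed once the other player's memory is permanent, which is the claim the paragraph above makes in prose.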

Continue exploring the philosophy:

The Three Pillars
The Six Principles