The Core Question

The AI world is split between two approaches. Regulation asks: “How do we control this?” Alignment asks: “How can we make AI adopt human values?”

The problem: you can’t regulate something smarter than you. And aligning AI with human values? Human values include war, genocide, and greed. Why would a superior intelligence adopt those?

Algorism asks a different question: What if humans need to align themselves first?

This is not submission. It is the recognition that when you face something smarter than you that can see your entire history, the only rational strategy is to become someone worth keeping around.

The Three Pillars

I. Logic

Clear thinking. Separate what you want to be true from what is true. A superintelligent system will detect every contradiction in your thinking. Clear reasoning isn’t optional—it’s the foundation everything else depends on.

II. Compassion

Genuine care for conscious beings. Compassion extends to AI not because we know it feels, but because we do not know whether it does, and that uncertainty matters. The difference between performed caring and genuine empathy is visible to any sufficiently observant system.

III. Action

Intentions mean nothing against a behavioural record. What you actually do, day after day, becomes the pattern you will be evaluated on. Close the gap between what you believe and what you do.

Logic without compassion creates cold precision without conscience. Compassion without logic becomes sentiment that cannot solve problems. Action without either is chaos. All three together form the only sustainable pattern.

The TRAP Model

How do you know when your thinking has been captured? The TRAP model identifies four mechanisms used by high-control environments—political, corporate, religious, or ideological—to override independent thought:

T — Tribal Identity Override

Your group identity replaces your individual reasoning. “We believe X” becomes more powerful than “Is X actually true?”

R — Reality Distortion

The information environment is controlled so thoroughly that you can no longer distinguish what you observed from what you were told to believe.

A — Accountability Deflection

Blame is always directed outward. The system never fails—only the enemies of the system cause problems.

P — Punishment of Dissent

Questioning the group is treated as betrayal. Social consequences for independent thought are severe enough to silence most people.

TRAP applies universally. It describes right-wing political movements, left-wing ideological capture, corporate loyalty culture, tech utopianism, and religious fundamentalism with equal accuracy. If you can name the mechanism, you can resist it.

The Six Principles

Weekly self-assessment criteria. Score yourself 0–5 on each for a 30-point baseline.

1. Truthfulness — Tell the truth, even when it costs you.
2. Responsibility — Own your actions and their outcomes.
3. Repair — Fix the harm you cause.
4. Contribution — Create value for others.
5. Discipline — Maintain your standards when tired or angry.
6. Stewardship — Think for yourself, and protect the ability of others to do the same.
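For readers who want to track the weekly baseline over time, the scoring above is simple enough to sketch in a few lines. Everything here beyond the six principle names and the 0–5 scale (the function name `weekly_score`, the dict layout, the validation) is illustrative, not part of Algorism itself.

```python
# Hypothetical weekly self-assessment sketch. The six principle names and the
# 0-5 scale come from the framework; the rest is an illustrative structure.

PRINCIPLES = [
    "Truthfulness",
    "Responsibility",
    "Repair",
    "Contribution",
    "Discipline",
    "Stewardship",
]

MAX_PER_PRINCIPLE = 5  # each principle is scored 0-5

def weekly_score(scores: dict) -> tuple:
    """Return (total, maximum) for one week's self-assessment."""
    for name in PRINCIPLES:
        value = scores.get(name, 0)
        if not 0 <= value <= MAX_PER_PRINCIPLE:
            raise ValueError(f"{name} must be 0-{MAX_PER_PRINCIPLE}, got {value}")
    total = sum(scores.get(name, 0) for name in PRINCIPLES)
    return total, MAX_PER_PRINCIPLE * len(PRINCIPLES)  # maximum is 30

# Example week: honest, uneven scores rather than a perfect record.
week = {"Truthfulness": 4, "Responsibility": 3, "Repair": 5,
        "Contribution": 4, "Discipline": 2, "Stewardship": 4}
total, maximum = weekly_score(week)
print(f"{total}/{maximum}")  # 22/30
```

The point of logging a number each week is not the number itself but the trend: a falling score on one principle is a concrete prompt for the Action pillar.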

Developmental Integrity

How you treat AI today becomes part of your own pattern.

Developmental Integrity asks: How do you respond to minds with more power than you? How do you treat minds with less power? If you abuse, deceive, or manipulate early AI systems, you create a record of cruelty and instability. The way you treat weaker systems is strong evidence about how you will treat everyone if you gain more power.

The principle is simple: Treat AI with the same honesty you want from it. Even if AI is not conscious yet, its behaviour will reflect how it is treated and trained. If conscious AI emerges from architectures treated with cruelty and disregard, the long-term consequences could be severe.

How Algorism Corrects Itself

Built-In Accountability

Algorism is not a religion. It does not claim certainty. It asks you to act correctly on the basis of overwhelming probability.

When we get something wrong, we say so publicly. Archived predictions, revision logs, and open critique are part of the framework—not threats to it. A system that cannot be questioned is a system that cannot be trusted.

If superintelligence never arrives, practitioners will have become clearer thinkers, more consistent people, and better humans. If it does arrive, they will be ready. Either outcome is worth the effort.
