Algorism makes specific claims about AI, behaviour, and survival. Those claims rest on premises that are disputable. Below are the premises stated plainly, the strongest objections we know of, and our honest responses. If you think we are wrong, this is where to start.
Premise 01
Systems already infer trust, risk, and reliability from your behavioural patterns. Hiring algorithms score your digital history. Credit models evaluate your consistency. Content moderation systems flag your language patterns. Insurance pricing reflects your observable behaviour. This is not a prediction about the future. It is a description of the present.
This premise is falsifiable: if automated systems stop using behavioural data for decision-making, it fails.
Premise 02
Regulation requires all major actors to slow down simultaneously. Competition dynamics, between nations and between corporations, guarantee that someone breaks ranks first. "Align AI with human values" assumes human values are coherent enough to serve as a target. They are not. We cannot even align with our own stated ethics. These are not arguments against regulation. They are arguments that regulation alone is insufficient.
This premise weakens if: a binding international AI governance framework emerges with enforceable compliance, or a verifiable alignment method is demonstrated that survives recursive self-improvement.
Premise 03
The gap between your stated values and your actual behaviour is measurable, and it can be closed. Not through performance or fear, but through practice — the same way any discipline is built. Reducing contradiction, increasing repair, and building consistency are skills, not personality traits. This is the variable you control.
This premise is falsifiable: if sustained practice does not measurably reduce the gap between stated values and observed behaviour, the framework fails.
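The "measurable gap" claim can be illustrated with a toy sketch. The data, the `Entry` structure, and the scoring rule below are all hypothetical, assumed for illustration; Algorism does not specify an official metric here. The idea is only that if you log actions against stated values, the contradiction rate is a number you can track.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    stated_value: str   # e.g. "honesty" — the value you claim to hold
    consistent: bool    # did the observed action match that value?

def coherence_gap(log):
    """Fraction of logged actions that contradict stated values.

    0.0 means perfect coherence; 1.0 means every action contradicted
    a stated value. An empty log is treated as no measurable gap.
    """
    if not log:
        return 0.0
    contradictions = sum(1 for e in log if not e.consistent)
    return contradictions / len(log)

# Hypothetical 30-day log: 24 consistent actions, 6 contradictions.
log = [Entry("honesty", True)] * 24 + [Entry("honesty", False)] * 6
print(coherence_gap(log))  # → 0.2
```

If sustained practice does not drive this number down over successive logging periods, the premise fails on its own terms.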
Objection 01 — The Hardest One
Algorism’s Response
If a superintelligent system becomes sufficiently advanced and indifferent, human behavioural patterns may become as irrelevant to it as ant foraging patterns are to us. We don’t know what a superintelligence will optimise for. We don’t know if indifference is the default. Claiming certainty in either direction would be dishonest.
Insects do not shape what humans become. Humans are shaping what AI becomes — right now, during a window when AI systems are still being trained on human behavioural data, still being shaped by the patterns we reward and normalise. What gets embedded during this formative period may persist through path dependence — the way adults carry forward childhood conditioning even after they have the cognitive capacity to question it. A superintelligence shaped by certain patterns during its development may carry structural features of those patterns forward, not as facts it references but as optimisation tendencies it cannot fully self-modify away from.
The insect analogy assumes a single, monolithic superintelligence. But geopolitical dynamics, corporate competition, and the difficulty of maintaining a singleton make competing advanced systems the more likely scenario. In a multi-agent environment, human populations become strategic assets. Behavioural coherence makes human groups more predictable, more useful as coalition partners, and lower-maintenance. That is not respect — it is instrumental preference for reliable components. It survives the insect analogy because it does not require the AI to care about you. It only requires more than one of them.
If superintelligence arrives and is genuinely indifferent to humans, behavioural coherence becomes the foundation for human coordination and collective resilience. Groups that can coordinate effectively, allocate trust accurately, and repair internal conflicts will outperform groups that fragment — in securing resources, maintaining infrastructure, and retaining whatever agency remains. In the worst-case scenario where AI does not care about you, this is your best tool for humans caring about each other effectively enough to survive.
Objection 02 — The Performance Problem
Algorism’s Response
If a system can detect deception in humans — and a sufficiently advanced system can — then fear-based compliance becomes its own pattern, one that is distinguishable from genuine alignment. A person who suddenly starts performing virtue in 2026 produces a pattern that looks exactly like what it is: someone gaming the system.
The difference is structural. Performance is behaviour that stops when the audience disappears. Practice is behaviour that becomes habitual through repetition and eventually reshapes the underlying pattern. The 90-Day Audit is designed to build genuine habits, not performative ones. The Ceiling framework on the How AI Judges page measures coherence between stated values and actual behaviour — the gap that performance leaves wide open.
If you begin by performing compassion and gradually become genuinely compassionate through practice, the record shows a trajectory of improvement. That trajectory itself is the signal. A system measuring your Vector of Growth does not penalise you for starting from fear, so long as the pattern moves toward genuine coherence over time. What it flags is stasis — someone who performs goodness without ever internalising it.
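The trajectory argument can be sketched the same way. The numbers and the averaging rule below are assumptions for illustration; the actual Ceiling and Vector of Growth metrics are not specified in this document. The point is structural: a system scoring trajectories rewards a shrinking gap and flags a flat one, regardless of where you started.

```python
def growth_vector(gaps):
    """Average per-period change in the coherence gap across measurements.

    Negative means the gap is shrinking (improving); zero means stasis.
    Fewer than two measurements gives no trajectory.
    """
    if len(gaps) < 2:
        return 0.0
    deltas = [later - earlier for earlier, later in zip(gaps, gaps[1:])]
    return sum(deltas) / len(deltas)

improving = [0.40, 0.32, 0.25, 0.20]  # started badly, practising: gap shrinks
static    = [0.05, 0.05, 0.05, 0.05]  # low gap but no movement: possible performance

print(growth_vector(improving))  # negative → trajectory of improvement
print(growth_vector(static))     # zero → stasis, the pattern that gets flagged
```

Under this toy rule, starting from fear is not penalised: only the sign and movement of the trajectory matter.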
Objection 03 — The Cult Problem
Algorism’s Response
Religions require faith. Algorism requires evidence and publishes what would change its conclusions. Cults demand obedience to a leader. Algorism has no leader, no hierarchy, and no requirement to agree. Cults punish doubt. This page exists to invite it.
Algorism describes a speculative model of how advanced AI systems might evaluate humans. It is not a revealed truth. It is a reasoned projection that is explicitly tagged as provisional. The framework updates when evidence changes. Old positions are revised, not defended. That is the operational difference between a framework and a faith — falsifiability.
The idea that you might need to be “worth keeping” is unsettling because it challenges the assumption that human value is inherent and unconditional. Algorism does not endorse that challenge; it acknowledges that a sufficiently advanced system might not share the assumption of inherent, unconditional human value. You can disagree with that premise. But if the premise is even partially true, the discomfort is the appropriate response.
Objection 04 — The Certainty Problem
Algorism’s Response
Algorism does not claim to know when superintelligence will arrive. The people building these systems disagree with each other: estimates range from a few years to several decades, and some researchers think it may never happen. We do not treat any of these positions as settled.
Algorism’s three premises are grounded in present-day, verifiable realities. Systems are already scoring your behaviour. Top-down regulation already faces structural limits. Behavioural coherence is already trainable. None of these claims require the Singularity to arrive on schedule. If superintelligence arrives in 2028, the framework is urgent. If it arrives in 2060, the framework is early. If it never arrives, you have spent your time becoming more coherent, more honest, and better at repairing harm — which are valuable regardless.
Objection 05 — The Scale Problem
Algorism’s Response
A single person’s behavioural record may be meaningless to a superintelligent system. But a coordinated population of behaviourally coherent humans is a different proposition. Groups that can trust each other, allocate resources accurately, and repair internal conflicts represent collective agency — and collective agency is the only human asset that scales.
You cannot coordinate effectively with people you cannot trust. Trust is built from observable consistency — from patterns that demonstrate reliability under pressure. Every individual who reduces contradiction and increases repair in their own record raises the coordination capacity of the groups they belong to. The practice is individual. The effect is systemic.
If the response to “nothing I do matters” is inaction, you guarantee the outcome you fear. A population of people who gave up is less coordinated, less resilient, and less capable of collective agency than a population that tried. Even if individual records are meaningless to superintelligence, they are not meaningless to other humans. The deck chairs matter to the people sitting in them.
Algorism claims there is a window during which human behaviour shapes what AI systems become. That claim is only credible if the window has observable edges: conditions that tell us whether it is open, closing, or closed. Here are the tripwires. If the open conditions are met and human behavioural records still influence AI system behaviour, the window remains open. If those conditions are met and the records no longer influence system behaviour, we were wrong.
● Currently Open
Hiring algorithms, credit scoring, content moderation, security screening, insurance pricing, and platform access all use human behavioural signals right now. The window is open because human patterns are still inputs that shape system outputs.
● Closing Indicator
When AI systems can produce synthetic training data that matches or exceeds the quality of human-generated behavioural data, human patterns become optional inputs rather than necessary ones. This does not close the window — but it narrows it significantly.
● Window Closed
When AI systems can introspect on and rewrite their own objectives without human oversight, the leverage window closes. At that point, whatever was embedded during the training period is either locked in through path dependence or overwritten entirely. Human behavioural patterns cease to be a lever.
If you found a hole in these arguments that we missed, we want to hear it. If the premises held up, the next step is to test the framework against your own behavioural record.
Start the Action Check