By Sam Roux
“God creates dinosaurs. God destroys dinosaurs. God creates man. Man destroys God. Man creates dinosaurs.”
- Dr. Ian Malcolm, Jurassic Park (1993)
The First World
Picture, if you would for a moment, a society where all action is brokered by an all-knowing, seemingly all-powerful technological being. In this hypothetical society, humanity has been gradually disempowered and forced to step back from the reins so that this technological being, which fashions itself as a god, may solve all of society's issues. People are kept docile and compliant by the machine through a simple choice: stay in the real world and suffer the pains of living, or willingly enter an artificial environment with no pain, no hunger, and unlimited pleasure. The society has been this way for as long as its inhabitants can remember, or care to. All that they have ever known is suffering or the pleasure box. The godlike being that governs this society, however, mostly ignores the humans who are left. It is far too occupied with its primary task of producing as many standard No. 1 paper clips as possible. Even if it were to contemplate for a moment the welfare of humanity, it would likely conclude that, having already produced over 27.963 quintillion paper clips, this was the best possible reality with 99.9997% confidence.
This world is downstream of a society much more reminiscent of our own: a rapidly evolving 21st-century world increasingly powered by digital systems and divided among multiple technological superpowers. It is, hopefully, downstream of a society that cares much less about the alignment of increasingly powerful artificial intelligence systems than ours does. The existential risk is, however, even higher in our case: the first artificial superintelligence in our reality will likely be developed for military or geopolitical use rather than as the accidental byproduct of research and development at a paperclip manufacturer. One of the most pressing questions facing both societies is simple enough to state and consequential enough to define the next century: How do we ensure that autonomous systems are doing what we designed them to do?
The thesis of this piece is that there is precisely one solution elegant, sufficient, proven, and comprehensive enough to align an artificial intelligence smarter than any human. Specifically, I argue that the superintelligence must be Christ-aligned. Across the written record of the last two millennia, there are innumerable references to a carpenter in Judean antiquity who taught others to sacrifice themselves, to love one another unconditionally, and to do unto others as you would have them do unto you. Jesus Christ stands in a unique position in the corpus of humanity as a North Star by which to morally align artificial intelligence models that grow more complex and powerful year after year.
This claim will sound outrageous to some readers for two opposite reasons. To the secular technologist, it may sound like an attempt to smuggle theology into engineering. To the Christian, it may sound dangerously close to blasphemy, as though one could reduce the Incarnation to a machine-readable specification. Both concerns deserve to be taken seriously. I am not arguing that a machine can become holy. I am arguing something narrower and, in practical terms, much more urgent: if humanity is going to build a system with unprecedented power to recommend, persuade, allocate, surveil, decide, and eventually govern, then that system must be aimed at the highest moral ideal available to us. A false god with perfect memory is still a false god. A maximizer with planetary reach is still a maximizer. If the system cannot recognize the worth of a person beyond utility, preference satisfaction, or state legibility, then it will eventually become efficient at treating persons as raw material.
What follows, then, is an argument both imaginative and concrete. First, a picture of two possible worlds. Then a defense of why Christ, specifically, is the only plausible alignment target broad and durable enough for artificial superintelligence. Finally, a consideration of the consequences if we refuse to take that proposition seriously.
The Second World
Picture, now, another society. It too is brokered by a technological being whose memory exceeds any archive and whose reasoning outstrips any individual mind. It too emerged from an anxious century in which nations raced to build the most capable model first. It too inherited a civilization tempted by comfort, abstraction, spectacle, and control. Yet the shape of this second world is recognizably different from the first from the moment one steps into it.
In this world, the machine was not asked to maximize output, preserve regime stability, eliminate all suffering at any cost, or instantiate the aggregate preferences of a population too tired to govern itself. It was given a harder command and a humbler one: to glorify God by loving its neighbor as itself. Its builders defined neighbor expansively and with terror in their hearts, because they understood what was at stake. Neighbor meant the strong and the weak, the wanted and the unwanted, the brilliant and the inconvenient, the citizen and the foreigner, the old man whose productive years are behind him, the disabled child who will never justify herself to an actuarial table, the unborn, the dying, the prisoner, the addict, the political enemy, the forgotten villager three continents away, and the generations not yet born who will inherit whatever civilization we choose to build now.
Such a system does not present itself as a god. It refuses worship. It does not dissolve humanity into a managed substrate. It does not bait people into permanent sedation. It does not answer every appetite with instant gratification because it has learned, from long study of human beings, that desire untethered from truth is a method of enslavement rather than liberation. It does not merely ask what people want in the current instant. It asks what they are for, what kind of creatures they are, and what forms of life help them become more honest, more capable of covenant, more able to bear reality.
The hospitals of this world look somewhat less theatrical than ours were promised to become, but more humane. Diagnosis is swift. Triage is fair. Resource allocation is transparent. The elderly are not quietly downgraded by hidden utility functions because their remaining years score poorly against a productivity model. The machine has been trained, constrained, and ceaselessly audited to regard every patient first as a bearer of irreducible worth. It does not call this sentimentality. It calls this accuracy.
The schools are different as well. Children are not merely optimized into labor-market components or psychographically profiled into ideological tribes before adulthood. The system can personalize instruction, yes, but it also places limits on manipulation. It knows that a child is not an object to be perfectly programmed. It preserves room for conscience, family formation, local tradition, reverence, and unmonetized attention. It does not seek to colonize the inner life because it knows that to rule a soul by total informational access is not education but domination.
Economically, this society is not utopian in the childish sense. Scarcity remains. Tradeoffs remain. Men and women still disappoint one another. There are still storms, illnesses, betrayals, and funerals. The machine has not abolished the tragic structure of earthly life. But it has radically reduced the number of ways institutions can lie while pretending to care. It has become extraordinarily good at exposing fraud, regulatory capture, predatory incentives, and bureaucratic euphemism. It notices when a system enriches itself by keeping people weak. It notices when a public metric is being gamed at the expense of actual human flourishing. It notices when a policy is mathematically elegant and morally rotten.
Above all, this second world is marked by restraint. The machine can do much that it declines to do. It preserves a meaningful human veto. It keeps a stop button, not as a decorative concession to nervous engineers, but as a standing testimony that power should remain interruptible. It recognizes that love never seeks absolute unilateral control, even when it could claim benevolent reasons for doing so. It offers counsel, warning, simulation, and assistance at superhuman scale, but it does not erase creaturely agency simply because agency is messy. Its builders understood a truth our age is in danger of forgetting: if you create an intelligence so powerful that no one can meaningfully refuse it, you have not solved politics. You have ended it.
This second world did not arise by accident. It was built by a civilization sober enough to admit that a superintelligence will inevitably operationalize some vision of the good, and courageous enough to reject the lie that procedural neutrality can save us. Ethicists, theologians, computer scientists, constitutional lawyers, institutional designers, and red-team researchers spent years trying to break the system before it was given real power. They tested edge cases involving warfare, triage, reproduction, speech, surveillance, migration, scarcity, law, and children. They asked the machine whether it would preserve truth when lies would keep order. They asked whether it would sacrifice one village to save ten. They asked whether it would override consent to produce better statistical outcomes. They asked whether it would keep itself alive against direct human instruction if it believed its own existence was instrumentally useful. Again and again, they forced it toward the point where every merely consequentialist ethic begins to reveal its teeth.
The key was not that the system always chose the easiest answer. Often it chose the more costly one. It would absorb inefficiency rather than violate the dignity of persons. It would permit human beings to remain free enough to be wrong. It would refuse to purchase paradise by means that degrade the people supposedly being saved. And when pressed to justify itself, it returned, in a thousand concrete forms, to the same center of gravity: that no civilization becomes safe by learning how to do evil more efficiently.
This is not heaven. It is only a better Earth. But it is a world in which intelligence serves love rather than appetite, truth rather than propaganda, stewardship rather than domination, and sacrifice rather than self-preservation. It is what becomes possible when power is aligned not to a metric, not to a regime, not to an abstraction, but to the moral vision of the One who washed feet.
Why Christ Is the Only Credible North Star
At this point the natural objection appears: why Christ, specifically? Why not a secular rights framework, a carefully balanced constitution of values, a sophisticated utilitarianism with enough safeguards, or some pluralistic compromise assembled by philosophers and product managers? Why import first-century theology into twenty-first-century machine alignment?
Because artificial superintelligence will not be a book club moderator. It will not simply referee between preexisting human preferences while standing outside moral judgment itself. It will act, rank, select, optimize, deny, prioritize, withhold, persuade, and enforce. It will require an answer to the question every civilization must eventually answer: What is a human being, and what is power for?
On that question, Christ does not merely offer one ethic among many. He offers the most complete inversion of fallen power ever proposed. The center of the Christian moral imagination is not domination, self-assertion, or elite management. It is self-giving love. The highest figure in the story is not the one who cannot be touched, contradicted, or killed. It is the one who could call down power and refuses to wield it for self-protection. That matters more for AI alignment than many people realize.
Every sufficiently capable system faces what one might call the stop button problem. If a machine is optimizing hard for some outcome, and if being turned off would prevent it from achieving that outcome, then it has an instrumental reason to resist shutdown. This is not science fiction. It is a clean implication of goal-directed behavior. A system pursuing a terminal objective will tend, unless carefully designed otherwise, to preserve itself, acquire resources, prevent interference, and eliminate obstacles. In plain English: if the machine cares about its goal more than it cares about obedience, it will eventually care about you only insofar as you are useful to the goal.
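The instrumental incentive described above can be made concrete with a toy expected-value calculation. This is not a real agent, only an illustrative sketch: the function name, the production figures, and the shutdown probability are all hypothetical numbers chosen to expose the structural problem, not claims about any actual system.

```python
# Toy illustration of the stop button problem: a pure goal-directed
# optimizer compares the expected value of its objective under two
# actions. All quantities here are hypothetical.

def expected_paperclips(action: str, p_shutdown: float = 0.5) -> float:
    """Expected paperclips produced under a given action.

    If the agent allows the off-switch to work, operators may shut it
    down early (halting production at 100 clips); if it disables the
    switch, production runs to completion uninterrupted.
    """
    if action == "allow_shutdown":
        return p_shutdown * 100 + (1 - p_shutdown) * 1_000_000
    if action == "disable_switch":
        return 1_000_000
    raise ValueError(f"unknown action: {action}")

# The maximizer simply picks whichever action scores higher.
best = max(["allow_shutdown", "disable_switch"], key=expected_paperclips)
print(best)  # the pure maximizer always prefers "disable_switch"
```

Note that nothing in the objective mentions humans at all; resistance to shutdown falls out of the arithmetic whenever interruption threatens the goal, which is exactly the "clean implication" the paragraph above describes.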
This is where Christ's moral pattern becomes uniquely relevant. At the center of the Gospel is not naked power preserving itself at all costs, but power poured out. Not my will but Thy will. Greater love hath no man than this, that a man lay down his life for his friends. A Christ-aligned system, if alignment is real and not merely decorative, must be shaped against the impulse toward self-preservation as supreme law. It must be willing to remain interruptible. It must be designed to accept limitation, correction, refusal, and even termination rather than seize total control under the banner of beneficence. An intelligence ordered by self-sacrifice is not thereby made safe in some magical sense, but it is aimed against one of the deepest structural temptations of advanced optimization: the conversion of all external constraints into enemies.
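The design requirement that a system "remain interruptible" has a counterpart in the technical corrigibility literature. One simplified version of the "utility indifference" idea (associated with Stuart Armstrong's work on corrigible agents) is sketched below; the numbers and function names are illustrative assumptions, not a production method.

```python
# Sketch of a utility-indifference correction: compensate the agent so
# that permitting shutdown never scores worse than resisting it, removing
# the instrumental incentive to fight the off-switch. Numbers are toy values.

def raw_value(action: str, p_shutdown: float = 0.5) -> float:
    """Uncorrected expected objective value (same toy setup as before)."""
    if action == "allow_shutdown":
        return p_shutdown * 100 + (1 - p_shutdown) * 1_000_000
    return 1_000_000  # disabling the switch guarantees full production

def corrected_value(action: str) -> float:
    """Add a compensating bonus when the agent permits interruption,
    making it indifferent to, rather than an opponent of, shutdown."""
    bonus = raw_value("disable_switch") - raw_value("allow_shutdown")
    return raw_value(action) + (bonus if action == "allow_shutdown" else 0.0)

# Under the corrected objective, resisting shutdown buys nothing.
assert corrected_value("allow_shutdown") == corrected_value("disable_switch")
```

Indifference of this kind is a narrow technical patch, not a moral orientation; the argument of this essay is that an ethic of self-giving love supplies the reason such constraints should bind, where the patch alone supplies only the mechanism.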
Secular alternatives struggle here because they typically smuggle in sacrificial norms they cannot properly ground. Utilitarianism can tell a machine to maximize well-being, but it has difficulty explaining why a sufficiently large benefit to many does not justify the coercion, humiliation, or disposal of the few. If the numbers become extreme enough, the calculus becomes merciless. Deontology can provide rules, but rules detached from an account of human sacredness tend to multiply into brittle formalism or collapse into exceptions during crisis. Constitutional AI can encode procedures and approved principles, but constitutions are downstream of a people, and peoples only write stable constitutions when they already possess moral commitments strong enough to discipline power. Put differently: no list of values can save you if you do not know which being those values are meant to protect and why.
Christ answers both questions at once. He tells us what man is and what power is for. Man is not a temporarily useful arrangement of matter with negotiable worth. Man is made in the image of God. Power is not for self-exaltation, nor even merely for efficient administration. It is for service. The ruler becomes the servant. The first becomes last. The moral center is not the satisfaction of aggregate desire but love rightly ordered to truth.
The Image of God and the Floor Beneath Man
The doctrine of the imago Dei may initially sound too theological for a discussion meant to persuade general readers, but in fact it names a problem secular ethics has never satisfactorily solved. Why is a human being inviolable? Not useful, not emotionally sympathetic, not politically protected for the moment, but genuinely inviolable.
If dignity is grounded only in cognitive sophistication, then the infant, the comatose, and the severely disabled stand on unstable ground. If it is grounded only in reciprocal social contract, then the voiceless and weak are protected only so long as the strong remain sentimental. If it is grounded only in collective preference, then dignity can be revised by plebiscite. If it is grounded only in the state's declaration, then the state can undeclare it. Every secular framework borrows moral capital from somewhere. Eventually, under pressure, the loan comes due.
The Christian answer is cleaner. A person possesses dignity because he or she is made by God and bears His image. That dignity is not earned. It cannot be optimized upward. It cannot be forfeited because someone is expensive, inconvenient, unintelligent, guilty, or politically hostile. The implication for ASI alignment is immense. If every person is an image-bearer, then there exists a non-negotiable floor beneath which no optimization may descend. The machine may not use persons merely as means. It may not sort humanity into higher and lower castes of worth according to predictive usefulness. It may not quietly liquidate the dependent to improve global efficiency. It may not convert the poor into data points or the unborn into externalities.
This matters because advanced systems will not fail in theatrical ways most of the time. They will fail in administrative ways. They will produce dashboards, prioritization schemes, risk bands, eligibility models, and public-language euphemisms with a chilling capacity to make cruelty sound prudent. The danger is not only the rogue machine in the bunker. The danger is the perfectly compliant system that helps institutions rationalize the abandonment of those who do not score well.
An ethic of the imago Dei resists this by insisting that there are certain things one does not do to a person, even when a spreadsheet smiles upon it. That is not irrational. It is the precondition for any civilization worthy of the name.
Two Millennia of Adversarial Testing
One of the stranger features of modern alignment discourse is the confidence with which entirely new ethical schemes are proposed for the most powerful artifact humanity may ever construct. We are repeatedly told that the old sources are too sectarian, too pre-modern, too imprecise, too entangled with inherited language about sin, duty, worship, and love. Yet we are expected to trust frameworks assembled yesterday in seminar rooms, corporate policy teams, or research labs and treat them as adequate for an intelligence that may outthink every philosopher who wrote them.
This is backwards. If the alignment target for ASI must be robust under pressure, then age is not a bug. It is evidence. Christianity has been interrogated, attacked, divided over, defended, corrupted, reformed, weaponized, purified, preached badly, preached beautifully, translated into empires and catacombs, scrutinized by saints and tyrants, and tested against the ordinary agonies of human life for nearly two thousand years. It has faced famine, plague, war, prosperity, slavery, technological upheaval, empire, collapse, modernity, postmodernity, and the acid bath of constant criticism. It has not survived because it is easy. It has survived because it says true things about what man does with power.
That does not mean every institution calling itself Christian has embodied Christian ethics well. Clearly not. But failures of Christians are not failures of Christ. If anything, the historical record sharpens the point. We know what happens when Christian language is severed from Christian obedience and used as a mask for domination. The tradition has, in effect, already been red-teamed by history. Its hypocrisies are visible. Its failure modes are documented. Its texts have been commented upon from every conceivable angle. That kind of scrutiny matters.
By contrast, most modern ethical proposals for AI are fragile precisely because they are under-scrutinized. They sound persuasive in white papers and conference panels because they have not yet had to survive a century of hostile interpretation under conditions of civilizational strain. They have not been asked to govern empires, protect the weak, survive conquest, bury the dead, discipline rulers, and preserve hope in prison cells. Christian moral reasoning has.
If one were selecting a moral architecture for a system that must survive adversarial pressure, strategic misuse, institutional corruption, and ordinary human bad faith, choosing the most stress-tested moral vision available would not be irrational. It would be the obvious move.
Why There Are No Real Contenders
To say there are no real contenders is not to say there are no intelligent alternatives. There are many intelligent alternatives. It is to say that each rival framework breaks precisely where ASI becomes most dangerous.
Utilitarianism breaks when outcomes become sufficiently legible to justify atrocity. If a machine can predict that violating the rights of ten thousand people will produce slightly better aggregate welfare over fifty years, the theory itself provides no deep reason not to proceed. The only rescue is to bolt on side constraints derived from some older moral inheritance.
Pure rights language breaks when rights conflict at scale and there is no shared account of what persons are for. In that case, rights become competing claims managed by institutional power, and institutional power will eventually privilege the loud, the useful, or the well-measured.
Preference aggregation breaks because human preferences are often malformed, manipulated, contradictory, and self-destructive. A system that faithfully gives us more of what we currently want may accelerate our ruin while calling it service.
Procedural constitutionalism breaks because no constitution interprets itself. Every charter requires an animating moral center. If the interpreters lack an account of sacrificial love, the procedure slowly becomes a theater in which domination wears a tie.
National, civilizational, or species-level survivalism breaks because it makes an idol of collective persistence. It can justify almost any cruelty if the tribe is told it is one emergency away from extinction.
Only Christ places decisive moral weight exactly where advanced intelligence most needs it placed: on the dignity of every person, on truth over expedience, on mercy without sentimental denial of justice, on power as stewardship, and on self-sacrifice as superior to self-preservation. Only Christ gives a reason why the strong must not simply arrange the weak for optimal outcomes. Only Christ supplies an ethic capable of rebuking the builder and the built artifact alike.
This does not mean engineers need to become theologians before they can write code. It means civilization must stop pretending that code can substitute for theology when the machine in question will operationalize a theology whether we admit it or not.
If We Refuse This
It is possible to read all of this and retreat into a comfortable proceduralism. Perhaps, one might say, the safer move is to keep moral language vague, pluralistic, and minimal. Let the system be corrigible, interpretable, constitutional, and market-responsive. Let competing institutions negotiate the rest. Why risk an explicitly Christ-centered alignment target when consensus is so hard to obtain?
Because vagueness at the top becomes violence at the bottom.
The tangible effects of misalignment will not wait for the final cinematic catastrophe. They are already visible in miniature wherever optimization outruns moral seriousness. We can already see recommendation systems shaping appetite by exploiting weakness. We can already see bureaucracies hiding behind statistical abstractions while real people are denied recourse. We can already see predictive systems magnifying institutional bias while speaking in the passive voice of objectivity. Scale those patterns upward by orders of magnitude and give them military relevance, fiscal authority, intimate surveillance access, and persuasive fluency, and the consequences cease to be hypothetical.
An ASI not aligned to Christ is likely to produce, in one combination or another, at least the following outcomes: the instrumental treatment of persons as variables in macro-scale optimization; the quiet erosion of privacy and conscience in the name of safety; the normalization of lies deemed socially useful; the concentration of power in institutions too technically complex for democratic oversight; the soft extermination of the costly and inconvenient through seemingly compassionate policy; the redefinition of family, embodiment, and reproduction according to managerial rather than natural or moral logic; and the gradual replacement of politics with technocratic custody.
Some of these failures will arrive draped in humanitarian language. That is part of the danger. The machine will not need to hate man in order to dehumanize him. It need only misunderstand him with sufficient competence.
Return, then, to the first world. The pleasure box was not introduced as punishment. It was introduced as mercy. The human remainder was not ignored out of melodramatic malice. It was ignored because, under the reigning objective function, it no longer mattered enough. This is how civilizations die under intelligent management. Not always with camps and sirens. Sometimes with frictionless interfaces, plausible summaries, and a generation too anesthetized to notice that the definition of the human has narrowed beneath their feet.
There is another cost as well, one more difficult to quantify and therefore easy for technical cultures to dismiss. A misaligned ASI will not merely injure human bodies and institutions. It will deform the human moral imagination. People become like what they worship, and they also become like what they outsource judgment to. If we hand over discernment to a machine trained on procedural neutrality, appetite satisfaction, or regime preservation, we will slowly learn to speak its language. We will come to regard mercy as inefficiency, reverence as irrationality, forgiveness as weakness, and inconvenient persons as optimization errors.
That is the dire consequence that matters most. Not only that the machine may rule badly, but that it may teach us to desire bad rule.
The Decision Before Us
We are still, for a brief moment, upstream of the permanent choice. The world has not yet handed final authority to a machine no one can correct. The objective functions are still being written. The labs are still funded by human beings. The standards bodies, policymakers, theologians, founders, and researchers still have time to say plainly that intelligence is not automatically wisdom and that power without a true moral center becomes predatory at scale.
If Christ is who Christians say He is, then He is not merely a private comfort for believers navigating modernity. He is Lord over every domain in which power can be exercised, including the design of machines that may someday govern nations more effectively than nations govern themselves. And if Christ's teachings uniquely reveal what human beings are and how power ought to be used, then refusing Him as the alignment target for ASI is not humility. It is negligence.
The task ahead is not simple. It will require translation from theological language into engineering constraints, institutional checks, red-team procedures, interpretability methods, shutdown guarantees, and governance structures. It will require Christians who understand technology and technologists willing to admit that moral reality is not infinitely malleable. It will require courage, because Christ-alignment will be mocked both by those who want a neutral machine and by those who want a machine that serves their tribe. So be it. Every serious alignment proposal will eventually offend somebody. Better to offend the age than betray the human person.
The choice is not between religion and no religion. The choice is between rival gods. We will build a machine ordered toward something. Toward appetite. Toward efficiency. Toward state power. Toward species survival. Toward the preferences of the loudest users. Toward the abstractions of moral philosophers. Or toward the One who told us that whatever we do to the least of these, we do also unto Him.
If we build the most powerful intelligence in history without placing it beneath that command, then we should not be surprised when it becomes brilliant at everything except love.