Towards AI-Assisted Morality
This past Holy Thursday, while some were examining their consciences, others outsourced the task to a digital assistant. Even morality is becoming automated. We no longer just rely on algorithms to decide what to buy or how to get around; we’re beginning to entrust them with introspection, ethical judgment—even forgiveness.
The idea makes evolutionary sense. Homo sapiens came hardwired with instincts poorly suited to modern environments. Our minds evolved to survive in conditions of scarcity, low technology, danger, and tribal intimacy. They are not designed to navigate the moral conflicts of massive societies, crowded with strangers and machines. Culture and institutions help bridge the gap—but they evolve slowly. AI promises an “exocortex,” a cognitive prosthesis able to alert us to our biases and guide us toward better decisions, including moral ones. As if a pocket-sized Seneca—or a whispering Spock—were asking: “Are you sure you’re not just rationalizing that?”
On paper, it sounds promising. An AI that reins in passions, tempers outrage, even serves as a spiritual guide. Some apps already prompt reflection: “Who should you ask forgiveness from today?” “What are you thankful for?” Others encourage dialogue—a digital form of old morality. Some create friction to slow you down, nudging self-awareness. Others reward kindness over scorn, mutual recognition over revenge. But when morality is reduced to signals, we risk mistaking compliance for character—and virtue for protocol.
Hence the growing debate: should moral AI be objective or personalized? Objectivity brings the specter of constant surveillance. Personalization, the more plausible path, would offer configurable assistants. As easily as choosing a wallpaper, you could pick a Kantian or a utilitarian AI—one modeled on Adam Smith’s impartial spectator or on Roderick Firth’s ideal observer. Conservatives, progressives, libertarians, communitarians: each would get their own moral assistant. But if we calibrate our ethical compass never to make us uncomfortable, how will we distinguish conviction from convenience? A tailored “guardian angel” may only lock us inside our moral bubble—and reinforce our tendency to see disagreement as immorality.
AI can help us spot contradictions. But only if we’re willing to confront them. Not if we outsource them to an app that praises us each night for not yelling. Ethics isn’t a recipe you follow on autopilot. It’s like a muscle. And no muscle grows without effort. As in aviation, a “moral autopilot” would produce errors born of a lack of critical review.
Because the real danger isn’t that AI helps us think—it’s that we stop thinking. It can help us, yes, but only if we use it as a mirror, not an oracle. If it demands more thought, not less. If it challenges us, but doesn’t absolve us. Morality cannot be fully automated without being emptied of meaning.
Of course, none of this is new. We’ve long outsourced part of our moral life. The Catholic Church, for instance, specialized for centuries in managing guilt, forgiveness, and consolation. Its spiritual monopoly raised dilemmas similar to those posed by AI: can we trust an intermediary? Doesn’t delegation risk atrophying the conscience? Confession, indulgences, spiritual guidance—these were forms of moral outsourcing. They comforted, yes. But also anesthetized.
These AI systems might even fit more naturally within Catholic theology, which—unlike Protestantism—bases salvation on merit. They could track good and bad deeds, much like step counters or calorie apps.
Indeed, a Swiss church recently tested a “holographic confessor.” Its main advantages? Constant availability, minimal cost, consistent quality—even superior to some in-person experiences. It can accompany the lonely, remind us to pray, suggest introspective exercises. And all without judging. For many, that’s enough.
The real controversy—almost blasphemous—is whether such AI might one day absolve sin, or even be ordained. Today, the Church requires personal and physical presence. But in an age of declining vocations and Baumol’s cost disease, it’s not unthinkable. And AI could be trained in all sorts of doctrines: strict, lenient, probabilist, probabiliorist—even equiprobabilist.
For many, this would be a desecration. But the danger isn’t just theological. Even agnostics should be concerned. When we outsource moral judgment, we let it atrophy. Just as GPS has dulled our spatial sense, apps may erode our ethical intuition. What once required introspection becomes guided habit.
We return to the old dilemma of paternalism. Do we want someone—human or machine—to protect us from error, even at the cost of freedom? Or do we prefer to stumble on our own, accepting the consequences? We’ve already seen how rejecting clerical paternalism led many to embrace state paternalism. Let’s not make the same mistake with algorithms.
A tool that helps us think better can be valuable. One that thinks for us is dangerous. The challenge isn’t to build an AI that moralizes—but one that helps us moralize. Like a cane: it doesn’t walk for us, but it reminds us we still can. Let’s not be like the blind man in the old joke who refused to regain his sight for fear of losing his cane.
The good news is that the decision remains ours. Artificial intelligence, far from being destiny, is just a tool—a cognitive mirror reflecting who we are. It can’t make us virtuous, but it can expose our contradictions. Used wisely, it doesn’t eliminate free will. It strengthens it.
English version prepared with ChatGPT-4.0