An AI designed to perfectly replicate human mistakes fails at its core objective precisely because it succeeds too well. By achieving flawless simulation of flawed behaviour, it exposes the one thing it cannot possess: genuine unintentionality.
The Self-Driving Car That Failed Perfectly
Think of it like this… a self-driving car jerks to a halt at an empty intersection. No pedestrians. No other cars. Just a smudged stop sign that the AI misread as containing extra instructions.
Later, engineers discover something odd. The “mistake” was intentional. Part of a new update designed to make the car drive more like a cautious human. The error was programmed.
Here’s where it gets interesting. What if we built an AI specifically to replicate human mistakes? Not just the occasional glitch, but a system trained on billions of examples of human error. Slips of the tongue. Misremembered names. Poor judgment calls. All perfectly catalogued and reproduced.
“To err is human; to forgive, divine.”
— Alexander Pope
The AI becomes flawless at being flawed. Every mistake calculated. Every slip programmed. Every human fumble executed with machine precision.
But can something that perfectly simulates imperfection ever truly be imperfect?
When Mistakes Become Features
The idea isn’t as far-fetched as it sounds. Tech companies have been working on this problem for years.
Back in 2019, Google’s voice assistant team discovered something unsettling. Their AI was too perfect. It never said “um” or paused to think. Users found it creepy. So they programmed in hesitation. Added artificial pauses. Made it stumble over complex words.
The result? An AI that sounded more human by being deliberately flawed.
Similar patterns emerged across the industry. Chatbots were given personality quirks. Gaming AIs were programmed to make occasional bad moves. Even autocorrect was designed to let some typos through.
The goal was always the same. Make machines seem more human by making them less perfect.
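As a rough sketch of what that deliberate imperfection might look like in code, imagine a small Python function that sprinkles filler words into otherwise clean output. Everything here, from the function name to the filler list and error rate, is illustrative rather than anything a real assistant actually ships:

```python
import random

# Illustrative only: a toy "naturalness" layer that injects hesitations
# into otherwise flawless output. The filler list, rate, and function
# name are assumptions, not a real product's API.
FILLERS = ["um,", "uh,", "hmm,"]

def inject_disfluencies(text, rate=0.08, seed=None):
    """Insert a filler word before roughly `rate` of the words."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        if rng.random() < rate:
            out.append(rng.choice(FILLERS))
        out.append(word)
    return " ".join(out)

print(inject_disfluencies("Your fastest route takes twenty-two minutes.", seed=3))
```

Notice that the "stumble" is just a weighted coin flip. The system never hesitates because it is actually unsure of anything.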
But something fundamental gets lost in translation.
The Philosophy of Genuine Failure
Human error isn’t just behaviour. It’s tied to how we think, or rather, how we fail to think properly.
Take Charles Sanders Peirce’s concept of fallibilism. Writing in the 1870s, Peirce argued that all human knowledge is uncertain. We can never be completely sure we’re right about anything. This isn’t a bug in human reasoning; it’s a feature. It keeps us questioning, learning, adapting.
When an AI “fails” according to its programming, it’s not experiencing uncertainty. It’s executing code. The doubt is simulated, not felt.
René Descartes spent years trying to find something he could know with absolute certainty. His famous “I think, therefore I am” was his answer. But even Descartes admitted that human error comes from our will outrunning our understanding. We make decisions before we fully grasp the situation.
An AI doesn’t have will in that sense. It has parameters. Objectives. Probability calculations. When it makes a “mistake,” it’s often because the training data contained errors, or the situation fell outside its programming. The AI isn’t choosing to act beyond its understanding; it’s functioning exactly as designed.
The Generative Power of Getting Things Wrong
Here’s what makes human error particularly fascinating… It’s often creative.
Alexander Fleming accidentally contaminated a petri dish in 1928. Instead of throwing it away, he noticed something odd. The bacteria around the mould had died. That contamination became penicillin.
Percy Spencer was working on radar equipment when he noticed the chocolate bar in his pocket had melted. Most people would simply have fetched another chocolate bar. Spencer invented the microwave oven.
The Post-it Note exists because Spencer Silver was trying to create a super-strong adhesive and failed. His “useless” weak glue turned out to be perfect for removable notes.
These weren’t just happy accidents. They required what Louis Pasteur called a “prepared mind.” Someone who could recognise that their failure might be pointing toward something unexpected.
Can an AI have a prepared mind? Can it recognise the significance of an unplanned outcome?
Current evidence suggests not quite. AI systems are getting better at pattern recognition, but they’re still fundamentally goal-oriented. They optimise for specific outcomes. Human creativity often emerges from abandoning the original goal entirely.
The Uncanny Valley of Error
There’s something disturbing about perfectly simulated imperfection.
In 1970, Japanese roboticist Masahiro Mori identified the “uncanny valley.” As robots become more human-like, our comfort with them increases. But there’s a point where they become almost, but not quite, human enough. At that point, they become deeply unsettling.
The same principle might apply to AI errors. When a machine makes a mistake that’s too perfectly human, too precisely calibrated, it feels wrong. We sense the algorithm behind the stumble.
Consider this: when you misspeak, there’s usually a reason. You’re tired. Distracted. Thinking about something else. The error has context, history, biological reality.
When an AI “misspeaks,” it’s often because a random number generator told it to, or because the training data suggested a certain probability of error at that moment. The mistake has no biological context, no genuine cognitive overload behind it.
We might not consciously notice this difference, but something feels off.
The Japanese Art of Authentic Imperfection
Japanese aesthetics offer an interesting counterpoint to our AI dilemma.
Wabi-sabi finds beauty in imperfection, but specifically in natural imperfection. The asymmetry of a handmade pot. The discolouration on aged copper. The way wood weathers over time.
The beauty comes from the authenticity of the process. Time, weather, and human hands created these imperfections through genuine interaction with the world.
Kintsugi, the art of repairing broken pottery with gold, takes this further. The cracks become part of the object’s story. They’re not hidden but highlighted. The damage becomes beautiful because it’s real, unrepeatable, tied to a specific moment when something broke.
Could an AI create kintsugi? Technically, yes. It could calculate optimal crack patterns, apply digital gold leaf, and create convincingly “authentic” repairs.
But would they mean the same thing? The philosophical weight of kintsugi comes from embracing real damage, real time, real history. An AI’s simulated cracks, however perfectly rendered, lack that grounding in lived experience.
The Intentionality Problem
This brings us to the heart of the paradox.
Philosopher Donald Davidson argued that actions are only truly intentional when they’re caused by the right combination of belief and desire. You flip a light switch because you believe it will turn on the light and you want the room illuminated.
“What we call chaos is just patterns we haven’t recognised yet.”
— Chuck Palahniuk
Human mistakes are typically unintentional. They happen despite our intentions, not because of them. You meant to say one thing but said another. You intended to grab your keys but grabbed your phone instead.
An AI’s “mistakes” are always intentional in Davidson’s sense. The system was programmed to occasionally generate errors. It “believes” (in so far as it believes anything) that producing human-like mistakes will achieve its goal of seeming more natural. It “desires” to fulfil this objective.
So the AI’s errors aren’t errors at all. They’re successful executions of an error-simulation program.
This creates a logical impossibility. For the AI to truly replicate human error, it would need to make genuine mistakes, unintended deviations from its programming. But if it could do that, it would be malfunctioning, not simulating.
The Control Paradox
The deeper you dig, the more the contradictions pile up.
Human imperfection is valuable precisely because it’s uncontrolled. Our mistakes lead to serendipitous discoveries because we genuinely didn’t plan them. Our creative insights often come from abandoning our original intentions.
But an AI that perfectly simulates human flaws maintains perfect control over its simulation. It calculates the optimal type and frequency of errors. It generates mistakes according to statistical models of human behaviour.
This perfect control over imperfection simulation reveals the fundamental difference. The AI has mastered the appearance of losing control, yet never actually loses control.
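A minimal sketch makes the point concrete. Assume, purely for illustration, that the simulated slips are drawn from a seeded statistical model; every name below is hypothetical:

```python
import random

# A toy model of "controlled imperfection": slips are sampled from a
# seeded distribution, so every supposedly unplanned error can be
# replayed exactly.
def simulated_slips(words, error_rate=0.05, seed=42):
    rng = random.Random(seed)
    return [(i, w) for i, w in enumerate(words) if rng.random() < error_rate]

sentence = "the quick brown fox jumps over the lazy dog".split()
print(simulated_slips(sentence))  # e.g. [(1, 'quick')]
```

Run it twice and the same "accidents" land in the same places. That repeatability is exactly what never happens to a person.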
It’s like an actor playing a drunk person. They might stumble convincingly, slur their words perfectly, capture every nuance of intoxication. But they’re sober throughout the performance. They’re in complete control of their simulation of being out of control.
The performance might be flawless, but it’s not genuine intoxication.
Where This Leads Us
So has our hypothetical AI failed by succeeding too well?
The evidence points to yes. By achieving a perfect simulation of human flaws, the AI demonstrates exactly what it lacks: the capacity for genuine unintentionality.
Real human errors emerge from our cognitive limitations, our biological constraints, and our tendency to act before we fully understand. They’re tied to consciousness, to the subjective experience of being confused or distracted or simply overwhelmed.
An AI can catalogue these behaviours, analyse their patterns, and reproduce their statistical properties. But it can’t replicate the underlying reality. The experience of not meaning to do something.
This isn’t necessarily a problem for most AI applications. We don’t need our navigation systems to genuinely experience confusion. We just need them to work reliably.
But it does reveal something important about the nature of consciousness, intentionality, and what makes human experience unique.
Our imperfections aren’t just behavioural quirks to be mimicked. They’re windows into the kind of beings we are. Finite, embodied, conscious creatures stumbling through a complex world we never fully understand.
When an AI perfectly simulates our stumbles, it shows us exactly what it cannot be: genuinely uncertain, authentically confused, truly surprised by its own mistakes.
The perfected flaw exposes the imperfect simulation. And perhaps that’s the most human thing of all.
Sources
Sources include: Charles Sanders Peirce’s writings on fallibilism (1870s); René Descartes’ Meditations on First Philosophy (1641); Donald Davidson’s Essays on Actions and Events (1980); Masahiro Mori’s “The Uncanny Valley” hypothesis (1970); documented cases of serendipitous scientific discoveries including Fleming’s penicillin research (1928), Spencer’s microwave development, and Silver’s Post-it Note adhesive work; Google AI development documentation on voice assistant naturalness improvements (2019); Japanese aesthetic philosophy including wabi-sabi and kintsugi traditions; contemporary AI research on generative models and error simulation.