What If AI Doesn’t Just Judge Us — But Trains Us to Judge Ourselves Differently?
A recent study in Harvard Business Review asked a simple but unsettling question:
What happens when candidates know they’re being evaluated by AI?
The answer wasn’t just technical. It was human.
Researchers Jonas Goergen, Anne-Kathrin Klesse, and Emanuel de Bellis found that people adjust their behaviour—not to highlight their strengths, but to fit what they think the machine wants.
They downplay empathy.
Suppress creativity.
Lead with logic.
In other words: they become a machine-optimised version of themselves.
Not because it’s true to who they are.
But because it feels safer.
This Isn’t Gaming the System. It’s Self-Erasure by Design.
We often frame AI bias in terms of what the system fails to see.
But what if the deeper risk is what people stop showing?
When individuals mute uniqueness, flatten expression, or edit identity to match algorithmic preferences, something profound happens: they stop bringing their full selves.
We call it efficiency.
But it may be conformity by another name.
From Objectivity to Obedience?
AI was supposed to remove human bias from hiring.
But in trying to make the process more objective, we may be encouraging a new kind of performance:
🔹 Conformity bias
🔹 Creativity suppression
🔹 Emotional flattening
When people believe systems reward only what’s quantifiable, they stop offering what’s not.
And in doing so, we filter out the very traits we say we want more of: leadership, resilience, innovation, empathy.
This goes far beyond hiring.
The same behavioural rewiring is showing up in performance reviews, internal mobility, university admissions—even public services.
A Strategic Blind Spot in Plain Sight
This is what I call the emotional cost of AI.
Not just fear, fatigue, or resistance—
but the subtle reshaping of identity itself.
It’s a cost we rarely measure.
But one we can no longer ignore.
Human-Aware AI: Designing for Alignment, Not Just Accuracy
We don’t just need ethical AI.
We need human-aware AI.
That means designing tools that optimise for what matters: not just the signals that predict performance, but the qualities that actually reflect value, meaning, and humanity.
It’s about aligning measurement with meaning.
And protecting what’s human—before it becomes optional.
The Good News? We Can Still Choose Differently.
AI doesn’t have to flatten us.
When applied with intention, it can amplify emotional intelligence, surface hidden potential, and reward the kinds of capabilities traditional systems overlook.
But only if we design for it.
Only if we stop asking, “What can the machine do?”
And start asking, “What does it honour?”
This is one of the core ideas I explore in my upcoming book,
Realistic Optimism: The greatest threat in AI isn't the tech itself; it's how we choose to adopt it.
The book invites leaders, designers, and policymakers to embed emotional intelligence, human dignity, and inclusive design at the heart of AI transformation—not as a footnote, but as a feature.
🔗 You can sign up for updates and early access here:
👉 www.realisticoptimism.ai