Built for the World, Not by It: The Missing Voices in AI
A Global Look at Invisible Stakeholders
We often speak about inclusion in AI as though it’s a design feature—something you apply once the system is already built. Like installing a wheelchair ramp after the building is up, or adding subtitles once the film is finished.
But inclusion isn’t an add-on.
It’s not a postscript.
It’s a question of authorship.
The more fundamental question is not who we design for—but who gets to shape the design in the first place.
Because right now, that design table is surrounded by a very narrow circle.
And the silence around it is louder than we care to admit.
Whose Systems? Whose Voices?
AI may claim to be universal—but its design often reflects a narrow band of geographies, languages, and worldviews. The systems cross borders. The decision-making rarely does.
Across continents, the same pattern repeats. The systems are global. But the authorship is not. AI is being deployed in Nairobi, Jakarta, and São Paulo—but still overwhelmingly built in San Francisco, London, and Beijing. This is not just a gap in geography. It’s a gap in power, context, and cultural fluency.
We’re not just missing inclusion.
We’re missing global balance.
The Sound of an Empty Chair
Imagine sitting in a meeting where a voice is missing: not because that person didn't speak up, but because they were never invited. That's what AI development feels like for entire communities: Global South regions, neurodivergent users, non-English speakers, and many others whose perspectives are either poorly represented in data or not represented at all in decision-making.
These groups aren’t just “underserved”—they’re structurally excluded.
Take language, for instance. Most AI models are trained overwhelmingly on English-language data. That means if your first language is Swahili or Bengali, your experience is filtered through a system that wasn't trained to understand you. It's like trying to have a conversation with someone who only half-speaks your language, then making life-altering decisions based on that exchange.
Or think about neurodivergence. AI systems trained on normative patterns of behaviour might flag anything different as an anomaly. But who defines “normal”? If someone with autism, ADHD, or dyslexia uses your platform, and the system misinterprets their intent or flags their behaviour unfairly, the issue isn’t with the user—it’s with the frame.
And then there’s the Global South—home to billions, yet so often on the receiving end of technologies built elsewhere, for different contexts, with different assumptions. These communities don’t just face exclusion in datasets. They’re excluded from the table where the systems themselves are imagined, scoped, and governed.
Designing the Future with the Wrong Blueprint
We would never build a bridge without surveying both sides of the river. Yet that’s how many AI systems are being developed—built from one vantage point, one worldview, one linguistic frame, and then exported globally as though intelligence were universal and context-free.
But intelligence is not universal. It’s shaped by culture, language, history, and lived experience.
When we ignore that, we don’t just risk bias—we risk irrelevance.
And in some cases, harm.
Inclusion as Infrastructure, Not Intention
There’s a dangerous assumption that inclusion can be patched in later. That you can “diversify” your datasets or “localise” your outputs post-launch. But inclusion doesn’t work like that.
You can’t retrofit the foundation without cracking the structure.
Inclusion has to be part of the architecture. It has to be designed into the system—into the teams, the questions we ask, the problems we choose to solve. Otherwise, we end up solving for the few, and exporting those solutions to the many.
And when the system fails, we act surprised—when in fact, the exclusion was baked in from the start.
The Real Risk of a Narrow Table
In some ways, the problem isn't malicious. It's structural. Rooms get built around convenience and comfort. Teams get hired through networks that mirror the people already inside them. Product decisions get made by the people who happened to be in the room at the time.
But that’s exactly the problem.
Because once those systems scale—into healthcare, hiring, education, finance—the cost of exclusion becomes systemic. And it’s not just about who gets access. It’s about who gets agency.
If you’re not at the table, your realities don’t shape the system. Your needs don’t define the roadmap. Your voice doesn’t influence what “fairness” means.
And that’s the deeper issue: AI isn’t just a tool. It’s a map of human priorities.
When some voices are missing from the map, they’re missing from the future we’re drawing.
Pulling Up More Chairs
So the question isn’t simply: How do we make AI more inclusive?
The better question is: Who have we made invisible in the process?
At its heart, this piece asks a simple question: Who's missing from the AI table?
But here’s a harder one—Whose table is it to begin with?
Because if AI is truly global in impact, then its authorship must be global in voice.
Anything less risks building systems that speak for the world, without listening to it.
It’s time to pull up more chairs.
To fund research led from the margins.
To build governance structures that don’t just feature global voices as guests—but as architects.
Because if AI is to serve humanity, it needs to reflect it.
All of it.