Imagine you’re at a hospital, and your doctor is using AI to help decide who should receive care first.
It sounds efficient—a smart system prioritizing patients based on urgency.
But what happens when the algorithm doesn’t treat everyone equally?
That’s exactly what happened with an algorithm developed by Optum, a subsidiary of UnitedHealth Group, which was widely used by major hospital systems across the United States. A 2019 study revealed that the algorithm significantly underestimated the health needs of Black patients compared to white patients with similar medical conditions.
Hospitals had adopted the tool to identify patients for high-risk care programs, hoping to catch serious conditions early and improve outcomes. But over time, a troubling pattern emerged: the algorithm prioritized white patients more often than Black patients, even when those Black patients had more severe medical needs.
So, what went wrong?
The AI had been trained to predict who would benefit most from extra care by looking at how much money had been spent on each patient’s healthcare in the past. It seemed logical — more spending typically means more serious illness, right?
Historically, less money has been spent on Black patients, not because they were healthier, but because of long-standing disparities in access to care, trust in the system, and treatment decisions. The algorithm couldn’t see these social factors — it simply learned from the data it was given. As a result, it underestimated the needs of patients who had historically been underserved.
Once this issue came to light, the developers made a crucial change. They rebuilt the model using direct health data like blood pressure, lab results, and existing medical conditions — rather than relying on financial records as a stand-in for health. This shift led to a much more accurate and equitable system that better reflected who actually needed care, regardless of race or socioeconomic status.
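The core of that fix is easier to see in code. Below is a minimal, hypothetical sketch in Python (scikit-learn assumed, synthetic data, invented column names) of the change described above: training against a direct clinical label instead of past spending, plus a basic check of how often each group gets flagged for extra care. It illustrates the idea only; it is not the Optum model.

```python
# Illustrative sketch: swap a cost-based proxy label for a direct clinical label.
# All column names and the synthetic data are invented for demonstration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "systolic_bp": rng.normal(130, 15, n),
    "hba1c": rng.normal(6.5, 1.2, n),
    "age": rng.integers(30, 90, n),
    "prior_year_cost": rng.gamma(2.0, 3000.0, n),           # billing history
    "uncontrolled_chronic_disease": rng.integers(0, 2, n),   # direct clinical label
    "race": rng.choice(["Black", "white"], n),
})

features = df[["systolic_bp", "hba1c", "age"]]

# Proxy target (the original, problematic choice): high past spending.
proxy_target = (df["prior_year_cost"] > df["prior_year_cost"].median()).astype(int)

# Direct clinical target (the fix): need measured from labs and diagnoses.
clinical_target = df["uncontrolled_chronic_disease"]

X_tr, X_te, y_tr, y_te = train_test_split(features, clinical_target, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# A basic bias check: compare how often each group is flagged for extra care.
flags = pd.Series(model.predict(X_te), index=X_te.index)
print(flags.groupby(df.loc[X_te.index, "race"]).mean())
```

The point of the sketch is the choice of target variable, not the particular model: any learner trained on spending will inherit the spending disparities, while a label built from clinical measurements at least measures the thing we actually care about.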
This case offers a powerful lesson. AI systems, no matter how well-intentioned, can mirror the flaws of the society they learn from. In healthcare, those flaws can be life-threatening. That’s why it’s essential to use diverse data, continually test for bias, and remain vigilant at every stage of development.
Ethical AI in healthcare isn’t just about getting the technology right — it’s about making sure it works for everyone.
Final Thoughts: Building AI You Can Trust
AI can be transformative, or it can go terribly wrong. Whether we’re designing recommendation engines or diagnosing diseases, we need AI to be transparent, ethical, and accountable.
The good news? We already have the tools, the frameworks, and the knowledge. Now we need to use them and hold ourselves to higher standards. Because the future of AI isn’t just about power. It’s about trust.
4. Verifiable AI — From Blockchain Anchors to Zero-Knowledge Proofs
We trust our GPS to get us home, our smart assistants to answer our questions, and our AI writing tools to make sense of language. But can we trust how these systems actually work?
As AI models grow more powerful and opaque, a new frontier of transparency is emerging: verifiable AI.
The idea is simple but powerful: prove that an AI system is doing what it claims to do, without relying on blind faith. Whether it’s a large language model generating legal advice or an image recognition system screening for medical issues, verifiability ensures that what’s happening under the hood can be checked, traced, and held to account.
What Is a Verifiable Model?
In basic terms, a verifiable AI model is one that can prove its actions, inputs, and outcomes — either to a human or to another system. This means:
You can trace where its data came from
You can confirm which model version was used
You can validate that it wasn’t tampered with after training
And in some cases, you can audit a decision without revealing sensitive inputs — thanks to cryptographic tools
These capabilities are vital in high-stakes areas like healthcare, finance, law, and governance — where decisions need to be transparent, defensible, and accountable.
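To make the list above concrete, here is a minimal Python sketch of one of the simplest building blocks: fingerprinting the model weights and training data with SHA-256 and recording them in a signed manifest that can later be re-verified or anchored to a public ledger. The version string, placeholder byte contents, and shared key are illustrative assumptions; real systems would use proper key management, and the privacy-preserving auditing mentioned above would rely on zero-knowledge proofs rather than a plain HMAC.

```python
# Illustrative sketch: a verifiable record of which model and data were used.
import hashlib, hmac, json, time

def sha256_bytes(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# In practice these would be the serialized model file and the training-data
# snapshot; byte literals stand in here so the sketch runs as-is.
model_weights = b"...serialized model weights..."
training_data = b"...training dataset snapshot..."

manifest = {
    "model_version": "risk-model-2.1.0",                   # which model version was used
    "weights_sha256": sha256_bytes(model_weights),          # detects tampering after training
    "training_data_sha256": sha256_bytes(training_data),    # traces where the data came from
    "created_at": int(time.time()),
}

# Sign the manifest so a party holding the key can verify who produced it.
# Publishing the manifest hash to a public ledger (a "blockchain anchor")
# would add an independently checkable timestamp.
payload = json.dumps(manifest, sort_keys=True).encode()
signature = hmac.new(b"demo-secret-key", payload, hashlib.sha256).hexdigest()

# An auditor repeats the computation over the published manifest and compares.
recomputed = hmac.new(b"demo-secret-key", payload, hashlib.sha256).hexdigest()
assert hmac.compare_digest(signature, recomputed)
print(json.dumps(manifest, indent=2))
```

The specific tooling matters less than the property it gives you: anyone holding the manifest can later recompute the hashes and confirm that the model and data they are auditing are the ones that were actually used.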