My Personal Reflections
In the summer of 2020, as the pandemic raged and classrooms shuttered, thousands of British students found their futures suddenly derailed. An algorithm, designed to standardize grades in the absence of final exams, downgraded the marks of students from poorer schools at far higher rates than their wealthier peers [1]. The outcry was swift and fierce: students marched, parents protested, and the government was forced to reverse course [2]. But for many, the damage was already done. It is a grim reminder that in the age of artificial intelligence, the promise of technological progress can mask the reinforcement of old divides.
This was not an isolated incident. Across continents and industries, algorithms now shape who gets a job, a loan, an apartment, or even a hospital bed. For those already on society’s margins, these systems often feel less like engines of opportunity and more like invisible gatekeepers, quietly amplifying the very inequalities they claim to transcend.
The Data Trap
Consider the hypothetical example of Aisha, a Kenyan market vendor who, after a string of successful microloans, was abruptly denied further credit by her digital lending app. She never learned why. The app’s algorithm, trained on data that reflected not just her payment history but also her neighborhood, phone contacts, and even the time of day she made calls, had decided she was too risky. In reality, Aisha’s “risk” was less about her behavior and more about the digital residue of her social world, a world shaped by generations of economic exclusion.
This is the paradox of algorithmic bias: algorithms are only as fair as the data that feeds them.
In practice, that means the prejudices of the past, such as who had access to credit, who was policed, and who was hired, are encoded into the systems that now decide the future. In the United States, a 2018 MIT study [3] found that commercial facial recognition systems misclassified darker-skinned women up to 35% of the time, compared with less than 1% for lighter-skinned men. In India, automated hiring tools have been shown to filter out applicants from lower castes, perpetuating entrenched hierarchies [4]. Cathy O’Neil, in Weapons of Math Destruction, argues that such systems function as models of oppression, cautioning that algorithms not only replicate historical patterns but also reinforce existing social hierarchies by automating the status quo.
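To make the mechanism concrete, here is a minimal sketch in Python, using synthetic data and hypothetical feature names of my own: a model trained on historically biased lending decisions reproduces the old disparity through a proxy feature (here, neighborhood) even after the protected attribute itself is removed.

```python
# Illustrative sketch with synthetic data: bias in historical decisions
# leaks back in through a proxy feature, even when the protected
# attribute is excluded from training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic history: "group" is the protected attribute, and
# "neighborhood" is a proxy strongly correlated with it.
group = rng.integers(0, 2, n)                              # 0 = group A, 1 = group B
neighborhood = (group + rng.random(n) < 1.1).astype(int)   # proxy: almost all of A, few of B
ability = rng.normal(0, 1, n)                              # true repayment ability, same for both groups
# Past approvals were biased: qualified members of group B were rejected half the time.
biased_rejection = (group == 1) & (rng.random(n) < 0.5)
past_approval = ((ability > 0) & ~biased_rejection).astype(int)

# Train WITHOUT the protected attribute, using only "neutral-looking" features.
X = np.column_stack([ability, neighborhood])
model = LogisticRegression().fit(X, past_approval)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
# An approval gap persists because `neighborhood` stands in for group membership.
```

The point is not the specific numbers, which are invented, but the pattern: dropping the sensitive attribute does not drop the history it left behind.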
The Black Box and the New Gatekeepers
These systems pose a heightened risk because of how little insight they offer into their own functioning. Earlier in computing history, a programmer could walk through the logic of a program line by line. Now, deep learning models operate as black boxes, their inner workings often indecipherable even to their designers. When companies further obscure these processes by declaring their algorithms trade secrets, they deny transparency and leave those most affected, often the most vulnerable populations, without answers or remedies.
For those affected by automated decision-making, such as a hypothetical Kenyan market vendor like Aisha or a Detroit family denied a mortgage by an opaque credit algorithm, the lack of transparency is not just an abstract issue but a daily challenge. Unable to see the reasoning behind these decisions, they have no way to contest or appeal outcomes that can profoundly shape their lives. Virginia Eubanks, in Automating Inequality, documents how welfare recipients in the United States are frequently denied benefits by inscrutable eligibility algorithms, often with severe consequences. The California Report on Frontier AI Policy similarly warns that without transparency, it becomes nearly impossible to audit automated systems for fairness or accuracy, leaving those impacted with little recourse or explanation.
Compounding Disadvantage
Algorithms do not just reflect inequality; they can entrench it. In Brazil, residents of favelas are over-policed by AI-powered surveillance [5]. In the gig economy, workers from Nairobi to New York find their access to jobs and income shaped by platform algorithms [6]. These algorithms can change overnight, with no warning, no explanation, and no recourse.
These systems create feedback loops that are difficult to escape. Predictive policing tools send more patrols to “high-risk” neighborhoods, leading to more arrests and reinforcing the original data [7]. Automated resume screening tools may inadvertently filter out job applicants from underrepresented groups, limiting the diversity of new hires and narrowing opportunities for those who already face barriers to employment. Each decision, opaque and unaccountable, becomes another brick in the wall of systemic exclusion.
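A toy simulation, with entirely made-up numbers, illustrates how such a loop sustains itself: two districts with identical underlying offence rates, a patrol allocation that follows the recorded data, and records that in turn follow the patrols.

```python
# Toy feedback-loop simulation (illustrative numbers only).
# Both districts have the same real offence rate, but district 0 starts
# with more recorded incidents. Patrols follow the records, and records
# follow the patrols, so the original skew keeps confirming itself.
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([100.0, 100.0])   # identical underlying offence rates
recorded = np.array([60.0, 40.0])      # historical records already skewed

for period in range(10):
    patrol_share = recorded / recorded.sum()       # "high-risk" district gets more patrols
    # Offences only enter the data when a patrol is present to observe them,
    # so new records scale with patrol presence.
    new_records = rng.poisson(true_rate * patrol_share)
    recorded = recorded + new_records
    print(f"period {period}: patrol share to district 0 = {patrol_share[0]:.2f}")
# District 0 keeps receiving roughly 60% of patrols, and the data keeps
# "proving" that the allocation was right.
```

Even though nothing distinguishes the two districts on the ground, the skewed starting data keeps confirming itself, period after period.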
Whose Future? The Global and Intersectional Stakes
Algorithmic inequality is not confined to any one country or group. In the U.K., the A-level grading debacle disproportionately harmed students from disadvantaged schools. In China, social credit algorithms reinforce regional and class divides [8]. In Europe, digital surveillance technologies, including facial recognition and algorithmic risk assessment tools, are increasingly deployed in ways that disproportionately target Roma communities, subjecting them to heightened monitoring and intervention [9]. When multiple axes of identity intersect, such as race, gender, class, and disability, the harms multiply. Yet most systems are not designed to recognize or mitigate these layered risks.
The stories are as varied as the people who live them:
In India, Dalit job seekers report being filtered out by automated resume screeners that flag “low-caste” surnames.
In Kenya, informal workers are excluded from digital credit due to lack of “acceptable” data, deepening economic precarity.
In the U.K., students from poor neighborhoods are algorithmically downgraded, jeopardizing university admissions.
The Democratic Deficit
As algorithms grow more powerful, they increasingly concentrate decision-making authority among a select few. The California Report on Frontier AI Policy draws parallels to past regulatory failures in the tobacco and energy industries, arguing that information asymmetry undermines both effective oversight and public welfare. When citizens are unable to scrutinize or debate the systems that govern their lives, democratic accountability erodes. Frank Pasquale’s The Black Box Society similarly warns of a future where unelected code controllers wield outsized influence, threatening to replace democratic governance with a technocratic oligarchy.
Technology’s Promise and Limits
Not everyone views technology as a force for exclusion. Leading thinkers such as Peter Diamandis and Steven Kotler in The Future is Faster Than You Think have consistently highlighted the power of digital tools and their potential to empower individuals and communities. In India, for example, mobile banking platforms have brought millions of previously excluded people into the formal financial system [10]. The California Report on Frontier AI Policy also recognizes this transformative potential, noting that AI innovations are poised to create historic opportunities for humanity.
Technology’s impact is not set in stone, and experts continue to debate how likely certain risks are to materialize. Algorithms have the potential to lower barriers and expand opportunity, but only when they are intentionally designed to promote equity and only if the societies using them are committed to addressing the inequalities that automated systems might otherwise reinforce.
Paths Toward Algorithmic Justice
The first step is transparency. The California Report calls for public-facing transparency in data acquisition, safety practices, security, pre-deployment testing, and downstream impact. The EU’s AI Act gives individuals a right to an explanation of decisions made with high-risk AI systems. Brazil’s data protection law [11] includes similar provisions. Technical solutions like Explainable AI (XAI) and open-source auditing tools (such as IBM’s AI Fairness 360 and Google’s What-If Tool) can help, but only if paired with regulatory teeth.
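As an illustration of the kind of check such auditing tools automate, here is a small, self-contained sketch that computes two common disparity measures from a hypothetical decision log; the column names and the four-fifths threshold are illustrative assumptions on my part, not a legal standard.

```python
# Minimal fairness-audit sketch on a hypothetical decision log.
# Computes statistical parity difference and disparate impact between
# a privileged and an unprivileged group.
import pandas as pd

def audit_outcomes(df: pd.DataFrame, group_col: str, outcome_col: str,
                   privileged: str, unprivileged: str) -> dict:
    """Compare favorable-outcome rates between two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    p_rate, u_rate = rates[privileged], rates[unprivileged]
    return {
        "privileged_rate": p_rate,
        "unprivileged_rate": u_rate,
        "statistical_parity_difference": u_rate - p_rate,
        "disparate_impact": u_rate / p_rate,
        "flag": (u_rate / p_rate) < 0.8,   # rule-of-thumb threshold, not a legal test
    }

# Hypothetical loan-decision log (1 = approved).
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})
print(audit_outcomes(decisions, "group", "approved", privileged="A", unprivileged="B"))
```

Toolkits like AI Fairness 360 and the What-If Tool bundle many such metrics with visualization and mitigation options; the sketch above only shows the basic arithmetic they build on.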
However, transparency alone is not enough. The report advocates for participatory governance, involving affected communities in the design and oversight of algorithmic systems. Canada’s Algorithmic Impact Assessment requires public consultation for high-impact government algorithms. New York City’s Automated Decision Systems Task Force included civil society and community advocates in its review process. True participation means redistributing power, not just consultation.
Reparative audits, where companies identify and address past harms, are being piloted in the U.S. and U.K. Targeted investments in affected communities, public ownership of critical digital infrastructure, and legal remedies for those harmed by algorithms are essential. The California Report suggests adverse event reporting systems to monitor post-deployment impacts and continuous regulatory adaptation. Threshold-based regulation (by computational cost or downstream impact) can ensure oversight keeps pace with technological change.
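To show what a threshold-based rule might look like in practice, here is a hedged sketch; the compute and impact thresholds, field names, and tier labels are placeholders of my own invention, not criteria drawn from the California report or any statute.

```python
# Hypothetical threshold-based oversight rule (placeholder values).
from dataclasses import dataclass

@dataclass
class SystemProfile:
    training_flop: float          # estimated training compute
    monthly_affected_users: int   # rough downstream-impact proxy

def oversight_tier(profile: SystemProfile,
                   flop_threshold: float = 1e25,
                   users_threshold: int = 1_000_000) -> str:
    """Map a system onto an illustrative oversight tier."""
    heavy_compute = profile.training_flop >= flop_threshold
    broad_impact = profile.monthly_affected_users >= users_threshold
    if heavy_compute and broad_impact:
        return "full audit + adverse event reporting"
    if heavy_compute or broad_impact:
        return "adverse event reporting"
    return "baseline transparency"

print(oversight_tier(SystemProfile(training_flop=3e25, monthly_affected_users=50_000)))
```

The hard design question such a rule raises is where to set the thresholds, and how to keep them current as compute costs fall and deployment footprints grow.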
Empowering individuals to understand and challenge algorithmic systems is also critical. Digital literacy programs, critical thinking curricula, and public awareness campaigns can help citizens resist manipulation and demand accountability. Some industry actors are taking steps: IBM’s AI Fairness 360, Microsoft’s Responsible AI Standard, and the Partnership on AI all provide frameworks for responsible development. But voluntary measures are no substitute for regulation and community oversight.
The Road Ahead
Algorithmic fairness must be intersectional. Systems must be designed to recognize and address the layered harms faced by those at the crossroads of race, gender, class, and disability. This requires centering the voices of those most affected in design, in governance, and in redress.
California’s policy efforts, the EU’s AI Act, and global grassroots movements show that change is possible. But real progress requires sustained, collective action: demanding transparency, supporting community oversight, investing in reparative justice, and refusing complacency.
Human autonomy, the right to decide for ourselves free from undue interference, is central to our dignity, but it is not enough.
True empowerment comes from human agency: the capacity not only to make choices, but to act on them, shape our environments, and influence the systems that affect our lives. In the digital age, we must move beyond simply granting people the freedom to be left alone by technology, and instead champion their role as active participants and co-creators of the digital world. Only by ensuring that both autonomy and agency are respected can technology truly serve humanity, rather than risk deepening its divides.
References
[1] The Guardian. (2020). Who won and who lost when A-levels meet the algorithm. https://www.theguardian.com/education/2020/aug/13/who-won-and-who-lost-when-a-levels-meet-the-algorithm
[2] BBC News. (2020). A-levels and GCSEs: U-turn as teacher estimates to be used for exam results. https://www.bbc.com/news/uk-53810655
[3] MIT News. (2018). Study finds gender and skin-type bias in commercial artificial-intelligence systems. https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212
[4] Context News (Thomson Reuters Foundation). (2023). Racist, sexist, casteist: Is AI bad news for India? https://www.context.news/digital-rights/racist-sexist-casteist-is-ai-bad-news-for-india
[5] Context News (Thomson Reuters Foundation). (2023). Brazil turns facial recognition on rioters despite racism fears. https://www.context.news/surveillance/brazil-turns-facial-recognition-on-rioters-despite-racism-fears
[6] World Economic Forum. (2025). The gig economy is booming, but is it fair work? And other trends in jobs and skills this month. https://www.weforum.org/stories/2025/06/the-gig-economy-ilo-labour-platforms/
[7] NAACP. (2024). Artificial intelligence in predictive policing issue brief. https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief
[8] Brookings Institution. (2018). China's social credit system spreads to more daily transactions. https://www.brookings.edu/articles/chinas-social-credit-system-spreads-to-more-daily-transactions
[9] European Digital Rights (EDRi). (2021). Roma & Sinti rights, Resistance & Facial Recognition: RYF in Conversation. https://edri.org/our-work/roma-sinti-rights-resistance-facial-recognition-ryf-in-conversation/
[10] Harvard Kennedy School. (2023). How India leapfrogged financial inclusion. https://www.hks.harvard.edu/centers/mrcbg/programs/growthpolicy/how-india-leapfrogged-financial-inclusion
[11] Brazil. (2020). Brazilian General Data Protection Law (LGPD), Law No. 13,709/2018, as amended by Law No. 13,853/2019. Article 20. https://iapp.org/resources/article/brazilian-data-protection-law-lgpd-english-translation/