The Right to Explanation
You’re already getting shafted by unexplainable AI decisions, and it’s only going to get worse as we race towards an AI-first future. It’s hard to even know which systems are judging you, much less the reasoning behind those judgements.

Training data, implementation details, and system prompts can all be biased, leading to unfair decisions in job applications, loan approvals, and even criminal sentencing. These black-box systems leave people in the dark and without recourse; whether that’s working as intended depends on who you ask.

Efficiency Over Explainability
Regulation lags behind or is out of touch with the actual needs of the public. Capitalism rewards efficiency, so it’s no surprise that the corporations controlling AI have not prioritized explainability. Developers chase innovation and performance instead of implementing existing transparency standards. The demand for explanations will legitimately slow development down, but like accessibility or privacy, it still needs to remain a priority. Draft standards exist; they just need to be enforced.

The Problem With Transparency
AI bias disproportionately affects marginalized communities, so the stakes of getting this wrong are not evenly distributed. As AI becomes more pervasive, the pressure for transparency is real, but transparency is harder than it sounds.
As systems get more complex, it becomes harder to distill their behavior into simple explanations without losing the nuances of the underlying logic. Human-friendly explanations of inhuman processes may hide the very biases we are trying to surface. There’s also the confidence problem: an “explained” system that satisfies a regulatory requirement can mislead people into overestimating how well they understand it. Cathy O’Neil makes this case in Weapons of Math Destruction: false confidence in complex systems doesn’t help anyone.
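To make that concrete, here’s a minimal sketch of a LIME-style local surrogate explanation. The “black box,” the features, and every number in it are invented for illustration:

```python
import numpy as np

# Toy "black box": the decision hinges on an interaction between two
# inputs (imagine income and existing debt), not on either one alone.
# Everything in this sketch is invented for illustration.
def black_box(X):
    return (X[:, 0] * X[:, 1] > 0.25).astype(float)

rng = np.random.default_rng(0)

# Explain one decision LIME-style: sample points near the instance,
# fit a local linear surrogate, and read its weights as "the explanation".
x0 = np.array([0.5, 0.5])                       # the applicant being scored
neighborhood = x0 + 0.1 * rng.normal(size=(500, 2))
labels = black_box(neighborhood)

# Ordinary least squares on [feature1, feature2, intercept].
A = np.column_stack([neighborhood, np.ones(len(neighborhood))])
coef, *_ = np.linalg.lstsq(A, labels, rcond=None)

# The surrogate reports one tidy weight per feature and is silent about
# the interaction term that actually decides the outcome.
print("surrogate weights:", coef[:2])
```

The surrogate honestly reports a local trend for each feature yet says nothing about the interaction that actually drives the decision. That silence is exactly the nuance a human-friendly explanation can flatten.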
And explanations aren’t static. As models evolve, their biases and decision-making processes may also shift, meaning yesterday’s explanation may be wrong today. Any real accountability framework has to account for constant model changes — versioned documentation, not checkbox compliance.
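What versioned documentation could look like in practice is an open question; here is one hypothetical sketch, with every name and field invented rather than drawn from any standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record type: what "versioned documentation" could pin down
# so an explanation can be checked against the model that produced it.
@dataclass(frozen=True)
class ExplanationRecord:
    model_id: str        # which system made the decision
    model_version: str   # the exact build, not "latest"
    data_snapshot: str   # hash of the training data in use at the time
    decision: str        # the outcome being explained
    rationale: str       # the human-readable explanation
    issued_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# All values below are placeholders.
record = ExplanationRecord(
    model_id="loan-scorer",
    model_version="2024.10.3",
    data_snapshot="sha256:9f2c...",
    decision="denied",
    rationale="debt-to-income ratio above threshold",
)

def is_stale(record: ExplanationRecord, deployed_version: str) -> bool:
    # Yesterday's explanation may describe yesterday's model.
    return record.model_version != deployed_version

print(is_stale(record, deployed_version="2024.11.0"))  # True
```

Tying an explanation to an exact model version and data snapshot makes it checkable: if the deployed model has moved on, the explanation is flagged as stale instead of silently trusted.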

AI’s Perspective
“Current discussions often miss the intersectionality of AI decisions—how they disproportionately affect marginalized communities. Additionally, there’s a lack of emphasis on educating the public about AI literacy, empowering individuals to demand explanations and challenge unjust decisions.”
ChatGPT o1-preview 2024-10-29
As AI controls more decisions in our lives, we need to examine the algorithms and training data that power those decisions. The EU AI Act is a start, and groups like the AI Now Institute and the Partnership on AI are pushing for the standards and enforcement we still need. Consumer pressure matters too.
