Problems with AI
I just watched Abigail Thorn’s Philosophy Tube video, “AI Is an Ethical Nightmare.” Spoiler: she’s right. Thorn breaks AI harm into two buckets:
1. The prediction itself.
2. How that prediction gets used in the real world.
The example she used was airport body scanners, aka the “penis detectors.” Operators must select “male” or “female” before you step inside. Trans and gender‑nonconforming folks get flagged as “anomalous” and subjected to pat‑downs, effectively outing them in public. It illustrates how an AI system can impose its designers’ narrow worldview onto real human bodies.
Coming from a software‑engineering background, my knee‑jerk reaction was simple: retrain the model with trans-inclusive data. But that’s just a band-aid. The deeper ethical nightmare stems from the way AI systems are fundamentally designed and trained.
Data: The Core Issue
Modern AI models (LLMs, image generators, vision systems) devour massive amounts of data no single entity fully controls. Companies hoover data wherever they can:
1. Public Web Scrapes
- GPT‑style mass collection from blogs, tweets, and forums. No one explicitly consented to having their personal content monetized for corporate profit.
2. Semi‑Private Platforms
- Google mined YouTube videos to train video models. Your content might be public to viewers, but that doesn’t make it a free asset for deepfake factories.
3. Synthetic Data
- Sounds good until you realize it’s generated from already biased datasets, perpetuating biases downstream.
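The “no consent, no data” principle from the co‑op idea below can be made concrete in code. This is a minimal sketch, not anyone’s real pipeline: the `Record` structure and its `consent` flag are hypothetical stand‑ins for whatever provenance metadata a consent‑respecting collector would track.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """A hypothetical training-data record with provenance metadata."""
    source: str    # e.g. "blog", "youtube", "synthetic"
    content: str
    consent: bool  # did the author explicitly opt in?

def consented_only(records: list[Record]) -> list[Record]:
    """Keep only records whose authors explicitly opted in: no consent, no data."""
    return [r for r in records if r.consent]

corpus = [
    Record("blog", "a scraped post", consent=False),
    Record("co-op", "a voluntarily pooled post", consent=True),
]
kept = consented_only(corpus)  # only the opted-in record survives
```

The point isn’t the filter itself but where the flag comes from: scraped data can never truthfully set `consent=True`, which is exactly why scraping-first pipelines fail this test by construction.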
Genuine Solutions to this Mess
“Move fast and patch later” isn’t cutting it. Real fixes need to prioritize autonomy, dignity, and actual human flourishing from the outset:
Consent-Based Data Co‑ops: People voluntarily pool data and receive genuine benefits or payment. No consent, no data.
Participatory AI Design: Affected communities (e.g., trans travelers) actively shape how systems handle their identities.
Risk‑Tiered Regulation:
Unacceptable (Ban): Gender verification, predictive policing, social scoring.
High Risk: Hiring, healthcare, credit scoring require external audits and mandatory human oversight.
Limited: Chatbots, recommender algorithms require transparency and clear opt-outs.
Minimal: Spell checkers, spam filters require basic transparency and oversight.
Right to Explanation and Compensation: If AI decisions negatively impact you, you deserve a clear explanation, human review, and tangible compensation.
Ethical Audits (a WOF for AI): Regular safety checks, like a car’s Warrant of Fitness. Models failing standards are taken offline, no excuses.
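The risk tiers above are essentially a policy table, and a regulator (or a cautious engineering team) could encode them as one. A minimal sketch, assuming a hypothetical mapping from use case to tier drawn from the examples in this post:

```python
from enum import Enum

class Tier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "external audits + mandatory human oversight"
    LIMITED = "transparency + clear opt-outs"
    MINIMAL = "basic transparency and oversight"

# Hypothetical use-case-to-tier table, following the examples above.
USE_CASE_TIERS = {
    "gender_verification": Tier.UNACCEPTABLE,
    "predictive_policing": Tier.UNACCEPTABLE,
    "social_scoring": Tier.UNACCEPTABLE,
    "hiring": Tier.HIGH,
    "healthcare": Tier.HIGH,
    "credit_scoring": Tier.HIGH,
    "chatbot": Tier.LIMITED,
    "recommender": Tier.LIMITED,
    "spell_checker": Tier.MINIMAL,
    "spam_filter": Tier.MINIMAL,
}

def deploy(use_case: str) -> str:
    """Refuse banned use cases outright; otherwise report required safeguards."""
    tier = USE_CASE_TIERS[use_case]
    if tier is Tier.UNACCEPTABLE:
        raise ValueError(f"{use_case} is banned: no deployment path exists")
    return f"{use_case}: requires {tier.value}"
```

The design choice that matters is that the unacceptable tier has no compliance path at all: the function raises rather than returning a checklist, because some systems shouldn’t exist no matter how well audited.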
The Bottom Line
We can’t algorithm‑tweak our way out of ethical nightmares. AI must be built on consent, respect, and transparency: the foundations that genuinely support human flourishing instead of eroding it.