Problems with AI

I just watched Abigail Thorn’s Philosophy Tube video, “AI Is an Ethical Nightmare.” Spoiler: she’s right. Thorn breaks AI harm into two buckets:

  1. The prediction itself.

  2. How that prediction gets used in the real world.

The example she uses is airport body scanners, aka the “penis detectors.” The operator must choose male or female before you step inside. Trans and gender‑nonconforming folks get flagged as “anomalous” and subjected to pat‑downs, effectively outing them in public. This illustrates how an AI system can impose its designers’ narrow worldviews onto real human bodies.

Coming from a software‑engineering background, my knee‑jerk reaction was simple: retrain the model with trans-inclusive data. But that’s just a band-aid. The deeper ethical nightmare stems from the way AI systems are fundamentally designed and trained.
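To see why retraining alone is a band-aid, consider a toy sketch (entirely hypothetical; not the real scanner software, and the region values are invented for illustration). The ethical choice lives in the output space itself: retraining only moves decision boundaries *within* the two templates, and can never represent anyone the designers left out.

```python
# Hypothetical sketch of the design flaw, not actual scanner code.
# The hard-coded binary label space is the problem; no amount of new
# training data changes what the system is capable of representing.

TEMPLATES = {  # invented per-region "expected" values, for illustration only
    "male":   {"chest": 0, "groin": 1},
    "female": {"chest": 1, "groin": 0},
}

def scan(body: dict, operator_choice: str) -> str:
    """Compare a scanned body against the operator-selected binary template."""
    expected = TEMPLATES[operator_choice]
    anomalies = [region for region, value in body.items()
                 if expected.get(region) != value]
    return "flag for pat-down" if anomalies else "clear"

# A body matching the chosen template passes...
print(scan({"chest": 1, "groin": 0}, "female"))  # clear
# ...but anyone outside the two templates is "anomalous" by construction.
print(scan({"chest": 1, "groin": 1}, "female"))  # flag for pat-down
```

Swapping in “trans-inclusive data” would only retune the templates; the forced binary choice, and the anomaly framing that follows from it, is baked into the architecture.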


Data: The Core Issue

Modern AI models (LLMs, image generators, vision systems) devour massive amounts of data that no single entity fully controls. Companies hoover up data wherever they can:

1. Public web scrapes: text, images, and code collected en masse, with consent assumed by default.

2. Semi-private platforms: forums and social networks whose terms of service quietly license user posts for training.

3. Synthetic data: output of existing models fed back in as training input, inheriting whatever biases those models already carry.
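The consent gap in the first bucket is structural. Robots.txt, the web’s opt-out mechanism, is a polite convention rather than an enforcement mechanism, as a quick sketch with Python’s standard-library parser shows (the crawler name and URLs are made up):

```python
# Sketch of why "public" scraping sidesteps consent: robots.txt only
# tells well-behaved crawlers what a site owner *asks* them to skip.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A hypothetical training-data crawler checking the rules:
print(rp.can_fetch("SomeAICrawler", "https://example.com/blog/post"))   # True
print(rp.can_fetch("SomeAICrawler", "https://example.com/private/x"))   # False
# Nothing technically stops a crawler from fetching the disallowed URL
# anyway; compliance is voluntary.
```

Opting out after the fact does even less: data already scraped into a training set stays in the model.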


Genuine Solutions to this Mess

“Move fast and patch later” isn’t cutting it. Real fixes need to prioritize autonomy, dignity, and actual human flourishing from the outset.


The Bottom Line

We can’t algorithm-tweak our way out of ethical nightmares. AI must be built on consent, respect, and transparency: the foundations that genuinely support human flourishing instead of eroding it.