Okay. Let's slow down.

The previous posts on this blog have explored some pretty wild ideas — AI politicians, human zoos, the works. And those are fun to think about. But let's be honest about something: the AI we have today is not the AI in those thought experiments. Not even close.

⚠️ Reality Check

Current AI systems are deeply flawed. They reproduce the biases in their training data. They hallucinate facts. They can be manipulated. They're not impartial. They're not infallible. They're not even consistently competent.

The Current State of Affairs

That, in short, is what's actually happening with AI right now. Any discussion of AI in politics, AI as arbiter, or AI managing human flourishing has to start here.

The Low Bar

That said, and this is the awkward part: the current human political system isn't exactly great either.

Now, here's the uncomfortable question: if we're comparing "flawed AI that tries to optimize for human flourishing" to "flawed humans who optimize for re-election"... is the gap as large as we'd like to think?

The bar isn't "AI should be perfect." The bar is "AI should be better than the current alternative." And the current alternative is... not good.

What Would Actually Need to Be True

For any of the speculative ideas in previous posts to make sense, several things would need to be true: AI that doesn't reproduce the biases in its training data, doesn't hallucinate facts, resists manipulation, and performs consistently.

We're not there. We might never get there. But the question is worth asking: if we could get there, would we want to?

The Alternative

Maybe the answer isn't "AI politicians" or "AI manages everything." Maybe the answer is more humble: AI as a tool that helps humans make better decisions — not a replacement for human judgment.

Think decision support, not decision-making: AI that informs human judgment rather than replacing it.

Tools like that don't require AI to be perfect. They require AI to be useful, and for humans to remain in the loop.

The speculative posts on this blog are thought experiments. They're not blueprints. They're not proposals. They're ways of thinking through possibilities — while being clear-eyed about where we are today.

Current AI is biased, flawed, and dangerous if mishandled. That's not a reason to ignore it — it's a reason to be honest about it. And maybe, just maybe, to be a little less impressed with the human systems we're comparing it to.

The future isn't written yet. What we do now — how we build, how we regulate, how we think — will shape whether AI ends up as a tool for human flourishing or another disaster. The bar is low. That's not a reason to give up. It's a reason to try harder.