
Watching AI


I scroll X every day. It’s been a habit for years. You open the app for a quick look, and suddenly you’re an hour deep.

Lately, when I’m reading Kaspa posts (though tbh this isn’t specific to Kaspa), I keep running into the same pattern: posts that look solid at first glance, even convincing… until you notice one part that’s just flat-out wrong.

Not “I disagree with this” wrong. Factually wrong. A detail that doesn’t add up. A confident statement built on nothing.

And it puts me in an awkward spot.

Because I don’t want to be the guy correcting everyone all day. That’s not a fun identity (I’ll leave that to others lol). It’s also not a good use of my time. So most of the time I just move on.

But it feels like the volume is going up.

I’m not a Kaspa authority. I get things wrong like anyone else. What I’m talking about is the basic, checkable stuff. The “spend five minutes reading” level. When a thread can’t clear that bar but is written like it’s certain, it starts to feel less like enthusiasm and more like farming for a payout.

And the Kaspa community is welcoming, which is one of the things that drew me in to begin with. The downside is that big claims sometimes get waved through with zero evidence or scrutiny. I admit, I used to be part of that. I probably still am. But IMO it does make the community easy to exploit. We’ve already seen enough “projects” (ProbFi = Rug, Xodex = Rug, KDX = Rug (not the wallet), Chainge?) come and go, with real people losing real money.

I’m not reacting to someone being wrong or having a different take. That happens. The thing that bugs me is the feeling I get of: “I didn’t check anything, but I’m going to post it confidently anyway.”

But the posts themselves are only half of it. The replies are worse.

You’ll see a long thread. Then you open the replies and it’s like stepping into a room where everyone is speaking in the same voice. Same phrases. Same weird energy. You click through a few profiles and it keeps going: the person who posted is pushing the same type of content over and over, and the people replying are doing the same.

Maybe it’s not literally bots every time. But a lot of it looks automated. It doesn’t read like something a real person would write.

And this is where it gets messy, because I like AI. A lot. It has genuinely changed my life.

The pace is wild. In one year, we’ve gone from “wow, it can write a paragraph” to code gen that’s genuinely useful, video gen that’s almost indistinguishable from real footage (they solved the Will Smith eating spaghetti test), and music that people genuinely want to listen to (suss out the “Many Men” 50 Cent country remix). Even for the smaller stuff: if you’re not using AI in some way, you’re getting left behind by the colleague who is.

So I’m not anti-AI. I’m not nostalgic for some pre-tool golden age. Humans weren’t pumping out pure thoughtful prose before this.

But what I’m seeing on X isn’t “people using AI to communicate better.” It’s people using it to fill the feed.

The “dead internet” theory has been around for ages, but this feels like a newer version of it.

Not empty. Not silent.

Busy. Loud. Endless.

A feed full of posts that technically say something, but also nothing at the same time. No lived experience behind them. No real cost to being wrong. No downside to spamming. Just slop.

And then that content gets answered by more content, and the whole thing blurs.

I’m a supporter of creators getting paid. I like the idea that attention and value can translate into money. I even aimed for that at one point here on X.

But paying for engagement creates a job: post whatever triggers replies and impressions.

So you get an ecosystem where the lowest-effort content has a reason to exist. If it pays, people will do it. “Don’t hate the player, hate the game,” I guess?

Maybe. But the game still shapes the culture. And I don’t love what it’s shaping.

Part of me wonders if this is cyclical.

In crypto communities especially, it feels seasonal:

When the thoughtful posters step back, the feed doesn’t stay empty. It gets occupied. And the easiest occupant is the slop generator.

That might be all this is. A phase.

Or it might be the direction we’re heading.

So where’s the line?

This is the part where I don’t feel clean about any of it, because I’m not standing outside the problem pointing in. I use AI tools too.

I use Wispr Flow to dictate thoughts. It doesn’t write my opinions for me, but it smooths things out. It removes the pauses and the repeated words you get when you’re thinking out loud. I use GPT Pro for researching (at this point it’s replaced Google for me) and brainstorming. Then I run it all through Grammarly and fix the ugly bits.

That’s already a filter. A layer between what I said and what ends up on the page.

Is that fine? Is that already part of the same slide?

I honestly don’t know. But I still feel a difference between using AI to clean up something I actually think and letting it generate the content for me.

One feels like typing faster. The other feels like autopilot.

And here’s the part that changes how I behave on X: I don’t correct most of it publicly, because sometimes the correction is the product. If the goal is replies and impressions, being wrong on purpose works. Rage bait works. “Tell the internet the wrong answer” works.

So I mostly mute and move on. If someone seems genuine and it’s worth it, I’ll message them or reply once. Otherwise you’re just feeding the machine.

My simplest test lately is this: would I stand behind this if someone pushed back?

Because the worst part of the slop isn’t that it’s written by a model. It’s that it’s confidently incorrect, and the poster doesn’t care enough to notice.

What makes me wonder is where this goes as it keeps accelerating (Jensen ain’t slowing down his GPU sales anytime soon).

A point where nothing is raw anymore. No rough edges. No pauses. No “I’m not sure.” No imperfect sentence that tells you there’s a human on the other side of the screen.

Just polished text generated, edited, optimised, posted, replied to, and reposted.

A button you press to turn a vague feeling into a neat paragraph. Another button to turn that paragraph into engagement bait. Another to produce the replies.

Maybe we’re already halfway there.

And I’m sitting here, scrolling, trying to decide what kind of participant I want to be.

Maybe the dead internet isn’t bots pretending to be people.

Maybe it’s people opting out of thinking because the machine can talk for them.

Signed, a guy shouting into the void.

