

Digital Oil Rig is such a great analogy for multiple reasons.


“Digg watches what 1,000 of the most thoughtful voices in AI are paying attention to, and ranks the stories they’re pointing at by what’s rising fastest,” Rose said.
…
It also uses AI from services such as xAI, OpenAI, Anthropic and Google Gemini to “classify public accounts, summarize public posts and linked articles, describe public media, generate topic labels, score public content and power search.” That would explain how Digg quantifies user sentiment into a percentage for its posts.
Focus on AI. AI apologists from Xitter. Powered by AI summarizers.
Barf.


That’s bad, but at the same time, there are public data aggregators that sell your info, and you could go out right now and find your name, phone numbers you’ve had, former and current addresses, people you’ve lived with, former and current jobs, etc. It’s kind of terrifying what counts as “public information,” and yet they continue to be allowed to operate with virtual impunity.
I’m not saying we should just accept things, but cutting off this “training data” would be a great start, and if companies can’t, then they should be forced to cease operations (including AI chatbots).


It’s like when the internet first became available to the general public, and we had to constantly remind people, “Don’t believe everything you read. Nobody has to tell the truth.” I’m still unsure we ever learned that lesson, but unlike the early internet, AI is already hated by a majority of people.


I wasn’t actually trying to argue. I wanted to get to the bottom of why OP felt it was AI slop. This was and is an earnest question, because if y’all are seeing something, I’d like to know what that is, so I can stay attuned to the various tricks people use to hide when an LLM is speaking for them.


I mean, that doesn’t mean they had an LLM write their article(s). There’s no shortage of people who praise AI, and they don’t need an LLM to do it. That’s a pretty weak connection.
I may not agree with their stance on AI at all, and I think it’s fundamentally flawed, but I’m not going to accuse someone of outsourcing their writing when all we’ve got is “but they like AI, tho.”


Why do you think so? I’m not seeing anything that immediately screams “AI wrote this,” but I’m open to learning.


Sounds like it’s still an alternative!


Bandcamp
https://blog.bandcamp.com/2026/01/13/keeping-bandcamp-human/
And unlike streaming sites, you own what you buy, and the artist gets paid much better; you’d have to stream the same song 200+ times with a premium Spotify account to generate the same profit for the artist as just buying the song outright on Bandcamp.
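That 200+ figure holds up as a back-of-the-envelope estimate. Here’s a minimal sketch, assuming a ~$0.004 average Spotify per-stream payout (actual payouts aren’t public and vary widely), Bandcamp’s standard ~15% cut on digital sales, and a $1 track price:

```python
# Rough break-even estimate: how many streams equal one Bandcamp purchase?
# All figures below are assumptions for illustration, not official rates.
SPOTIFY_PER_STREAM = 0.004  # assumed average per-stream payout, USD
TRACK_PRICE = 1.00          # assumed Bandcamp track price, USD
BANDCAMP_CUT = 0.15         # assumed Bandcamp revenue share on digital

def breakeven_streams(price=TRACK_PRICE, cut=BANDCAMP_CUT,
                      per_stream=SPOTIFY_PER_STREAM):
    """Streams needed for the artist to earn what one purchase pays out."""
    artist_take = price * (1 - cut)
    return artist_take / per_stream

print(round(breakeven_streams()))  # ~212 with these assumed figures
```

With those assumed numbers, the artist’s take from one $1 sale (~$0.85) equals roughly 212 streams; lower the per-stream rate or raise the track price and the gap only widens.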
Be honest. Did you have an LLM write that? Because boy howdy, does it read the way LLMs output text, right down to missing the point.
Regardless, in only one place did I mention climate change effects, and that was in passing as the last item in a list of issues with AI and LLMs in particular. That was on purpose.
You can throw out the environment as an argument entirely, and accept Andy Masley’s entire premise (I don’t), and AI still has plenty it needs to reckon with.
That’s called the black swan fallacy. “I’ve never personally seen it, therefore it’s not real.” That just sounds like cope and projection. You feel like this “loud minority” is preventing you from being so open about your love of AI, and so you pretend like everyone is secretly on your side.
If you were to get specific, people are generally receptive to very specific use cases, like tailor-made models for assisting medical diagnosis or security analysis of code. What people almost universally hate is the slop produced by generative AI, particularly in creative niches where what these machines produce isn’t art but a pale shade of it. Yet, these billionaires keep trying to shove GenAI down everyone’s throats at every turn, all the while ruining hobbies (see the RAM/SSD supply chain), livelihoods, health, rights (see Palantir; see who owns these tools), and the planet.
So yeah, if you don’t have a visceral reaction to someone shilling AI, I don’t believe you’re really that far left. The tools that broadly exist are not the tools of, nor for the benefit of, the people.


Treat an AI like the idiot intern without any references you just hired.
My company is in the process of pivoting hard to Claude after 50yrs of doing virtually everything themselves and rolling their own versions of already-existing software, and this is almost verbatim how I’ve described to others what it feels like to use it.
It feels like cajoling an intern to understand a job for which they have some average skill but zero motivation, and they only want to do the bare minimum, so you spend all the time you could be doing your job holding their hand through basic tasks.
It’s fucking annoying.


A few years ago, I had an acquaintance who was trying to join the CIA. She got several rounds into the interview process but ultimately took a job at Meta (who she was also interviewing with). She got fired in one of the later mass layoffs, but she chose to work there. She knew the kind of company they were and are, and she was like, “They gonna pay me lots of money? Then I’m in!” And look how that worked out.
There is no world in which these companies are doing anything good, and if you think they are, then you’re the ignorant rube they want to help bring about technofascism, and they’re fine tossing you into their grinder as meat whenever they please.
Descending into mass AI Psychosis?
The correct questions


That’s essentially what I’m doing right now, and thus far, they still want workers who understand the code. However, my manager has already said that his boss had it compose a few scripts, and he thought he could therefore replace an entire workflow.
Thankfully, my manager talked him down and pointed out that it still got several nontrivial things wrong and that taking humans out is dangerous when it comes time to push to production.
But it’s concerning to see that the higher-ups don’t understand what it is or what its limitations are.


My company is pivoting hard to Claude for everything, and besides the fact that it’s irritating as fuck to use, it has me worried about shenanigans like the ones in this article. For almost 50 years, they’ve had a “no reliance upon 3rd-party platforms for core functions” policy, but since they hired an AI apologist into the C-suite, all of that has gone out the window in a matter of months.
Got me thinking I should warm up my resume…


Because there are alternatives. You don’t have to use the subway if it breaks down, and people have enough brains to take a taxi or walk instead.
This is 60 people going, “Fuck, the subway is down. Guess I can’t travel anywhere, now.”


Stop spreading doom and gloom before the fight has even begun. That would be a good start.
“He’s eccentric! A visionary!” —Rich assholes, probably