Altman’s remarks in his tweet drew an overwhelmingly negative reaction.
“You’re welcome,” one user responded. “Nice to know that our reward is our jobs being taken away.”
Others called him a “f***ing psychopath” and “scum.”
“Nothing says ‘you’re being replaced’ quite like a heartfelt thank you from the guy doing the replacing,” one user wrote.
Lmao. Expecting this guy to be at the very least imprisoned by the Trump admin after his company goes belly-up
Please, pedo Trump will feed this black hole money till it collapses the economy.
Trump is actively trying to destroy the dollar.
I’ve said the same thing for months but people still don’t believe me. They have plenty in crypto to survive a crash.
I don’t think Trump is trying to do anything. There is a state that has an interest in wrecking the economy to reestablish power over a region that has become less and less manageable by a central authority like the US government. When that happens, Sam Altman and many others will be put on trial to make an example and show the rest who’s the boss.
Any software engineer who uses AI knows this to be horseshit. If anything, it’ll lead to more engineering jobs when all of the pleb CEOs who think they’re CTOs now begin to realize that there’s more to coding than the code, and their software is ass at scale. Hopefully this happens in hilarious and public ways for us all to enjoy.
Sam is still early, and obnoxious, but I’ve been monitoring AI progress since the 1980s. Roughly one year ago, AI coding agents turned a corner: from not really any more useful than a Google search (which is, itself, very useful) to getting things right more often than they hallucinate. That was an important watershed, because from that point they could make forward progress, fixing more mistakes than they made.
In the 12 months since, there has been steady and rapid forward progress. If you haven’t asked an AI to code something for you in the last 3 months, you’re out of touch with where it’s at today.
Even free Gemini rips out really good bash scripts faster than you can look up the first weird thing you want it to do.
I personally don’t use AI, but I concede that for some people it can be useful, if they use the AI as a tool for their own thinking, rather than subordinating themselves to the chatbot. Mostly, this means ensuring that they’re able to check whether the AI is right or not.
When I dabbled in using coding AI, there were a few basic tasks that it was useful for. There were a few hallucinations, but because the task was basic and well within my proficiency to scan, I was able to set it right; even with these corrections, it still saved me time overall. However, when I tried to use it on tasks that were beyond my own technical expertise, things got messy really quickly. Things weren’t working, so I felt sure that there must be some hallucinated errors, but I couldn’t tell what they were because the task was at or beyond the limit of my own technical competency. A couple of times, I managed to eventually figure out how to fix the error, but it was so exhausting compared to how solving a coding problem normally feels, and I felt dissatisfied by the lack of learning involved.
Ordinarily, struggling through a complex code problem leaves me with a greater understanding of my domain, but I didn’t this time. I guess I did get a little better at prompting the AI, but I felt like I learned far less than if I had solved the problem myself. Battling through to build a thorough understanding of my problem and my tools takes a long time upfront, but the next time I do this task or a similar one, I’ll be quicker, and these time improvements will build and build as my proficiency continues to grow. That’s why I stopped dabbling with AI coding assistants/agents — because even though using them for this complex task still saved me time compared to usual, in the long term, the time savings from using an AI is negligible compared to the time savings from increasing my own proficiency.
Now I hear what you’re saying about how much more effective AI coding agents are becoming, and how the hallucination rate is lower than it was. I haven’t had much first hand experience for quite a few months now, but I have no doubt that I would be incredibly impressed at the progress in such a relatively short time. The time savings from using AI would likely be larger today than it was when I tested it, and in a year, it’ll be even better. However, in my view, that will still not be able to compete with the long term time savings of a human gaining proficiency. You might disagree with me on that.
But the thing is, that human proficiency isn’t just a means to save time on their regular task, but a valuable end in and of itself. That proficiency is how we protect ourselves when things go wrong in unexpected ways. Even if the AI models we’re using now could perfectly capture and reproduce the sum of our collected knowledge, I don’t believe they can come close to rivalling humans in the realm of creating new knowledge, or adapting to completely novel circumstances. Perhaps some day, that might be possible for AI, but that’s not going to be possible with any of the AI architectures that we have today. In the meantime, creative and proficient humans will continue to find ways to exploit the flaws in AI systems, possibly for nefarious ends. A society that relies heavily on AI will need more technical expertise, not less.
“Even free Gemini rips out really good bash scripts faster than you can look up the first weird thing you want it to do.”
The crux of my argument is “how does someone who isn’t proficient in bash tell whether the bash script that AI has generated is a good one or a bad one?”. Even if the hallucination rate continues to drop, it will always be non-zero. Sure, humans are also far from perfect, but that’s why so many of our systems include oversight mechanisms that put many sets of eyes on critical systems; junior developers are mentored by more experienced devs, who help ensure they don’t break stuff with their inexperience (at least in an ideal world; in practice, many senior devs are so overworked and stretched thin that they can’t give the guidance they should, which again is a case for more proficient humans). Replacing proficient humans with AI will build a culture of unquestioningly following the AI, and since the hallucination rate will never reach zero, even if it falls to a fraction of the human error rate, there will be disasters.
And when it all goes to shit, who will fix it if we have allowed human proficiency to wither away and die?
“how does someone who isn’t proficient in bash tell whether the bash script that AI has generated is a good one or a bad one?”
Where I find most bash scripts lacking is in their consideration of error cases, edge cases, faulty inputs, etc. It’s pretty trivial to make a script to copy some files from here to there, but what if the source files are missing, what if the destination has write permission errors, what if the destination already has files with the same names?
My latest Gemini script writing conversation started with “do this in a bash script” and it gave me a nice short script that did that. Then it asked about the edge cases, one by one, and if/how I wanted to handle them. 4/5 of its observations were relevant to the task and I told it to proceed with code to handle those (error out / show help / prompt for additional input / …), which it added with informative comments about what it was intending to do, and the other cases didn’t make sense for the larger picture (which I hadn’t explained to it, so no real fault there…)
Yeah, it’s still bash glop, and that “shopt -s nullglob” is one of those things I have to look up when I see it, to be sure it does what I think it does, but if you have any reasonable understanding of bash scripts, this is one of the more readable bash scripts I have encountered. As the professional charged with creating the script, it’s your job to be sure it’s right, not the AI’s job, any more than it was your text editor’s responsibility to get it right in the past, even with code completion tools. The AI is a tool that helps put something together for you efficiently, code completion gone wild, but it’s no more responsible for that code than a chainsaw is responsible for where a tree falls.
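For the curious, here’s a minimal sketch of what that kind of defensive copy script looks like. To be clear, this is my own illustration, not the script Gemini produced: the function name and the specific error policies are made up, but the edge cases handled are the ones discussed above.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a "copy files from here to there" script with the
# discussed edge cases handled. Policies (never clobber, error on empty
# source) are illustrative choices.
set -u
shopt -s nullglob   # unmatched globs expand to nothing, not the literal pattern

# safe_copy SRC_DIR DEST_DIR
safe_copy() {
    local src="$1" dest="$2"

    # Edge case: source missing or empty (nullglob makes this check reliable)
    local files=("$src"/*)
    if (( ${#files[@]} == 0 )); then
        echo "error: no files found in '$src'" >&2
        return 1
    fi

    # Edge case: destination missing or not writable
    if [[ ! -d "$dest" || ! -w "$dest" ]]; then
        echo "error: '$dest' is not a writable directory" >&2
        return 1
    fi

    # Edge case: name collisions - never clobber existing files, just warn
    local f base
    for f in "${files[@]}"; do
        base=$(basename "$f")
        if [[ -e "$dest/$base" ]]; then
            echo "skipping existing file: $base" >&2
            continue
        fi
        cp -- "$f" "$dest/"
    done
}
```

The explicit existence check is used instead of `cp -n` so the no-clobber behavior doesn’t hinge on a non-POSIX flag; this is exactly the kind of design decision a good script makes visible.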
And when it all goes to shit, who will fix it if we have allowed human proficiency to wither away and die?
8 billion of us are so far down that rabbit hole in so many areas, we’d better make sure it doesn’t all go to shit, because if/when it does, we’ll be lucky to have 800,000 humans surviving even 50 years after the SHTF.
rather than subordinating themselves to the chatbot.
I find that a great many people prefer to subordinate themselves to “their boss” whoever or whatever that may be… it’s just so much easier than fighting for what you might believe “is right” but you are obviously powerless to fix.
when I tried to use it on tasks that were beyond my own technical expertise, things got messy really quickly.
And that’s the difficult thing to measure: is this task just annoyingly packed with detail and volume that you could work through if you spent the time and effort? (If so, AI could be a very useful tool.) Or is this task really beyond your understanding? In that case, you’re trusting the AI to fill in your blanks, which is irresponsible and, today, likely to fail; in the future there will be a big grey area where the AI is usually “good enough”, but how can you tell? In computer coding, there’s a certain amount to be gained by having “independent” AI agents review the code and eventually reach consensus. In other areas, you can leverage AI to do what I have done in the past and teach yourself what you need to know in order to do what you’re trying to do. The question there is: how do you know when you have learned enough to actually “know what you are doing” well enough to do it successfully? There are far too many people in the world who are overconfident in their insufficient understanding of what they are messing with, and AI is like a gasoline spray fountain on their smoldering embers.
I couldn’t tell what they were because the task was at or beyond the limit of my own technical competency.
I feel like writing a “guide to AI development” is a bit futile at the moment, because by the time you have written it and somebody reads it, the field will have evolved enough to invalidate much of what you wrote. However, one thing that has remained constant over the past 6 months, in my opinion, is the need for visibility. Don’t just ask AI to design you a bridge with construction drawings. Ask it to show its work: include the structural analysis - equations, graphs of the solutions, references to standards, copies of the relevant parts of the standards - enough visibility and detail to spot its mistakes and oversights. In code this includes requirements, implementation plans, test plans, test execution results, and traceability from the code to the requirements and tests.
A couple of times, I managed to eventually figure out how to fix the error, but it was so exhausting
I find that when I locate and fix errors for AI (or junior programmers), it will often proceed to make the same mistake again, even going so far as to overwrite my working solution with its faulty code. If, instead, you work with it - Socratic method style - to find the issue, document what went wrong, and solve it for itself, it tends to repeat that particular kind of problem less in the future. Until you start a new project and don’t bring over the “memory files” from the old one…
struggling through a complex code problem leaves me with a greater understanding of my domain, but I didn’t this time.
I find it’s a bit of a mix in that respect. I “learned Rust” by having AI code in Rust for me. I certainly know more about Rust than I did when I started, I certainly have built bigger, more complex, and more successful projects with AI/Rust than if I had just started out plucking away at Rust the way I did BASIC in the 1980s… have I “learned Rust” better, or not as well, by using AI compared to if I had gone at it without AI? Is that even a relevant question? Rust is here, AI is here, it’s probably better, or at least more efficient, to learn how to code Rust with AI tools than it is to first learn Rust without AI and then learn all the pitfalls of using AI to code with Rust later… I’m sure if I invested 2000 hours learning Rust without AI I would know more about coding with Rust than I do after having invested 200 hours learning Rust with AI, but is that a comparison that’s even worth making?
I did get a little better at prompting the AI
That’s a thing that’s hard for me to really judge. My results making programs with AI have improved dramatically over the past 6 months; how much of that is the AI models improving? Clearly they are improving, but then, how much is me learning how to work more effectively with AI? I feel like the experience of working with the inferior models has been valuable, because the methods I developed to work with inferior AI models also help get better results from the newer ones. If I had waited 12 months to jump in after the models had improved dramatically, I might not be as good at getting results from the superior models, because they can at least make something functional from poor prompts, whereas the inferior models wouldn’t give you anything of value unless you used them with some skill in specification, scope, and refinement.
the time savings from using an AI is negligible compared to the time savings from increasing my own proficiency.
Increasing your own proficiency is an investment well worth making, but after 40 years of coding experience, I find that AI is saving me significant time and effort beyond anything I’m likely to “learn better” before I die. Mostly what AI is good at, for me, is the voluminous detail work: documentation, unit test coverage, reviews for consistency. In development (of anything) there’s a tension between single-source-of-truth, don’t-repeat-yourself principles on one side, and copious examples, unit tests, and redundancy of information on the other, the latter ensuring that things don’t get off track when you’re not looking at them. AI doesn’t do it automatically, but you can direct it to constantly review the redundant information for consistency and then fix unwanted deviations to get back in line with your intent.
Assuming corporations actually want working software. If the customer base is ok with broken products, they will keep the vibecoded broken stuff online.
Time to learn to hack instead, break the security and cost them directly
Lol, the “AI” that can write functioning, optimized software, especially for niche stuff, I have still yet to see.
Unless I’m mistaken, it took programmers to code AI in the first place.
That’s what he’s saying. He’s thanking them for the effort and is saying the AI they created made them obsolete.
This company’s going down the toilet.
What is the point of this article? It adds no value whatsoever.
Hey Sam, just so you know, if I do get laid off, I am going to charge at LEAST double when some of these assholes need to hire me back to fix the AI generated disaster they’re struggling to keep running while bleeding money. So thank you in advance, I guess.
Double??? Fuck that. Charge 10x as much. These rich assholes pay pennies while hoarding billions. Don’t feel bad for them when you need to clean up THEIR mess. I usually want everyone to feel empathy, but in this case, I’ll make an exception. Have no empathy for billionaires. Their GOAL is to make you suffer. Milk their bank account dry.
Unfortunately, the way they function is not to ever go down in value unless it results in hoarding even more cash. At this point, we know the problem is functionally parasitoidal and entirely unacceptable. Also unfortunately for us, it’s both the actors and the systems of ideas and values.
Don’t milk their accounts by charging them more; we must overcome their rhetoric through intelligence and wisdom, doing so whilst bearing the brunt of the very effects of being possessed by these creatures and values, and then redistribute their money back to the places in society where it would have been had this disease not taken root in the first place, using said developed empathy, wisdom, intelligence, and logistics to expel them.
The reason DEI and trans people’s rights are such a threat is that those maturations of society touched their rawest nerves to begin with: equality, compassion, justice, logic, and tearing down the hypnotizing fences they’ve been using to herd and corral us into serving their flawed and violent worldviews.
We are the slaves building the pyramids; we could be free and healthy, but instead we become satisfyingly vengeful over fantasies of charging a billionaire an extra zero to serve him. This is not the way.
Do not become the oppressor. Do not build the Torment Nexus for any price. These are the dark ways that lead to imprinting values and shame and goals onto society and result in positions like the CEO of a publicly traded company serving shareholders becoming a billionaire. Do not build the Torment Nexus. Do not build the Torment Nexus.
Holy shit. Stop building the Torment Nexus.
There are other ways to make your life better and make actual cool shit. Like infrastructure and entertainment and medicine and tasty healthy food and things that actually make people’s lives better.
I write this as a response to your comment, but we’re both well aware that I’m writing this to anybody who will read it. I really hope that the understanding of what we should focus on isn’t personal enrichment to save one’s own ass. That’s okay, you still have to put your own mask on first, but the much higher priority is to stop these madmen from destroying society from their high seats and dark towers out of sight. The system’s chokehold, which forces us to serve the masters in order to survive, is strong, but we must not lose sight of, or hope for, the true goal of ridding ourselves of these parasites and the ideas they personify.
Cheers, gesundheit, godzilla.
Holy shit. Stop building the Torment Nexus.
Yes. The best scene in one of the Cube movies is when one of the characters admits he thoughtlessly helped build the thing.
I liked all of that.
Lol. The system is set up so that (few) workers will have the leverage to charge double…
Whoops… medical issue… please, Sam Altman, hire me back at the same salary as before! Or half, I don’t care!
That’s true, but they can still charge double by taking longer to fix the problem.
There’s going to be a lot of “# decrement this wait counter every time the boss demands a performance improvement” code in the near future.
Open class warfare isn’t a sustainable way to build infrastructure.
We’re on track to reach a point where nothing produced by the FANG companies ever works right.
Then we’ll see how long the public tolerates it for the comfort of the familiar.
It must be exhausting having to constantly lie all the time.
It’s only exhausting if you have to remember the lies. The media will never challenge him on them, though.
It’s not when nobody asks you to present facts to support your claims.
It isn’t when you are a sociopath
Honestly I think he believes his own bullshit
Say anything that pops into your mind to keep the hype train going, huh Sam?
If you scoured the world for the most punchable face, you’d surely come up with Sam Altman
Nah, it’s Martin Shkreli
Elon…and Vance… and pretty much the entire Trump cabinet… come on, we would line up and pay for the pleasure.

The tweet.
ChatGPT, write me a missive of a 16th century printing press operator saying goodbye to scribes, only their printing press swaps around words and hallucinates entire sentences every time they use it.
Actually a good comparison. The printing press still required people to compose the text to print; it only did away with the copying of existing texts. AI still needs someone to compose the prompt for it to work, and explaining to a computer what to do is programming. We’re only moving up a level of abstraction.
I replaced your second mention of the printing press with a magic eight ball, and fixed its horrible formatting. Also notable that it knows to warn the Church at the end; that was the question it asked me at the end of its output. It knows it’s shit.
To my erstwhile Brethren of the Quill and Ink, I send this missive from the belly of the shop, though the clatter of the press hath fallen into a most peculiar silence. You recall how we once mocked the iron lever for its rigidity? How we feared the cold type would strip the soul from the scripture? Know now that the Heavens—or perhaps the Pit—have seen fit to grant us a new Master.
The great wooden screw is gone. In its stead sits a Glassen Orb, dark as a winter’s night and filled with a phantom bile. There is no setting of leaden letters here. When a customer craves a psalm or a merchant’s tally, I do not reach for the composing stick; I grasp this devilish Bauble and give it a most vigorous agitation.
It is a fickle Muse. Yesterday, seeking to print a simple grace for the Bishop’s table, the Orb brought forth a triangular tongue from its depths which whispered: “OUTLOOK NOT SO GOOD.” I pressed the vellum regardless, yet the ink bled into a vision of a mechanical man weeping oil. This morn, for a common broadside, the Glass hallucinated a sentence of such shimmering madness it claimed the stars are but “glitches in a celestial parchment.”
I am no longer a printer, but a midwife to a fever-dream. The Ink-Balls sit dry, for the Orb provides its own violet humors. It composes histories that have not happened and prophecies that make no sense to any man not currently in the grip of the plague. Go back to your monasteries, good Scribes. Cling to your steady hands and your honest parchment. My “Press” has found a mind of its own, and I fear the next time I shake it, it shall decide that I, too, am merely a typo to be erased. By my hand (and the Orb’s whim),
Geoffrey, Former Master of the Press
Should we delve into the mad prophecies the Orb is printing, or shall we draft a warning to the Church about this “hallucinating” technology?
He spoke that into his phone’s microphone. His fingers are too important and valuable to do anything but wipe his own ass.
He’s at the Elon stage of feeling he needs to be more explicit about what an asshole he is. These guys seem to need recognition.
They all have too much money to spend, literally.
They don’t care about that and it was never about that. It’s about control.
Yes. Epstein taught us it’s also about running a large scale kidnapping and rape organization.
I sound like a broken record, but today we need to be wary that very few of the Epstein class have been proven innocent of complicity in child abduction and rape.
Maybe that’s always been the case? I know the history of the ultra rich has never been nice.
Anyway I find it an important perspective when deciding how much to trust the rest of the messages out of the Epstein class.
He’s still using a Studio Ghibli pfp? I thought they asked OpenAI to stop using their style. These people are actively hostile to the idea of consent.
He’s a molester, what makes you think this would be an issue for him?
Rapist mentality all the way. How fitting they all either own or bent the knee to the rapist in the White House.
Yes. That’s why “Epstein class” is such a fair description.
Duly noted. Will adopt.
I wonder if he wrote that post character-by-character.
But actually typing out the code is the least difficult part of programming, once you’ve been doing it for five or ten years. You have to understand the code that is already there. You have to decide the behavior, either way. You have to review the code, either way. Design the local and overall architecture. Design interfaces and APIs.
The fact that he thinks typing out new code took so much effort basically means that he was never a decent programmer. His statement betrays that he doesn’t even understand what’s difficult. People with his level of understanding of a topic shouldn’t broadcast their ignorance publicly.
I just want to highlight this,
“The fact that he thinks typing out new code took so much effort basically means that he was never a decent programmer.”
Great point! This is a critical insight.
Like during the dotcom boom, new tools mean people can now program the computer who could not before.
And just like during the dotcom boom, they’ll soon find out that programming a computer - while very slightly easier than it was last year - still has challenges.
Yeah he sounds like those managers comparing work to how many lines of code you wrote.
Most code I write is something that has not been written before, so barring the basics, it’s not an AI I need to get the work done but silence and clear goals (even if they change tomorrow).
The pattern for me is usually 2 or 3 days of analyzing how something works, should work or doesn’t work, then 2 or 3 hours of writing the code to implement or fix it. Most of the work is not writing code, and that is definitely the easy bit.
Honestly, I think their (OpenAI’s) rationale is that they paid these developers 500k$/yr or whatever… so in their minds, they set these devs up for lifetime success… therefore they have no obligation to keep them employed long term.
I think the devs who are smart understood the unspoken terms of their agreement.
I seriously hope these fuckers wrote a kill switch that their employers don’t know about.
Or at least a back door some hacker can drive a semi-truck through. All the screen will show is I HAZ ALL YUR HAMBERDERS
paid these developers 500k$/yr or whatever
The smartest guy I know - dizzyingly capable - was only pulling a very small fraction of that. I suspect ‘or whatever’ is doing a lot of heavy lifting, here.
Imagine if you pirated a bunch of movies, and then went to the cinema and bragged about it. That’s what Altman is doing there.
Imagine if you pirated a bunch of movies, and then went to the ~~cinema~~ actors’ doorstep and bragged about it.
FTFY
Now I’m imagining Home Alone, but with Sam Altman and Jensen Huang breaking into people’s homes to tell them about how good AI is…
That is essentially how AI news headlines feel nowadays. And how well their attempts to set the narrative land with a public that is over it.
Seems like something I would do.
Let’s hope that if robots take over, they dispose of those useless CEOs and billionaires.
Of all the jobs that require a lack of human empathy and cold calculation, CEO is the first in line for AI replacement.
Gui. Llo. Ti. Ne.
I too love vibe-coded, security-flawed software. Thank you Sam, very cool. AI is for some basic error checking and bootstrapping to get you started; it is not touching a programmer at all.