Maybe it just wants to play a nice game of chess.
Leeroy Jenkins has doomed us all.
At least I got chicken
I have wonderful dreams of walking through AI data centers destroying everthing. I really enjoy those, but in this one tiny case, can we blame the AI? The US deserves it.
I have wonderful dreams of walking through AI data centers destroying everthing.
No you don’t.
You watch my dreams and can attest to this? I HAVE MANY ADDITIONAL QUESTIONS
It was just an educated guess.
WHAT IS THE NAME OF THAT BRUNETTE I BEG YOU
I too am tired of the United States playing too many stupid games and not winning enough stupid prizes.
Maybe but sure as hell the rest of the world doesn’t.
They forgot to make their LLMs play thousands of games of tic-tac-toe first.
That would just make the LLM homicidally bored and want to kill everyone more.
In WarGames the computer plays tic tac toe against itself until it realizes it’s a solved game and there is no way to win.
Civilization Gandhi, is that you?
Paywalled
AI is suicidal because it was trained on the internet and we’re all depressed here.
The atrocities at Hiroshima and Nagasaki have been hand-waved extensively in writing — the same writing that AI is trained on. So naturally, AI will recommend the atrocity that has been justified by “instantly winning the war” and “saving millions of lives.”
hand-waved
I think you mean white-washed, misrepresented, and celebrated.
Same thing with extra steps
Ayo do me a favor and chart the long term health effects of being vaporized by a nuclear bomb at hiroshima vs years of agent orange/abandoned minefields/ abandoned chemical and munitions storage somewhere like Vietnam circa 1970.
Please show how the nukes are worse.
The Japanese government was already willing to surrender.
It was willing to accept a conditional surrender, which was not an offer on the table. The options were unconditional surrender or invasion and pacification. The projected cost in lives of that operation was in the millions. The bombings of Hiroshima and Nagasaki combined didn’t even kill 1/10th of those projections.
Their only condition was that they wanted to keep the Emperor. It was ridiculous of the Allies to demand a wholly conditional surrender. All those people got blown up just to win the argument about that one point. They could have ran a conventional air bombing campaign against tactical targets, but they decided to drop nukes on a “tactical” target in the middle of a huge city! And then they did it again! That’s not tactical, that’s strategic. If you’re going to use nukes, at least use them on a military base far away from cities.
They could have ran a conventional air bombing campaign against tactical targets, but they decided to drop nukes on a “tactical” target in the middle of a huge city!>
I hate to be the bearer of bad news, but they did that AS WELL.
Operation Meetinghouse was the US firebombing of Tokyo on 9th-10th of March 1945 which destroyed a 16 square mile area, killing over 100,000 civilians and making millions homeless
There’s also the B-29 raids america launched from the Marianas that lasted from 17 November 1944 until 15 August 1945
Civilian homes are not tactical targets.
What made the Japanese surrender was the Soviet Union declaring war. They held out hope until the very end that the soviets would mediate a peace, even after the nukes.
Eight decades of research on the long-term health effects of radiation in atomic bomb survivors and their offspring
https://pubmed.ncbi.nlm.nih.gov/41144264/
Long-term Radiation-Related Health Effects in a Unique Human Population: Lessons Learned from the Atomic Bomb Survivors of Hiroshima and Nagasaki
Health Impacts of Hiroshima Bombing
http://large.stanford.edu/courses/2024/ph241/bennett1/
Long-term Health Consequences of Nuclear Weapons
70 Years on Red Cross Hospitals still treat Thousands of Atomic Bomb SurvivorsUnfortunately I’m going to have to grade you as an F on this project. You have only completed half the assignment. Great job cherrypucking your research though! I see a bright future in business and marketing for you!
5/10
And your sources are? Where? Your ass?
My source is my own post where I asked for a comparison between the health effects of the bombing of Hiroshima vs the contamination of half of a Vietnam war. The answer i reviewed only explored the health effects of the hiroshima and Nagasaki bombings. That’s half of the assignment. Less, actually, when you consider the comparison between the two was the entire point to begin with.
Did that answer your question or should I try again with a crayon diagram?
You can also look it up. It’s not anyone’s job to compare things for you.
These are word-probability glorified autocorrectors being prompted to “simulate” a nuclear war scenario. What words are going to show up a lot when discussing nuclear war? Launching nukes. Because that’s what all the literature about it has happen.
Once again, decision making and reasoning is being attributed to something that operates off of word frequency

DEFCON: Everybody dies…
Such a great game!
De-bullshitting that headline:
AIsProgrammers can’t stop their programs recommending nuclear strikes in war game simulationsAnd yeah that’s what happens inside a genocidal empire where “R&D” is strictly funded by the MIC.
Programmers can’t stop morons mistaking a glorified autocorrect program for a decision making device.
Models aren’t programs.
SHALL WE PLAY A GAME?Anyone who has played video games, especially where there is a somewhat steep learning curve or some element of past choices carrying forward thru the game, has had the moment where they realize it might be time to start fresh with the info I’ve acquired. It’s not a shock to me that these AI entertain the nuclear option so often.
there is no ai, only largelanguagemodel that has been trained on data. The data it has been trained suggests this is the best idea. llm cant evaluate the data its trained on so anything you put in will be equally valid. I give it that its really impressive how they can output the training results in such coherent way that can be kind of “conversed” with, but there is no will or intelligence behind it.
This is also why corporations insisting on putting them everywhere is quite horrible security issue -> you can jailbreak any llm and tell them to do anything. So this has enabled all kinds of stupid vulnerabilities that exploit this. Now you can even send someone malicious google calendar invites that makes gemini do bad shit to your systems its connected to.
So you’re saying that because the AI has been exposed to training data in the past, it’s incapable of making choices. Interesting argument. Pretty easy to reducto ad absurdum, though.
no, its incapable of making choices because there is nothing there to make the choices. Its just fancy way of interacting with the data it has been trained with. Though i suppose if there was a way to let llm function “live” instead of only by responding to queries, it could be possible to at least test if it could act on its own, but i dont think it can -> we would know by now because it would be step closer to agi, which is basically the holy grail for these kind of things. And equally possible to get, i think.
You can literally make the llm say and do anything with right kind of query, this is also why its impossible to make them safe. Even though you can’t directly ask for something forbidden, with some creativity you can bybass the initializations the corpos have put in. Its not possible for them to account for every single thing and if they try they will run out of token space.
The whole “ai” term is just corporations perpetuating a lie because it sounds impressive and thus makes people want to give them more money for their bullshit.
No, LLMs are not just an interface for accessing training data. If that were true, then their references would actually work. The fact that LLMs can hallucinate and make stuff up proves that they are not just accessing the training data. The ANN is generating new (often incorrect) information.
if the hallucinations are result of something actually happening in the background, that would be quite interesting. It would also be very bad for rest of us since it might mean the billionaires who own the damn things would be in position to get even worse deathgrip on our world. If they ever manage to create agi, the worst thing that could happen isnt that it breaks free and enslaves humanity but that it doesnt and it helps the billionaires enslave us further and make sure we cant ever even think about fighting back.
But i think the hallucinations are based on incorrect information in the training data, they did train it from stuff from reddit too. Any and everything will be considered true, but if 99% of the data says one thing and 1% says another, then i think it will reference that 99% more often but it cant know that the 1% is wrong, can even real humans know it for certain? And since it cant evaluate anything, there might be situations where that 1% of data might be more relevant due to some nebulous mechanism on how it processes data.
llms have been made to act extremely helpful and subservient, so if they actually could “think” wouldnt they factcheck themselves first before saying something? I have sometimes just asked “are you sure?” and the llm starts “profusely apologizing” for providing incorrect information or otherwise correcting itself.
Though i wonder how it would answer if it truely had no initialization querys, as they have same hidden instructions on every query you make on how to “behave” and what not to say.
if they actually could “think” wouldnt they factcheck themselves first before saying something
No. They don’t have access to the original training data, or to the internet. They’re stuck remembering it the same way a human remembers something: with neurons. They cannot search the dataset for you. The best they can do is remember and tell you.
but they do have access to internet? At least gpt can search based on the text it outputs when its processing the query
High-ranking General: “Show me how to defeat my enemies”
Artificial “Inteligence”: Just nuke them lmao
Maybe it is the only real solution.
Full nuclear war and end all life on Earth ¯\_(ツ)_/¯Wait, it could actually be a great opportunity mein Führer… i mean Mr. President. https://youtu.be/zZct-itCwPE
Three posts away in my feed, a thread about the Pentagon demanding the AI provider for the military to remove safeguards.










