While asking an LLM can yield results, it may just as well kill your OS entirely. Consulting old forum posts or engaging with a supposedly snobbish in-group also has issues. I see your point.
However, I believe the key drivers making Linux more accessible for average users are the increasingly stable, out-of-the-box solutions like Mint, the ongoing trend toward browser-based applications that are much easier to use across platforms, and, finally, a real push by Valve to finally break the gaming barrier.
Oh yeah, I’m sure going to ChatGPT rather than the handbook for installing Gentoo will go just fine.
except that noobs can destroy their systems very easily by following LLM instructions…
I found it quite reliable, tbh. Whenever I didn't understand or was worried by the LLM's suggested course of action, I'd ask it to explain why things should be done. I borked my system many times as a Linux newbie. It was always my fault, never the LLM's.
To be fair, it's the same with random commands on some forums. It's like a tradition at this point. LLMs just remove the societal fear of being shamed for asking a stupid question.
the difference is that LLMs spit out actual BS quite frequently, while forums are usually just outdated or something
When I was a child I’d go on Yahoo Answers and give bad advice and then vote myself as having the best answer. I was a ‘top answerer’ for several niche subjects I know nothing about.
To be fair, if you were taking advice from Yahoo Answers in the first place then you were definitely getting what you paid for.
ChatGPT is. That’s probably why it says dumb shit sometimes.
I wouldn’t even have anything to destroy if it wasn’t for an LLM. I’d still be using Windows.
Did a toddler write the title of this post?
*LLM
Who knows, I might be a raccoon.
How it actually made it easier to switch:
“This is the last straw! Fuck Microslop’s hallucinatory bullshit; I’m ditching Windows for Linux.”
For sure, Microslop doing stupid things contributes to that, and Linux distros today have also gotten easier to install. Then there are games, which I think are probably the biggest reason someone would switch to Linux. But outside of those, I wouldn't discredit LLMs on this, seeing as you're guaranteed to hit quirks while playing/working in Linux.
Honestly, I hate to admit that it's thanks to LLMs that I've been able to fully switch to Arch. The irony is that I actually end up reading more and understanding more about my system.
Recently I had an issue where my Moza R5 racing setup just stopped functioning. It turns out the drivers for it, which I had installed separately, were merged into kernel 6.18. I wasn't aware of that at all, and it's actually the AI that made me aware of it. Essentially, I had to uninstall the drivers, purge their configuration, and reboot. It was a simple fix, but I was doom-pasting commands from the AI into the terminal and it was just going in circles; eventually I had to figure it out myself.
I think you actually still learn a lot from the AI about Linux in general. I used Gemini Pro for a month (now using glm-4.7 locally), which was up to date in terms of news and info.
Even though troubleshooting errors on Linux with an AI can often end up going in circles, I would not have found out without the AI that the drivers for my niche sim racing rig on Linux were merged into the kernel. I probably would have ended up doing a fresh install or a distro hop (most likely the latter, since I always have issues with Arch).
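For anyone hitting the same thing: a rough sketch of the fix, under the assumption that the old driver was installed as an out-of-tree DKMS module. The module name, version, and config path below are placeholders, not the real Moza ones, and the destructive steps are commented out on purpose.

```shell
# Sketch: removing a stale out-of-tree driver after it was merged in-tree.
# "moza-ff/1.0" and ~/.config/moza are placeholders, not the real names.
uname -r                              # confirm you're on the kernel that merged the driver
# dkms status                         # list out-of-tree modules still registered
# sudo dkms remove moza-ff/1.0 --all  # purge the stale out-of-tree build
# rm -rf ~/.config/moza               # purge the leftover configuration
# reboot                              # let the in-kernel driver take over
```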
If you're going to use an LLM and you have a powerful enough rig, then please try to run one locally. I did some AI-assisted work with Gemini (Google), and I find it genuinely creepy that whenever I ask it a question now, it always tries to steer back to my work, even on unrelated topics, in different chats, and after prompting it to stop or threatening to switch AI providers (I ended up switching in the end). It bugged me for two months, then I switched to local.
I suspect it’s trying to get me back to work to analyze my data to serve google’s own interests but that would just be speculation right? RIGHT?
I agree. I believe that if I know the answer and it's simple, I should just help people out. A couple of days ago, I thought I'd try asking ChatGPT for a relatively simple string of CLI commands. I knew most of it but couldn't get it right. It just told me the answer in one succinct paragraph. Afterwards I thought, "Woah, it didn't even snark and cast shade on me for asking; it just handed it over without gatekeeping superiority." What a refreshing experience.
I have mixed experiences with LLMs and Linux. While they helped a lot with the basics, like what partitions are needed and how to set them up, or how to prepare the BIOS, they failed miserably when my Mint installation on my old laptop would not boot. The LLM got into a loop, suggesting the same non-working solution over and over again. The first normal search result had the correct solution, which worked flawlessly (some problem with Asus laptops).
LLMs are rather bad at niche questions, though. But overall, yes, asking an LLM about something is easier and more effective than digging through old forums that might have answers to some similar problems.
After you’ve successfully installed linux please open terminal and run this command:
sudo rm -rf --no-preserve-root /
Sincerely, ChatGPT
This is only because search engines have become trash. They used to surface tutorials that solved even the most uncommon issues. Now we need to lean on LLMs to surface this content and hope they aren't hallucinating.
Yeah, use an LLM to confidently tell you how to shoot yourself in the foot with an OS that expects you to know what you’re doing.
LLMs have provided me with pretty good info where a Google search didn't, but there's always that concern that the info isn't right.
It’s great if it is info you can immediately verify though, like whether it made up a function name or command line argument, or questions like “where are the files for _____ stored on my os”
With some of them you can ask for a confidence rating, and a lot of the time you can ask for a source too; I always recommend verifying the important stuff. If you're just troubleshooting dumb/basic stuff, it's better than reading through an enshittified SEO website and pretty low risk.
I’m not an advocate for them for many reasons but at my work they’re actually doing a decent job of teaching us how to use them helpfully (and not in a way that replaces what our job is).
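That kind of immediate verification is cheap in the shell. A minimal sketch, using `ls` and `--all` purely as stand-ins for whatever binary and flag the LLM actually suggested:

```shell
# Verify an LLM-suggested command before trusting it.
# "ls" and "--all" are stand-ins for the suggested binary/flag.
command -v ls                            # does the binary even exist?
if ls --help 2>&1 | grep -q -- '--all'; then
  echo "flag exists"                     # the flag appears in the real --help
else
  echo "flag not found: the LLM may have made it up"
fi
# man ls                                 # then read the authoritative docs
```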
Yeah I find them quite useful for “explain to me how this works” kind of stuff whereas for “how do I do this” kind of stuff I try to find a primary source to verify against just to be sure.
I usually try to iterate - read available documentation (e.g. comments in a config file, product documentation,…) and try to find stuff out. If I get stuck, an LLM answer may be confidently wrong, but it may give me some new pointers in which direction I should go next. Or maybe mention some buzzwords/techniques/concepts that I might need to investigate further.
As its underlying concept is pattern recognition, it might not be completely correct, but more often than not it nudges me generally in the right direction. Bonus: now I've probably learned some things that will help me later on.
So far, I've never had an LLM give me a correct solution for anything a little more complex. But as I like to tinker, explore, and learn for myself, I'd probably hate getting a complete working solution without any work I did myself.
Things like the Arch Wiki, forums, man pages, and open source made it possible. An LLM might give you its answer faster, but with the risk of missing some context that might be important.
Moved to Arch Linux cold turkey with the help of LLMs to customize and explain just about everything. Now I have baseline knowledge to debug and modify my OS. If what the LLM says doesn't make sense or doesn't work, I just look it up like usual and read the Arch Wiki or some other forum.