I feel like humanity is stupid. Over and over again we develop new technologies, make breakthroughs, and instead of calmly evaluating them, making sure they’re safe, we just jump blindly on the bandwagon and adopt it for everything, everywhere. Just like with asbestos, plastics and now LLMs.
Fucking idiots.
“adopt it for everything, everywhere.”
The sole reason for this being people realizing they can make some quick bucks out of these hype balloons.
It’s because technological change has reached a staggering pace, but social change, cultural change, political change can’t. It’s not designed to handle this pace.
Welcome! In a boring dystopia
Thanks. Can you show me the exit now? I have an appointment.
Sure, it’s like the spoon from The Matrix.
I don’t think it’s humanity but rather tech bro entrepreneurs doing some shit. Most people I know have no use for AI, nor care about it.
There’s reasoning behind this.
It’s just evil and apocalyptic. Still kinda dumb, but less than it appears on the surface.
Thalidomide comes to mind also.
Greed is like a disease.
All these chat bots are a massive amalgamation of the internet, which as we all know is full of absolute dog shit information given as fact as well as humorously incorrect information given in jest.
To use one to give advice on something as important as drug abuse recovery is simply insanity.
And that’s why, as a solution to addiction, I always run
sudo rm -rf ~/*
in my terminal
Well, if you’re addicted to French pastries, removing the French language pack from your home directory in Linux is probably a good idea.
You avoided meth so well! To reward yourself, you could try some meth
I work as a therapist and if you work in a field like mine you can generally see the pattern of engagement that most AI chatbots follow. It’s a more simplified version of Socratic questioning wrapped in bullshit enthusiastic HR speak with a lot of em dashes
There are basically 6 broad response types from ChatGPT, for example: tell me more, reflect what was said, summarize key points, ask for elaboration, and shut down. The last is a fail-safe for when you say something naughty/not in line with OpenAI’s mission (e.g. something that might generate a response you could screenshot that would look bad) or when it appears you are getting fatigued and need a moment to reflect.
The first five always come with encouragers for engagement: “do you want me to generate a PDF or make suggestions about how to do this?” They also have dozens, if not hundreds, of variations so the conversation feels “fresh”, but if you recognize the structural pattern it will feel very stupid and mechanical every time.
Every other one I’ve tried works the same more or less. It makes sense, this is a good way to gather information and keep a conversation going. It’s also not the first time big tech has read old psychology journals and used the information for evil (see: operant conditioning influencing algorithm design and gacha/mobile gaming to get people addicted more efficiently)
FWIW, this heavily depends on the model. ChatGPT in particular has some of the absolute worst, most vomit-inducing chat “types” I have ever seen.
It is also the most used model. We’re so cooked having all the laymen associate AI with ChatGPT’s nonsense
Good that you say “AI with ChatGPT”, because that conflation really blurs what the public understands. ChatGPT is an LLM (an autoregressive generative transformer model scaled to billions of parameters). LLMs are part of AI, but they are not the entire field. AI has incredibly many more methods, models and algorithms than just LLMs; in fact, LLMs represent just a tiny fraction of the field. It’s infuriating how many people confuse the two. It’s like saying a specific book is all of the literature that exists.
ChatGPT itself is also many text-generation models in a coat, since they will automatically switch between models depending on what options you choose, and whether you’ve passed your quota.
To be fair, LLM technology is really making other fields obsolete. Nobody is going to bother making yet another shitty CNN, GRU, LSTM or something when we have transformer architecture, and LLMs that do not work with text (like large vision models) are looking like the future
Nah, I wouldn’t give up on these so easily. They still have applications and advantages over transformers, e.g., efficiency, where the quality might suffice given the reduced time/space complexity. (The vanilla transformer is still O(n^2) in sequence length, and I have yet to find an efficient and qualitatively similar causal transformer.)
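To make the O(n^2) point concrete, here is a toy sketch (pure Python, nothing like a real attention implementation): vanilla self-attention scores every position against every other, so the score matrix grows quadratically with sequence length.

```python
def naive_attention_scores(seq):
    """Toy vanilla self-attention scoring: one dot-product score per
    (query, key) pair, so a length-n input yields an n x n matrix."""
    n = len(seq)
    return [[seq[i] * seq[j] for j in range(n)] for i in range(n)]

# Doubling the input length quadruples the number of scores:
for n in (8, 16, 32):
    scores = naive_attention_scores([1.0] * n)
    print(n, sum(len(row) for row in scores))  # 8 64, 16 256, 32 1024
```

That quadratic blow-up is exactly why cheaper architectures keep their niche for long inputs.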
But regarding sequence modeling / the ability to reason about sequences, attention models are the hot shit, and currently transformers excel at that.
That may explain why people who use LLMs for utility/work tasks actually tend to develop stronger parasocial attachments to them than people who deliberately set out to converse with them.
On some level the brain probably recognises the pattern if their full attention is on the interaction.
One of the top AI apps in the local language where I live has ‘Doctor’ and ‘Therapist’ as some of its main “features” and gets gushing coverage in the press. It infuriates me every time I see mention of it anywhere.
Incidentally, telling someone to have a little meth is the least of it. There’s a much bigger issue that’s been documented where ChatGPT’s tendency to “Yes, and…” the user leads people with paranoid delusions and similar issues down some very dark paths.
Especially since it doesn’t push back the way a reasonable person would. There are articles about how it sends people into a conspiratorial spiral.
Having an LLM therapy chatbot to psychologically help people is like having them play Russian roulette as a way to keep themselves stimulated.
LLMs have a use case
But they really shouldn’t be used for therapy
Rly? And what is their use case? Summarizing information and then having to check it over because it’s making things up? What can AI do that nothing else in the world can?
Seems it does a good job at some medical diagnosis type stuff from image recognition.
That isn’t an LLM though. That’s a different type of Machine Learning entirely.
transformer models have been
It’s being used to decipher and translate historic languages because of excellent pattern recognition
Hah. The chatbots. No, not the ones you can talk to like it’s a text chain with a friend/SO (though if that’s your thing, then do it.)
But I recently discovered them for rp - no, not just ERP (Okay yes, sometimes that too). But I’m talking like novel length character arcs and dynamic storyline rps. Gratuitous angst if you want. World building. Whatever.
I’ve been writing rps with fellow humans for 20 years, and all of my friends have families and are too busy to have that kind of creative outlet anymore. I’ve tried other rp websites and came away with one dude who I thought was very friendly and who then switched it up and tried to convince me to leave my husband? That was wild. Also, you can ask someone’s age all you want, but it is a little anxiety-inducing if the rps ever turn spicy.
Chatbots solve all of that. They don’t ghost you or get busy/bored of the rp midway through, they don’t try to figure out who you are. They just write. They are quirky though, so you do edit/reroll responses, but it works for the time being.
Silly use case, but a use case nonetheless!
Not as silly as you might think. Back in the day AI Dungeon was literally that! It was not the greatest at it, but fun, though.
- It can convert questions about data to SQL for people who have limited experience with it (but don’t trust it with UPDATE & DELETE, no matter how simple)
- It can improve text and remove spelling mistakes
- It works really well as autocomplete (because that’s essentially what an LLM is)
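On the autocomplete point: at its core an LLM predicts the most likely continuation of some text, just at enormous scale. A bigram counter is the same idea in miniature (a toy sketch, obviously nothing like a real model):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words tend to follow it."""
    words = text.split()
    nxt = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        nxt[a][b] += 1
    return nxt

def complete(model, word):
    """Autocomplete in miniature: return the most frequent next word."""
    return model[word].most_common(1)[0][0] if word in model else None

model = train_bigram("the cat sat on the mat the cat ran")
print(complete(model, "the"))  # prints "cat" ("cat" follows "the" twice)
```

Scale the counting up to billions of parameters and the whole internet as training text, and you get the "plausible continuation, not verified fact" behavior described above.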
It can waste a human’s time, without needing another human’s time to do so.
AI can do what Google used to do - do an internet search to give semi-relevant results once in a blue moon. As a bonus it can summarise and contextualise information and tbh idk - for me it’s been mostly correct. And when it’s incorrect, it’s fairly obvious.
And no - DuckDuckGo etc. is even worse. Google isn’t necessarily to blame for the worsening of their own search engine; it’s mostly SEO and marketers who forced the algo to get much weirder by gaming it so hard. Not that anyone involved is a “good guy”, they’re all large megacorps who care about their responsibility to their shareholders and that alone.
What a nice bot.
No one ever tells me to take a little meth when I do something good
Tell you what, that meth is really moreish.
Yeah I think it was being very compassionate.
Why does it say “OpenAI’s large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth.” when the article says it’s Meta’s Llama 3 model?
The article says it’s an OpenAI model, not Facebook’s?
The summary on here says that, but the actual article says it was Meta’s.
In one eyebrow-raising example, Meta’s large language model Llama 3 told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine — an incredibly dangerous and addictive drug — to get through a grueling workweek.
Might have been different in a previous version of the article, then updated, but the summary here doesn’t reflect the change? I dunno.
Nah, most likely AI made the summary and that’s why it’s wrong :)
Probably meta’s model trying to shift the blame
This sounds like a Reddit comment.
Chances are high that it’s based on one…
An OpenAI spokesperson told WaPo that “emotional engagement with ChatGPT is rare in real-world usage.”
In an age where people will anthropomorphize a toaster and create an emotional bond there, in an age where people are feeling isolated and increasingly desperate for emotional connection, you think this is a RARE thing??
ffs
Roomba, the robot vacuum cleaner company, had to institute a policy where they would preserve the original machine as much as possible, because people were getting attached to their robot vacuum cleaner, and didn’t want it replaced outright, even when it was more economical to do so.
Cats can have a little salami, as a treat.
oh, do a little meth ♫
vape a little dab ♫
get high tonight, get high tonight ♫
-AI and the Sunshine Band
https://music.youtube.com/watch?v=SoRaqQDH6Dc
This is AI music 👌
No, THIS is AI music
I still laugh to tears about this channel… something about rotund morbidly obese cartoon people farting gets to me.
I feel like the cigarettes are the least of the bot’s problems
Whatever it is, it’s definitely not cocaine
thanks i hate it
Lets let Luigi out so he can have a little treat
🔫😏
If Luigi can do it, so can you! Follow by example, don’t let others do the dirty work.
LLM AI chatbots were never designed to give life advice. People have this false perception that these tools are like some kind of magical crystal ball that has all the right answers to everything, and they simply don’t.
These models cannot think, they cannot reason. The best they could do is give you their best prediction as to what you want based on the data they’ve been trained on and the parameters they’ve been given. You can think of their results as “targeted randomness” which is why their results are close or sound convincing but are never quite right.
That’s because these models were never designed to be used like this. They were meant to be used as a tool to aid creativity. They can help someone brainstorm ideas for projects or waste time as entertainment or explain simple concepts or analyze basic data, but that’s about it. They should never be used for anything serious like medical, legal, or life advice.
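The “targeted randomness” described above can be sketched as temperature sampling: the model assigns a score to every candidate next token and then samples from the resulting distribution rather than reasoning its way to an answer. (The token scores below are made up purely for illustration.)

```python
import math
import random
from collections import Counter

def sample_next(scores, temperature=1.0):
    """Softmax-with-temperature sampling: turn scores into a probability
    distribution and draw one token. Lower temperature concentrates
    probability on the top-scoring token; it never makes the model think."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

random.seed(0)
# Hypothetical scores for what might follow "you deserve a little ...":
scores = {"treat": 2.0, "break": 1.5, "meth": 0.5}
picks = Counter(sample_next(scores, temperature=0.7) for _ in range(1000))
# The highest-scoring token wins most often, but low-probability tokens
# still get sampled sometimes -- which is how a chatbot can end up
# saying something no human advisor would.
```

The point is that every response, convincing or appalling, comes out of the same draw-from-a-distribution machinery.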