We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
Anyone pretending AI has intelligence is a fucking idiot.
You could say they’re AS (Actual Stupidity)
Autonomous Systems that are Actually Stupid lol
You know, and I think it’s actually the opposite. Anyone pretending their brain is doing more than pattern recognition and AI can therefore not be “intelligence” is a fucking idiot.
Clearly intelligent people mispell and have horrible grammar too.
I think there’s a strong strain of essentialist human chauvinism.
But the brain is doing more kinds of things than LLMs are. Except in the case of LLM bros, fascists, and other opt-outs.
No, you’re failing the Eliza test, and it is very easy for people to fall for it.
AI is not actual intelligence. However, it can produce results better than a significant number of professionally employed people…
I am reminded of when word processors came out and “administrative assistant” dwindled as a role in mid-level professional organizations. Most people - even, increasingly, medical doctors these days - do their own typing. The whole “typing pool” concept has pretty well dried up.
you can give me a sandwich and i’ll do a better job than AI
But, will you do it 24-7-365?
i dont have anything else going on, man
There’s that… though even when you’re bored, you still sleep sometimes.
I think we should start by not following this marketing speak. The sentence “AI isn’t intelligent” makes no sense. What we mean is “LLMs aren’t intelligent”.
So couldn’t we say LLMs aren’t really AI? Cuz that’s what I’ve come to terms with.
To be fair, the term “AI” has always been used in an extremely vague way.
NPCs in video games, chess computers, or other such tech are not sentient and do not have general intelligence, yet we’ve been referring to those as “AI” for decades without anybody taking an issue with it.
I’ve heard it said that the difference between Machine Learning and AI, is that if you can explain how the algorithm got its answer it’s ML, and if you can’t then it’s AI.
It’s true that the word has always been used loosely, but there was no issue with it because nobody believed what was called AI to have actual intelligence. Now this is no longer the case, and so it becomes important to be more clear with our words.
What is “actual intelligence” then?
I have no idea. For me it’s a “you recognize it when you see it” kinda thing. Normally I’m in favor of just measuring things with a clearly defined test or benchmark, but it is in the nature of large neural networks that they can be great at scoring on any desired benchmark while failing at the underlying ability the benchmark was supposed to test (overfitting). I know this sounds like a lazy answer, but it’s very difficult to define something based around generalizing and reacting to new challenges.
But whether LLMs do have “actual intelligence” or not was not my point. You can definitely make a case for claiming they do, even though I would disagree with that. My point was that calling them AIs instead of LLMs bypasses the entire discussion on their alleged intelligence as if it wasn’t up for debate. Which is misleading, especially to the general public.
Nobody knows for sure.
I don’t think the term AI has been used in a vague way; it’s that there’s a huge disconnect between how the technical fields use it vs the general populace, and marketing groups heavily abuse that disconnect.
Artificial has two meanings/use cases. One is to indicate something is fake (video game NPC, chess bots, vegan cheese). The end product looks close enough to the real thing that for its intended use case it works well enough. Looks like a duck, quacks like a duck, treat it like a duck even though we all know it’s a bunny with a costume on. LLMs on a technical level fit this definition.
The other definition is man-made. Artificial diamonds are a great example of this: they’re still diamonds at the end of the day, with the same chemical makeup and the same chemical and physical properties. The only difference is that they came from a laboratory made by adult workers vs child slave labor.
My pet theory is that science fiction got the general populace to think of artificial intelligence using the “man-made” definition instead of the “fake” definition that these companies are using. In the past the subtle nuance never caused a problem, so we all just kinda ignored it.
Dafuq? Artificial always means man-made.
Nature also makes fake stuff. For example, fish that have an appendix that looks like a worm, to attract prey. It’s a fake worm. Is it “artificial”? Nope. Not man made.
May I present to you:
The Merriam-Webster Dictionary
https://www.merriam-webster.com/dictionary/artificial
Definition #3b
Word roots say they have a point though. Artifice, Artificial etc. I think the main problem with the way both of the people above you are using this terminology is that they’re focusing on the wrong word and how that word is being conflated with something it’s not.
LLMs are artificial. They are a man-made thing intended to fool people into believing they are something they aren’t. What we’re meant to be convinced they are is sapiently intelligent.
Mimicry is not sapience, and that’s where the argument for LLMs being real, honest-to-God AI falls apart.
Sapience is missing from generative LLMs. They don’t actually think. They don’t actually have motivation. What we’re doing when we anthropomorphize them is fooling ourselves into thinking they are a man-made reproduction of us without the meat-flavored skin suit. That’s not what’s happening. But some of us are convinced that it is, or that it’s near enough that it doesn’t matter.
Thanks. I stand corrected.
LLMs are one of the approximately one metric crap ton of different technologies that fall under the rather broad umbrella of the field of study that is called AI. The definition for what is and isn’t AI can be pretty vague, but I would argue that LLMs are definitely AI because they exist with the express purpose of imitating human behavior.
Huh? Since when is an AI’s purpose to “imitate human behavior”? AI is about solving problems.
It is and it isn’t. Again, the whole thing is super vague. Machine vision or pattern seeking algorithms do not try to imitate any human behavior, but they fall under AI.
Let me put it this way: Things that try to imitate human behavior or intelligence are AI, but not all AI is about trying to imitate human behavior or intelligence.
From a programming POV, a definition of AI could be: an algorithm or construct that can solve problems or perform tasks without the programmer specifically solving that problem or programming the steps of the task, but rather building something that can figure it out on its own.
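To make that concrete, here’s a toy sketch (my own illustration, not from the comment above): a perceptron that is never told the rule for logical OR, only shown examples, and derives the weights on its own.

```python
# Toy perceptron: the programmer never writes "return a or b".
# The rule is derived from labelled examples instead.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR

w = [0.0, 0.0]  # weights -- learned, not hand-coded
b = 0.0         # bias -- learned, not hand-coded

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Classic perceptron learning rule: nudge the weights toward each mistake.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for x, _ in examples])  # [0, 1, 1, 1] -- figured out on its own
```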
Though a lot of game AIs don’t fit that description.
I can agree with “things that try to imitate human intelligence” but not “human behavior”. An Elmo doll laughs when you tickle it. That doesn’t mean it exhibits artificial intelligence.
LLMs are really good relational databases, not an intelligence, imo
We can say whatever the fuck we want. This isn’t any kind of real issue. Think about it: if you went the rest of your life calling LLMs turkey butt fuck sandwiches, what changes? This article is just shit, and people looking to be outraged over something that other articles told them to be outraged about. This is all pure fucking modern yellow journalism. I hope turkey butt sandwiches replace every journalist. I’m so done with their crap
I make a point to always refer to it as an LLM, exactly to make the point that it’s not an intelligence.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.
This is not a good argument.
The book The Emperor’s New Mind is old (1989), but it gave a good argument for why machine-based AI was not possible. Our minds work on a fundamentally different principle then Turing machines.
It’s hard to see that book’s argument from the Wikipedia entry, but I don’t see it arguing that intelligence needs senses, flesh, nerves, pain and pleasure.
It’s just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn’t imply computers can’t gain consciousness, or that they need flesh and senses to do so.
I think what he is implying is that current computer design will never be able to gain consciousness. Maybe a fundamentally different type of computer can, but is anything like that even on the horizon?
possibly.
current machines aren’t really capable of what we would consider sentience because of the von neumann bottleneck.
simply put, computers treat memory and computation as separate tasks, leading to an explosion in necessary system resources for tasks that would be relatively trivial for a brain-system to do, largely due to things like buffers and memory management code. lots of this is hidden from the engineer and end user these days, so people aren’t really super aware of exactly how fucking complex most modern computational systems are.
this is why if, for example, i threw a ball at you, you will reflexively catch it, dodge it, or parry it; and your brain will do so for an amount of energy similar to that required to power a simple LED. this is a highly complex physics calculation run in a very short amount of time for an incredibly low amount of energy relative to the amount of information in the system. the brain is capable of this because your brain doesn’t store information in a chest and later retrieve it like contemporary computers do. brains are turing machines, they just aren’t von neumann machines. in the brain, information is stored… within the actual system itself. the mechanical operation of the brain is so highly optimized that it likely isn’t physically possible to make a much more efficient computer without venturing into the realm of strange quantum mechanics. even then, the verdict is still out on whether or not natural brains don’t do something like this to some degree as well. we know a whole lot about the brain, but it seems some damnable incompleteness-theorem-adjacent effect prevents us from easily comprehending the actual mechanics of our own brains from inside the brain itself in a holistic manner.
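a crude back-of-envelope of the bottleneck i’m describing (toy numbers i made up, not a benchmark): in a matrix-vector multiply, every weight crosses the memory bus once but is used in only one multiply-add, so the bus, not the arithmetic, is the limit.

```python
# back-of-envelope sketch of the von neumann bottleneck (toy numbers).
# each weight is fetched from memory once but participates in only
# one multiply-add: the memory bus is the limit, not the ALU.

n = 4096                 # hypothetical layer width
flops = 2 * n * n        # one multiply + one add per weight
bytes_moved = 4 * n * n  # each float32 weight crosses the memory bus

print(f"flops={flops:,} bytes moved={bytes_moved:,}")
print(f"arithmetic intensity = {flops / bytes_moved} flops/byte")  # 0.5: memory-bound

# a brain-like system keeps the "weight" in the synapse doing the work,
# so there is no separate fetch step -- no bottleneck of this kind.
```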
that’s actually one of the things AI and machine learning might be great for. if it is impossible to explain the human experience from inside of the human experience… then we must build a non-human experience and ask its perspective on the matter - again, simply put.
I believe what you say. I don’t believe that is what the article is saying.
If you can bear the cringe of the interviewer, there’s a good interview with Penrose that goes on the same direction: https://m.youtube.com/watch?v=e9484gNpFF8
deleted by creator
“than”…
IF THEN
MORE THAN
Our minds work on a fundamentally different principle then Turing machines.
Is that an advantage, or a disadvantage? I’m sure the answer depends on the setting.
philosopher
Here’s why. It’s a quote from a pure academic attempting to describe something practical.
The philosopher has made an unproven assumption. An erroneous logical leap. Something an academic shouldn’t do.
Just because everything we currently consider conscious has a physical presence, does not imply that consciousness requires a physical body.
The other thing that most people don’t focus on is how we train LLMs.
We’re basically building something like a spider-tailed viper. A spider-tailed viper is a kind of snake that has a growth on its tail that looks a lot like a spider. It wiggles it around so it looks like a spider, convincing birds they’ve found a snack, and when the bird gets close enough the snake strikes and eats the bird.
Now, I’m not saying we’re building something that is designed to kill us. But, I am saying that we’re putting enormous effort into building something that can fool us into thinking it’s intelligent. We’re not trying to build something that can do something intelligent. We’re instead trying to build something that mimics intelligence.
What we’re effectively doing is looking at this thing that mimics a spider, and trying harder and harder to tweak its design so that it looks more and more realistic. What’s crazy about that is that we’re not building this to fool a predator so that we’re not in danger. We’re not doing it to fool prey, so we can catch and eat them more easily. We’re doing it so we can fool ourselves.
It’s like if, instead of a spider-tailed snake, a snake evolved a bird-like tail, and evolution kept tweaking the design so that the tail was more and more likely to fool the snake so it would bite its own tail. Except, evolution doesn’t work like that because a snake that ignored actual prey and instead insisted on attacking its own tail would be an evolutionary dead end. Only a truly stupid species like humans would intentionally design something that wasn’t intelligent but mimicked intelligence well enough that other humans preferred it to actual information and knowledge.
To the extent it is people trying to fool people, it’s rich people looking to fool poorer people for the most part.
To the extent it’s actually useful, it’s to replace certain systems.
Think of the humble phone tree, designed so humans aren’t having to respond, triage, and route calls. An AI system can significantly shorten that role: instead of navigating a tedious, long maze of options, a couple of sentences back and forth and you either get the piece of automated information that would suffice or get routed to a human to take care of it. Same analogy for a lot of online interactions where you have to input way too much, or where you get a wall of automated text and would like something to distill the relevant three or four sentences according to your query.
So there are useful interactions.
However, it’s also true that it’s dangerous, because the “make the user approve of the interaction” objective can bring out the worst in people when they feel like something is just always agreeing with them. Social media has been bad enough, but chatbots that by design want to please the end user and look almost legitimate really can inflame the worst in our minds.
My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”
It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???
I’ve been thinking this for a while. When people say “AI isn’t really that smart, it’s just doing pattern recognition,” I can’t help but think, “don’t you realize that is one of the most commonly brought up traits concerning the human mind?” Pareidolia is literally the tendency to see faces in things because the human mind is constantly looking for the “face pattern”. Humans are at least 90% regurgitating previous data. It’s literally why you’re supposed to read and interact with babies so much. It’s how you learn “red glowy thing is hot”. It’s why education and access to knowledge is so important. It’s every annoying person who has endless “did you know?” facts. Science is literally “look at previous data, iterate a little bit, look at new data”.
None of what AI is doing is truly novel or different. But we’ve placed the human mind on this pedestal despite all the evidence to the contrary. Eyewitness testimony, optical illusions, magic tricks, the hundreds of common fallacies we fall prey to… our minds are incredibly fallible and are really just a hodgepodge of processes masquerading as “intelligence”. We’re a bunch of instincts in a trenchcoat. To think AI isn’t or can’t reach our level is just hubris. A trait that probably is more unique to humans.
Yep we are on the same page. At our best, we can reach higher than regurgitating patterns. I’m talking about things like the scientific method and everything we’ve learned by it. But still, that’s a 5% minority, at best, of what’s going on between human ears.
Get a self-driving car to drive in a snowstorm or a torrential downpour. People are really downplaying humans’ abilities.
AI models are trained on basically the entirety of the internet, and more. Humans learn to speak on much less info. So there’s likely a huge difference in how human brains and LLMs work.
It doesn’t take the entirety of the internet just for an LLM to respond in English. It could do so with far less. But it also has the entirety of the internet which arguably makes it superior to a human in breadth of information.
Humans can be more than this. We do actively repress our most important intellectual capacities.
That’s how we get llm bros.
Human brains are much more complex than a mirroring script xD. AI and supercomputers have only a fraction of the number of neurons in your brain. But you’re right, for you it’s probably not much different than AI
I’m pretty sure an AI could throw out a lazy straw man and ad hominem as quickly as you did.
The human brain contains roughly 86 billion neurons, while ChatGPT, a large language model, has 175 billion parameters (often referred to as “artificial neurons” in the context of neural networks). While ChatGPT has more “neurons” in this sense, it’s important to note that these are not the same as biological neurons, and the comparison is not straightforward.
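If you want a slightly less misleading version of that comparison: parameters map more naturally to connections (synapses) than to neurons. A rough back-of-envelope, using the commonly cited ~100 trillion synapse figure (my addition, not from the quote above):

```python
# Rough orders of magnitude only; none of these numbers are precise.
neurons_human = 86e9     # biological neurons
synapses_human = 100e12  # connections -- the closer analogue of a parameter
params_gpt3 = 175e9      # GPT-3 parameters ("artificial neurons" is a misnomer)

print(f"parameters per neuron:  {params_gpt3 / neurons_human:.1f}x")   # ~2.0x
print(f"synapses per parameter: {synapses_human / params_gpt3:.0f}x")  # ~571x
```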
86 billion neurons in the human brain isn’t that much compared to some of the larger neural networks, rumored to be around 1.7 trillion parameters, though.
But, are these 1.7 trillion neuron networks available to drive YOUR car? Or are they time-shared among thousands or millions of users?
Keep thinking the human brain is as stupid as AI hahaaha
have you seen the American Republican party recently? it brings a new perspective on how stupid humans can be.
Nah, I went to public high school - I got to see “the average” citizen who is now voting. While it is distressing that my ex-classmates now seem to control the White House, Congress and Supreme Court, what they’re doing with it is not surprising at all - they’ve been talking this shit since the 1980s.
If an IQ of 100 is average, I’d rate AI at 80 and down for most tasks (and of course it’s more complex than that, but as a starting point…)
So, if you’re dealing with a filing clerk with a functional IQ of 75 in their role - AI might be a better experience for you.
Some of the crap that has been published on the internet in the past 20 years comes to an IQ level below 70 IMO - not saying I want more AI because it’s better, just that - relatively speaking - AI is better than some of the pay-for-clickbait garbage that came before it.
Good luck. Even David Attenborough can’t help but anthropomorphize. People will feel sorry for a picture of a dot separated from a cluster of other dots. The play by AI companies is that it’s human nature for us to want to give just about every damn thing human qualities. I’d explain more, but as I write this my smoke alarm is beeping a low battery warning, and I need to go put the poor dear out of its misery.
I’m still sad about that dot. 😥
The dot does not care. It can’t even care. It doesn’t even know it exists. It can’t know shit.
David Attenborough is also 99 years old, so we can just let him say things at this point. Doesn’t need to make sense, just smile and nod. Lol
The idea that RAGs “extend their memory” is also complete bullshit. We literally just finally built a working search engine, but instead of using a nice interface for it we only let chatbots use it.
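For anyone unfamiliar, this is roughly all RAG is; a minimal sketch with made-up helper names (not any specific library): retrieval is a search step, and “extending memory” means pasting the results into the prompt.

```python
# Minimal RAG sketch: a search engine bolted onto a prompt.

def search(query: str, top_k: int = 3) -> list[str]:
    """Stand-in for the retrieval step -- i.e., the search engine."""
    corpus = ["doc about ducks", "doc about LLMs", "doc about snakes"]
    # real systems rank by embedding similarity; this is a crude keyword match
    return [d for d in corpus if any(w in d for w in query.split())][:top_k]

def rag_prompt(query: str) -> str:
    hits = search(query)
    # the "extended memory" is literally just search results pasted
    # into the prompt before the model ever sees the question
    return "Context:\n" + "\n".join(hits) + f"\n\nQuestion: {query}"

print(rag_prompt("what are LLMs"))
```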
This article is written in such a heavy ChatGPT style that it’s hard to read. Asking a question and then immediately answering it? That’s AI-speak.
And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.
“…” (Unicode U+2026 Horizontal Ellipsis) instead of “…” (three full stops), and using them unnecessarily, is another thing I rarely see from humans.
Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.
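If you want to check which one you’re looking at (a trivial snippet, just for fun):

```python
s = "…"                      # the single Horizontal Ellipsis character
print(hex(ord(s)))           # 0x2026
print(s == "...")            # False: three full stops are three separate characters
print(len("…"), len("..."))  # 1 3
```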
Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character.
Not on my phone it didn’t. It looks as you intended it.
Asking a question and then immediately answering it? That’s AI-speak.
HA HA HA HA. I UNDERSTOOD THAT REFERENCE. GOOD ONE. 🤖
I agreed with most of what you said, except the part where you say that real AI is impossible because it’s bodiless or “does not experience hunger” and other stuff. That part does not compute.
A general AI does not need to be conscious.
That, and there is literally no way to prove something is or isn’t conscious. I can’t even prove to another human being that I’m a conscious entity; you just have to assume I am, because from your own experience you are, so therefore I too must be, right?
Not saying I consider AI in its current form to be conscious; more that the whole idea is just silly and unfalsifiable.
No idea why you’re getting downvoted. People here don’t seem to understand even the simplest concepts of consciousness.
I guess it wasn’t super relevant to the prior comment, which was focused more on AI embodiment. Eh, it’s just numbers anyway, no sweat off my back. Appreciate you, though!
It’s only as intelligent as the people that control and regulate it.
Given all the documented instances of Facebook and other social media using subliminal emotional manipulation, I honestly wonder if the recent cases of AI chat induced psychosis are related to something similar.
Like we know they’re meant to get you to continue using them, which is itself a bit of psychological manipulation. How far does it go? Could there also be things like using subliminal messaging/lighting? This stuff is all so new and poorly understood, but that usually doesn’t stop these sacks of shit from moving full speed with implementing this kind of thing.
It could be that certain individuals have unknown vulnerabilities that make them more susceptible to psychosis due to whatever manipulations are used to make people keep using the product. Maybe they’re doing some things to users that are harmful, but didn’t seem problematic during testing?
Or equally as likely, they never even bothered to test it out, just started subliminally fucking with people’s brains, and now people are going haywire because a bunch of unethical shit heads believe they are the chosen elite who know what must be done to ensure society is able to achieve greatness. It just so happens that “what must be done,” also makes them a ton of money and harms people using their products.
It’s so fucking absurd to watch the same people jamming unregulated AI and automation down our throats while simultaneously forcing traditionalism, and a legal system inspired by Catholic integralist belief on society.
If you criticize the lack of regulations in the wild west of technology policy, or even suggest just using a little bit of fucking caution, then you’re trying to hold back progress.
However, all non-tech related policy should be based on ancient traditions and biblical text with arbitrary rules and restrictions that only make sense and benefit the people enforcing the law.
What a stupid and convoluted way to express that you just don’t like evidence-based policy or using critical thinking skills, and instead prefer to navigate life by relying on the basic signals from your lizard brain. Feels good, so keep moving toward it; feels bad, so run away; feels scary, so attack!
Such is the reality of the chosen elite, steering us towards greatness.
What’s really “funny” (in a we’re all doomed sort of way) is that while writing this all out, I realized the “chosen elite” controlling tech and policy actually perfectly embody the current problem with AI and bias.
Rather than relying on intelligence to analyze a situation in the present and create the best and most appropriate response based on the information and evidence before them, they default to a set of preconceived rules written thousands of years ago with zero context for the current reality/environment and the problem at hand.
I’ve never been fooled by their claims of it being intelligent.
It’s basically an overly complicated series of if/then statements that try to guess the next series of inputs.
It very much isn’t and that’s extremely technically wrong on many, many levels.
Yet still one of the higher up voted comments here.
Which says a lot.
Given that the weights in a model are transformed into a set of conditional if statements (GPU or CPU JMP machine code), he’s not technically wrong. Of course, it’s more than just JMP and JMP represents the entire class of jump commands like JE and JZ. Something needs to act on the results of the TMULs.
I’ll be pedantic, but yeah. It’s all transistors all the way down, and transistors are pretty much chained if/then switches.
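For what it’s worth, here’s roughly where the branching actually sits in a single artificial neuron (a toy sketch, not anyone’s real implementation): the weights feed branchless multiply-adds, and the only genuine if/then is the activation function.

```python
# One artificial neuron: mostly branchless arithmetic, one conditional.

def neuron(inputs, weights, bias):
    # weighted sum: pure multiply-adds, no branching on the weights
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return total if total > 0 else 0.0  # ReLU: the lone "if/then"

print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))  # 0.1
```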
Calling these new LLMs just if statements is quite an oversimplification. They are technically something that has not existed before, and they enable use cases that were previously impossible to implement.
This is far from General Intelligence, but there are now solutions to a few coding problems that were near impossible 5 years ago.
5 years ago I would have laughed in your face if you had suggested I could write code that summarizes a description input by the user. Now I laugh: give me your wallet, because I need to call an API or buy a few GPUs.
I think the point is that this is not the path to general intelligence. This is more like cheating on the Turing test.
I love this resource, https://thebullshitmachines.com/ (i.e. see lesson 1)…
In a series of five- to ten-minute lessons, we will explain what these machines are, how they work, and how to thrive in a world where they are everywhere.
You will learn when these systems can save you a lot of time and effort. You will learn when they are likely to steer you wrong. And you will discover how to see through the hype to tell the difference. …
Also, Anthropic (ironically) has some nice paper(s) about the limits of “reasoning” in AI.
ChatGPT 2 was literally an Excel spreadsheet.
I guesstimate that it’s effectively a supermassive autocomplete algo that uses some TOTP-like factor to help it produce “unique” output every time.
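The “TOTP-like factor” is really just pseudorandom sampling with a temperature knob; a minimal sketch (toy probabilities, not a real model):

```python
import random

# Toy next-token distribution from an imaginary model.
probs = {"cat": 0.5, "dog": 0.3, "duck": 0.2}

def sample(probs, temperature=1.0):
    # Temperature reshapes the distribution (low = safer, high = wilder);
    # the randomness is what makes each completion come out "unique".
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = random.random() * total
    for token, weight in weights.items():
        r -= weight
        if r <= 0:
            return token
    return token  # fallback for floating-point rounding

print([sample(probs, temperature=0.7) for _ in range(5)])  # varies run to run
```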
And they’re running into issues due to increasingly ingesting AI-generated data.
Get your popcorn out! 🍿
I really hate the current AI bubble, but that “ChatGPT 2 was literally an Excel spreadsheet” claim isn’t what the article you linked is saying at all.
Fine, *could literally be.
And they’re running into issues due to increasingly ingesting AI-generated data.
There we go. Who coulda seen that coming! While that’s going to be a fun ride, at the same time companies all but mandate AS* for their employees.
Removed by mod
I’m neurodivergent, I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me because I don’t use my senses or emotions like everyone else, and I didn’t know it… AI excels at mirroring and support, which was exactly missing from my life. I can see how this could go very wrong with certain personalities…
E: I use it to give me ideas that I then test out solo.
This is very interesting… because the general saying is that AI is convincing to non-experts in the field it’s speaking about. So in your specific case, you are actually saying that you aren’t an expert on yourself, and therefore the AI’s assessment is convincing to you. Not trying to upset you; it’s genuinely fascinating how that theory holds true here as well.
I use it to give me ideas that I then test out. It’s fantastic at nudging me in the right direction, because all that it’s doing is mirroring me.
If it’s just mirroring you, one could argue you don’t really need it? Not trying to be a prick; if it is a good tool for you, use it! It sounds to me as though you’re using it as a sounding board, and that’s just about the perfect use for an LLM if I could think of any.
That sounds fucking dangerous… You really should consult a HUMAN expert about your problem, not an algorithm made to please the interlocutor…
So, you say AI is a tool that worked well when you (a human) used it?
deleted by creator
Are we twins? I do the exact same and for around a year now, I’ve also found it pretty helpful.
I did this for a few months when it was new to me, and still go to it when I am stuck pondering something about myself. I usually move on from the conversation by the next day, though, so it’s just an inner dialogue enhancer
Steve Gibson on his podcast, Security Now!, recently suggested that we should call it “Simulated Intelligence”. I tend to agree.
I’ve taken to calling it Automated Inference
reminds me of Mass Effect’s VI, “virtual intelligence”: a system that’s specifically designed to be not truly intelligent, as AI systems are banned throughout the galaxy for their potential to go rogue.
Same. I tend to think of LLMs as a very primitive version of that, or of the Enterprise’s computer, which is pretty magical in ability but which no one claims is actually intelligent
Pseudo-intelligence
I love that. It makes me want to take it a step further and just call it “imitation intelligence.”
If only there were a word, literally defined as:
Made by humans, especially in imitation of something natural.
Fair enough 🙂
Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies’ websites pre-renewal, to try to get the best rate, and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal for less than $700, and now says I’m paid in full for the six-month period. It’s been days now with no follow-up . . . I’m pretty sure AI snuck that one through for me.
In that case let’s stop calling it ai, because it isn’t and use it’s correct abbreviation: llm.
Its*
Good for you.
It’s means “it is”.
My auto correct doesn’t care.
So you trust your slm more than your fellow humans?
Ya, of course I do. Humans are the most unreliable, slick, disgusting, diseased, morally inept living organisms on the planet.
And they made the programs you seem to trust so much.
Ya… Humans so far have made everything not produced by Nature on Earth. 🤷
So trusting tech made by them is trusting them. Specifically, a less reliable version of them.
But your brain should.
Yours didn’t and read it just fine.
That’s irrelevant. That’s like saying you shouldn’t complain about someone running a red light if you stopped in time before they t-boned you - because you understood the situation.
Are you really comparing my response to the tone when correcting minor grammatical errors to someone brushing off nearly killing someone right now?
That’s a red herring, bro. It’s an analogy. You know that.
Kinda dumb that apostrophe s means possessive in some circumstances and then a contraction in others.
I wonder how different it’ll be in 500 years.
I’d agree with you if I saw “hi’s” and “her’s” in the wild, but nope. I still haven’t seen someone write “that car is her’s”.
Keep reading…
Would you rather use the same contraction for both? Because “its” for “it is” is an even worse break from proper grammar IMO.
Proper grammar means shit all in English, unless you’re writing for a specific style, in which case you follow the grammar rules for that style.
Standard English has such a long list of weird and contradictory rules with nonsensical exceptions that, in everyday English, getting your point across in communication is better than trying to follow some of the more arbitrary rules.
Which become even more arbitrary as English becomes more and more a melting pot of multicultural idioms and slang. Although I’m saying that as if it’s a new thing, it does feel recent that we’re taught that side of English, rather than just “The Queen’s(/King’s) English” as the style to strive for in writing and formal communication.
I say as long as someone can understand what you’re saying, your English is correct. If it becomes vague due to mishandling of the classic rules of English, then maybe you need to follow them a bit. I don’t have a specific science to this.
Standard English has such a long list of weird and contradictory roles
rules.
Swypo
I understand that languages evolve, but for now, writing “it’s” when you meant “its” is a grammatical error.
It’s called polymorphism. It always amuses me that engineers, software and hardware, handle complexities far beyond this every day but can’t write for beans.
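(For the non-programmers: polymorphism means the same symbol doing different things depending on context, exactly like apostrophe-s. A quick sketch:)

```python
# The same operator, two meanings -- resolved by context, just like "'s".
print(2 + 3)        # 5     -> numeric addition
print("it" + "s")   # "its" -> string concatenation

# Same story with a method name:
class Dog:
    def speak(self): return "woof"

class Duck:
    def speak(self): return "quack"

for animal in (Dog(), Duck()):
    print(animal.speak())  # one call site, two behaviors
```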
Do you think it’s a matter of choosing a complexity to care about?
If you can formulate that sentence, you can handle “it’s means it is”. Come on. Or “common” if you prefer.
Yeah, man, I get it. Language is complex. I’m not advocating for the reinvention of English, it was just a conversational observation about a silly quirk.
Software engineer here. We often wish we could fix things we view as broken. Why is that surprising? Also, polymorphism is a concept in computer science as well.