cross-posted from: https://lemmy.ml/post/35078393
AI has made the experience of language learners way shittier because now people will just call them AI on the internet.
Also, imagine trying to learn a language and not being able to tell whether it’s your own lack of knowledge or if what you’re reading is actually AI slop and doesn’t make sense.
Nah, there’s a difference between chatbot and not knowing a language.
They’re predictive text, so grammar is actually one of its strong suits. The problem is the words don’t actually mean anything.
Someone online would likely spend a little time trying to understand it, run it through a translator, realize it was slop, and move on relatively quickly.
It’s much more likely that we sound like children than we sound like an LLM.
Nah. If I’m learning a new language, I’m going to speak like a toddler at first. I’m more likely to be accused of that than of being an LLM that can produce long paragraphs with minimal accuracy.
As a person who learned English as a second language, I would say probably not. If anything, a human’s grammar/conjugations might be off if they’re learning a new language. A machine, as others have pointed out, would have proper grammar but might be nonsensical.
Wouldn’t someone make mistakes unlike AI?
What if we had two circulatory systems, one for blood, and one for pesto?
Haven’t machine translators always been some form of language model? Pretty sure they were invented by training a system on two human translations of the same document/work.
What if you use AI to translate some text because you can’t express yourself well enough yet?
I think translation machines have always used similar language models
Why? Somebody learning a new language would make mistakes, an AI would make no mistakes.
I wouldn’t say no mistakes, just different types of mistakes.