Freedom is the right to tell people what they do not want to hear.

  • George Orwell
  • 1 Post
  • 42 Comments
Joined 22 days ago
Cake day: July 17th, 2025

  • I hear you - you’re reacting to how people throw around the word “intelligence” in ways that make these systems sound more capable or sentient than they are. If something just stitches words together without understanding, calling it intelligent seems misleading, especially when people treat its output as facts.

    But here’s where I think we’re talking past each other: when I say it’s intelligent, I don’t mean it understands anything. I mean it performs a task that normally requires human cognition: generating coherent, human-like language. That’s what qualifies it as intelligent - not generally, the way a human is, but as a narrow (weak) intelligence. The fact that it often says true things is almost accidental: a side effect of having been trained on a lot of correct information, not the result of human-like understanding.

    So yes, it just produces statistically likely responses, but that is intelligent in the technical sense. It’s not understanding. It’s not reasoning. It’s just really good at speaking.
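
    To make the “statistical” part concrete, here’s a toy sketch: a bigram model that only counts which word tends to follow which, then samples accordingly. It’s nothing like how a real LLM is built internally (those are neural networks over tokens, and the tiny corpus below is made up for illustration), but it shows how text can be generated purely from statistics, with no understanding anywhere in the loop.

    ```python
    # Toy "statistical text generator": a bigram word model.
    # Not how real LLMs work internally - just an illustration of
    # producing text from nothing but co-occurrence statistics.
    import random
    from collections import defaultdict, Counter

    # Made-up miniature "training data".
    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .").split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def next_word(prev):
        # Sample the next word in proportion to how often it followed `prev`.
        words, weights = zip(*following[prev].items())
        return random.choices(words, weights=weights)[0]

    # Generate a short, statistically plausible continuation.
    word, output = "the", ["the"]
    for _ in range(8):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))
    ```

    Scale that basic idea up by many orders of magnitude - neural networks, billions of parameters, web-scale data - and you get something that produces coherent language while still “only” doing prediction.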



  • I’ve had this discussion countless times, and more often than not, people argue that an LLM isn’t intelligent because it hallucinates, confidently makes incorrect statements, or fails at basic logic. But that’s not a failure on the LLM’s part - it’s a mismatch between what the system is and what the user expects it to be.

    An LLM isn’t an AGI. It’s a narrowly intelligent system, just like a chess engine. It can perform a task that typically requires human intelligence, but it can only do that one task, and its intelligence doesn’t generalize across multiple independent domains. A chess engine plays chess. An LLM generates natural-sounding language. Both are AI systems and both are intelligent - just not generally intelligent.





  • It doesn’t understand things the way humans do, but saying it doesn’t know anything at all isn’t quite accurate either. This thing was trained on the entire internet and your grandma’s diary. You simply don’t absorb that much data without some kind of learning taking place.

    It’s not a knowledge machine, but it does have a sort of “world model” that’s emerged from its training data. It “knows” what happens when you throw a stone through a window or put your hand in boiling water. That kind of knowledge isn’t what it was explicitly designed for - it’s a byproduct of being trained on data that contains a lot of correct information.

    It’s not as knowledgeable as the AI companies want you to believe - but it’s also not as dumb as the haters make it out to be.






  • The level of consciousness in something like a brain parasite or a slug is probably so dim that it barely feels like anything to be one. So even if you were reincarnated as one, you likely wouldn’t have much of a subjective experience of it. The only way to really experience a new life after reincarnation would be to come back as something with a complex enough mind to actually have a vivid sense of existence. Not that it matters much - it’s not like you’d remember any of your past lives anyway.

    If reincarnation were real and I had to bet money on how it works, I’d put it down to something like the many‑worlds interpretation of quantum physics - where being “reborn as yourself” just means living out one of your alternate timelines in a parallel universe.