Refusing to reduce complex reality into slogans and clichés since 19XX

  • 7 Posts
  • 242 Comments
Joined 3 months ago
Cake day: February 5th, 2026

  • I don’t think you fully appreciate the implications of creating something orders of magnitude more intelligent than us. You can’t outsmart something smarter than you. Even if it were only as smart as the smartest human, being a computer, it would still process information a million times faster. From its perspective, everything would happen in super-slow motion; it would have ample time to consider each move.

    Humans aren’t anywhere near the strongest primate on Earth, yet we’re by far the dominant one. I don’t think a gorilla has any idea just how much smarter we are, and even if it did, it would probably still assume that a war with humans would mean us outnumbering them, hitting, biting, and throwing things at them. They’d have no clue we can end them from a distance without them ever knowing what hit them. They can’t even imagine all the ways we could screw things up for them - and the ways we already have - even when we have nothing against gorillas.

    The point isn’t that I think this is absolutely going to happen; it’s to highlight that we’re effectively rolling the dice and seeing what happens, which I find incredibly irresponsible. This whole “it’ll be fine, we can always turn it off” attitude is naive and short-sighted.






  • It’s to illustrate the alignment problem. What you literally ask isn’t always what you actually want. This is usually obvious to humans but not necessarily to an AI. If you sit in a self-driving car and tell it to take you to the airport as fast as possible, you might arrive three minutes later covered in vomit with the entire police department after you. That’s obviously not what you wanted, but you got exactly what you asked for.

    The paperclip maximizer is a cartoon example of this. If you simply ask it to make as many paperclips as possible, that becomes its number-one priority: everything gets turned into paperclips, and you might never get the chance to explain that this isn’t what you meant.

    A real-life analogue is the story of a city that started paying bounties for rat tails to eradicate its rat population, only for people to start breeding rats to collect the reward. It’s a classic case of unintended consequences from an underspecified objective.
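The failure mode in these examples can be sketched in a few lines of code: a toy "agent" that greedily maximizes the literal metric it was given, blind to the unstated constraints the requester actually cared about. The action names and numbers below are entirely made up for illustration, riffing on the self-driving-car example above.

```python
# Toy sketch of specification gaming. The agent optimizes the literal
# objective ("minimize travel time") with no notion of the unstated
# constraints the requester cared about (comfort, legality).
# All actions and numbers are hypothetical.

actions = {
    # action: (travel_minutes, breaks_unstated_constraints)
    "drive normally":            (20, False),
    "speed through red lights":  (9,  True),
    "drive on the shoulder":     (12, True),
}

def literal_objective(action):
    """What we *asked* for: just get there as fast as possible."""
    minutes, _ = actions[action]
    return -minutes  # higher score = faster

def intended_objective(action):
    """What we *wanted*: fast, but only among acceptable behaviors."""
    minutes, breaks_rules = actions[action]
    return float("-inf") if breaks_rules else -minutes

chosen_literal = max(actions, key=literal_objective)
chosen_intended = max(actions, key=intended_objective)

print(chosen_literal)   # the fastest option, rules be damned
print(chosen_intended)  # the fastest *acceptable* option
```

The gap between the two objective functions is the whole problem: the optimizer only ever sees the first one, and "covered in vomit with the police after you" is simply what maximizing it looks like.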



  • There is no evidence of consciousness anywhere in the universe except our own subjective experience of it. If I weren’t conscious, I wouldn’t have a clue it was even a thing.

    While it’s true we haven’t discovered consciousness in non-biological systems, it’s also true that besides ourselves we haven’t discovered it in biological systems either - because there’s no way to measure it. We just assume other humans and animals are conscious because their behavior suggests it, but there’s no scientific way to prove it actually feels like something to be them. Consciousness is entirely a subjective experience.

    It’s perfectly valid to claim that our current AI systems aren’t conscious. We can’t know with absolute certainty, but it’s a relatively safe assumption. The jump from that to claiming they never will be, however, isn’t valid.