



But can this be? Lemmy assured me that this capability of Mythos was just a marketing tactic.
Interestingly, LLMs manage to drive crazy not only the people most fanatical about them but also the ones who make opposing them a core part of their identity. And I encounter a lot more people belonging to the second group.


I’m not American and I too prefer the 12-hour clock. The 24-hour clock has never been intuitive for me; I always have to put in brain power to convert it in my head.


Where do they not leave you alone about it? 95% of the AI-related content I encounter online is people complaining about it on Lemmy.


Puritan gatekeepers have always existed. It’s just the goalposts that keep moving.


I feel like a variation of this exact article has been posted here every single day for the past year or so, and every time the same comments show up underneath. Nobody ever opens one of these threads and discovers a surprising or novel point of view.
I don’t understand why people spend their whole day talking about something they don’t like. It’s so bizarre to me.


I don’t think you fully appreciate the implications of creating something orders of magnitude more intelligent than us. You can’t outsmart something smarter than you. Even if it was only as smart as the smartest human, being a computer it would still process information a million times faster. Everything would happen in super-slow motion from its perspective. It would have so much time to consider each move.
Humans aren’t anywhere near the strongest primate on Earth, yet we’re by far the dominant one. I don’t think a gorilla has any idea just how much smarter we are, and even if it did, it would probably still assume that a war with humans would mean us outnumbering them, hitting, biting, and throwing things at them. They’d have no clue we can end them from a distance without them ever knowing what hit them. They can’t even imagine all the ways we could screw things up for them, and the ways we already have, even when we have nothing against gorillas.
The point isn’t that I think this is absolutely going to happen, but just to highlight that we’re effectively rolling the dice on it and seeing what happens - which I find incredibly irresponsible. This whole “it’ll be fine, we can always turn it off” attitude is incredibly naive and short-sighted.


Nobody possesses artificial superintelligence, and nobody claims to.


AGI would be capable of solving all our problems. It’s not LLMs that Bostrom is talking about here.


It’s not a matter of deciding but a problem to try and solve. In most cases we get to learn from our mistakes, but when it comes to AGI we might not.
Or are you suggesting we shouldn’t even think about it but rather just roll the dice and see what happens?


AGI is always AI, but AI isn’t always generally intelligent. AI is the parent category that AGI is a subcategory of. It’s like the difference between the terms “plant” and “dandelion.” All dandelions are plants, but not all plants are dandelions.


It’s to illustrate the alignment problem. What you literally ask isn’t always what you actually want. This is usually obvious to humans but not necessarily to an AI. If you sit in a self-driving car and tell it to take you to the airport as fast as possible, you might arrive three minutes later covered in vomit with the entire police department after you. That’s obviously not what you wanted, but you got exactly what you asked for.
The paperclip maximizer is a cartoon example of this. If you just ask it to make as many paperclips as possible, that becomes its priority number one and everything gets turned into paperclips and you might not get the chance to tell it this isn’t what you meant.
A kind of real-life example is the story of a city that started paying people for rat tails to eradicate the rat population, only for folks to start breeding rats instead to make money. It’s a classic case of unintended consequences due to underspecified requirements.


The paperclip maximizer is a thought experiment. That’s all. It’s an overly simplistic way to explain the gist of a more complex idea. The fact that even this basic thought experiment goes over people’s heads just further proves why that simplification was needed in the first place.


There is no evidence for consciousness anywhere in the universe except for our own subjective experience of it. If I wasn’t conscious, I wouldn’t have a clue it was even a thing.
While it’s true we haven’t discovered consciousness in non-biological systems, it’s also true that besides ourselves we haven’t discovered it in biological systems either - because there’s no way to measure it. We just assume other humans and animals are conscious because their behavior suggests it, but there’s no scientific way to prove it actually feels like something to be them. Consciousness is entirely a subjective experience.
It’s perfectly valid to claim our current AI systems aren’t conscious. We can’t know with absolute certainty, but it’s a relatively safe assumption. However, the jump from that to claiming they’ll never be isn’t valid.


Yeah, they seem to be suggesting that there’s something inherently mystical happening in biology that can’t happen in computers, despite the fact that both are made of matter that follows the laws of physics.


Yeah, I’m not a native English speaker, so I’m not sure about the correct terminology. “Oversteer” is probably the better term here.


That’s the messing around part.


I’ve been seriously considering investing in the Knipex micro flush-cut side cutters, but 30 euros is a heavy investment for someone with zip ties for shoelaces.


But zip ties can withstand being showered with sparks from an angle grinder.


So what about what I just said isn’t true? Aren’t you now just repeating that exact narrative, that the “too powerful to release” line is just marketing speech?