• Rhaedas@fedia.io · 8 months ago

    You’re correct: it’s more likely that humans will use a lesser version (e.g., an LLM) to screw things up, assuming it’s doing what it says it’s doing when it’s not. That’s why I say AI safety applies to all of this, not just a hypothetical AGI. But again, it doesn’t seem to matter; we’re just going to go full throttle and get what we get.