• tal@lemmy.today
    edit-2 · 4 days ago

    I’ve got some pessimistic views on long-term AI concerns. I’m not sure that aligning advanced AI goals with human goals is a solvable problem in the long run; we may never be able to achieve Friendly AI. I could believe that.

    But I certainly don’t think that AI development is “moving too fast”. There’s not really anything to gain by slowing development down. I remember Elon Musk proposing a six-month moratorium on development; that doesn’t make any sense. A pause is only something you’d want if there were an immediate milestone that you believed carried major risk. In general, either AI poses such an existential risk to humanity that you should ban it globally, halting all development and enforcing that halt, or you’d like to achieve it as soon as possible. We are not at a point where there is a consensus that that level of unacceptable risk exists, nor a global commitment to enforcing such a prohibition.

    I can believe that there might be an excess of infrastructure development in particular, and that the research side might not be moving quickly enough to support it. That is, we might be misallocating capital by buying a lot of specific chips without establishing that those chips will provide a worthwhile return. But in terms of the technology advancing…no, I can’t agree there.