  • Specifically, LLMs fail several of the axioms that underpin the “theory” of singularity:

    • Fail - Recursive self-improvement is possible - LLMs aren’t writing their own code; beyond specific domains like image generation and programming, it’s not really clear how an LLM would improve itself in any general sense.
    • Fail? - Moore’s Law (or its generalization) - we seem to be hitting the limits of fitting more transistors onto a chip, and LLMs are not going to solve that.
    • Fail - Human cognition is near the threshold for AI being able to self-improve - LLMs really just demonstrate something AI researchers have known for a while: people are dumb and will anthropomorphize anything the moment it can pretend to talk to them.
    • Fail - Greater intelligence reliably translates into greater real-world capability - tech CEOs are doing a great job of demonstrating that this isn’t true, so the idea that a supersmart general AI would run the world, rather than be stuck generating deep fake porn of children, isn’t a given.
    • Fail? - There is no fundamental ceiling on intelligence - each iteration of LLMs seems to return a smaller improvement than the last, which to my simple meat-bag brain implies there is a ceiling on the intelligence of LLMs. I don’t know if this points to some fundamental limit of intelligence, but LLMs at least look like they have an asymptotic limit.
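
    To be fair, shrinking gains only imply a ceiling under an extra assumption. Here’s a minimal sketch (hypothetical numbers, nothing measured) showing that if each generation’s improvement decays geometrically, total capability converges to a finite asymptote — whereas gains that shrink like 1/n still add up without bound:

    ```python
    # Sketch: per-generation gains shrinking by a constant factor r < 1
    # (an assumed model, not real benchmark data) sum to a finite limit.
    def total_capability(first_gain, r, generations):
        """Sum a geometric series of per-generation improvements."""
        return sum(first_gain * r**n for n in range(generations))

    # Limit of the geometric series is first_gain / (1 - r) = 20 here.
    print(total_capability(10, 0.5, 50))  # approaches 20, never exceeds it

    # Contrast: gains shrinking like 1/n (harmonic) also get smaller each
    # generation, yet their sum grows without bound - so "each step is
    # smaller" alone doesn't prove a ceiling.
    print(sum(10 / n for n in range(1, 51)))  # keeps growing as n increases
    ```

    So the asymptote argument holds if improvements fall off fast enough (roughly geometrically), which is what the LLM scaling trend looks like to me, but it isn’t guaranteed by diminishing returns alone.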