

"We don’t know why LLMs return the responses they return."
While I agree with most of your points, this is a strange thing to say. Sure, you and I don’t know why LLMs return the responses they do, but the people who actually make them surely know how they work.
Hmm. That is interesting, and I admit it does seem like even the company that made the model is still researching how it works. Still, some parts of the article seem a bit dramatic (not sure if there is a better word).
Like when it says the model doesn’t “admit” to how it solved the math problem when asked. Of course it doesn’t: it is made for humans to interact with, so it is not going to tell a human how a computer does math. It makes more sense for it to explain the “human” method.
Interesting stuff though, thanks for the article!