I’m being serious. I think that if instead of Trump there was just a prompt “engineer” the country would actually run better. Even if you train it to be far right.
And this is not praise of LLMs…
While I agree with most of your points, this is a strange thing to say. Sure, you and I don’t know why LLMs return the responses they do, but the people who actually make them definitely know how they work.
It isn’t just you and me. Not even the people who designed them fully understand why they produce the responses they do. It’s a well-known problem. Our understanding is definitely improving over time, but we still don’t fully know how they do it.
Here’s the latest exploration of this topic I could find.
Hmm. That is interesting, and I admit it does seem like even the company that made the model is still researching how it works, but some parts of the article seem a bit dramatic (not sure if there’s a better word).
Like when it says the model doesn’t “admit” to how it actually solved the math problem when asked. Of course it doesn’t: it’s built for humans to interact with, so it isn’t going to tell a human how a computer does arithmetic; it makes more sense for it to explain the “human” method.
Interesting stuff though, thanks for the article!