It’s like getting your enemy to tell the truth. We’re not in some movie or TV series: in the future it’ll lie more and more convincingly and also refuse to answer uncomfortable questions.
I try not to get facts from LLMs ever
I do use RAG and tools to push content into them for summarization/knowledge extraction.
But even then it’s important to have an idea of your model’s biases. If you train a model that X isn’t true, then ask it to find info on that topic, it’s going to return crap results.
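For what it’s worth, the “push content into them” pattern is basically: retrieve the text yourself, then tell the model to work only from what you gave it. Here’s a minimal sketch, assuming an OpenAI-style chat client; retrieve_chunks() is a stand-in for whatever vector store or search index you already have, and the model name is just a placeholder.

```python
# Minimal RAG-style sketch: retrieve content yourself, then ask the model
# to summarize/extract strictly from the provided context rather than
# answering from its own weights.
from openai import OpenAI

client = OpenAI()

def retrieve_chunks(query: str) -> list[str]:
    # Placeholder: swap in your real vector search / keyword index here.
    return ["...retrieved passage 1...", "...retrieved passage 2..."]

def grounded_summary(query: str) -> str:
    context = "\n\n".join(retrieve_chunks(query))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the provided context. "
                        "If the context doesn't cover it, say so."},
            {"role": "user",
             "content": f"Context:\n{context}\n\n"
                        f"Task: summarize what this says about: {query}"},
        ],
    )
    return resp.choices[0].message.content

print(grounded_summary("model bias in retrieval"))
```

The “only the provided context” instruction is the whole point: it doesn’t eliminate the model’s baked-in biases, but it at least keeps the facts anchored to content you chose.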