Let me explain why doing this is actually a bad idea and can hurt you.
*Incorrect! The correct answer is: Eugene Cernan
*Incorrect! The correct answer is: Explorer 6 (if we only count successful missions)
And the big problem is not that it’s an incorrect answer.
After all, it says at the bottom: "ChatGPT can make mistakes."
The problem is that there is no way to tell, unless you already know the right answer.
But if you did, you wouldn’t be searching for it anyway.
This phenomenon is called hallucination: an AI language model makes a false statement, yet presents it as truth.
Here are 3 ways to deal with hallucinations:
1/ Ask ChatGPT to search
OpenAI has built web search into ChatGPT, so it can retrieve results from the internet.
Those results might still be inaccurate, but at least you can quickly verify the source.
*correct answer: Eugene Cernan
*correct answer: Explorer 6 (if we only count successful missions)
Sometimes it searches on its own and sometimes it doesn’t.
The most reliable approach is to explicitly tell it to search, and it will (if you work with the API instead of the app, see the sketch below).
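For API users, the underlying idea is the same: retrieve a source first, then make the model answer from that source. This is only a minimal sketch of the principle, not ChatGPT’s actual search pipeline; the URL, model name, and truncation limit are placeholder assumptions.

```python
# Retrieve-then-answer sketch: fetch a source page, then force the model
# to answer only from it. URL and model name are illustrative assumptions.
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Retrieve a source document (raw HTML, truncated to keep the prompt small).
page = requests.get("https://en.wikipedia.org/wiki/Apollo_17", timeout=10)
source_text = page.text[:8000]

# 2. Ask the model to answer only from the retrieved text.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer only from the provided source. "
                                      "If the source does not contain the answer, say so."},
        {"role": "user", "content": f"Source:\n{source_text}\n\n"
                                    "Question: Who was the last man on the moon?"},
    ],
)
print(response.choices[0].message.content)
```

Because the answer is tied to a source you chose, you can check it yourself instead of taking the model’s word for it.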
2/ Use Perplexity
Perplexity is an AI-powered research and conversational search engine. It searches the web first, then uses an LLM to compose an answer with citations to its sources, so you can verify the claims yourself (API users: see the sketch below).
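Perplexity also offers an API that works with the standard OpenAI client. A minimal sketch, assuming the base URL and the "sonar" model name (check Perplexity’s current documentation before relying on either):

```python
# Minimal sketch of calling Perplexity's OpenAI-compatible API.
# Assumptions: base URL and "sonar" model name; set PERPLEXITY_API_KEY first.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",
)

response = client.chat.completions.create(
    model="sonar",
    messages=[{"role": "user",
               "content": "What was the first satellite to transmit a photo of Earth?"}],
)
print(response.choices[0].message.content)
```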
3/ Use your knowledge base
If you configure a Custom GPT, you can add “knowledge” files to it, and the GPT will retrieve the relevant passages from those files and base its answers on them.
Useful for content creation, coaching, consulting, teaching, or marketing (a sketch of the underlying retrieval idea follows below).
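Under the hood, “knowledge” retrieval is essentially a search over your own files: chunk them, embed the chunks, and pull the closest match into the prompt. Custom GPTs handle this for you inside ChatGPT; here is a minimal sketch of the same idea via the API, with hypothetical document snippets, and the embedding and chat model names are assumptions.

```python
# Retrieval over your own "knowledge" chunks, the idea behind a Custom GPT's
# knowledge files. Documents and model names are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Hypothetical knowledge-base chunks (in practice: chunks of your files).
docs = [
    "Our coaching program runs for 12 weeks and includes weekly calls.",
    "Refunds are available within 14 days of purchase.",
    "The marketing course covers email funnels and paid ads.",
]

def embed(texts):
    """Embed a list of strings and return an (n, d) array of vectors."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

doc_vectors = embed(docs)

question = "How long is the coaching program?"
q_vector = embed([question])[0]

# Cosine similarity between the question and every chunk; keep the best one.
scores = doc_vectors @ q_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vector)
)
best_chunk = docs[int(np.argmax(scores))]

# Answer strictly from the retrieved chunk.
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer only from the provided context."},
        {"role": "user", "content": f"Context: {best_chunk}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

Because the model is constrained to your own material, its answers stay grounded in what you actually wrote rather than in whatever it happens to remember.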
So there you have it! 3 ways to make it much harder for ChatGPT to lie to you.