
Don’t Ask Newton
Here’s some solid advice:
Don’t ask Newton about the hydrogen bomb.
Who’s Newton?—you might ask. But if you just did that, stop here. This article isn’t for you.
The other day, a well-respected instructor and author wrote about his experience with ChatGPT. He asked it how to implement a certain AI feature in his course, and ChatGPT returned a detailed set of instructions.
The instructions covered three areas.
First, it explained how to fine-tune everything during setup, so that the AI tool draws its context from the course material, and not from its own knowledge base. ChatGPT provided insights into how other instructors prepare lecture summaries to feed the relevant details into the AI.
Second, ChatGPT highlighted the importance of reinforcing the rule by explicitly reminding the AI to use material only from the given lecture.
Finally, ChatGPT generated a ready-to-use template for instructors adding this AI feature to their courses.
The template included instructions, hints, examples—everything looked polished and convincing.
Except—it was just a hallucination!

Just as Da Vinci couldn’t have weighed in on JavaScript, and Sir Isaac Newton couldn’t have advised on the Manhattan Project, ChatGPT had no way of knowing about this new feature.
The knowledge cutoff date for ChatGPT 5 is October 1, 2024. The new AI feature it gave hints about was added in late May 2025.
Even if instructed to search the Internet, ChatGPT would have had little chance of finding anything meaningful: there are very few publicly available details about how this feature works internally.
So the generated answer was more a mix of guesswork and common sense than an actual fact-based reply.
When explicitly prompted not to make things up, ChatGPT would have replied:
“Sorry, but I can’t help you with that because this feature was released after my cutoff. Do you want me to look up the information on the internet?”
But again, there are no public details about the internal workings of this AI feature, so that wouldn’t have helped either.
I reproduced the experiment, asking ChatGPT to fact-check its own responses while explicitly telling it not to make things up.
The answer below serves as yet another cautionary tale: never trust AI without seeking hard proof.
“Since my training data only goes up to October 1, 2024, I couldn’t possibly have factual knowledge about a feature introduced in May 2025.
👉 That means any guidance I provided earlier about that feature beyond October 2024 was me extrapolating from:
– General LLM behavior, and
– How Acme Inc. usually structures interactive features.
In other words, I wasn’t actually fact-checking against verified Acme Inc. docs from 2025 — because I don’t have them in my base knowledge. Unless I explicitly pull from the web in real time, I can’t claim certainty about features released after my cutoff.”
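If you want to reproduce this kind of self-check outside the chat window, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and question are my own illustration, not the exact setup from the experiment above.

```python
# Minimal sketch: ask the model to state its cutoff and flag anything newer,
# instead of letting it quietly extrapolate. Model name, prompt wording, and
# the question itself are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "State your knowledge cutoff date before answering. "
    "If the question concerns anything released after that date, say so "
    "explicitly, do not guess, and offer to search the web instead. "
    "Never present speculation as verified fact."
)

question = "How do I set up the new course AI feature released in May 2025?"

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model; swap in the one you actually use
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

The system prompt does the same job as typing “don’t make things up and fact-check yourself” into the chat box; it just makes the instruction explicit and repeatable.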
Inevitable Hallucination
These LLMs are just guessing machines (ok, statistical prediction machines) that operate on colossal amounts of data. They’re getting better at an alarming rate, but there’s mathematical proof that they cannot avoid hallucinating.
Here’s the rough shape of the reasoning (simplified and in layman’s terms, so please, mathematicians, don’t nitpick!):
- First, let’s define hallucination: the model says something that sounds right, but is actually wrong.
- Now, computer science has some deep results showing that no program can learn every possible pattern in the universe perfectly. There will always be some situations where it guesses wrong. (A toy version of this argument appears right after this list.)
- An AI model is ultimately “just” a program. A really big and clever one, but still bound by those same mathematical limits. So, it cannot possibly “know” everything without making an error somewhere.
- How about adding a fact-checker on top? The catch: even fact-checkers are programs, and they run into the same fundamental roadblocks. Some mistakes simply can’t be detected or prevented in advance. We’re back to square one: some errors remain, no matter what we do.
- Put all that together, and we reach the conclusion: no matter how advanced these AI models get, they will always have moments where they hallucinate. It’s a problem that cannot be solved. (Btw, math is rife with unsolvable problems. There’s even a name for them, but I can’t recall it, and I’ll resist making one up. ;-))
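For the curious, here is a toy version of the diagonalization-style argument that the papers linked at the end build on. It is a rough sketch under simplifying assumptions (a countable list of computable models, one yes/no question per model), not the papers’ exact construction.

```latex
% Toy diagonalization sketch (simplified; not the cited papers' exact proof).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Enumerate all computable models $h_1, h_2, h_3, \dots$ (any LLM, with or
without a computable fact-checker bolted on top, is still some $h_i$).
Construct an adversarial ground truth $f$ that flips each model's answer on
one designated question $q_i$:
\[
  f(q_i) =
  \begin{cases}
    1 & \text{if } h_i(q_i) = 0, \\
    0 & \text{if } h_i(q_i) = 1.
  \end{cases}
\]
Then every model $h_i$ is wrong on its own $q_i$, that is $h_i(q_i) \neq
f(q_i)$, so no computable model answers every question correctly, and some
confident wrong answers (hallucinations) remain unavoidable in the worst case.
\end{document}
```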
Final words & advice
I’ll leave you with this: always ask ChatGPT, Claude, Grok, or whatever AI tool you use to fact-check its replies, especially if you plan to act on the outcome.
If we’re not careful, AI might confidently advise us to put glue on pizza or eat rocks.
Check out next: The Missing Skill in Software Development (it’s not what you think).
Also, here are a few studies worth reading if interested:
– LLMs Will Always Hallucinate, and We Need to Live With This -> https://arxiv.org/html/2409.05746v1
– Hallucination is Inevitable: An Innate Limitation of Large Language Models -> https://arxiv.org/abs/2401.11817
– Trust Me, I’m Wrong: LLMs Hallucinate with Certainty Despite Knowing the Answer -> https://arxiv.org/abs/2502.12964