The 3-Rule Prompt That Stops ChatGPT, Gemini, and Claude From Guessing

In this video, Dylan Davis explains how to prevent AI models like ChatGPT, Gemini, and Claude from “hallucinating” or guessing when extracting information from uploaded documents. He provides a framework centered on model selection, specific prompting rules, and verification methods.

1. Choose the Right Model

The first step is to use high-level reasoning models to reduce errors. As of the video’s date, recommended models include:

  • ChatGPT: GPT-5.2 with extended reasoning.
  • Claude: Opus 4.5 with extended reasoning.
  • Gemini: Gemini 3 Pro.

2. The 3-Rule Grounding Prompt

To stop the AI from falling back on its general training data or inventing details, include these three rules in your prompt:

  • Strict Grounding: Tell the AI to base its answers only on the uploaded documents and nothing else.
  • Permission to be Uncertain: Explicitly state that if the information isn’t found, the AI should say “not found” rather than guessing.
  • Mandatory Citations: Require the AI to provide the document name, page/section, and a direct quote for every claim it makes.

Bonus Rules:

  • Mark Unverified: Ask the AI to flag any information it is “unsure” about as “unverified” so you know what to double-check first.
  • High Stakes Mode: For legal or financial work, tell the AI to only state claims it is 100% confident are supported. This trades coverage for reliability: you get less information back, but what remains needs far less double-checking.
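The rules above can be packaged as a reusable prompt preamble. The sketch below is a minimal illustration; the rule wording and the `build_grounding_prompt` helper are my own phrasing, not the video's verbatim prompt.

```python
# Build a grounded prompt from the three core rules plus the optional
# bonus rules, then append the user's question at the end.

CORE_RULES = [
    # Strict grounding: answers must come only from the uploaded documents.
    "Base every answer ONLY on the uploaded documents. Do not use outside knowledge.",
    # Permission to be uncertain: "not found" beats a guess.
    'If the documents do not contain the answer, reply "not found" instead of guessing.',
    # Mandatory citations: document name, location, and a direct quote per claim.
    "For every claim, cite the document name, the page or section, and a direct quote.",
]

BONUS_RULES = {
    # Flag shaky claims so the reader knows what to double-check first.
    "mark_unverified": 'Flag any claim you are unsure about as "unverified".',
    # High-stakes mode for legal or financial work.
    "high_stakes": "Only state a claim if you are 100% confident the documents support it.",
}

def build_grounding_prompt(question: str, *, mark_unverified: bool = False,
                           high_stakes: bool = False) -> str:
    """Assemble the numbered rule list, then the question."""
    rules = list(CORE_RULES)
    if mark_unverified:
        rules.append(BONUS_RULES["mark_unverified"])
    if high_stakes:
        rules.append(BONUS_RULES["high_stakes"])
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(rules, start=1))
    return f"Follow these rules strictly:\n{numbered}\n\nQuestion: {question}"
```

For everyday use you would paste the returned string (plus your documents) into the chat; the flags let you opt in to the bonus rules only when the stakes warrant them.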

3. Verification Methods

Once the AI provides an output, use these three levels of verification to ensure accuracy:

  • Self-Check: Ask the same AI to “rescan the document” and provide exact quotes for every claim. Forcing a fresh rescan prevents it from simply agreeing with its own previous summary.
  • Cross-Model Check: Take the first AI’s analysis and the source document, then feed them into a different AI model. Ask the second model to flag any claims not supported by the document.
  • NotebookLM: Upload your document and the AI’s analysis to Google’s NotebookLM. Ask it which claims are unsupported; it provides clickable citations to the exact spot in the source text, making manual verification much faster.