You Don't Know When AI Is Wrong—The Three-Question Verification Test
TLDR: AI errors look exactly like correct AI answers: confident, well-formatted, and plausible. The Three-Question Test provides a simple framework for verifying AI outputs before acting on them.
Read these two AI responses:
Response A: "The project budget is $2.4 million with $1.2 million committed to vendor contracts."
Response B: "The project budget is $3.1 million with $800,000 committed to vendor contracts."
One is correct. One is hallucinated. Can you tell which is which?
You can't—not without checking against source data. And that's the problem. AI presents all information with equal confidence, whether accurate or fabricated.
The Verification Imperative
Trusting AI without verification is like trusting a new employee's numbers without checking their work. You might get lucky. You might also make decisions based on fiction.
The stakes in project management are real: budgets, timelines, stakeholder commitments, resource allocations. Wrong information leads to wrong decisions. Wrong decisions cost money, time, and credibility.
Verification isn't distrust—it's professionalism.
The Three-Question Test
Before acting on any AI output that matters, ask three questions:
Question 1: Does this match what I already know to be true?
Your existing knowledge is a validation baseline. If AI says the project is on track but you've been hearing concerns about delays all week, that mismatch signals potential error.
This question catches gross errors—AI outputs that contradict obvious reality. It doesn't catch errors in areas where you don't have prior knowledge.
Question 2: Can I verify this against a source?
For factual claims—numbers, dates, names, citations—check against source documents. If AI claims the budget is $2.4M, does your budget document say $2.4M?
Verification takes seconds for most claims. That small time investment prevents the potentially large problems that come from acting on wrong information.
Question 3: Does this pass the common sense test?
Some AI outputs are technically unverifiable but feel wrong. If AI suggests an approach that seems far too simple for a complex problem, or recommends something that would obviously create conflict, common sense signals trouble.
This question catches AI's tendency to provide plausible-sounding answers that don't hold up to practical scrutiny.
Applying the Test
Not every AI output needs rigorous verification. Apply the test in proportion to the stakes:
Low stakes: Quick mental check. Does this seem reasonable?
Medium stakes: Spot-check a few specific claims against sources.
High stakes: Comprehensive verification of all factual claims and logic.
For status reports going to executives, verify everything. For brainstorming ideas you'll filter anyway, light verification is sufficient.
Building Verification Reflexes
Verification should become automatic, not effortful. Build it into your workflow:
After AI generates a report, immediately open the source documents and spot-check key claims.
After AI provides advice, pause to consider whether it aligns with your situational knowledge.
After AI cites numbers, verify at least one or two before accepting the full output.
These habits add minimal time but catch significant errors.
When Verification Reveals Problems
When AI gets something wrong, don't just correct that instance—diagnose why.
Missing context: AI didn't have information needed for accuracy. Solution: provide more complete context.
Hallucination: AI fabricated information that wasn't in the provided context. Solution: use prompts that discourage fabrication (see the example below) and always verify factual claims.
Misinterpretation: AI understood your input differently than intended. Solution: provide clearer, more explicit instructions.
Each error type has a different fix. Diagnosis enables improvement.
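As an example of a fabrication-discouraging prompt (the wording here is illustrative, not a required formula): "Using only the budget document pasted below, summarize the committed vendor spend. If a figure does not appear in the document, say so rather than estimating it."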
The Calibration Benefit
Over time, verification calibrates your intuition about AI accuracy. You learn which types of outputs to trust more and which to verify more carefully.
You might discover AI is highly accurate for your status reports but tends to hallucinate numbers in budget analysis. That calibration lets you apply verification effort efficiently—heavy verification where AI is less reliable, lighter verification where it's proven accurate.
Learn More
Ready to build systematic AI verification into your workflow? Check out the complete training:
Watch the Project Management AI Playlist on YouTube
For more project management insights and resources, visit subthesis.com
Related Articles
AI Conversations Keep Going Off Track—The Recovery Playbook
When AI conversations derail—through misunderstandings, context loss, or accumulated errors—you need systematic recovery techniques. The Recovery Playbook provides methods to reset conversations and get back to productive work.
When NOT to Use AI for Project Management
AI is powerful but not universally appropriate. Some project management tasks require human judgment, relationship skills, or confidentiality that AI cannot provide. Know when to use AI and when to use your own brain.
AI Keeps Mixing Up Your Project Details—How to Prevent Context Confusion
When you work on multiple projects with AI, details bleed between conversations. Proper context isolation and explicit context setting prevent AI from confusing your projects and giving you wrong information.
