Why We Still Trust AI Too Much – Even When It’s Wrong
Introduction
Imagine you’re working on a tricky math problem, and you get a recommendation from ChatGPT. It seems confident, so you trust it. But what if it’s actually wrong?
This is exactly what researchers Brett Puppart and Jaan Aru wanted to explore in their recent study. They tested whether a short AI literacy lesson could help high school students avoid blindly trusting ChatGPT’s answers. The results? Despite learning about ChatGPT’s limitations, students still accepted incorrect AI-generated solutions over half the time.
Why does this happen? And what does it mean for the future of AI in education? Let’s dive in.
The Problem: Over-Reliance on AI
Large language models like ChatGPT generate responses one word (token) at a time, picking whichever continuation is statistically most likely given their training data. This means they can sound confident and convincing whether they’re right or wrong. The problem is that many users don’t question the answers they get, leading to over-reliance: placing too much trust in AI recommendations without verifying them.
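To make “based on probability” concrete, here is a toy sketch. The scores below are invented for illustration and are not real model outputs; the point is only the mechanism of turning scores over candidate next words into a probability distribution.

```python
import math

# Toy illustration (not a real model): a language model scores possible
# next tokens and samples from the resulting probability distribution.
# These scores are made up purely to show the mechanism.
logits = {"Paris": 6.2, "Lyon": 2.1, "London": 1.4, "Berlin": 0.9}

# Softmax turns raw scores into probabilities that sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

for token, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{token}: {p:.2%}")

# The model emits whichever continuation is most probable given its
# training data. Fluency and confidence come from this distribution,
# not from any check that the answer is factually correct.
```

Nothing in that loop verifies the answer; the fluent, confident tone is a byproduct of picking likely words, which is exactly why wrong answers can read just as convincingly as right ones.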
Previous research has shown that over-reliance on AI can:
- Impair critical thinking by discouraging users from questioning information.
- Lead to cognitive offloading, where users depend on AI instead of actively solving problems.
- Increase the risk of believing misinformation, which could be dangerous in academic settings or decision-making.
With more schools introducing AI into classrooms, the concern is that students might use ChatGPT as an effortless shortcut without fully understanding the subject matter.
The Experiment: Testing AI Literacy Interventions
To find out whether AI literacy training could reduce over-reliance, Puppart and Aru conducted a study with Estonian high school seniors.
How the Study Worked
Participants were divided into two groups:
- Intervention Group: Given a short educational text explaining how ChatGPT works, its strengths, limitations, and best practices for using it.
- Control Group: Provided only basic information about ChatGPT without discussing its risks or limitations.
Students then solved math puzzles with ChatGPT’s help. Half of the AI-generated recommendations were intentionally incorrect.
The Results
- Over-reliance remained high – Students still accepted incorrect ChatGPT suggestions 52.1% of the time, regardless of their AI literacy training.
- AI literacy training didn’t help – The intervention did not significantly reduce over-reliance on false recommendations.
- A surprising side effect – Students in the intervention group were more likely to ignore correct ChatGPT responses, leading to under-reliance.
This suggests that instead of helping students make better AI-assisted decisions, the training made them too skeptical of AI, causing them to reject even its correct outputs.
Why Did AI Literacy Training Fail?
If learning about AI’s weaknesses doesn’t stop people from over-relying on it, what else is going on? There are several possible explanations.
1. AI Literacy Alone Isn’t Enough
Just knowing about AI’s risks doesn’t necessarily change how we think. Habits of slow, analytical thinking take time to develop, and while studies have shown that more deliberate thinking can help reduce bias, a quick AI literacy lesson is unlikely to override ingrained decision-making habits.
2. Human Brain vs. AI Confidence
ChatGPT presents answers in a highly fluent and confident style. Even when wrong, it doesn’t hedge its responses with uncertainty (“I might be wrong about this”). People tend to equate confidence with correctness, so they instinctively trust ChatGPT’s confident tone.
3. We Prefer the Easy Route
Research shows that people naturally try to minimize cognitive effort. If an answer seems good enough and is easy to accept, we’re less likely to spend extra time questioning it. Since ChatGPT serves up polished answers instantly, it encourages users to take the easy route instead of working through problems themselves.
What Can We Do About It?
If AI literacy training alone isn’t the solution, what might help people make better decisions with AI?
1. Encourage Deliberate Thinking
The study found that students who took longer to decide were less likely to accept incorrect AI recommendations. Teaching students to slow down and analyze AI-generated content critically could be more effective than simply warning them about its risks.
2. Use AI as a Collaborator, Not an Answer Machine
Rather than relying on ChatGPT to give answers, users should see it as a thought partner. Instead of asking, “What’s the answer to this math problem?”, try:
- “Can you walk me through the steps to solve this?”
- “What are the possible errors in the approach you just suggested?”
By shifting from passively receiving answers to actively engaging with AI, students retain control over their learning.
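If you reach ChatGPT through the API (say, in a homework helper or tutoring tool), the same shift can be baked into the prompt itself. The sketch below is only an illustration of that framing: it assumes the official OpenAI Python SDK, and the model name is a placeholder you would swap for whatever you actually use.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

problem = "Solve for x: 3x + 7 = 25"

# Ask for reasoning steps rather than a final answer, so the student
# stays actively involved instead of passively accepting a result.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name, not a recommendation
    messages=[
        {
            "role": "system",
            "content": (
                "You are a tutor. Walk through the solution step by step, "
                "pause before giving the final answer, and point out where "
                "a student could easily make a mistake."
            ),
        },
        {
            "role": "user",
            "content": f"Can you walk me through the steps to solve this? {problem}",
        },
    ],
)

print(response.choices[0].message.content)
```

The code itself is unremarkable; what matters is that the request asks for a process to check, not an answer to copy.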
3. Build AI Awareness Over Time
A one-time educational text might not be enough. Instead of a single AI literacy lesson, schools might need to integrate ongoing critical thinking exercises involving AI, allowing students to regularly practice verifying AI outputs in different contexts.
4. Design AI That Expresses Uncertainty
AI models could be designed to communicate uncertainty more clearly. Instead of saying, “The answer is X,” ChatGPT could indicate confidence levels, e.g., “I’m 60% sure about this, but you should double-check with other sources.”
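As a rough illustration of what this could look like on the application side, here is a sketch that asks the API for per-token log probabilities and appends a double-check warning when the average confidence is low. It assumes the OpenAI Python SDK; the model name and the 0.8 threshold are arbitrary placeholders, and average token probability is only a crude proxy for how likely the answer is to be correct.

```python
import math
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What is 17 * 23?"}],
    logprobs=True,  # ask the API to return per-token log probabilities
)

choice = response.choices[0]
token_logprobs = [t.logprob for t in choice.logprobs.content]

# Convert log probabilities back to probabilities and average them to get
# a rough per-token confidence score for the whole answer.
avg_confidence = sum(math.exp(lp) for lp in token_logprobs) / len(token_logprobs)

print(choice.message.content)
if avg_confidence < 0.8:  # arbitrary threshold, chosen for illustration only
    print(f"(Model confidence ~{avg_confidence:.0%}; double-check this answer.)")
```

Surfacing even a crude signal like this nudges the user to pause, which is exactly the habit the study suggests is missing.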
These changes would help users remain curious and skeptical rather than blindly trusting AI-generated content.
Key Takeaways
- Over-reliance on ChatGPT is widespread – Even when students are aware of AI’s flaws, they still trust incorrect AI recommendations more than half the time.
- AI literacy training alone doesn’t solve the problem – A short educational lesson did not reduce over-reliance and actually led to students rejecting more correct AI responses.
- Slower, more deliberate decision-making reduces over-reliance – Taking time to think before accepting AI responses was linked to better accuracy.
- The way AI communicates can mislead users – ChatGPT’s confident tone makes people trust it more, even when its answers are wrong.
- Successful AI use requires active engagement – Shifting from passive AI consumption to actively questioning and analyzing AI outputs may be more effective in reducing over-reliance.
Final Thoughts
AI is becoming an integral part of education, but the way people interact with it matters. This study shows that even well-intentioned AI literacy programs might not do enough to reduce blind trust in AI-generated content.
Instead of just teaching students about AI’s limitations, we need to encourage critical engagement with AI—questioning, verifying, and thinking critically rather than just accepting AI’s confident responses at face value.
The next time you use ChatGPT, ask yourself: “Is this actually correct, or am I just assuming it is?” That extra moment of reflection could make all the difference.
Got thoughts? Do you find yourself blindly trusting AI, or do you think critically when using tools like ChatGPT? Let’s discuss in the comments! 🚀💬
If you are looking to improve your prompting skills and haven’t already, check out our free Advanced Prompt Engineering course.
This blog post is based on the research article “Short-term AI literacy intervention does not reduce over-reliance on incorrect ChatGPT recommendations” by Authors: Brett Puppart, Jaan Aru. You can find the original article here.