The AI Illusion: Why ChatGPT and Pseudolaw Trick Us Into Believing Nonsense

By Stephen Smith · 21 Mar


Artificial intelligence is transforming the way we produce and consume information. Tools like ChatGPT generate text that sounds human-like, while pseudolegal arguments borrowed from sovereign citizen groups are creeping into courtrooms worldwide. But what if both of these trends have more in common than we realize?

Dr. Joe McIntyre’s research explores a fascinating parallel: Both ChatGPT and pseudolaw rely on form over substance, creating the illusion of meaning while lacking actual depth or validity. This blog post will break down how human psychology makes us vulnerable to these illusions—and why it’s crucial to develop digital and legal literacy to see through them.


Why Do People Fall for AI and Pseudolaw?

If you’ve ever seen a face in a cloud or a smiley face in an electrical outlet, you’ve experienced pareidolia—the brain’s tendency to find patterns in random input. This pattern-seeking ability is essential for survival but can also deceive us.

When we read ChatGPT’s responses or listen to a confident pseudolaw guru citing legal nonsense, we think we’re seeing expertise and meaningful information. In reality, we’re witnessing pattern recognition gone wrong—what McIntyre calls conceptual pareidolia. We mistake form (legalese or well-written text) for actual substance (valid arguments or true facts).


ChatGPT: A Confidence Trick on a Global Scale

How Large Language Models Mimic Human Speech

At its core, ChatGPT is not designed to understand meaning. It predicts the next word in a sentence based on massive amounts of training data, creating statistically likely responses rather than fact-checked information.

Imagine an AI system trained to predict movie dialogue. It might generate a convincing Star Wars script by recognizing common phrases, but it does not understand the plot, emotions, or themes of the movies. It’s just playing an advanced game of Mad Libs with probabilities.
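To see what that means in practice, here is a minimal, illustrative sketch: a toy bigram model that “learns” nothing except which word tends to follow which in its training text, then generates statistically plausible continuations. (This is a deliberate simplification; real LLMs use neural networks trained on vast corpora, but the predict-the-next-word principle is the same.)

```python
import random
from collections import Counter, defaultdict

# A tiny corpus standing in for "massive amounts of training data".
corpus = (
    "the force is strong with this one . "
    "the force will be with you always . "
    "this one is strong with the force . "
).split()

# The entire "model" is a table of next-word counts.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Emit statistically likely words, one at a time.
    Nothing here checks facts or meaning; it only replays patterns."""
    words = [start]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        # Sample in proportion to how often each word followed in training.
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the force will be with this one ."
```

Run it a few times and you get fluent-looking Star Wars pastiche. At no point does the program know what the Force is; it only knows that “force” often follows “the”.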

Why We Trust ChatGPT Even When It’s Wrong

Research shows that confidence influences credibility—the so-called confidence heuristic. We tend to trust information that is presented without hesitation.

LLMs take advantage of this bias by generating fluent and authoritative text. Unlike a human who might hedge with “I think” or “this might be wrong,” ChatGPT will deliver its responses with complete confidence—even when they’re incorrect.

As a result, naïve users can mistake hallucinated content for facts, leading to misinformation in journalism, education, and even the legal system.
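That invisibility of doubt is easy to demonstrate with the toy model above (this snippet reuses its `bigrams` table and `random` import): every word the model picks has a measurable probability, sometimes little better than a coin flip, yet the finished sentence shows none of that uncertainty. Real chat systems behave the same way: token probabilities are computed internally, then dropped from the polished text you read.

```python
def generate_with_confidence(start: str, length: int = 6):
    """Like generate(), but also record how (un)sure each word choice was."""
    words, probs = [start], []
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        total = sum(followers.values())
        choices, weights = zip(*followers.items())
        pick = random.choices(choices, weights=weights)[0]
        probs.append(round(followers[pick] / total, 2))
        words.append(pick)
    return " ".join(words), probs

text, confidence = generate_with_confidence("the")
print(text)        # reads as fluent, assured prose...
print(confidence)  # ...even when picks were near coin flips, e.g. 0.33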


Pseudolaw: When Legal Gobbledygook Sounds Convincing

What Is Pseudolaw?

Pseudolaw operates much like ChatGPT but in the courtroom. It consists of false legal arguments that sound sophisticated but have no actual legal basis. Sovereign citizens and other pseudolegal theorists claim that:

  • Governments don’t have legitimate authority over them.
  • They have a secret second identity (a “strawman”) that debts and taxes apply to.
  • They can avoid legal obligations simply by using the right legal-sounding jargon.

The Pseudolaw Playbook: Legalese Without the Law

Pseudolaw flourishes because legal language is inherently complex. Much like AI-generated text, pseudolaw mimics legal forms and terminology but lacks actual legal reasoning.

For example, sovereign citizens often file affidavits containing:
✅ Formal legal language (“I declare under penalty of perjury…”)
✅ Fancy formatting that looks official
✅ Outdated legal citations that sound authoritative

This ritualistic use of legal jargon tricks both practitioners and victims into believing the arguments hold legal weight—when they don’t.


Parallel Psychological Traps: Why We Fall for It

1. Conceptual Pareidolia: The Brain Sees Meaning Where There Is None

Both ChatGPT and pseudolaw exploit our brain’s pattern-seeking system.

  • ChatGPT users trust outputs because they look like well-written sentences.
  • Pseudolaw adherents trust their arguments because they sound like legal reasoning.

In both cases, the output may be completely disconnected from reality, but our brains instinctively associate familiar patterns with truth.

2. The Confidence Heuristic: Mistaking Confidence for Competence

  • ChatGPT writes with absolute assurance—so readers assume it knows what it’s talking about.
  • Sovereign citizens perform legal rituals with great conviction—so followers assume they are correct.

When information is presented in an authoritative way, we are less likely to question it, even when we should.

3. Magical Thinking: The Promise of a Secret Shortcut

Both AI and sovereign citizens sell the dream of hidden knowledge:

🤖 ChatGPT: “You don’t need to research—just ask me anything, and I’ll tell you!”
🧙 Pseudolaw: “The government is hiding the real law, but I can teach you the secret to beating the system!”

This taps into psychological tendencies toward wishful thinking and conspiracy belief, where people long for a hidden truth that “experts” don’t want them to know.


The Real Danger: When Form-Over-Substance Has Consequences

1. AI-Powered Legal Disasters

ChatGPT and similar tools have already been misused in legal cases:

  • Lawyers in New York and Australia submitted legal briefs with fabricated cases generated by ChatGPT.
  • Judges in the Netherlands and the U.S. cited AI outputs in court, not realizing they contained hallucinated legal principles.

These incidents show how easy it is to be fooled by AI that looks smart but cannot verify truth.

2. Courts Are Struggling to Handle Pseudolaw Cases

Judges are overwhelmed by sovereign citizens filing meaningless lawsuits, clogging the legal system with fictional claims. One $50 parking ticket can escalate into $2,000 in court fees because of time wasted on pseudolegal arguments.


How to Fight Back: Legal & AI Literacy

Both pseudolaw and AI-generated content succeed when users don’t have the knowledge to distinguish appearance from truth. The solution? Better digital and legal education.

✅ For AI Users: Critical thinking skills must keep pace with AI advancements. Don’t assume ChatGPT is correct—ask for sources, verify facts, and trust human expertise in critical fields.

✅ For Legal Consumers: Plain-English legal education should be emphasized in schools. Understanding how law actually works is the best defense against pseudolaw scams and false claims.


Key Takeaways

🔎 ChatGPT and pseudolaw both create the illusion of knowledge. Their outputs look trustworthy, even when they’re nonsense.

🧠 Human psychology makes us vulnerable to these illusions. Our brains mistake familiar patterns and confidence for real expertise.

⚖️ AI and pseudolaw are causing real-world damage. Misinformation, legal trouble, and clogged courts are just the beginning.

📢 Legal and AI literacy are the best defenses. We must teach critical thinking and verification skills in an era of algorithmic and legal deception.


Final Thought

The next time you read an AI-generated response or hear someone argue they don’t have to pay taxes because they didn’t consent to the law, ask yourself:
Am I looking at knowledge—or just a really good illusion?

If you are looking to improve your prompting skills and haven’t already, check out our free Advanced Prompt Engineering course.

This blog post is based on the research article “Pareidolic Illusions of Meaning: ChatGPT, Pseudolaw and the Triumph of Form over Substance” by Joe McIntyre. You can find the original article here.

Stephen Smith
Stephen is an AI fanatic, entrepreneur, and educator, with a diverse background spanning recruitment, financial services, data analysis, and holistic digital marketing. His fervent interest in artificial intelligence fuels his ability to transform complex data into actionable insights, positioning him at the forefront of AI-driven innovation. Stephen’s recent journey has been marked by a relentless pursuit of knowledge in the ever-evolving field of AI. This dedication allows him to stay ahead of industry trends and technological advancements, creating a unique blend of analytical acumen and innovative thinking which is embedded within all of his meticulously designed AI courses. He is the creator of The Prompt Index and a highly successful newsletter with a 10,000-strong subscriber base, including staff from major tech firms like Google and Facebook. Stephen’s contributions continue to make a significant impact on the AI community.
