Breaking the AI Mold: How Your Old Search Habits Could Be Limiting ChatGPT
The way we interact with search engines and virtual assistants has shaped how we expect technology to respond to us. We’re used to typing specific keywords into Google or barking brief, clipped commands to Alexa — so when something new like ChatGPT shows up, we instinctively treat it the same way. Here’s the problem: that habit might be holding us back.
A new study from researchers at the University of Oklahoma and Microsoft Research reveals that our prior experiences with search tools can box in our thinking — a mental trap called functional fixedness. The researchers explored how this bias affects the way people interact with large language models (LLMs) like ChatGPT, especially when more thoughtful exploration and interaction are needed.
Turns out, the way we talk to AI — and what we expect it to do for us — is deeply influenced by what we’ve done before. And sometimes, that leaves us using cutting-edge tools in old-fashioned ways.
This blog breaks down what this means, why it matters, and how you can start unlocking more of ChatGPT’s true potential.
What the Heck is Functional Fixedness?
Let’s start with a quick explainer. Functional fixedness is a concept from psychology that describes our tendency to see objects (or tools) as being good for only their most obvious purpose.
Think of the classic “candle problem”: you’re given a candle, a box of tacks, and some matches, and you’re told to mount the candle on the wall. Most people don’t think to use the box as a shelf — because they’re hung up on its usual role as a container. That’s functional fixedness.
Now apply the same idea to tools like ChatGPT. If you’ve always used Google to look up quick facts, or Alexa to set timers, you might approach ChatGPT with similarly narrow expectations — asking it to spit out short answers or perform simple tasks, even though it’s capable of much more.
Functional fixedness in this context keeps us from engaging in creative conversations, deep problem-solving, and multi-step reasoning — all things LLMs are great at.
The Big Study: How Users Get Stuck (and Sometimes Break Free)
To figure out how functional fixedness shows up in real-world use of ChatGPT, researchers ran a large experiment involving 450 participants, each asked to complete one of six decision-making tasks. These ranged from debating AI in hiring to ranking home insulation materials by cost, energy efficiency, and environmental impact.
Participants used ChatGPT to explore the task and generate an answer. Before and after their interactions, they were asked about their expectations, how satisfied they were, and what kind of prompts they used.
Here’s what the researchers discovered.
Your Past Use of Google or Alexa Shapes How You Talk to ChatGPT
Let’s break it down by prior experience.
1. Frequent Search Engine Users (e.g., Google)
- Tended to write longer prompts that resembled traditional search queries.
- Kept prompts more structured and less conversational.
- Made only minor tweaks to their queries between turns (e.g., adding a word or two).
- Showed higher “similarity scores” — meaning their prompts barely changed from one turn to the next (a rough sketch of what such a score looks like follows below).
In short, they treated ChatGPT like a better Google. Helpful? Maybe. Limiting? Definitely.
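Curious what a “similarity score” looks like in practice? The paper doesn’t spell out its exact metric, so the sketch below is just an illustration using Python’s built-in difflib, not the study’s method:

```python
from difflib import SequenceMatcher

# Two consecutive prompts from a hypothetical search-style user:
# only the tail of the query changes between turns.
turn_1 = "best home insulation materials cost energy efficiency"
turn_2 = "best home insulation materials cost environmental impact"

# ratio() returns a value between 0 and 1; near-identical prompts score high.
similarity = SequenceMatcher(None, turn_1, turn_2).ratio()
print(f"Turn-to-turn similarity: {similarity:.2f}")  # prints a high value (~0.8)
```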
2. Virtual Assistant Users (e.g., Alexa, Siri)
- Used blunt, command-style phrasing: “Give me…”, “Tell me…”, “List…” etc.
- Rarely used conversational cues or softening language (like “maybe” or “could you?”).
- Didn’t go deep into clarifying their prompts, suggesting a preference for one-shot answers.
They treated ChatGPT like a virtual butler, not a conversation partner.
3. Frequent ChatGPT Users
- Issued more dynamic prompts and explored alternative phrasing.
- Used fewer contextual references like “this” and “that,” assuming the model wouldn’t remember context.
- Employed more variety in their language — politeness, clarifying questions, and exploratory phrasing.
They were more adaptive, but still had patterns (like favoring shorter prompts and skipping contextual resets).
When Expectations Weren’t Met, Something Interesting Happened
Here’s a twist: when the model didn’t give users what they wanted or expected, many people changed tactics.
Instead of giving up, they often:
- Wrote longer prompts.
- Used more varied and specific language.
- Shifted how they approached the model (e.g., asking it to compare trade-offs or explore pros and cons).
These “friction points” — points where the interaction doesn’t go as planned — sometimes served as learning moments. Users adjusted their strategies and moved beyond their initial fixed mindset.
So while functional fixedness is powerful, it’s not unbreakable.
Tasks Matter Too: Not All Questions Are Treated Equally
The study used different types of tasks to see how users interacted with ChatGPT:
- Some tasks asked participants to choose between fixed options (e.g., best diet).
- Others were more open-ended (e.g., is a carbon-neutral lifestyle feasible?).
- Some required prioritization or ranking rather than selecting a single answer.
The more open or complex the task, the more adaptability was needed — and that’s where fixedness became a bigger liability. People who stuck too closely to rigid formats — like structured keyword queries — struggled to work effectively through open-ended tasks.
In contrast, users who were willing to treat ChatGPT more like a collaborator — asking follow-ups, rephrasing prompts, experimenting with approaches — fared better.
So, What Can You Learn From This?
Whether you’re a casual user or a pro using ChatGPT in your workflow, here’s how to beat functional fixedness:
🧠 Rethink how you see the tool
ChatGPT isn’t just a search engine or a calculator. It’s a flexible assistant that can brainstorm, summarize, explain, compare, reason, and more — depending on how you prompt it.
If you’re used to typing “Best vegan restaurant NYC,” try this instead:
“Can you recommend 3 vegan restaurants in NYC that are affordable and near public transport? I’m visiting for 3 days and want something casual.”
That conversational shift opens up new possibilities.
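If you prefer experimenting outside the chat window, the same shift applies when you call the model programmatically. Here’s a minimal sketch using the official openai Python SDK; the model name is a placeholder, so swap in whatever you have access to:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

# Old habit: a terse, search-style query.
keyword_prompt = "Best vegan restaurant NYC"

# Conversational alternative: goals, constraints, and context spelled out.
conversational_prompt = (
    "Can you recommend 3 vegan restaurants in NYC that are affordable and "
    "near public transport? I'm visiting for 3 days and want something casual."
)

for prompt in (keyword_prompt, conversational_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- Prompt: {prompt}\n{response.choices[0].message.content}\n")
```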
🔄 Embrace multi-turn conversations
Most people give up after one or two “meh” responses. Instead, treat ChatGPT like a partner in discovery.
Try:
- Clarifying: “Can you explain that in simpler terms?”
- Iterating: “That’s helpful. What are the downsides?”
- Reframing: “Now, answer that from the viewpoint of someone who’s skeptical.”
Each turn helps refine the output. Don’t stop too early.
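For API users, “multi-turn” simply means sending the growing conversation history with every request. Here’s a minimal sketch under the same assumptions as above (openai Python SDK, placeholder model name):

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

messages = [{"role": "user", "content": "Compare spray foam and fiberglass insulation."}]
follow_ups = [
    "Can you explain that in simpler terms?",                          # clarify
    "That's helpful. What are the downsides of each?",                 # iterate
    "Now answer that from the viewpoint of someone who's skeptical.",  # reframe
]

for turn in range(len(follow_ups) + 1):
    # Every call sends the full history, so the model sees all earlier turns.
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    print(f"Turn {turn + 1}:\n{answer}\n")

    # Keep the answer in the history, then queue the next follow-up.
    messages.append({"role": "assistant", "content": answer})
    if turn < len(follow_ups):
        messages.append({"role": "user", "content": follow_ups[turn]})
```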
❓ Ask more open-ended or hypothetical questions
Instead of “What is X?”, try:
- “What are some creative uses for X?”
- “How might someone with Y constraint approach this?”
- “If I wanted to do Z on a budget, what are my options?”
These kinds of prompts create space for richer responses and exploration.
📶 Be aware of your habits
If you realize you’re leaning on old patterns — like copy-pasting search-style queries — pause and ask: “Is there another way I could approach this?”
Don’t let your past with Google or Alexa dictate how you use ChatGPT.
Key Takeaways
- Functional fixedness is a mental shortcut that limits your use of new tools based on how you’re used to using older ones. In the world of AI, it shows up when people treat ChatGPT like Google or Alexa.
- People with heavy search engine backgrounds tend to write structured, keyword-rich prompts and show less flexibility in their interactions.
- Virtual assistant users adopt a command-heavy style that’s not well-suited to exploratory or conversational tasks.
- Frequent ChatGPT users are more adaptive, but even they show patterns that could limit deeper capabilities (like assuming the model won’t remember previous conversation turns).
- When the system doesn’t meet expectations, many users adapt — writing more detailed prompts or changing their approach — which suggests functional fixedness can be overcome.
- Broader, more creative tasks (like evaluating ethical trade-offs or brainstorming ideas) are where rigid prompting behavior holds users back the most.
- Curious, iterative interactions — rather than one-shot commands — unlock more of ChatGPT’s potential. Systems could help by nudging users with adaptive feedback or suggesting alternative prompt styles.
Whether you’re using ChatGPT to research a topic, plan a trip, or make tough decisions, remember: the model is only as flexible as you are.
So next time you’re tempted to type in a one-off, Google-style query, stop and explore — you might just be surprised at what ChatGPT can really do.
Want to get better at prompting? Challenge yourself: take one simple question and rewrite it three different ways — one conversational, one exploratory, one evaluative. Watch how the responses change. You’ll be training your AI intuition — and breaking the mold in the process.
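If you’d rather run that exercise as a script, a small loop does the job. This sketch reuses the assumptions from the earlier examples (openai SDK, placeholder model name) and invents its own example question:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

# One underlying question, phrased three ways (hypothetical examples).
variants = {
    "conversational": "I'm trying to cut my home energy bills. Where would you start?",
    "exploratory": "What are some less obvious ways people cut their home energy bills?",
    "evaluative": "Weigh the trade-offs of insulation, solar panels, and a heat pump "
                  "for cutting home energy bills.",
}

for style, prompt in variants.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"=== {style} ===\n{response.choices[0].message.content}\n")
```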
If you are looking to improve your prompting skills and haven’t already, check out our free Advanced Prompt Engineering course.
This blog post is based on the research article “Trapped by Expectations: Functional Fixedness in LLM-Enabled Chat Search” by Authors: Jiqun Liu, Jamshed Karimnazarov, Ryen W. White. You can find the original article here.