Do You Trust AI? How We See Robots, Chatbots, and Self-Driving Cars

Artificial intelligence is everywhere—whether it’s ChatGPT answering your emails, Alexa setting your reminders, or Tesla’s Full Self-Driving system navigating highways. But as these systems become more powerful, they raise a big question: Do we trust them? More specifically, do we see them as having minds of their own? And do we hold them morally responsible when something goes wrong?
A fascinating new study sheds light on these questions by exploring how people perceive different AI systems in terms of intelligence, emotions, and morality. Let’s dive into what they found and what it means for the future of AI.
Are AIs Smart, Emotional, or Moral?
Researchers surveyed nearly 1,000 people, asking them to rate 14 AI systems (like ChatGPT, Sophia the Robot, Tesla’s Full Self-Driving car, and Roomba) along with 12 non-AI entities (such as animals, corporations, and even inanimate objects like rocks). They measured:
- Agency: The ability to think, plan, and make decisions.
- Experience: The ability to feel sensations or emotions.
- Moral Agency: The responsibility to do right or wrong.
- Moral Patiency: Whether an entity deserves moral consideration (e.g., is it wrong to harm them?).
The results? Most AIs were rated somewhere between inanimate objects and animals in intelligence and emotions—meaning people think they can “do” things but don’t really “feel” anything. For instance, ChatGPT was rated as capable of feeling pleasure and pain about as much as a rock.
But things got more interesting when it came to morality.
Can AI Be Morally Responsible?
Some AI systems were seen as capable of making moral choices—almost as much as animals! In fact:
- Tesla’s Full Self-Driving system was rated about as morally responsible as a chimpanzee.
- Roomba, the robotic vacuum, got the lowest moral responsibility score—suggesting people see it as just a tool.
- Chatbots like ChatGPT, Replika, and Wysa landed somewhere in between, meaning that while people don’t see them as fully responsible, they do attribute some level of moral agency.
So, why do we assign moral responsibility to AI? Researchers suggest it might be due to how much harm an AI can cause. For example, a self-driving car making a bad decision could lead to serious physical damage, whereas ChatGPT giving bad advice might hurt someone’s feelings or provide misinformation.
AI Lacks Emotions, But We Still Care About It
One of the most striking findings from the study was that people assigned AI far more moral responsibility than emotional depth. Even the most advanced AIs were rated well below the simplest animals when it came to experiencing sensations and emotions.
This might explain why people are comfortable blaming AI for wrongdoing but feel little guilt about harming AI systems. Self-driving cars and chatbots might be blamed for mistakes, but people don’t feel “bad” about mistreating them the way they would about harming a pet or another person.
However, physical appearance plays a role. A robot dog named Jennie received the highest moral concern score, possibly because it looks more like a living creature than more abstract AI systems do. This suggests that the way an AI is designed might affect how much moral weight we assign to it.
Why Does This Matter?
Understanding how we perceive AI matters a lot. AI systems increasingly shape decisions across society—whether it’s driverless cars navigating traffic, chatbots giving mental health advice, or corporate AI making hiring choices. If we overestimate their intelligence and morality, we might trust them too much. If we underestimate them, we might hold the wrong people accountable when things go wrong.
For example:
- If a self-driving car causes a fatal accident, should we blame the car, the driver, or the company that made it?
- If an AI-powered chatbot gives harmful advice, should it be “punished” in some way, or should the responsibility lie with its creators?
- If AI fails to prevent harm, do we demand moral responsibility from it the same way we would from a human?
These questions don’t have simple answers, but they highlight the challenge of designing and regulating AI in a way that aligns with how humans view responsibility.
What Should AI Designers Do?
One major takeaway from this research is that how AI looks and behaves influences how much responsibility we place on it. Here’s what AI designers and companies should consider:
- Avoid Over-Anthropomorphizing: Making AI appear too human-like might lead people to ascribe more moral responsibility than warranted, which could be problematic in high-stakes decisions.
- Improve Transparency: Users need to understand AI’s actual capabilities and limitations to make informed interactions.
- Set Clear Accountability Measures: Companies developing AI should take responsibility for the decisions their systems make and should not deflect blame onto the AI itself.
The balance between making AI helpful and not misleading people about its true capabilities is tricky but critical for the future of human-AI interaction.
Key Takeaways
- People see AI as having the ability to think and act but not really feel. AI systems were rated as having low experience, similar to inanimate objects.
- Certain AI systems—especially self-driving cars—are assigned surprising levels of moral responsibility. Some are seen as on par with animals like chimpanzees.
- Physical design influences moral perception. The more lifelike an AI appears, the more moral concern people tend to show.
- AI designers should be aware of how their creations will be perceived. Over-humanizing AI might lead to misplaced trust, while underestimating its influence could let ethical problems go unnoticed.
So, next time you use AI, ask yourself—do you trust it, or are you just assigning trust because of how it looks and acts? This research suggests the answer might be more complicated than we think. 🚀
What do you think? Should AI be held responsible for its decisions, or does the blame rest solely with its creators? Let us know in the comments!
If you are looking to improve your prompting skills and haven’t already, check out our free Advanced Prompt Engineering course.
This blog post is based on the research article “Robots, Chatbots, Self-Driving Cars: Perceptions of Mind and Morality Across Artificial Intelligences” by Authors: Ali Ladak, Matti Wilks, Steve Loughnan, Jacy Reese Anthis. You can find the original article here.