Ministry of AI

Blog

11 Mar

Can AI Pass Engineering Classes? A Study Puts ChatGPT to the Test

  • By Stephen Smith
  • In Blog

Introduction

Imagine breezing through an entire semester of complex engineering coursework without lifting a finger—just copy-pasting questions into ChatGPT and getting perfect answers. Sounds like every lazy student’s dream, right? But how close is this to reality?

A new study from the University of Illinois Urbana-Champaign takes this idea seriously, testing whether ChatGPT can independently tackle an entire semester-long control systems course. Spoiler alert: it didn’t ace the class, but it came impressively close, earning a solid B (82.24%) against a class average of 84.99%.

This experiment sheds light on what AI can (and can’t) do in technical education, giving us insights into how engineering courses might need to evolve in the age of advanced AI tools. Let’s dive into the key findings!


Experiment: Can ChatGPT Survive an Engineering Course?

The researchers set out to answer a simple but bold question: If a student were to rely entirely on ChatGPT (GPT-4) to complete their coursework with minimal effort (literally just copy-pasting questions), how well would it perform?

The study focused on AE 353: Aerospace Control Systems, a junior-level engineering course involving:

  • Auto-graded homework (multiple-choice, coding tasks, numerical problems)
  • Midterm and final exams (math-heavy, hand-written solutions)
  • Programming projects (complex coding and technical reports)

Rather than fine-tuning prompts or providing detailed instructions, the researchers kept things realistic—just like a typical student might:

  1. Screenshot Uploads – Directly feeding images of questions into ChatGPT.
  2. Text-based Prompts – Converting questions into simple text without fancy formatting.
  3. Context-Enhanced Prompts – Adding minimal lecture notes before asking AI to solve problems.

This “lazy student” strategy was used throughout the semester, and the results were fascinating.
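The paper’s exact prompts aren’t reproduced in this post, but assuming the OpenAI chat-completions message format, the three strategies above might be assembled roughly like this (the helper names are illustrative, not from the study):

```python
import base64

def screenshot_prompt(image_path):
    """Strategy 1: feed a raw screenshot of the question as an image message."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Solve this homework question."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }]

def text_prompt(question):
    """Strategy 2: the question as plain text, no special formatting."""
    return [{"role": "user", "content": question}]

def context_prompt(question, lecture_notes):
    """Strategy 3: prepend minimal lecture notes before the question."""
    return [
        {"role": "system", "content": f"Relevant lecture notes:\n{lecture_notes}"},
        {"role": "user", "content": question},
    ]
```

Each returned list would be passed as the `messages` argument of a chat-completions call; the point is how little prompt engineering each strategy involves.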


How Did ChatGPT Perform?

1. Homework: Almost as Good as Humans

ChatGPT excelled at structured, auto-graded homework assignments, scoring 90.38%, just a hair lower than the class average of 91.44%.

  • It nailed multiple-choice questions (MCQs) with high accuracy.
  • It did reasonably well on coding tasks, though its solutions were often clunky and inefficient.
  • It struggled with more complex numerical problems, where small misinterpretations led to cascading errors.

Takeaway: AI thrives in structured, clear-cut problems but starts sliding when things get tricky.

2. Exams: Surprisingly Strong—But With a Catch

On written exams, ChatGPT scored an impressive 89.72%, even beating the class average of 84.81%. But here’s the catch:

  • Auto-graded final exam questions (which were just variations of homework problems) were aced with a 97.4% score.
  • Hand-written midterm solutions were shakier at 86.5%, showing difficulty with deeper mathematical reasoning.

Additionally, when fed solutions from past midterms to assist on the final, the AI showed almost no improvement—meaning it struggles to apply past knowledge effectively.

Takeaway: While AI can mimic solutions well, it lacks genuine understanding and adaptability.

3. Programming Projects: A Major Weakness

ChatGPT’s biggest weakness was in long-term, complex projects, where it scored 64.34%—far below the class average of 80.99%.

  • Its Python code worked, but often inefficiently.
  • It failed in error handling, system integration, and optimization, essential for real-world engineering.
  • Its technical reports leaned on overcomplicated jargon, sounding impressive while lacking depth.

Takeaway: AI-generated code “works” but often lacks finesse, optimization, and deeper engineering judgment.
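To see how component scores like these roll up into an overall grade, here’s a generic weighted-average sketch. The weights below are purely illustrative assumptions; the course’s actual weighting isn’t given in this post, so the result won’t exactly reproduce the reported 82.24%.

```python
# Component scores reported in the study (percent)
scores = {"homework": 90.38, "exams": 89.72, "projects": 64.34}

# Illustrative weights only -- the real course weighting is an assumption here
weights = {"homework": 0.30, "exams": 0.35, "projects": 0.35}

def course_grade(scores, weights):
    """Weighted average of component scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[k] * weights[k] for k in scores)

grade = course_grade(scores, weights)
print(f"Overall grade: {grade:.2f}%")
```

Whatever the true weights, the pattern is clear: the weak project score drags the overall grade well below the homework and exam marks.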


What This Means for the Future of Engineering Education

1. AI Can Pass, But It Can’t “Think”

ChatGPT can mimic engineering knowledge, but it doesn’t truly grasp concepts the way a skilled student would. It answers questions based on pattern recognition but fails at innovation, troubleshooting, and deep intuitive reasoning—all essential traits of a great engineer.

2. Structured Assessments May No Longer Be Enough

Auto-graded assignments and multiple-choice exams are too easy for AI. If universities rely too much on these, students might simply use AI to pass without properly learning. Instead, educators may need to:

  • Use more open-ended, real-world projects that require human creativity.
  • Shift exams from “right answers” to explaining reasoning and design choices.
  • Focus on AI collaboration rather than treating it like a cheating tool.

3. AI Could Free Up Humans for Higher-Level Thinking

Rather than banning AI, universities could integrate AI into coursework, allowing students to offload tedious calculations and focus on what really matters—critical thinking, decision-making, and innovation.

Imagine engineers of the future using AI not just to solve problems but to ask better questions. That’s where education could (and should) be headed.


Key Takeaways

  • ChatGPT can pass an undergraduate control systems course with a B-grade (82.24%), nearly matching the class average (84.99%).
  • It thrives in structured problems like auto-graded homework and multiple-choice exams.
  • AI struggles with open-ended tasks like coding projects and written explanations requiring deep reasoning.
  • Auto-graded exams don’t effectively measure understanding, as AI can ace them without actually learning.
  • Engineering education must evolve, focusing on applied problem-solving, critical reasoning, and AI-aware assessment methods.
  • Instead of banning AI, universities should explore how to integrate it effectively to enhance education.

Final Thoughts: Should Students Use ChatGPT for Courses?

If you’re an engineering student thinking “Can I just use ChatGPT to pass my classes?”, the answer is: Sort of, but it won’t make you a great engineer.

AI can help with tedious tasks, but real engineering requires creativity, intuition, and adaptability—things no AI currently possesses. If education adapts to focus on these uniquely human skills, students will graduate not just as engineers but as AI-powered problem solvers for the future.

So, instead of fearing AI, maybe we should ask: How can we use it to make learning better?

What do you think—should universities change their approach to assessments in the AI era? Let’s discuss in the comments! 🚀

If you are looking to improve your prompting skills and haven’t already, check out our free Advanced Prompt Engineering course.

This blog post is based on the research article “The Lazy Student’s Dream: ChatGPT Passing an Engineering Course on Its Own” by Gokul Puthumanaillam and Melkior Ornik.

Stephen Smith
Stephen is an AI fanatic, entrepreneur, and educator, with a diverse background spanning recruitment, financial services, data analysis, and holistic digital marketing. His fervent interest in artificial intelligence fuels his ability to transform complex data into actionable insights, positioning him at the forefront of AI-driven innovation. Stephen’s recent journey has been marked by a relentless pursuit of knowledge in the ever-evolving field of AI. This dedication allows him to stay ahead of industry trends and technological advancements, creating a unique blend of analytical acumen and innovative thinking which is embedded within all of his meticulously designed AI courses. He is the creator of The Prompt Index and a highly successful newsletter with a 10,000-strong subscriber base, including staff from major tech firms like Google and Facebook. Stephen’s contributions continue to make a significant impact on the AI community.
