Why AI Does Not Promote Critical Thinking
I’ve been concerned about college students’ overreliance on AI systems such as ChatGPT. Leaning on these tools has the potential to crowd out creative thinking when students respond to questions and write essays and term papers. I was curious what ChatGPT itself would say about the matter, so I asked it: “How does overuse of AI stifle critical thinking?”
ChatGPT said:
Overuse of AI can stifle critical thinking in several important ways, particularly when people become overly reliant on it for decision-making, problem-solving, and learning. Here’s a breakdown of how that happens:
- Reduction in Effortful Thinking
When AI provides instant answers or solutions, users may bypass the process of analyzing, evaluating, and synthesizing information themselves. This can lead to:
- Less practice in logical reasoning
- Reduced ability to question assumptions
- A tendency to accept outputs without critique
- Erosion of Curiosity
AI tools often give polished, confident answers. If users aren’t encouraged to ask “why” or “how,” curiosity can diminish. Without curiosity, critical thinking—the ability to explore beyond surface-level answers—atrophies.
- Dependence on Authority
Over-trusting AI can mimic the effect of blind trust in traditional authority figures. People may assume AI is neutral and correct, ignoring the need to:
- Cross-check facts
- Understand context
- Consider alternative perspectives
- Echo Chamber Effect
AI systems often tailor responses based on user history or popular trends. This personalization can:
- Reinforce existing beliefs
- Reduce exposure to diverse viewpoints
- Undermine the habit of considering contradictory evidence
- Shortcut Culture
When students, professionals, or everyday users turn to AI to write essays, solve math problems, or even generate creative content without engaging in the process themselves, it fosters:
- Intellectual laziness
- Superficial learning
- Reduced problem-solving stamina
- Loss of Metacognitive Skills
Critical thinking involves thinking about your thinking. If AI mediates every cognitive task, users may stop asking:
- “Is this a good strategy?”
- “Am I biased in my reasoning?”
- “Did I miss something important?”
- Blunting Ethical and Moral Reasoning
AI doesn’t have values or moral intuition. Overreliance can dampen users’ development of:
- Ethical judgment
- Social reasoning
- Emotional intelligence
AI is a powerful tool, but when overused or used uncritically, it can lead to mental passivity, confirmation bias, and reduced intellectual resilience. To keep critical thinking sharp, humans must stay in the loop—questioning, reflecting, and occasionally wrestling with ambiguity. Success in an AI-driven workplace will depend directly on the knowledge workers gain and their ability to develop critical thinking skills while working with AI.
The fact is that reliance on ChatGPT can negatively affect a young person’s ability to think through conflict situations, weigh alternative courses of action, assess the consequences, and come to a conclusion. That process should take time: thoroughly evaluating the positives and negatives and selecting the course of action that maximizes the good while minimizing the bad.
Does AI Promote Young People Who Can’t Think?
A while back I read an opinion editorial in The Wall Street Journal by Allysia Finley, a member of the editorial board. She makes some important points in discussing the likelihood that AI/ChatGPT is a game changer for young students and workers. Here are a few:
- Excessive reliance on AI could create a generation of brainless young people unequipped for jobs of the future because they have never learned to think creatively or critically.
- While ChatGPT can regurgitate information and ideas, it struggles to come up with novel insights or analyze issues from multiple angles.
- Over-reliance on AI/ChatGPT leads students to type information rather than handwrite notes. Handwriting movements stimulate brain activity and can themselves serve as a learning activity.
A 2023 study revealed that heavy reliance on AI for writing tasks reduces the accuracy of the results by 25.1%. How does AI write? AI tools analyze huge chunks of existing texts to produce content. While this can make writing faster, it also means the content can end up looking and sounding the same. When writers lean too much on AI, they risk dulling their ability to think originally and expressively.
The Role of Universities
According to a thought piece in Higher Education Today in September 2025, while student use of AI is high, only 58 percent of respondents feel their universities are adapting quickly enough to provide institution-approved AI tools, a figure that shows minimal improvement from 2024 (57 percent). Furthermore, 55 percent of students now expect their institution to provide AI support, reflecting shifting priorities among students. This year, “confidence” overtook “speed” as the main reason students prefer institution-provided AI tools, underscoring the demand for reliable and ethical solutions.
The data also highlight heightened stress levels linked to AI use, with 66 percent of students reporting some level of anxiety about incorporating AI into their studies. Students voiced concerns about academic integrity, accidental plagiarism, and cognitive offloading. One student said, “AI tools usually need a well-detailed prompt. Most times AI gets outdated data. Most importantly, the more reliable AI tools require payment, which makes things unnecessarily hard.” This highlights an equity issue in AI use, as some students reported paying for a premium AI tool to get better results. Those experiencing constant academic stress were more likely to report regular AI use, suggesting a need for support systems that integrate human connection with technological assistance.
The research emphasizes actionable strategies for universities:
- Develop or purchase institution-backed AI tools with clear ethical guidelines.
- Provide transparent and consistent policies to help students understand how to use AI responsibly.
- Integrate AI support with existing academic services to preserve human interaction and peer engagement.
- Ensure equitable, affordable access to AI technologies to avoid exacerbating existing inequalities.
According to Higher Education Today, as students navigate an increasingly AI-driven academic environment, universities must step into a leadership role. Providing ethical, institution-approved AI tools isn’t just about keeping pace with technology; it’s about safeguarding learning, reducing stress, and fostering confidence in academic outcomes. The 2025 survey makes one thing clear: students are ready for universities to meet them where they are in their AI use, but they are asking for guidance and assurance in doing so.
Blog posted by Steven Mintz, Ph.D., professor emeritus from Cal Poly State University, on March 19, 2026. Find out about Steve’s activities on his website.