- Published on February 19, 2025
- In AI News
The study indicates that professionals who are more confident in GenAI tend to think less critically during their tasks.

A new study reveals that while generative AI (GenAI) tools can significantly reduce workload, they also risk diminishing critical thinking skills among knowledge workers.
The study was conducted jointly by researchers at the Microsoft Research lab in Cambridge and Hao-Ping (Hank) Lee, a PhD student at Carnegie Mellon University’s Human-Computer Interaction Institute.
The researchers surveyed 319 professionals and analysed 936 real-world examples to understand the impact of AI tools like ChatGPT and Copilot on cognitive processes in the workplace.
The survey targeted professionals who use these tools at work at least once a week. The researchers said, “When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification, from problem-solving to AI response integration, and from task execution to task stewardship.”
The findings, presented at the CHI Conference on Human Factors in Computing Systems, indicate that professionals who are more confident in GenAI tend to think less critically during their tasks.
This suggests a potential over-reliance on AI, hindering independent problem-solving.
“It’s a simple task, and I knew ChatGPT could do it without difficulty, so I just never thought about it, as critical thinking didn’t feel relevant,” noted one participant, highlighting this tendency to overestimate AI capabilities.
Conversely, participants who were highly confident in their own skills reported expending greater effort on tasks, particularly when evaluating and applying AI responses.
The research highlights a significant shift in how knowledge workers approach their responsibilities.
Instead of focusing primarily on hands-on task execution, they are increasingly transitioning to overseeing AI-generated results, including verifying outputs for accuracy.
This includes setting clear goals, refining prompts, and assessing AI-generated content to meet specific criteria.
One user noted that ChatGPT usually gives good answers to straightforward factual queries, demonstrating the tool’s capability.
However, GenAI’s limitations and biases also require careful consideration. One participant noted that AI tends to make up information to agree with whatever point the user is trying to make, which can make the editing process time-consuming. Another participant said the AI output was too emphatic, did not fit the scientific style, and needed to be rephrased.
Based on these findings, researchers emphasise the importance of designing GenAI tools to support critical thinking. The study suggests addressing factors such as awareness of limitations, motivation for careful evaluation, and skill development in areas where AI might fall short.
Sanjana Gupta
An information designer who loves to learn about and try new developments in the field of tech and AI. She likes to spend her spare time reading and exploring absurdism in literature.