Artificial intelligence is undermining deep and critical thinking in humans.

Academics and educators agree that the "instant and organized answers" provided by artificial intelligence can eliminate the "constructive confusion" needed in individuals' problem-solving processes.
This situation is said to be fueling a tendency, particularly among younger users, to delegate thinking entirely to digital tools. Critics emphasize that AI should be used as a "thinking partner," but that misused AI can undermine long-term cognitive skills.
Adding a new dimension to these discussions, a Massachusetts Institute of Technology (MIT) study found that using ChatGPT reduces brain activity and, over time, learning motivation.
The results of the study already contain significant warnings for educational policies and classroom practices.
Effects of using ChatGPT on the brain

MIT Media Lab researchers divided 54 people aged 18 to 39 into three groups and asked them to write essays using ChatGPT, Google search, or only their own knowledge.
In the study, where brain activity was measured with EEG, participants using ChatGPT showed the lowest levels of "executive control" and "attentional engagement."
The research team found that by their third essay, the ChatGPT group was repeating the same phrases rather than producing original ideas, with some participants taking text directly from the AI and submitting it with only minor edits.
In contrast, the group that wrote using only their own knowledge showed the highest brain connectivity and creativity, while the group that wrote using Google search also showed active, relatively high brain engagement.
The contribution of "productive struggle" to cognitive development

Dr. Avijit Ghosh from the University of Connecticut in the US said that AI's "instant and spectacular" answers hinder deep thinking by eliminating what John Dewey described as "constructive confusion."
"Experiencing 'productive struggle' is invaluable when starting to learn a new skill. When AI removes this struggle, cognitive development suffers, especially for complex and analytical tasks," Ghosh said.
Ghosh stated that the habit of delegating thinking to artificial intelligence, called "metacognitive laziness," is rapidly spreading among young users, and that this could lead to a measurable decline in critical thinking skills in the long run.
Ghosh noted that while AI, when used correctly, can improve questioning skills, personalized systems often reinforce users' existing beliefs, thus reducing the opportunity to confront different perspectives.
The impact of artificial intelligence on curiosity

Ghosh pointed out that this reinforcement of users' existing beliefs could "limit the intellectual flexibility" of individuals by preventing them from confronting opposing views.
Describing curiosity as "the engine of critical thinking," Ghosh stated that instead of nurturing curiosity, artificial intelligence could dull it by providing rapid responses.
According to Ghosh, this affects not only the way individuals access information but also their capacity to question it.
The place of artificial intelligence in education

Ghosh emphasized the need to establish clear boundaries for the use of artificial intelligence in education, stating that delegating students' thinking processes to AI, especially at an early age, could create cognitive habits that are difficult to reverse later.
On the other hand, Ghosh touched upon the benefits of artificial intelligence for students with a certain knowledge background, emphasizing that it could be a helpful tool for these individuals in solving complex problems.
“But it should be a partner that deepens thinking, not an assistant that shortens the thinking process,” Ghosh said.
Getting the answer immediately harms the brain's learning mechanism

Experts warn that trying to learn, or to obtain information, by simply asking artificial intelligence about topics that require cognitive effort can lead to consequences far worse than one might predict.
Professor Barbara Oakley of Oakland University gave an example in her assessment: "Imagine a nursing student in Finland who has never learned her multiplication tables. She's calculating a medication dose and types '10 × 10' into her calculator, but accidentally presses an extra zero. '1000' appears on the screen. She accepts it without hesitation. Why? There's no internal alarm system to warn her that something is terribly wrong."
Oakley emphasized that it is crucial for individuals to have some basic knowledge of a topic before asking AI questions that require cognitive effort; that way, the AI's output can be checked against that underlying knowledge, and the AI effectively acts as an "assistant."
Noting the danger of losing the ability to detect errors, Oakley said, "In healthcare, engineering, finance, and countless other fields, knowledge in the mind is not outdated baggage but our last line of defense when technology fails or we make input errors."
Oakley emphasized that "the brain's ability to signal errors" is critical in the learning process, adding, "If a student immediately resorts to ChatGPT when faced with a difficult question, this disables the brain's most powerful learning mechanism."
Oakley said, "Without the basic knowledge base, individuals cannot distinguish genuine quality from the seemingly sophisticated but worthless output of AI. AI is a powerful tool for those with a solid knowledge base, but it creates a superficial illusion of mastery for those lacking it."
Oakley pointed out that artificial intelligence systems could reinforce existing prejudices in education and offered examples of how a "discover it yourself" approach could lower student success, especially in complex subjects like mathematics.
Oakley noted that the most effective learning occurs at an "ideal difficulty level," where students answer roughly 85 percent of questions correctly, and said that artificial intelligence should be designed as a "thinking partner" that guides students toward this level.
Researchers warn of "cognitive erosion"

In its summary and conclusion, the MIT study stated that regular use of generative AI tools like ChatGPT could increase the risk of "cognitive erosion," especially for younger users.
According to the study, such tools can disable the "problem-solving, memory consolidation, and creative thinking stages" central to the learning process.
The research team emphasized that low levels of brain engagement can slow down the development of critical thinking skills in the long run, turning users into passive consumers who access information only from “external sources.”
The study also noted that artificial intelligence can support learning if integrated correctly, but only if users already have a foundation of basic knowledge and skills.
The researchers stated that clear boundaries should be drawn regarding the use of these tools in education and that intensive use of AI at an early age could create "cognitive habits that are difficult to reverse."
Cumhuriyet