An experiment run by the MIT Media Lab shows that drafting a text with ChatGPT lowers the brain activity involved in memory formation and creativity. The payoff is immediate (a 60 percent boost in output for 32 percent less mental effort), but the saving comes at a price the researchers call “cognitive debt”, and its effects still linger four months after the trial.
The study, “Your Brain on ChatGPT,” tracked 54 volunteers split into three groups: one writing with a large-language-model assistant, one using Google Search, and one writing unaided. EEG caps with 32 electrodes revealed that chatbot users exhibited roughly half the usual alpha–beta connectivity and a marked drop in executive attention. Their drafts were produced more quickly, yet judges found them soulless and stylistically flat. Three months later, the same subjects still showed working-memory scores 15 percent lower than those of the other two groups. In short, when AI writes, the cortex idles.
The authors ground their findings in the theory of germane cognitive load, the mental work that converts information into durable knowledge. ChatGPT collapses that load, argue researchers at Case Western Reserve University, creating a liability akin to programmers’ technical debt: today’s gains will be repaid tomorrow in knowledge gaps. Productivity soars, but, as The New Yorker notes, thought converges toward a bland average, smothering originality. Already one American teenager in four admits to using the chatbot for homework, twice the share recorded in 2023; teachers, neuroscientists and psychologists are sounding the alarm.
In the workplace, AI assistants condemn their most avid users to easy tasks, and thus to stagnation. Nowhere is this clearer than in software development. Pat Casey, CTO of ServiceNow (the world’s tenth-largest software company by market cap), warns of a bottleneck: AI vacuums up junior assignments, leaving fewer chances to practise debugging and architecture, both essential steps toward seniority. Debugging is, ironically, where AI helpers still falter; Microsoft puts their success rate at between 8 percent and 37 percent, depending on the model.
A recent randomised controlled trial across Microsoft, Accenture and an unnamed Fortune 100 firm measured the impact of code-generation assistants on developer productivity. Output rose by 26 percent, though with a hefty 10-point standard deviation. Less-experienced coders adopted the tools faster and posted bigger gains than veterans.
Seniors use AI to accelerate tasks they already master; juniors lean on it to learn what to do. Results diverge. Novices are likelier to accept faulty or outdated solutions so long as the code compiles (the sketch below shows the pattern), erecting brittle systems they barely understand. They also struggle to debug AI-generated code once it breaks. Juniors, like the non-engineers dabbling in vibe coding for quick prototypes, hit a frustrating ceiling: they race through 70 percent of a project, only to find the remaining 30 percent an exercise in steeply diminishing returns.
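To make that failure mode concrete, here is a minimal, hypothetical Python sketch, not drawn from any of the studies cited, of the kind of suggestion a novice might accept because it runs and passes a quick check:

```python
# Hypothetical snippet (illustrative only): it compiles and the first
# test looks right, so a hurried developer ships it.

def add_tag(item, tags=[]):      # the default list is created ONCE, at definition time
    tags.append(item)
    return tags

print(add_tag("draft"))          # ['draft']            -- looks correct
print(add_tag("final"))          # ['draft', 'final']   -- state leaks between calls

# The fix requires reading the code, not just running it:
def add_tag_fixed(item, tags=None):
    if tags is None:             # create a fresh list on every call
        tags = []
    tags.append(item)
    return tags

print(add_tag_fixed("draft"))    # ['draft']
print(add_tag_fixed("final"))    # ['final']
```

Nothing here crashes; the defect only surfaces on the second call, which is precisely why accepting code on the sole evidence that it compiles builds the brittle systems described above.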
Like students who farm out their essays, developers who outsource the reasoning phase skip the cognitive rungs that lead to expertise and risk seeing their careers stall at entry level. In one test where computing students had to extend unfamiliar code, GitHub Copilot cut typing time by 35 percent, but it also reduced by a third the time spent reading the existing structure. Many later confessed unease at not grasping the overall architecture.