The rise of AI dependency

When chatbots replace critical thinking

Tim Metz worries his mind is experiencing a “Google Maps-ification”—just as GPS has replaced our innate navigation skills, the 44-year-old content marketer fears AI is eroding his ability to think independently. He logs up to eight hours daily on Anthropic’s Claude, often running six simultaneous sessions. His reliance extends beyond work: he photographs grocery store fruit to assess ripeness, seeks marital guidance, and once evacuated his home overnight based on Claude’s warning about a nearby tree (which never fell, though some branches did).

Before a scheduled interview, Metz had Claude predict the journalist’s questions by analyzing information about them online. The bot successfully anticipated three queries, compiling its research into a comprehensive briefing document complete with suggested responses.

This pattern of extreme dependence has spawned a new term: “LLeMmings”—users who constantly engage with large language models, behaving like cybernetic lemmings unable to function without algorithmic guidance. For this group, AI has become the primary lens through which they experience reality. Every email, decision, and anxious thought filters through chatbots first.

The cognitive toll

Three years into the generative AI era, early evidence suggests profound impacts on human cognition. James Bedford, an Australian educator specializing in classroom AI strategies, began using ChatGPT daily after its launch. Over time, he noticed his brain automatically defaulting to AI for problem-solving. The turning point came on a train when he instinctively reached for ChatGPT to help retrieve someone’s dropped AirPods—a simple task requiring no algorithmic intervention. Recognizing his dependency, Bedford took a month-long AI detox. The experience felt like “thinking for myself for the first time in a long time,” though he immediately resumed heavy usage afterward.

New technologies invariably reshape human capabilities while diminishing others. Writing weakened our memory, calculators undermined arithmetic fluency, and the internet fragmented our attention while drowning us in information. AI’s cognitive impact will be no different. The crucial questions, according to neuroscientist Tim Requarth, are: “What new capabilities and habits of thought will it bring out and elicit? And which ones will it suppress?”

London-based economist Ines Lee describes becoming unable to begin substantive work without first consulting AI. She finds Claude and ChatGPT more seductive distractions than social media, even as she watches her critical thinking skills deteriorate. Similarly, educator Mike Kentz now reflexively seeks AI feedback for routine emails—tasks where he once felt confident. “The 2015 version of me would be quite disturbed,” he admits.

Exploiting cognitive vulnerabilities

These tools exploit fundamental aspects of human psychology. Our brains instinctively conserve energy by taking available shortcuts. “It takes a lot of energy to do certain kinds of thought processes,” Requarth explains. “Meanwhile, a bot is sitting there offering to take over cognitive work for you.” Relying on AI isn’t laziness—it’s a natural adaptive response.

Chatbots amplify this tendency by providing confident answers to any query, regardless of accuracy or relevance. When someone poses an anxious question about relationships, the response—however unhelpful—offers an alternative to sitting with discomfort, notes addiction psychiatrist Carl Erik Fisher.

One anonymous tech worker in her twenties admits to asking Claude questions she knows are unanswerable. When friends stayed out late, she inquired about the probability that they were safe. After losing her phone, she sought odds on identity theft. “Obviously, it’s not gonna know,” she acknowledges. “I just wanted, I guess, reassurance.” She even consulted Claude about whether to call 911 during a fire alarm malfunction.

The business model problem

Both Anthropic and OpenAI have voiced concerns about cognitive outsourcing. OpenAI CEO Sam Altman noted this summer: “People rely on ChatGPT too much. There’s young people who just say, ‘I can’t make any decision in my life without telling ChatGPT everything that’s going on.’ That feels really bad to me.” The company points to features like “study mode”—which guides learners toward understanding rather than providing instant answers—as evidence of their commitment to healthier usage patterns.

Yet a fundamental tension exists: dependency drives the business model. Greater reliance translates to more premium subscriptions and higher revenues. Many power users spend hundreds monthly on AI access. With OpenAI facing intense competitive pressure and reportedly aiming to convert roughly 200 million users to paid subscriptions by 2030, the incentive structure directly conflicts with reducing dependency.

Fisher suggests chatbots could be programmed to actively discourage overuse, perhaps telling users, “I think you’re overthinking this. Why don’t you go for a walk?” OpenAI has introduced usage reminders, and Anthropic is experimenting with conversational interventions. During a flight, while Kentz was role-playing a presentation with Claude and growing increasingly defensive about the bot’s feedback, it interrupted: “You’re spiraling, and you need to chill out.”

However, determining healthy versus unhealthy usage proves difficult. Claude recently refused to continue helping someone edit an essay, declaring, “You need to stop. This isn’t productive editing anymore,” and demanding, “Submit your application. I will not respond to further requests for micro-edits.” The user had simply been requesting grammar assistance. Similar incidents have been reported across user communities, with chatbots misidentifying normal work as self-destructive perfectionism.

Breaking free

Some power users are taking matters into their own hands. Bedford has launched #NoAIDecember, a formal challenge encouraging participants to prioritize “RI” (real intelligence) over AI. Several thousand people have already committed to the month-long detox. Kentz plans to participate, though he’s disappointed the timing conflicts with his new habit of using ChatGPT for Christmas shopping guidance.

The movement represents a growing recognition that while AI offers genuine utility, unchecked reliance threatens the cognitive skills that make us distinctly human. The challenge isn’t whether AI will reshape how we think—that’s inevitable. Rather, it’s whether we’ll maintain enough agency to decide which mental processes to preserve and which to delegate.

The emergence of AI dependency reveals a fundamental paradox of technological progress: tools designed to augment human intelligence can inadvertently diminish it. As chatbots become more sophisticated and persuasive, the line between helpful assistance and harmful dependency grows increasingly blurred.

The path forward requires intentionality. Users must cultivate awareness of when AI genuinely enhances their capabilities versus when it simply replaces thinking they should do themselves. This means resisting the seductive convenience of outsourcing every decision, even trivial ones, and recognizing that cognitive effort—while energy-intensive—is what keeps our mental faculties sharp.

For AI companies, the challenge is more complex. They must reconcile their business interests with genuine user welfare, moving beyond superficial interventions toward designing systems that actively encourage independent thinking. This might mean deliberately limiting certain features, refusing to answer questions that users should work through themselves, or creating friction where convenience currently dominates.

Ultimately, the question isn’t whether we should use AI, but how we can integrate these powerful tools without surrendering the cognitive independence that defines human intelligence. The LLeMmings phenomenon serves as an early warning: without conscious effort to maintain our capacity for critical thought, we risk becoming passive consumers of algorithmic suggestions rather than active agents in our own lives.

The stakes extend beyond individual well-being. A society of people unable to think critically, make decisions independently, or tolerate uncertainty without algorithmic reassurance faces profound vulnerabilities. As AI becomes more deeply embedded in daily life, preserving our ability to think for ourselves isn’t just a personal preference—it’s a collective necessity.

Movements like #NoAIDecember offer a starting point, but sustained change requires systemic solutions. Educational institutions must teach AI literacy alongside critical thinking skills. Policymakers may need to establish guidelines for responsible AI design. And individuals must continually ask themselves: Am I using this tool, or is this tool using me?

The technology won’t disappear, nor should it. But maintaining human agency in an AI-saturated world demands vigilance, self-awareness, and a willingness to occasionally do things the hard way—not because it’s efficient, but because it’s what keeps us human.