- Michael Willson
- June 16, 2025
A few days ago, OpenAI suffered a worldwide outage. ChatGPT went down. The internet noticed immediately.
Users flooded X with jokes, panic and memes. But one trend stood out: several people pointed out how quiet LinkedIn had become. One user even claimed that activity there dropped by 84%. There’s no data to back that up, but something clearly changed. People weren’t just missing a tool. They were missing a thinking companion. The silence raised a bigger question: has it really become that hard to think for ourselves?
This isn’t a new fear. In 2011, a study published in the journal Science looked at how internet use affects memory. The researchers found that people who expected to find information online remembered it less well than those who didn’t. What stuck with users wasn’t the information itself, but where to find it.
It was called the Google effect, and it revealed something important. When people know a tool can do the work, they often stop trying to do it themselves.
Fast forward to today and AI is doing far more than search. It writes, plans, advises and even decides. But as we hand over more mental work, something else is happening, something we need to talk about.
Because if thinking feels like a lost skill, we should probably ask: how did we get here?
What Is Critical Thinking?
Let’s first understand what critical thinking is. Critical thinking isn’t just about sounding smart. It’s about asking better questions and making better choices.
At its core, critical thinking means being able to analyze, evaluate and synthesize information. It’s how people weigh options, spot flaws and make decisions that fit their values and goals. This isn’t just for school or work. It’s used every day. Comparing loan offers. Planning a busy day. Reading between the lines of a news article. These moments all require clear thinking.
People with strong critical thinking skills are better problem-solvers. They’re less likely to fall for misinformation. They know how to look deeper and not just react. Research shows that critical thinkers also do better in school and are more confident in their decisions. They handle new situations more calmly because they’re used to thinking things through.
But here’s the catch: critical thinking takes effort. It can feel slow or uncomfortable. That’s why shortcuts like AI feel tempting. It’s easier to accept an answer than to challenge it.
The Rise of AI in Everyday Life
It’s hard to ignore how much AI is part of our lives now. It’s in our phones, laptops and even in the apps we use to make to-do lists. From writing emails to helping us pick a restaurant, AI tools are everywhere.
People use AI for travel plans, job interviews, financial advice and more. ChatGPT alone has 123.5 million daily active users. That’s not a small number. According to a 2024 Pew Research poll, about half of Americans use AI several times a week. And many don’t even realize that most of the apps they use are powered by AI.
AI promises convenience. Companies say it helps us save time and do more creative work. They claim it takes care of boring tasks so we can focus on better things. This idea sounds good. In fact, it connects with something called cognitive load theory, which says working memory can only handle so much at once. Reducing mental load should therefore help us perform better.
But here’s the catch. Not everyone uses that saved time wisely. Dr. Michael Gerlich, a researcher studying AI and cognition, explains that instead of doing creative work, many just scroll through social media. He warns that this may lead to a lazy brain.
AI is supposed to help us. But if we’re not careful, it might slowly take over how we think.
Are We Losing Our Thinking Skills?
Think back to the last time you had to figure something out without asking a bot for help. Hard, right? You’re not alone. A lot of us are slowly handing over our thinking to AI. And we don’t always notice it.
This is called cognitive offloading: letting machines do the thinking for us. At first, it seems helpful. But over time, it weakens our ability to solve problems on our own.
There’s even a name for what happens next: digital amnesia. We stop remembering things. Not just facts, but also how to think through problems. Why remember something when a chatbot or search engine can recall it faster?
A study published in Societies in 2025 found a strong negative link between AI use and critical thinking. People who used AI tools more often scored lower on thinking skills. The effect was stronger than any other factor measured, including education.
Dr. Michael Gerlich explains why. “As individuals increasingly offload cognitive tasks to AI tools, their ability to critically evaluate information… diminishes.”
One participant admitted, “I rely so much on AI that I don’t think I’d know how to solve certain problems without it.”
It’s not about hating AI. It’s about noticing when we’re giving up our thinking without even realizing it.
What Happens When We Stop Thinking for Ourselves?
At first, it feels harmless. You ask ChatGPT to write something simple. Then it’s helping with emails. Soon it’s making decisions for you.
The truth is, thinking is tiring. Letting a machine do it feels easier. But every time we offload a task, we lose a little more sharpness.
It’s not just a feeling; it’s backed by research. A study from 2023 linked 27.7% of the loss in decision-making skills to AI use. That’s more than a quarter.
Ahmad (2023) warned that the more we rely on AI, the less we solve problems ourselves. And when we stop solving problems, our brains don’t get the exercise they need. Dr. Michael Gerlich calls this a “feedback loop.” AI saves time, but most people use that time on passive stuff. Like watching videos. Not creating. Not reflecting. Just consuming.
In another study by Microsoft, 40% of tasks done with AI involved no critical thinking at all. People accepted what AI gave them without questioning it. And that’s dangerous. Because AI doesn’t understand context the way we do. It can’t “read the room.” It doesn’t know your feelings, your goals, or your values.
As one researcher put it, “Used improperly, technologies… result in the deterioration of cognitive faculties that ought to be preserved.”
If we’re not careful, we won’t just stop thinking, we’ll forget how to even start.
Where’s the Human Touch?
Remember the last time someone gave you a handwritten note? Or picked out a gift just for you? Now compare that to a “Happy Birthday!” text that clearly came from an auto-suggestion. It says the right words, but it feels empty.
That’s the problem. AI can say things. But it doesn’t mean them.
Personal connection matters. A handwritten note, a real compliment, a shared laugh: these are human things, not digital ones. In a world run by algorithms, we risk losing warmth and care. One-click replies are fast. But they lack soul.
Think about LinkedIn. People click the suggested responses like “Congrats!” or “Well done!” It’s efficient, sure. But it’s also robotic. One writer shared her disappointment. She noticed how almost every reply to a celebration post was the same. No real thought. No effort. Just copy-paste emotions.
This goes beyond social media. AI can even pick out gifts or write love letters. But without the human behind it, what’s the point?
AI-generated content lacks empathy. It doesn’t understand the person you’re talking to. It doesn’t know what matters to them. Yes, AI saves time. But it often strips away meaning. Real care takes effort. It takes noticing small things. It takes thinking about others, not just typing fast.
So ask yourself, when was the last time you said something that only you could say?
Because that’s what makes it human.
Decision-Making in the Age of Automation
AI makes decisions easy. Too easy, maybe. But what happens when we stop making choices for ourselves?
You wake up. Check your phone. Ask an app what to eat, wear, or do. Sound familiar?
You ask for advice and it gives you answers. But here’s the concern: you stop checking if those answers fit your life. This is life with AI in charge. Helpful, maybe. But what are we giving up in return?
In a recent study by researchers at Microsoft and Carnegie Mellon, something troubling showed up. People who trusted AI most were less likely to question its answers. That means less thinking, less checking and fewer personal choices. The more you trust it, the less you think.
One of the researchers said it plainly: “You deprive the user of the routine opportunities to practice their judgment.”
That’s the danger. AI doesn’t take your brain. It just gives you fewer chances to use it.
Even everyday tools like Notion, which used to help you plan, now suggest your schedule for you. It’s smart. But it also means you stop planning for yourself. Another study found that people began to rely on AI for even emotional decisions, like mental health advice.
The result? A slow loss of independence. Quiet, easy, unnoticed, but real.
If you’re not making decisions, who is?
How AI Affects Students and Learning
Walk into a classroom today and you might hear students talking about “humanizing” ChatGPT essays. Not writing. Just tweaking.
This isn’t just one school or one country. It’s everywhere. A 2024 survey by the Digital Education Council found that 86% of students use AI in their studies. Of those, 24% use it daily and 54% use it at least weekly.
On average, students use more than two AI tools. The most popular is ChatGPT, used by 66% of respondents. Grammarly and Microsoft Copilot follow, each at 25%. The trend spans bachelor’s, master’s and PhD students across the world.
And with AI use comes a new habit: hiding it. Tools that “humanize” AI writing are becoming just as popular. These tools rewrite AI-generated text to evade detection. Some have been around for years, but now they’re everywhere; a simple search brings up millions of results. Students use them not just to cheat, but to blend in.
Bai (2023) warned of a serious consequence: cognitive laziness and reduced analytical thinking from overuse. Teachers feel it too. They can tell who wrote their own work and who just pasted and polished.
When AI replaces learning, it doesn’t just change how students work. It changes how they think.
Real-World Risks of Blind Trust in AI
AI sounds confident. It answers fast, rarely shows doubt and almost never says “I don’t know.” But confidence doesn’t equal accuracy. The same Microsoft and Carnegie Mellon research found that the more people trusted AI, the less they questioned its responses. That kind of blind trust becomes a serious problem when the answers are wrong or misleading.
Stats That Show the Risk of Trusting AI Blindly
Some of the newest AI models have hallucination rates as high as 79% on certain benchmarks. Even top-performing models like GPT-4o still hallucinate around 1.5% of the time, while smaller models produce false or fabricated answers 15–30% of the time. Legal questions? 6.4% of AI responses are wrong. Programming content? Wrong 5.2% of the time. And students feel it too. According to a 2024 survey, 35% say their AI tools give wrong information “very often” or “quite often,” while only 26% say it happens rarely.
The impact goes beyond individual users. In 2024, false AI output led to $67.4 billion in global losses. Media, finance and healthcare sectors were all affected. In just the first quarter of 2025, over 12,800 AI-generated articles were taken down for spreading false or misleading information. Nearly half of enterprise AI users admitted they had made at least one major decision based on hallucinated content.
Recent Apple research shows how shaky our trust in AI really is. They tested “reasoning” models like Claude Thinking, DeepSeek-R1, and o3-mini on brand-new logic puzzles — ones the models had never seen before.
The outcome? As the puzzles got harder, accuracy dropped to zero. Even with step-by-step instructions, the models failed. They used fewer words and gave up faster, showing they weren’t really reasoning at all.
Apple’s team found that what looks like thinking is often just pattern matching. Once the patterns break, so do the models.
To fix these errors, knowledge workers now spend an average of 4.3 hours a week checking AI output. Companies spend about $14,200 per employee each year just to catch and correct those mistakes.
The danger isn’t just that AI gets things wrong. It’s that we trust it too much to notice.
The Brain’s Response to AI Dependence
Our brains are built to think, adapt and solve problems. But when AI takes over those tasks, something changes. The more we let machines do the work, the less effort our minds put in. Over time, this has an effect on the brain itself.
The prefrontal cortex, the part of the brain that handles planning, self-monitoring and flexible thinking, appears to become less engaged when we rely on AI too often. If we always ask AI for answers, that area gets less exercise. We get used to routine responses. We stop checking our own thoughts.
A 2023 study titled ChatGPT: The Cognitive Effects on Learning and Memory found that too much AI use can lead to reduced analytical thinking and mental laziness. In one example, a user asked ChatGPT for career advice. The answers were accurate but lacked depth. They didn’t reflect the person’s goals or interests, just data pulled from trends.
This kind of shortcut thinking might feel helpful in the moment, but it weakens long-term decision-making. When AI gives us ready-made options, we stop exploring on our own. We stop asking questions. We follow rather than think.
Over time, the brain adjusts to this. It favors easy answers. It resists effort. And once that happens, rebuilding those mental skills takes real work.
Can AI Be Used Without Losing Ourselves?
AI isn’t the enemy. It’s a tool. Like any tool, it depends on how we use it. The real issue starts when people stop thinking and let AI do it all.
Dr. Michael Gerlich believes the problem lies in the wrong use of AI. His research shows that when people rely on AI just to replace their thinking, their critical skills weaken. But used the right way, AI can actually support deeper thinking.
For example, generative tools like ChatGPT can help with brainstorming. They offer new angles, words, or ideas users may not have considered. But that value only shows up when users shape the input, question the output and build on what they receive. It’s about guiding the tool, not being guided by it.
Gerlich explained that asking better prompts forces users to think more clearly. Trying to create a specific image or outcome requires careful thought. That effort builds cognitive strength.
The key is interaction, not dependence. Passive use turns AI into a crutch. Active use turns it into a partner.
Educational systems have a big role in this shift. Teaching students how to question, evaluate and revise AI responses helps protect their thinking.
For individuals looking to strengthen their understanding of AI and use it responsibly, structured learning helps. Certifications like those offered by the Blockchain Council provide a guided way to explore AI tools, their real-world uses, and ethical boundaries. These programs focus on practical knowledge, critical thinking and how to work with AI without becoming dependent on it.
AI doesn’t have to replace us. But it will, if we let it.
The Final Question: Who’s in Control?
AI is everywhere now. It’s writing, thinking, planning and even making decisions. But the big question remains: who’s actually in control?
Dr. Michael Gerlich believes the outcome depends on how people respond. In his words, “Ultimately, the choice rests with each individual: whether to take the convenient route of allowing AI to handle our critical thinking, or to preserve this essential cognitive process for ourselves.”
That choice has consequences. When people lean too hard on AI, they often stop thinking deeply. They accept answers without questioning. They lose the habit of solving problems on their own.
Over time, this can lead to something worse than laziness: dependence. And dependence shapes behavior. People stop making choices. They follow suggestions without thinking twice.
Yuval Noah Harari has warned that this could lead to a kind of mental stagnation. A world where people don’t think for themselves anymore. A world shaped by automation, not reflection.
But this path is not guaranteed. Experts say it’s still possible to build systems that support thinking, not replace it. Schools, workplaces and families can all encourage questioning, creativity and independent judgment.
AI is powerful. But it doesn’t get the final say.
That’s up to the person using it.