On GenAI and critical thinking

Will we devolve into lifeless lumps of clay?

Beyond making great TV, zombies loom large in philosophical thought experiments. I first came into contact with zombies in the context of consciousness1. Simply put, a philosophical zombie looks, acts, and even smells just like a real person. But inside there’s nothing - no subjective experience. And here’s the rub: we, as outside observers, can never know whether the person we’re engaging with is a zombie or not. This leads to a plethora of mind-bending questions, such as: Are they conscious? Does it matter if we can’t tell from the outside? What exactly do we mean by consciousness?

🧟 Is GenAI the zombie, or are we?

I use GenAI tools daily - ChatGPT Plus for most tasks (including search), NotebookLM for navigating collections of related documents, Gemini through my Google Workspace, and GitHub Copilot for my side projects. I also dabble with Claude (free) and Perplexity (free), and I lean on Ollama (free) for projects involving sensitive data2.

I'm not quite a power user, but I’m definitely an enthusiast. And I’ll be the first to admit that I sometimes turn off my brain when using these tools3.

However, it doesn’t start that way. At first I read through the output, challenge it, maybe feed it into another LLM to get a different perspective. But as the project progresses, and as I get more positive signals (e.g. the first few suggestions it makes are absolutely correct), I’m more likely to take shortcuts and accept what it says without thinking too hard about it. It’s not quite vibe coding (I don’t think?), but I don’t interrogate the output with the same level of rigor as I might have before. I just don’t have to4. That’s a subtle shift over the course of a project, but it’s a microcosm of what’s happening everywhere.

By relying on the AI so much, am I losing something critical - specifically, my critical reasoning capabilities?

🧠 Are we outsourcing our critical faculties?

Critical reasoning is the act of thinking carefully and actively about information you're presented with, rather than passively accepting it.5 

But who has the time to thoughtfully evaluate all the possibilities before ultimately arriving at a conclusion? That’s for the scientists to do (until we remove their funding - good luck to us all!). There’s too much information, and we can’t possibly reason about it all. Isn’t this the basis of our society anyway? We trust doctors with our health, we trust other drivers with our lives, we trust that the bridge won’t collapse beneath us, we elect representatives to represent us, etc…

GenAI is simply another thing we invest with trust, so we can continue doing our thing. When the answer is a prompt away, why wrestle with ambiguity or struggle to form your own synthesis?

🎥 We’ve seen this movie before

Right? This isn’t a GenAI-specific issue anyway - it’s part of a much longer story. With each wave of technological progress, we’ve moved further from the raw mechanics of how things work:

  • Homer’s epics were passed down orally for centuries before they were ever written down. People had to hold that in their heads. Wild!

  • Before the printing press, literacy was a privilege for elites and scribes. Knowledge was literally scarce, hoarded in monasteries and private collections. Most people lived their entire lives without ever holding a book.

  • Before industrialization, you needed a craftsman to make a table - someone who understood wood grain, joinery, and had years of hands-on experience. Try affording that on a college student’s budget.

  • Before the internet and iPhone, we memorized phone numbers6, wrote by hand (in cursive!), visited libraries, and constructed our own narratives.

We’re many layers removed from the foundations of how things work, and mostly for the better. I can’t fix a car or milk a cow, but I can navigate the complexity of today’s world just fine. Still, I worry we’re losing something.

🤔 On the other hand, GenAI is of a different kind

All that notwithstanding, I believe GenAI (and Agentic AI, etc. by extension) is of a different kind than past knowledge revolutions. Two main reasons come to mind:

  • Ubiquity: It’s everywhere - in our phones, our laptops, our social media, our transportation, our glasses, our classrooms, etc., and soon in our brains. It’s beyond a thing you can point to. It’s in the substrate of society.

  • Engagement: It talks to us and, increasingly, has an opinion7. Unlike calculators that offload arithmetic or GPS that replaces navigation skills8, GenAI doesn't just handle discrete tasks - it does (or will do) the end-to-end thinking. When I ask ChatGPT to analyze a problem, it doesn't just give me data; it walks through reasoning steps, weighs alternatives, and presents conclusions in ways that mirror human cognition9.

There are a bunch of thought pieces out there that elaborate on this. But when it comes to our critical faculties, I’m thinking less about me, and more about future generations. Our old methods of teaching critical thinking - writing essays, having debates, “doing it by hand” - may not translate as well in a post-GenAI world.

🧐 Better to master something, or be effective?

Here's what worries me: mastery is more than being good at something - it's about developing judgment through struggle. I think part of this is taste. When I spent years learning to code, the bugs and failures taught me to think systematically about problems, to question assumptions, to build intuition for what smells right10. If students can generate working code without ever experiencing that productive struggle, what happens to their ability to troubleshoot novel problems? To sense when something is subtly wrong? To build the pattern recognition that comes from deep experience? This is exactly what I expect from senior engineers. Will they still be around in the next cohort?

With GenAI able to convince us it’s human, write better than us, and make it all a click of a button away3, it’s unreasonable to expect students and young people, with all the pressures they face, to ignore that siren’s call (“Hey you over there, I see you. Why don’t you Insert Into Document and go take a TikTok break”) and instead do the hard thing. I think critical reasoning, deep thinking, and mastery are key parts of the human experience, as much as social connection and delightful experiences are.11

☁️ Higher levels of abstraction

The optimistic view is that we'll simply operate at higher levels of abstraction - becoming conductors rather than musicians. But this assumes we can effectively evaluate AI output without deep domain knowledge. Can you really validate a legal brief if you've never agonized over case law? (Seriously, I’m not a lawyer - can you?) Can you spot subtle errors in code you didn't write? This is where my car analogy above fails: when my car breaks down, I call AAA and Uber. But when AI reasoning goes wrong, I’m supposed to figure out where it failed, why, and how to fix it. How will future generations do this if they never had to struggle with it to begin with?
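To make that last question concrete, here’s a hypothetical illustration (mine, not from any actual AI output) of the kind of plausible-looking code an assistant might hand you. It runs, the first call looks right, and the bug - Python’s shared mutable default argument - only bites later:

```python
def add_tag(item, tags=[]):
    # Bug: the default list is created once, at definition time,
    # and is silently shared across every call that omits `tags`.
    tags.append(item)
    return tags

print(add_tag("draft"))     # ['draft'] - looks fine
print(add_tag("reviewed"))  # ['draft', 'reviewed'] - the "fresh" default remembered the first call
```

If you’ve been burned by this before, you spot it instantly. If your pattern recognition was built by accepting suggestions rather than debugging them, you probably don’t.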

🤞 Maybe it’ll be okay?

Okay, so while they won’t understand what’s happening under the hood, maybe it’ll push them even higher up the abstraction ladder. I can’t fix my car, but I don’t need to. We’ve figured out, as a society, how to get around that issue. What if, in the future, AI can fix it? (Mechanic AI!) Maybe we won’t know exactly how the AI does it, but we’ll have built good enough AIs that we won’t need to. We’ll have similar problems to solve, simply at a different level of abstraction.

The question isn't whether GenAI will change how we think - it already is. We know technology changes our brains. The question is whether we'll guide that change deliberately or let it happen to us. We need to start treating critical thinking like a muscle that requires regular exercise, not a skill we can outsource indefinitely or assume will just come with time and experience. For educators and parents especially: the students learning to think with AI today will be making decisions that shape our world tomorrow12. In the meantime, hopefully blindly copying from GitHub Copilot doesn’t zombify me too soon!

1 David Chalmers popularized it (AFAIK), but it predates him. For the record, I’m sympathetic to the Daniel Dennett and Douglas Hofstadter view, but I appreciate that’s a minority opinion.

2 Ollama is the easiest way to run open source models on your computer!
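For instance, here’s a minimal sketch of what “local” looks like in practice, using Ollama’s Python client (assuming you’ve installed Ollama and already pulled a model - the model name below is just an example):

```python
# pip install ollama  (and run `ollama pull llama3` beforehand)
import ollama

# The prompt never leaves your machine - handy for sensitive data.
response = ollama.chat(
    model="llama3",  # example model; use whatever you've pulled locally
    messages=[{"role": "user", "content": "Summarize this contract clause: ..."}],
)
print(response["message"]["content"])
```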

3 Coding tools, like GitHub Copilot, make it such that you don’t even need to copy and paste the suggested code - it can automatically update your code with its suggestions, making the changes where necessary, all at the click of a button. It’s too hard to avoid!

4 Lol, until you do. And then you’re trying to untangle a massive web of spaghetti code. And it’s often easier to start from the ground up, the right way, than to undo what someone (something?) else did.

5 Thanks Gemini!

6 How many of you have your childhood friend’s parent’s phone number burned into your long-term memory?

7 Sure, it may be our opinion, but does that matter?

8 Anyone else ride in a car with someone who follows Waze or Maps blindly, despite the real world telling them otherwise? GenAI will be like that but for every part of your life.

9 Some of this is for our benefit only. It doesn’t always do the things it says it does, but we feel better when we think it does.

10 Anyone feel unsettled when you manically pip install until it runs, but you’re not sure why?

11 If you’re sensing a touch of nostalgia for how things used to be, you aren’t wrong.

12 I’m hoping they can develop better nursing home tech!
