Critical Thinking and LLMs
Use AI to Think More, Not Less
Here’s a counterintuitive idea: the goal of using AI well isn’t to think less. It’s to think more.
That might sound strange. After all, the whole promise of AI tools is that they can handle cognitive work for us. They can research, summarize, draft, analyze, and suggest. Why wouldn’t we let them?
Because there’s a cost that’s easy to miss. When you let AI do your thinking, your thinking skills don’t stay frozen in place. They erode. And once they’ve eroded, you become dependent on the tool in ways that aren’t obvious until it’s too late.
The good news: this isn’t inevitable. You can use AI in ways that actually strengthen your critical thinking rather than replacing it. But it requires understanding what’s really happening when you interact with these tools, and making deliberate choices about where the thinking actually happens.
The Real Question Isn’t Whether to Use AI
Let’s set aside the debate about whether AI is good or bad for us. That framing isn’t useful. AI tools are here, they’re powerful, and most knowledge workers are already using them daily. The interesting question isn’t whether to use them, but how to use them without giving away something valuable in the process.
That something valuable is your capacity for judgment. Your ability to frame problems, weigh evidence, recognize what matters, and commit to conclusions you can stand behind. These skills don’t maintain themselves automatically. They develop through use and atrophy through neglect.
Here’s what the research is starting to show: frequent AI use correlates with weaker critical thinking skills, and the mechanism appears to be something called cognitive offloading. That’s the technical term for delegating mental tasks to external tools. We’ve always done this to some extent (think calculators, or writing things down so we don’t have to remember them). But AI is different. It doesn’t just offload minor tasks. It can offload the deep stuff: analysis, evaluation, synthesis, reasoning itself.
When you ask AI to summarize a document and you accept the summary without engaging with the source material, you’ve offloaded comprehension. When you ask AI to draft an email and you send it with light edits, you’ve offloaded the thinking that writing forces you to do. When you ask AI for a recommendation and you follow it without understanding the tradeoffs, you’ve offloaded judgment.
None of these moments feels dangerous on its own. But they accumulate.
Why “Human in the Loop” Isn’t Enough
You might think the solution is simple: just stay involved. Review the AI’s output. Approve things before they go out. Keep a human in the loop.
This is the standard answer, and it's not exactly wrong, but it isn't sufficient either. “Human in the loop” is an engineering concept. It describes a checkpoint in a process: the AI does something, and then a human reviews it before it takes effect. That's valuable for catching errors and maintaining accountability, but it doesn't actually protect your thinking.
You can be in the loop and still be a passive consumer of conclusions. You can review AI output, approve it, and move on without ever engaging the parts of your brain that evaluate, challenge, or synthesize. You were present, technically. But you weren’t reasoning.
The problem is that “human in the loop” asks the wrong question. It asks: were you there? It should ask: were you thinking?
A Better Frame: Human as Decision Owner
Here’s a more useful way to think about it. Instead of asking whether you’re in the loop, ask whether you’re the decision owner.
When I say decision owner, I don’t mean just the person who clicks approve at the end. I mean the person who frames the problem, directs the inquiry, evaluates the options, and stands behind the output. Decision ownership isn’t a single moment. It’s a posture you maintain throughout the entire process.
The test is simple: can you defend it? Not “did you check it” but “could you explain the reasoning, articulate the tradeoffs, and stand behind this conclusion as your own?” If you’d have to say “well, the AI suggested it,” you’re not the owner. You’re the recipient.
This distinction matters because it shifts the focus from process to cognition. Being in the loop is about workflow. Being the decision owner is about where the thinking actually happens.
Where Does the Thinking Happen?
This is the core question. When you work with AI, thinking still happens. Ideas get generated, options get weighed, conclusions get reached. The question is: who’s doing that work?
If the AI is doing it and you’re consuming the results, the thinking is happening in the model. Your role has been reduced to approval and transmission. You’re a conduit, not a reasoner.
If you’re using the AI to extend your reach while you stay in the reasoning seat, the thinking is happening in you. The AI might surface information you wouldn’t have found, challenge assumptions you didn’t know you had, or help you see patterns you might have missed. But you’re the one framing the problem. You’re the one deciding what questions to ask. You’re the one evaluating what comes back and synthesizing it into conclusions.
Same tool. Completely different cognitive relationship.
The Spectrum of AI Relationships
It helps to think about this as a spectrum. There are different ways to relate to AI tools, and they have different effects on your thinking.
At one end is the Oracle relationship. You ask, the AI answers, you accept. This is the default mode for most casual use. It’s also where critical thinking goes to die. The AI becomes the source of conclusions, and your role is to receive them.
Next is the Assistant relationship. You delegate tasks to the AI. It drafts, summarizes, researches, and you work with what it produces. This is better than Oracle mode because you’re still shaping the output, but the risk is that you become an editor of AI-generated work rather than a thinker who uses AI assistance. The distinction is subtle but significant.
Then there’s the Instrument relationship. Here, the AI is a tool you wield to extend your capabilities. You’re in the reasoning seat, using the AI to reach further than you could alone. You might use it to explore a problem space, stress-test your thinking, or surface information you’ll then evaluate yourself. The AI extends your reach without replacing your judgment.
Finally, there’s the Interlocutor relationship. This is the most cognitively demanding and the most valuable. The AI becomes a thinking partner you engage with dialectically. You’re not just extracting information or delegating tasks. You’re using the AI to challenge your assumptions, explore counterarguments, and refine your thinking through dialogue. This actually strengthens critical thinking because it forces you to articulate, defend, and improve your ideas.
Most people operate in Oracle or Assistant mode most of the time. Moving toward Instrument and Interlocutor mode is how you use AI to think more, not less.
What This Looks Like in Practice
Let’s make this concrete. Say you’re trying to understand a complex topic for a project you’re working on.
Oracle mode: You ask the AI to explain the topic. You read the explanation. You move on.
Assistant mode: You ask the AI to research the topic and summarize the key points. You review the summary and incorporate it into your work.
Instrument mode: You start by articulating what you already know and what you’re trying to figure out. You ask the AI to surface information relevant to your specific questions. You evaluate what comes back against your existing knowledge. You identify gaps and follow up with more targeted questions. You synthesize the information yourself into conclusions you can defend.
Interlocutor mode: You do everything in Instrument mode, but you also use the AI to challenge your emerging conclusions. You ask it to argue the other side. You ask what you might be missing. You use the dialogue to stress-test your thinking before you commit to it.
Notice that Interlocutor mode actually involves more cognitive work, not less. You’re thinking harder, not outsourcing your thinking. The AI makes that harder thinking more productive, but it doesn’t replace it.
The Stakes
Why does this matter? Because your capacity for judgment is an asset that either compounds or depletes over time.
Every time you engage in genuine reasoning (framing problems, weighing evidence, making tradeoffs, defending conclusions) you strengthen those capacities. They become more natural, more refined, more reliable. You develop better instincts. You make better decisions. You become someone whose judgment others can trust.
Every time you skip that reasoning and accept AI-generated conclusions, you miss a rep. And if you miss enough reps, your judgment atrophies. You become dependent on the tool not just for convenience but for capability. You lose the ability to function at your previous level without it.
This isn’t hypothetical. The research on cognitive offloading suggests it’s already happening, particularly among younger people who’ve grown up with these tools. They’re showing weaker critical thinking skills, and the mechanism appears to be exactly this: they’re outsourcing cognition and missing the development that comes from doing the work yourself.
The good news is that this is a choice. You can choose to stay in the reasoning seat. You can use AI to extend your capabilities without replacing your judgment. You can think more, not less.
It just requires being intentional about where the thinking happens.
A Simple Practice
Here’s a practical way to start. Before you prompt an AI, pause and ask yourself: what am I actually trying to figure out here? Write it down if it helps. Get clear on the problem you’re trying to solve or the decision you’re trying to make.
Then, after you get the AI’s response, don’t immediately accept or act on it. Ask yourself: can I defend this? Could I explain the reasoning to someone else? Do I understand the tradeoffs? Is this conclusion mine, or am I just passing along what the AI said?
If you can’t defend it, you’re not done. You need to either engage more deeply (ask follow-up questions, challenge the AI’s reasoning, do your own evaluation) or recognize that you’re operating in Oracle mode and decide whether that’s acceptable for this particular task.
Sometimes Oracle mode is fine. Not everything requires deep thinking. But you should make that choice consciously, not slide into it by default.
The Counterintuitive Payoff
Here’s what makes this approach counterintuitive: it seems like more work. And in the short term, it is. It’s faster to just ask the AI and accept the answer. It’s slower to engage, evaluate, challenge, and synthesize.
But in the long term, the math reverses. If you maintain and strengthen your judgment, you become more capable over time. You make better decisions. You catch errors the AI misses. You ask better questions. You develop insights the AI couldn’t generate because they require the kind of tacit knowledge and contextual understanding that only comes from doing the cognitive work yourself.
If you let your judgment erode, you become less capable over time. You become dependent on the tool. You lose the ability to evaluate whether the AI’s output is actually good. You miss things you would have caught before.
The choice isn’t between using AI and not using AI. It’s between using AI in a way that makes you stronger and using AI in a way that makes you weaker. The former requires more effort in the moment, but it pays off in capability that compounds over years.
Use AI to think more, not less. Stay the decision owner. And always ask yourself: can I defend it?
