Are You Still the One Doing the Thinking?
AI doesn't make us stupid. It does something subtler.
You have probably used AI at least once this week. Maybe to draft an email, check a fact, or plan something. And odds are, at some point during that session, you stopped steering and started following. You just didn't notice when.
That's the bit that matters, according to Helen Edwards, a researcher at the Artificiality Institute who studies what AI actually does to people's minds. Not to their jobs or their industries - to the way they think. Her argument is simple and hard to shake: the real risk of AI isn't that it makes us stupid. It's that we drift out of authorship of our own thinking without realising it happened.
We have always offloaded our thinking
This isn't new behaviour. Humans have been outsourcing cognitive work for as long as we've had tools. Calculators did it for arithmetic. Sat-navs did it for directions. Our brains run on roughly 20 watts of power - an extraordinary efficiency built over millions of years of evolution. If there's a shortcut, we take it. That's not laziness. It's biology.

AI is simply the most powerful version of that pattern. It runs on gigawatts and vast data centres. It can summarise, generate, suggest, and plan in seconds. The deal it offers - let me handle the hard part - is one our brains are wired to accept. And most of the time, the output is fine. Sometimes it's good.
But here's where it gets interesting. A 2025 study of 666 people published in the journal Societies found a strong negative relationship between AI tool use and critical thinking. The more people used AI, the less they engaged their own reasoning. Younger users, aged 17 to 25, were the most affected.
The drift nobody notices
Edwards describes a software developer in her research who stopped paying attention while coding with an AI assistant. The code was fine. That's not the point. The point is he couldn't tell you the moment he stopped making decisions. He drifted, and he didn't notice he drifted.

Compare that with a clinical coder at a hospital who uses AI just as heavily. When it suggests a wrong answer, she corrects it and explains her reasoning back to it. She described AI as something that "helps you converse with your own thoughts." She's aware of what she's doing the whole time.
Same tool. Same depth of use. One person drifted. The other stayed the author.
Researchers at MIT's Media Lab ran a four-month study watching what happens to people's brains when they write with ChatGPT. The results were stark. People using the AI showed reduced neural connectivity in the brain regions tied to creativity, memory, and meaning-making. Nobody in the AI group could correctly recall a quote from their own essay just minutes after writing it. They'd produced text. They just hadn't thought it.
Why this matters at home and at school
If you're a parent, this lands differently. Children are growing up with AI tools in their pockets. They'll use them for homework, for revision, for figuring things out. That's not going away, and it shouldn't.

But a JISC survey of 462 UK college and university staff found widespread concern that students are leaning too heavily on AI and skipping essential thinking. One respondent working in trades put it plainly: learners cannot rely on AI on a building site.
The worry isn't about cheating. It's about something quieter. If a child gets used to accepting the first suggestion an AI gives them - for an essay structure, a maths approach, a way to phrase something - they might never build the thinking muscles that come from struggling with a problem themselves. They get the output without doing the work that makes the output meaningful.
Only one in five UK secondary school teachers currently teach pupils how AI actually works. Fewer than one in four support pupils to use it in their learning. So most children are figuring this out on their own, with no one helping them understand when to lean on the tool and when to push back against it.
A question worth asking
Edwards suggests a single check. After your next session working with AI, ask yourself one question: at what point did I stop choosing and start following?

If you can find that line, you're fine. You were aware. You made a decision to hand something over and you can make a decision to take it back.
If you can't find the line - if the whole session blurs into one smooth flow of prompts and suggestions - that's worth sitting with. Not because something went wrong. But because the moment you stop noticing is the moment you stop being the one doing the thinking.
We don't need to use AI less. We probably need to use it more deliberately. The clinical coder had it right. AI works best when you stay in the conversation with your own thoughts - when you treat it as something to think with, not something to think for you.
That distinction is easy to describe and surprisingly hard to maintain. Which is exactly why it's worth paying attention to.
Sources & Further Reading