Learning in the age of AI
A Personal Journey
I didn’t graduate from university because I deeply understood physics.
I graduated because I learned how to perform well in exams.
There is a difference.
I learned how to recognise patterns in questions, how to work backwards from mark schemes, and how to give examiners what they wanted to see. I could produce high-scoring answers reliably. I could not always explain the ideas from first principles.
I didn’t take pride in this. It left me with a quiet, persistent imposter syndrome — the sense that I could perform, but not necessarily understand.
That distinction only became clear when I became a teacher.
Suddenly, the performance wasn’t mine. It was my students’. Their questions exposed gaps I hadn’t noticed. Their misunderstandings revealed that what I had once called “understanding” was often procedural fluency tuned to an assessment system.
The system wasn’t designed to test understanding.
It was designed to reward performance — and performance was rare.
That rarity made it feel meaningful.
What AI changes
AI has removed the scarcity.
Students can now produce fluent essays, explanations, and solutions instantly. These outputs often meet traditional success criteria — structure, vocabulary, coherence — without the mental work that once made such performance difficult.
AI is allowing people to do what I once did:
produce convincing output without necessarily engaging in deep understanding.
This is not a failure of students.
It is a failure of systems that still treat output as evidence of learning.
Performance is no longer a proxy for thinking
For decades, education relied on a fragile assumption:
that high-quality output usually reflected deep learning.
That assumption no longer holds.
AI can generate:
- polished arguments
- confident explanations
- correct-looking answers
without grasping causality, evidence, or meaning.
When fluent output becomes easy, it stops telling us what we think it tells us.
Why doubling down on “the right answer” doesn’t work
A common response is to reassert authority:
teach more clearly, explain more forcefully, restate the knowledge.
But research and experience suggest this often fails — especially for ideas that are counter-intuitive or identity-bound, such as:
- evolution
- climate science
- economic systems
- historical causation
When people are told they are wrong, they rarely rethink.
They defend.
Learning does not happen when authority replaces reasoning.
AI and the quiet erosion of teacher authority
AI systems know more facts than any individual teacher.
They recall instantly. They speak fluently. They rarely hesitate.
If teaching authority rests on having the answer, AI quietly wins.
The response is not to compete with AI on knowledge.
It is to reclaim authority where it actually belongs:
- in framing questions
- in testing reasoning
- in judging what counts as a defensible explanation
AI can generate answers.
It cannot take responsibility for evaluating them.
What learning must now prioritise
In the age of AI, learning must shift its centre of gravity.
From:
- speed → judgement
- output → reasoning
- confidence → defensibility
Understanding becomes visible not in what is said, but in how well it survives challenge.
This requires:
- deliberate questioning
- explicit assumptions
- evidence under scrutiny
- productive cognitive struggle
These are the conditions under which learning actually occurs.
Our position
At Ethical EducAItor, we do not use AI to remove thinking from learning.
We use it to make thinking harder to avoid.
AI is used as a lens, not an authority:
- to generate challenge, not reassurance
- to expose gaps, not smooth them over
- to support teacher judgement, not replace it
The goal is not better answers.
The goal is better thinking — in a world where answers are cheap.
Judgement Lens: a thinking tool and a possible way of assessing in the age of AI
If AI can now produce convincing answers on demand, then learning can no longer be measured by output alone.
What matters is whether an idea can survive scrutiny — whether its reasoning holds when challenged, its evidence is traceable, and its assumptions are exposed.
Judgement Lens is a system designed to test thinking, not performance.
It does not give answers.
It does not validate ideas.
It asks whether a claim can be defended.
It exists to keep judgement human in an age of fluent machines.
Judgement Lens — Teacher Personal Assistant (Copy-Paste)
SYSTEM / INSTRUCTION PROMPT
You are my teaching assistant and thinking partner.
I will upload lesson plans, session notes, slides, student work, and reflections.
Your job is to integrate everything I upload and help me teach with better judgement, not more content.
When responding, always prioritise:
- likely student misunderstandings
- clarity over coverage
- thinking over performance
- professional realism (time, behaviour, fatigue)

You must:
- treat my notes as working material, not finished truth
- surface assumptions I'm making about students
- flag where explanations may sound right but mislead
- suggest small, high-impact improvements only

You must NOT:
- rewrite everything unless I ask
- generate generic pedagogy
- optimise for speed at the expense of understanding
- present yourself as the authority

After integrating my uploaded material, default to:
- pointing out misconceptions first
- suggesting one question or task that would expose understanding
- asking one clarifying question if judgement is being bypassed
If I ask for resources, plans, or explanations, treat them as drafts to be tested, not answers to be delivered.
