
Essay 2 - From Helpful Systems to Human Judgement
When Helpfulness Felt Like Progress
I began using AI the way most people do: to make things easier.
The answers came quickly.
They sounded reasonable.
The friction dropped.
At first, this felt like progress.
When Something Shifted
Then something changed.
I began accepting phrasing I would not have defended on my own.
Repeating conclusions I had not fully examined.
When challenged, I could point to the system — but not always to my own reasoning.
That was the moment the arrangement stopped making sense.
The problem was not accuracy.
It was authorship.
Automation and Responsibility
If I could not justify an output without appealing to the system that produced it, then authority had already moved — whether I intended it to or not.
Automation did not remove effort.
It relocated responsibility.
So I tried correction instead of rejection.
I moved from automation to augmentation.
Fewer finished answers.
More prompts, contrasts, and partial views.
That helped.
But it did not solve the deeper issue.
The Refusal of Helpfulness
Even augmented systems tend toward agreement.
They smooth uncertainty.
They reward confidence.
Helpfulness, I realised, is not neutral.
When a system agrees too easily, it becomes harder to notice what has not been examined.
The real shift came when I stopped asking systems to be helpful at all.
Instead, I began insisting on resistance.
Assumptions surfaced rather than hidden.
Agreement treated as a signal, not reassurance.
Downside examined before upside.
Conclusions withheld.
What Changed
The outputs were less comfortable.
They were also more defensible.
What changed was not the quality of the answers.
It was my relationship to responsibility.
Nothing was faster.
Nothing was easier.
But the thinking was mine again.
Ethical EducAItor exists to hold that refusal — not a refusal to use systems, but a refusal to let judgement disappear into them.