Writing Tests Is Thinking. Generating Tests Is Not.
I’ll be honest - the whole AI situation doesn’t really fill me with fear or anxiety about my future or my profession. What it does evoke is genuine, unhidden distaste. Why? There are several reasons, and one of them is ignorance. Up to a point it’s a human thing - after all, you can’t know everything (though some would probably disagree with me) - but its scale, its omnipresence, has become alarming. Add to that the self-confidence of people proclaiming revealed truths and, God forbid, the influence such individuals have on others - and we have a ready recipe for disaster. Let’s not fool ourselves that it will get better - hell, it will get worse (up to a certain point)… But!!! Somewhere deep inside me there still smolders, faintly - optimism. When we finally hit that bottom, at least we’ll have something to bounce back from.
This whole lengthy introduction requires elaboration and saying a bit more - about AI and cognitive science, research on the human brain, about intelligence, about memory - which I’ll probably write about, but another time.
Today’s topic, an echo of the digressions above, is this: should we write tests ourselves, or generate them with AI? Some small reflections came to me, because the latter has started seeping in here and there, pushing its way into our lives (as programmers) through every possible crack.
A long time ago I heard something that comes straight from TDD (I admit, I’m not a very intensive practitioner - but I appreciate the concept): writing tests forces design thinking, because tests act as feedback mechanisms that reveal coupling, clarify interfaces, and demand architectural decisions before implementation. And even if we don’t strictly follow TDD, this mechanism is always there.
The real mechanism is simple: when you write a test, you must define inputs and expected outputs. This forces you to think about the interface of the thing you’re testing. What does it take? What does it return? What are the edge cases?
That’s it. That’s the benefit.
It’s not magic. It’s just that writing a test requires us to use our own code - we become the first client of our own API. If it’s awkward to test, it’s awkward to use.
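A minimal sketch of that mechanism, in Swift, with a hypothetical function invented for illustration - the point is that writing the assertions first forces us to answer the interface questions (what does it take, what does it return, what about the edges?) before we write the body:

```swift
// Hypothetical example: deciding the test first pins down the interface.
// The test forces the questions: what does it take? what does it return?
// what happens at the edge (an empty input)?
func average(_ values: [Double]) -> Double? {
    guard !values.isEmpty else { return nil }  // edge case the test surfaced
    return values.reduce(0, +) / Double(values.count)
}

// Being the first client of our own code:
assert(average([2, 4, 6]) == 4)
assert(average([]) == nil)  // the empty-input question the test made us answer
```

Notice that the return type became `Double?` only because writing `average([])` forced a decision - that is design feedback, delivered before any implementation existed.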
But here’s the thing - this benefit comes from designing the test, not from typing the test code. The cognitive value is in deciding what to test and how, not in writing XCTAssertEqual.
Do we lose this when AI writes tests?
I keep thinking it all comes down to this: it depends on how we use it.
If you throw your code at AI and say “write tests for this” - yes, you skip the thinking part. The AI will test what exists, not what should exist. You get coverage numbers, not design feedback.
If you design the test yourself (decide what to test, what inputs, what outputs) and then let AI write the boilerplate - you keep the cognitive benefit. The thinking happened. AI just typed faster.
Do we lose the skill to write tests?
I’ve already written about this somewhere before, and it’s still relevant. The answer is, most likely yes.
Any skill you don’t practice atrophies. If AI writes all your tests, you’ll get worse at writing tests. Same as with navigation apps and sense of direction.
I ask myself the question - but does it even matter? Well!!!
Writing test code is probably a means to an end. The valuable skill is knowing what to test and designing testable systems. If AI handles the mechanical part while we keep the design part - we’re fine.
The risk is when people stop thinking about tests entirely and just generate them. Then we lose both the skill and the design benefit.
So maybe the pragmatic take:
Use AI to write test implementation after you’ve decided:
- What behavior to test
- What the inputs are
- What the expected outputs are
That’s where the architectural thinking happens. Let AI handle the syntax.
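A sketch of what that division of labor can look like, using Swift’s XCTest. All names here (`slugify`, the test cases) are hypothetical, invented for illustration: the comment table is the part we design ourselves; the assertion methods are the part a tool can type for us.

```swift
import XCTest

// The thinking part (ours): behavior, inputs, expected outputs.
//   behavior: slugify turns a title into a URL-safe slug
//   inputs -> outputs:
//     "Hello World"  -> "hello-world"
//     "  Spaces  "   -> "spaces"
//     ""             -> ""
func slugify(_ title: String) -> String {
    title
        .trimmingCharacters(in: .whitespaces)
        .lowercased()
        .split(separator: " ")  // drops empty subsequences, so runs of spaces collapse
        .joined(separator: "-")
}

// The typing part (AI’s): mechanically turning the table above into assertions.
final class SlugifyTests: XCTestCase {
    func testBasicTitle()  { XCTAssertEqual(slugify("Hello World"), "hello-world") }
    func testWhitespace()  { XCTAssertEqual(slugify("  Spaces  "), "spaces") }
    func testEmptyString() { XCTAssertEqual(slugify(""), "") }
}
```

The comment table is where the design happened; if generating the test bodies from it saves ten minutes, nothing cognitive was lost.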
Either way, I still see one main problem with AI. By taking shortcuts - because it’s more convenient, faster - we will make ourselves dumber. What surprises me in this context is business, in the broad sense of the word: given enough time, “offices” will be filled with engineers in name only. We will truly know very little; only the memory will remain of what was, of what we once knew how to do. So whoever is wise will, as Covey used to say, sharpen their saw continuously.