Today I broke my fast with a testing exercise from a colleague. (Note: I'd better not tell you what it is, or even who gave it to me, because then it would be spoiled for you after reading this; whereas if you read this and later stumble into that challenge, not knowing it's the one I was talking about, it won't be spoiled.)
The exercise involved a short spec and an EXE. The challenge was how to test it.
The first thing I checked was whether it had a text interface that I could interact with programmatically. It did. So I wrote a program to flood it with “positive” and “negative” input, collecting the results in a log file. I programmatically checked the output, and it was correct.
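(To make that concrete, here is a rough sketch of the kind of driver I mean. The real program, inputs, and expected outputs are exactly the parts I'm keeping secret, so every name below, the EXE, the input generator, the expected() oracle, is an invented stand-in, not the actual exercise.)

```python
import subprocess
import random

# Hypothetical stand-ins: the real EXE, inputs, and oracle from the
# exercise are deliberately not disclosed, so these names are invented.
EXE = "challenge.exe"        # assumed name of the program under test
LOG = "check_results.log"

def expected(value: int) -> str:
    # Placeholder oracle: pretend the spec says the program replies
    # "POSITIVE" or "NEGATIVE" for each number it is fed.
    return "POSITIVE" if value > 0 else "NEGATIVE"

def run_check(trials: int = 1000) -> None:
    with open(LOG, "w") as log:
        for _ in range(trials):
            value = random.choice([1, -1]) * random.randint(1, 10**6)
            # Feed one input through the text interface, capture output.
            result = subprocess.run(
                [EXE],
                input=f"{value}\n",
                capture_output=True,
                text=True,
                timeout=10,
            )
            actual = result.stdout.strip()
            verdict = "PASS" if actual == expected(value) else "FAIL"
            log.write(f"{verdict}\tinput={value}\toutput={actual}\n")

if __name__ == "__main__":
    run_check()
```

The automation here is just the delivery mechanism; the testing lives in choosing the inputs, writing the oracle, and reading the log afterward.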
So far this is a perfectly ordinary Agile testing situation. It is consistent with any API testing or systematic domain testing of units you may have heard of. The program I wrote performs a check; the check was produced by my testing thought process, and its output is analyzed by a similar thought process. That human element qualifies this as testing and not merely naked checking. If I were to hand my automated check to someone else who did not think like a tester, it would not be testing anymore, although the checks would probably still have some value.
Here’s my public service announcement: Kids! Remember to look at what is happening.
The Power of Looking
One aspect of my strategy I haven't described yet is that I carefully watched the check as it was running. I do this not as a bored, offhand, or incidental matter. It's absolutely vital. I must observe all the output I can observe, rather than just the “pass/fail” status of my checks. I will comb through log files, watch the results in real time, try things through the GUI; whatever CAN be seen, I want to see it.
As I watched the output flow by in this particular example, I noticed that it was much slower than I expected. Moreover, the speed of the output was variable. It seemed to vary semi-randomly. Since there was nothing in the nature of the program (as I understood it) that would explain slowness or variable timing, this became an instant focus of investigation. Either there’s a bug here or something I need to learn. (Note: that is known as the Explainability Oracle Heuristic.)
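(That kind of anomaly only becomes visible if something, or someone, is watching when each line arrives. As a hedged illustration, not the harness I actually used, a driver can timestamp the gap between consecutive output lines, which turns “the output seems slower and more variable than expected” into a visible fact. The program name is again an assumed stand-in.)

```python
import subprocess
import time

# Hypothetical sketch: stream the program's output and record the gap
# between consecutive lines, so unexpected slowness or variable timing
# stands out instead of hiding behind a final pass/fail verdict.
def watch_timing(cmd: list[str]) -> None:
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, text=True, bufsize=1
    )
    last = time.monotonic()
    for line in proc.stdout:
        now = time.monotonic()
        gap = now - last
        last = now
        # Show the inter-line gap alongside the output itself.
        print(f"[+{gap:6.3f}s] {line.rstrip()}")
    proc.wait()

if __name__ == "__main__":
    watch_timing(["challenge.exe"])  # assumed program name
```

Even just watching those timestamps scroll by is often enough to trigger the “I can't explain this” reaction that the Explainability Oracle Heuristic depends on.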
It’s possible that I could have anticipated and explicitly checked for performance issues, of course, but my point is that the Power of Looking is a heuristic for discovering lots of things you did NOT anticipate. The models in your mind generate expectations, automatically, that you may not even be aware of until they are violated.
This is important for all testing, but it’s especially important for tool-happy Agile testers, bless their hearts, some of whom consider automation to be next to godliness… Come to think of it, if God has automated his tests for human qualities, that would explain a lot…
@palhed says
If I’m allowed to speculate, I suspect that the varying output you saw was affected by the nature of your input, as a double trick? Something along those lines. Anyway, it sounds like a fun test challenge to crack.
[James’ Reply: That would be a devious and educational trick. Indeed, the tools we use may well influence our systems in ways that obscure the truth.]
Claire says
I’ve definitely found that watching my automation execute helps me when I’m testing. Glad it’s a strategy that makes sense to you. 🙂
Monirul says
How practical is it to watch execution every time it runs? You know, many scripts run overnight, hour after hour.
[James’ Reply: Is this a trick question? It’s obviously not practical. Also, that’s not the heuristic. The heuristic is the “power of looking,” not “the requirement to look at everything at all times.” Omniscience is not available to us.]
But yes, I guess many people do this just after writing the scripts, when they run for the first time. Maybe the aspects are not the same.
[James’ Reply: I recommend that you periodically return and look again. My deeper point is that looking is not part of the automation. You can’t automate that.]
Andrei says
Looking is indispensable, as is judgment.
As complexity increases and omnipresence is no longer an option, I like to think in layers.
Testware has layers, just like ogres and onions.
Each one has a distinct purpose, and it grows over time as the experience from looking gets coded.
Automation is not a panacea. It is more like a fine blade: it must be used with care and responsibility. That is, unless you want to clip your own ear off.