In 1982, when I was still in high school, I read an article in Time Magazine about teenagers who worked as programmers. The article inspired me to quit school and go to work as a programmer, too. I’m writing about that as part of my book about self-education without self-discipline.
Anyway, one of the kids mentioned in that article was Eugene Volokh, who eventually went to law school and is now a professor at UCLA. Looking at his website, I stumbled across an article where he applies ideas from software testing to teaching law.
Volokh’s ideas are especially familiar to me, because Cem Kaner has often told me how his own ideas about scenario testing owe much to his legal training, where reasoning through the implications of complex hypothetical cases is a fundamental part of the curriculum.
insectivorous says
I think this is not merely wrong, but classically wrong, in the sense that it exemplifies what’s wrong with the layman’s general view of software testing. It suffers from the typical “programmer-as-tester” flaw of spending nearly all your resources on demonstrating that the system works. (2+2 = 4, check, and so forth.) And from this arises the commonplace programmer’s perception that testing is boring.
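(To make that flaw concrete, here is a minimal sketch, using a hypothetical add() function as a stand-in for the system under test; neither article contains this code:)

```python
# Confirmation-style check: demonstrates one happy path and nothing else.

def add(a, b):
    # Hypothetical stand-in for the system under test.
    return a + b

def test_add_confirms_expected_sum():
    # 2 + 2 = 4, check. This shows the obvious case works,
    # but it probes no boundaries and hunts for no defects.
    assert add(2, 2) == 4

if __name__ == "__main__":
    test_add_confirms_expected_sum()
    print("Confirmation passed (which proves very little).")
```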
We proceed from different premises, towards a different result: we know there are defects, and we will find them. We do not spend our thought or our time demonstrating that the thing works — the programmer already did that, or asserted that he did when he pronounced it finished and ready for test. We spend our energy, and our resources, quite differently. (The programmer has, presumably, had one previous experience of sending us something that doesn’t work at all, and will never willingly repeat it.)
We want to break the thing we’re testing. We think, from the beginning, about how we might do that. We probe it like a predator waiting for signs of weakness, a vulture looking for flaws in behaviour that may indicate a defect. We react to what we find and change the test conditions to make it stumble, to exploit any vulnerabilities revealed.
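(Continuing the hypothetical add() sketch from above: a defect hunter pushes the same function toward its edges instead of re-checking the obvious. These particular cases are illustrative assumptions, not anyone’s prescribed suite:)

```python
# Defect-hunting sketch: probe boundaries and hostile inputs,
# expecting failures rather than confirmations.
import math

def add(a, b):
    # Hypothetical stand-in for the system under test.
    return a + b

hostile_cases = [
    (2**53, 1.0),                   # float precision: the 1.0 vanishes
    (float("inf"), float("-inf")),  # inf + -inf yields nan
    ("2", 2),                       # type confusion: str + int raises
]

for a, b in hostile_cases:
    try:
        result = add(a, b)
        if isinstance(result, float) and math.isnan(result):
            print(f"add({a!r}, {b!r}) -> nan (surprise!)")
        else:
            print(f"add({a!r}, {b!r}) -> {result!r}")
    except Exception as exc:
        print(f"add({a!r}, {b!r}) raised {type(exc).__name__}: {exc}")
```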
(Understanding this vital difference in attitude and approach is the key to understanding why testing cannot, ultimately, be successfully automated – at least not without first creating a fairly high-order artificial intelligence to run it. The simple-minded “demonstrate that it works” programmer’s approach to testing spoken of in this article, yeah, THAT can be automated. And it’s probably useful for regression testing, but not much else.)
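(As a sketch of that last point, assuming pytest conventions and the same hypothetical add() function: such confirmations automate well as a regression suite, re-run after every change to catch previously working behaviour silently breaking:)

```python
# Regression-style checks: automatable confirmations of known-good behaviour.
import pytest

def add(a, b):
    # Hypothetical stand-in for the system under test.
    return a + b

@pytest.mark.parametrize("a, b, expected", [
    (2, 2, 4),
    (-1, 1, 0),
    (0.1, 0.2, pytest.approx(0.3)),  # pin the known float behaviour
])
def test_add_regression(a, b, expected):
    assert add(a, b) == expected
```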
Does this article have some useful insight for software testing? I think mainly in this negative sense. Indeed, I suggest that the writer might benefit more from our approach; trying actively to find flaws and weaknesses, with the initial assurance that they are present, may be more productive for critical analysis.
[James’ Reply: Really? I liked the article. Can you point to a specific sentence that gives law students bad advice?]
Team12 says
If I allow myself to paraphrase Eugene’s method as “finding what is right” with the software and the comment by insectivorous as “finding what is wrong” with it, then I think both have a place in testing, albeit often different places. In my experience, the “finding what is wrong” happens later in the development process, as we try to uncover the bugs and errors that would otherwise rear their ugly heads in production or in the customer’s hands. Here is the beta, or worse yet the “release candidate”; now you have X time to find everything wrong with it.
I think that proving the system’s functionality, or “finding what is right” with it, tends to happen during unit, integration, and system testing by developers and developer-testers. I don’t see this as any less important, but it is typically different in focus and in schedule.
One thing I considered is that Eugene offers examples of tests that will prove the functionality of the program is valid and working properly. One thing I would add is a “proof of concept” element, where you also try to assess the validity of the program’s concept and design. To borrow the calculator example: “Yes, it adds and subtracts perfectly, but why is it shrieking like a banshee and fueled only by hand sanitizer?” For my part, I am still working towards the goal of being involved early enough in the design phase to help influence this process. It is similar to “proving” the product, just not the same as proving that it works or doesn’t.
I wouldn’t limit testing to these concepts alone, but I can easily digest a trio of facets like “Proving its Value, Proving it Works, and Proving it Doesn’t Work”.
My comment is primarily to say that you can and should use both Eugene’s premise and the attitude that insectivorous espoused in testing, in addition to (or after) making a higher-level value decision.
[James’ Reply: Well put. I see a lot of richness and care in Eugene’s article. I see a variety of techniques; subtlety of thought. It’s not just a confirmation mentality.]
Anders says
Not sure where Eugene emphasizes a “finding what’s right” approach. Under “What information can this testing provide?” he lists Error, Vagueness, and Surprise before Confirmation. That seems like the bug hunter’s approach to me. In discussing what should be in a test suite he specifically says “At least some of the cases should be challenging for the proposal.” He also makes several points about using diverse testing techniques.
Also notable are the points on how testing saves time in the long run, and how the techniques can help you avoid tunnel vision when using or evaluating the system, article, or software you test.