People ask me a lot of questions about ET. I want to be helpful in my answers. A problem I struggle with is that questions about ET often come with a lot of assumptions, and the first thing I have to do is to make the assumptions visible and try to clear away the ones that aren’t helpful. Otherwise, my answers will sound crazy.
Questions of any kind rest on premises. That’s cool, and normally it’s not a big problem. It becomes a problem when questions are asked across a paradigmatic chasm. And there’s a big chasm between the premises of “traditional testing” and those of context-driven test methodology, and those of Rapid Software Testing, which is what I call my test methodology.
Starting in 1987, I tried to learn software testing. Starting in 1989, I began reinventing testing for myself, having become disillusioned with the empty calories of folklore that I found in books by folks like William Perry, and the misanthropic techniquism of Boris Beizer (Boris once told me that it didn’t bother him if people found his advice impractical, since he was merely concerned with documenting “best practices”, a phenomenon that he seemed to think had nothing to do with applicability or utility).
I “invented” testing (with the help of many colleagues) mainly by discovering that the problems of testing have already been solved in the fields of cognitive psychology, epistemology, and general systems thinking. The lessons of these much broader and older fields have been studiously ignored by the majority of authors in our field. This puts me in the odd position of having to defend exploratory thinking in technical work as if it were some kind of newfangled idea, rather than a prime driver of scientific progress since the advent of science itself.
Anyway, now my island of testing metaphysics is mostly complete. I can plan and do and defend my testing without any reference to ideas published in testing “textbooks” or any oral folklore tradition. Instead I reference ideas from logic, the study of cognition, and the philosophy of science. My system works, but it’s a big job to explain it to testing traditionalists, unless they read broadly. For instance, if I were to say that I test the way Richard Feynman used to test, some people get it right away.
Let me illustrate my difficulty: Julian Harty asks “Do you expect an Exploratory Tester to be well versed in [traditional testing] techniques? Do you check that they are competent in them, etc?”
I’ve had some discussions with Julian. He seems like a friendly fellow. My brother Jonathan, who’s had more discussions with him, says “Julian is one of us.” That’s a serious endorsement. So, I don’t want to alienate Julian. I hope I can turn him into an ally.
Still, his question poses a challenge.
Not “exploratory tester”, just “tester.”
First, there is no such thing as an “exploratory tester”, separate from a “traditional tester”, except as a rhetorical device. I sometimes call myself an exploratory tester in debates, by which I mean someone who studies exploratory testing and tries to do it well. But that doesn’t seem to be how Julian is using the term. The truth is all testers are exploratory testers, in that we all test in exploratory ways. Some of us know how to do it well; fewer of us can explain it or teach it.
Testers are testers. Some testers are especially good at simultaneous learning, test design, and test execution, an intellectual blend called exploratory testing.
Exploratory testing is not a technique, it’s an approach.
A technique is a gimmick. A technique is a little thing. There are a buh-zillion techniques. Exploratory thinking is not a technique, but an approach, just as scripted testing is an approach. Approaches modify techniques. Any technique of testing can be approached in an exploratory way or a scripted way, or some combination of the two.
Traditional testing techniques are often not really techniques of testing, they are symbols in a mythology of testing.
Consider the technique “boundary testing.” One would think that this involves analyzing boundaries, somehow, and testing that there are no boundary-related bugs in the product. But actually, the way testing is written about and taught, almost no guidance is given to testers about how to analyze anything, including boundaries. Boundary testing isn’t so much a technique as a label, and by repeating the label to each other, we think we are accomplishing something. Now, I do have an exploratory approach to boundary testing. I use various heuristics as part of the boundary testing process, but for the most part, boundary testing is ordinary testing. The technique is a tiny part of it compared to the generic skills of modeling, observing, and evaluating that underlie all skilled testing.
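To show just how small the mechanical part of the technique really is, here is a minimal sketch in Python. The function name and the example range of 1 to 100 are my own illustration, not taken from any particular product; the interesting work of deciding which boundaries matter, and why, is exactly the modeling skill the sketch cannot capture.

```python
def boundary_values(low, high):
    """Classic boundary-value picks for an inclusive integer range:
    just outside, on, and just inside each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Hypothetical field documented to accept integers from 1 to 100.
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```

Note that everything difficult is upstream of this code: discovering that the boundary exists, that it is 1 to 100 rather than 0 to 99, and that a value on the wrong side would actually matter to someone.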
I don’t teach boundary testing in my classes because it’s too trivial to worry about.
So, with that preamble, I can answer the question:
Julian, I assume by “traditional test techniques” you aren’t referring to the tradition of using one’s eyes and brain to test something, but rather to certain high-sounding labels like equivalence class partitioning (a fancy way of saying “run a different test instead of the same test over and over again”) or black-box testing (a fancy way of saying “test without knowing everything”) or cause-effect graphing (a way of saying “I repeat things I see in books even if the ideas are totally impractical”). I don’t teach those labels to novice testers, Julian, because they don’t help a new tester actually test anything, and I want novices to learn how to test.
But to be an educated tester who is effective at explaining test methodology, I think you need to know the buzzwords; you need to know the folklore. This is true whether you are a tester who embraces exploratory testing, or one who still pretends that you don’t do ET.
A tester (any tester, not just one who follows my rapid testing vision) needs to develop the cognitive skills to effectively question technology. Gain those and you automatically gain everything important about “traditional test techniques”, in my opinion.
To see such a skill in action, ask yourself this question: how many dimensions of a wine glass can you list? Then watch what your mind does next. To answer this question you need a skill that I have come to call “factoring”, which is a component of modeling skill. It is a skill, not a technique, though there may be many techniques we might apply in the course of exhibiting our skill.
Michael Bolton says
“…if I were to say that I test the way Richard Feynman used to test, some people get it right away.”
And if they don’t get it, you can point them here:
http://www.sellsbrothers.com/fun/msiview/#Feynman
—Michael B.
Boris Beizer says
My Goodness James. You seem to get a childish kick out of misquoting me out of context.
[James’ Reply: I don’t think I have misquoted you. Nor have I quoted you out of context. But if you would like to be specific, I will consider your case. In the conversations I like to talk about, you and I were the only people present. So, I suppose all we have is our own reputations to go on. People will decide for themselves which of us is the more reliable source.
In the conversation I cited, I specifically challenged you about the utility of some of your advice. You specifically scoffed about it. You told me “I do best practices. I don’t do risk.” I remember you said that because I was so amazed at how dismissive you were toward me and the community I was speaking for (Silicon Valley).]
But then I learned a long time ago that anything said within your hearing by someone you disagree with is fair game for use in an ad hominem attack. I notice you don’t provide verifiable references, just something like “Boris Beizer once said to me…”
[James’ Reply: Please review the definition of “ad hominem.” An ad hominem attack is not the same as an attack on your character. I have attacked you as a bully and as someone who shows contempt for the practical applications of your ideas. I stand by that. My source is personal communication with you. I have tried to faithfully report what happened. You have made your bed. Now go back to sleep.]
I don’t understand why after all this time you still feel that it is necessary to attack me. I suppose it has something to do with the persistent popularity of my writing. It must bug the hell out of you. Isn’t your 300,000+ Google hits compared to my mere 40,000+ enough for you? Are all those hits of your making?
[James’ Reply: Honestly, Boris. I don’t feel the need to attack you anymore. You are no longer holding back the craft. Some of the people who are holding it back nurture their negligence and arrogance on ideas you are famous for spreading. But I generally just argue the point on its face.
As a tyro I questioned the internal inconsistency of your positions, your lack of attention to reality (read “empirical evidence”), and your lack of respect for the disciplines of social science.
I am no longer a tyro. Don’t argue with me unless you come armed with a better education.]
Boris Beizer
Samuel says
I’m right now reading Boris Beizer’s book “Software Testing Techniques”. It is interesting to read the approaches taken by both Boris and James.
As a tester I think I need to be aware of the general approaches to testing.
PS: I read the following somewhere
Boris is famous for pushing code coverage as a big deal and dismissing the idea of exploratory and risk-based testing. He once told me, in 1993, that Microsoft would be out of business “within 5 years” because it was using the kind of testing practices I recommend. — JamesBach
I do not know if this is true :), but I will finish reading that book, I think.
Sam
Xiaomei Tai says
“Now, I do have an exploratory approach to boundary testing. I use various heuristics as part of the boundary testing process”
James, can you offer more detailed information about this? I’m really interested in the way you teach boundary testing. Thanks.
[James’ Reply: That’s part of my testing class. I haven’t yet written an article about it.]