[Updated: I revamped and added some more examples to the list.]
I got this message from Oliver Vilson, today:
Oliver V.: hi James. Just had a chat with Helena_JM. She reminded me of something… don’t know if you’ve written a blog about it or pushed it into RST… One test lead from another company mentioned to me that he has problems with his testers. Some of them are saying that they don’t have to do test plans, since your teaching seems to align with that…
James Bach: Any more details?
Oliver V.: rough translation from the test team lead: “ET seems to have a reputation as an ‘excuse for shitty testing.’ People can’t explain what they did and why. If you ask them for a test plan or an explanation, all you get is ‘but Bach said…’”
I have, from time to time, heard rumors that some people cite my writings and teachings as an excuse to do bad testing. I think it would help to do a public service announcement…
Attention Testers and Managers of Testers
If a tester claims he is justified in doing bad work because of something I’ve published or said, please email me at james@satisfice.com, or Skype me, and I will help you stop that silliness.
I teach skilled software testing for people who intend to do an excellent job. That process is necessarily exploratory in nature. It will also necessarily have some scripted elements, partly due to the nature of thinking and partly due to the requirements of excellent intellectual work.
I do not teach evasiveness or obscurantism. I do not ever tell a tester that he can get away with refusing to explain his test process. Explaining testing is an important part of being a professional.
Why People Get Confused
I reinvented software testing for myself, from first principles. So, I teach from a very different set of premises. This is necessary, because common ideas about testing are so idiotic. But it does result in confusion when my ideas are taken out of context and “mixed in” to the idiocy. Consider: “I won’t create a detailed test plan document” is a perfectly ordinary and potentially reasonable thing to say in RST. It is a statement about what gets made explicit in a document, not a statement about a lack of planning. Yet Factory School methodology confuses documents with content. If you say that to one of them, it may be mistaken for a refusal to apply appropriate rigor to your work.
Here are some examples of how someone might misapply my teachings:
- Rapid Software Testing methodology (RST) is not the same thing as exploratory testing. ET is very simple. Anyone can do ET, just as anyone can look at a painting. But there’s a huge difference between a skilled appraisal of a painting by an expert and a bored glance by a schoolkid. RST is a methodology for doing testing (including scripted and exploratory testing) well. Therefore, anyone doing ET badly is not doing my methodology.
- In RST, a plan is not a document, it’s a set of ideas. Therefore, I say you don’t need to have a test plan template, or any sort of written test plan document in order to have a good test plan. I often document my test ideas, though, in different ways, when that helps. Therefore, the lack of a test plan (a guiding set of ideas) probably represents an immature and possibly inadequate test process, but the lack of a test plan document is not necessarily a problem.
- In RST, a test is not a document, it’s a performance. Therefore the lack of documented tests is not necessarily a problem, but poor testing (which can be determined by direct observation by a skilled tester or test manager, just as poor carpentry or poor doctoring can be detected) is a problem.
- In RST, we have no templates for reporting. But reporting is crucial. Reporting skills are crucial. Accountability is crucial. Credibility is crucial. We teach the art of telling a testing story. Therefore, anyone who declines to explain himself when asked about his testing is not practicing RST. I disavow such testers. (However, just because explaining oneself is an important part of testing doesn’t mean a manager can insist on arbitrarily voluminous documentation or arbitrary metrics. I suspect that, in some cases, managers who complain about testers refusing to document or explain themselves are really just obsessed with a specific method of documentation and refusing to accept other viable solutions to the same problem.)
- In RST we say that testing cannot be automated, and that tools can become an obsession. This leads some to think I am against tools. No, I am against bad work. Unfortunately, some tools, such as expensive HP/Mercury tools, are often used to wastefully automate weak fact checking at the expense of good testing. Yes, tools and the technical skills to create and apply them play an important role in great testing. It’s not automating testing when I use tools, because testing is whatever testers do, not what tools do. Therefore a tester who refuses to learn and use tools in general is not practicing RST.
- In RST we distinguish between checking and testing. This allows us to distinguish between a test process that is appropriately thoughtful and deep, and one (based solely on checking) that would be reckless and shallow. But when we criticize a checking-only test strategy, some people get confused and think we are criticizing the presence of checking rather than the lack of testing. Therefore, a tester who refuses to design or perform checks that are actually economical and helpful is not doing RST.
- In RST, we ban unscientific, abusive attempts at using metrics to control the test process. But when some people hear us attack, say, the counting of test cases, they assume that means we don’t believe in even the concept or principle of measurement. Instead, we support using inquiry-focused metrics (which inspire questions rather than dictating decisions), we promote active skepticism about numbers applied to social systems, and we promote the development of observation, reasoning, and social skills that limit the need for quantification. Therefore any tester who simply refuses to consider using metrics of any kind is not doing RST.
- Some people hear about the freedom of exploratory testing, and they confuse that with irresponsibility. But that’s silly. If you drive a car, you are free to run over pedestrians or smash into buildings– except you don’t, because you are responsible! Also, it’s against the law. Freedom is not the same thing as having a right. Therefore, anyone who accepts the freedom of exploratory testing and cannot or will not manage that testing appropriately is an incompetent or irresponsible tester.
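To make the checking/testing distinction above concrete, here is a minimal sketch in Python. It is not taken from RST materials; the product function, its name, and the values are hypothetical, invented only for illustration. It shows what a check looks like: one specific, machine-decidable verification that a tool can run unattended.

    # Hypothetical stand-in for some product behavior under test.
    def price_with_discount(subtotal: float, member: bool) -> float:
        return round(subtotal * (0.9 if member else 1.0), 2)

    # A "check" in the RST sense: it verifies one specific fact and can be
    # executed and evaluated by a machine, with no human judgment required.
    def check_member_discount():
        total = price_with_discount(subtotal=100.00, member=True)
        assert total == 90.00, f"expected 90.00, got {total}"

    if __name__ == "__main__":
        check_member_discount()
        print("check passed")

Testing is everything a tester does around such checks: deciding which facts matter, questioning the rounding rule, trying a subtotal of 0.01 or a negative price, and investigating anything surprising. The check is useful, but by itself it is shallow, which is exactly why a checking-only strategy is not a complete test strategy.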
Arslan Ali says
Hi James,
This is Arslan… I don’t know if you remember me, but I am the one you taught the “Dropped Calculator” exercise a few months ago via Skype :-)
I am facing something on the other side of the mirror. Instead of hearing excuses of “bad testing” = “the teachings of RST or James Bach,” what we are getting here are allegations from various fake testers and automation-trend followers that, because we follow and teach RST and CDT concepts, we are the ones challenging talent-distracting trends such as “automation” and “scripted” testing. (I call it killing the talent for the trends.)
Such mindsets are also muddling the question of “testing being a human skill” versus “testing being a technical skill.” Consequently they are harming fresh professionals and even students, and that is why we are facing a huge amount of resistance in preaching and training RST concepts and CDT here in Pakistan.
[James’ Reply: RST is not against technical skills, either. I will add that to the post.]
I shall be putting up your blog post on several forums.
Bests
Arslan
(TestersTestified – @arslan0644 and @testtified)
Nischal says
Thank you James for the timely post. I can’t wait to show this to my team (and managers).
Anne-Marie Charrett says
Hi James,
Something Cem Kaner brought to my attention a couple of years ago was testers mixing up the concept of exploratory testing and Session Based Test Management. I didn’t think much of it until I was at CAST this year and heard testers talking about charters as if that was exploratory testing. I think it’s important to distinguish between the two: one is an approach to testing, the other a method of tracking testing. We can perform Exploratory Testing without using SBTM. We can track Exploratory Testing in other ways besides SBTM.
Anne-Marie
Dean Mackenzie says
Hi James,
With Point 8… Isn’t anyone who isn’t willing to manage themselves (regardless of their testing approach) incompetent or irresponsible? Even the newest tester who is ordered to “run those regression scripts” has a responsibility to manage their workload, learn the system/domain, discover better ways of performing their duties, and improve the testing capabilities of the team. As far as I’m concerned, any tester who just turns up wanting to “do their job” falls into the incompetent/irresponsible category.
[James’ Reply: Good point. I changed the wording to “manage that testing.” That’s what I was talking about. ET means self-managed testing, via a learning feedback loop, as opposed to following instructions given by someone else.]
I guess that sounds a bit general (and is tangential to your argument), but the point I was trying to make was that regardless of how you test, you have a duty and responsibility to manage yourself (and not just if and when you’re using ET… though if you’re properly implementing RST, you’d probably be doing a fair job of managing yourself already).
[James’ Reply: Thanks, Dean. Good call.]
Dean
Vignesh says
Hi James,
Thanks for a good blog post.
I’m having trouble applying metrics to my work, as people say “imperfect metrics” are better than none. How do I create metrics that will be useful and not so imperfect?
Vignesh
[James’ Reply: Ask those people if that means they believe that ANY metric is better than none.
If they answer “no” then tell them “okay, then our discussion should not be about perfect vs. imperfect; it should be about whether a metric is good enough.”
If they answer “yes” then tell them that, according to you, that answer has a stupidity level of 100 out of 100. Maximum stupidity! Then ask them if they agree that that metric is better than nothing. If they say “yes” then say, “okay, you agree that your statement is stupid.” If they say “no” then say, “so, that means not ALL imperfect metrics are better than none. I suppose we have to discover which metrics are good enough.”
The problem with metrics is not that they are imperfect. The problem is that when they are applied in a social context they are usually toxic and unnecessary.
Arsenic is not an imperfect food. Arsenic is poison. There are poison metrics, too.
We might play with metrics. We might find useful metrics in the course of our work. But when someone seeks to add metrics to a situation because they think numbers are just better than not having numbers, then they suffer from an obsession and should seek treatment for that.]
Joseph Ours says
You cover the topic well; however, sometimes I think folks need things a little more in their face. Too often folks will say, well I do this, and I do that, while ignoring what they aren’t doing. Two key things I think everyone needs to take away above all else:
1) RST is a methodology, your methodology, which includes techniques and processes among other things. However, ET is a testing technique. Period. ET in absence of a broader strategy is likely to be ineffective and inefficient.
[James’ Reply: Sorry if this seems like nitpicking, but ET is not a technique. It’s an approach, because ET is applied to techniques. ET is no more a technique of testing than “alertness” is a technique of driving a car. Oversteering is a driving technique. Popping the clutch is a technique. But doing those things alertly is not in the same category of action.]
2) RST requires planning. It is an act/action that must be applied before testing techniques are exercised.
[James’ Reply: RST is not an act or action, per se. It’s a methodology– a system of ways of doing things. Test techniques are all part of RST, so RST does not precede them in time, but rather permeates them. Also, I wouldn’t say that RST requires planning, if planning means pre-meditation of behavior. Unpremeditated testing– testing as a spontaneous act– certainly fits within RST.]
“A goal without a plan is just a wish.” (Antoine de Saint-Exupéry)
There are a lot of misguided testers that need to stop wishing.
[James’ Reply: I don’t see what’s wrong with wishing, in that sense. For one thing, wishing together can have a salubrious social effect, bringing people together (I don’t like the song “Imagine” particularly, which is full of wishes, but it has been used to energize and harmonize crowds of people.) For another thing, I may not want to settle on a plan if that would prematurely commit me to an action before I had learned enough to choose my actions wisely. RST is at least as much about unplanning as it is about planning.]
Amit Mistry says
This post is very relevant. I agree with all your points. Exploratory testing is an approach to software testing that, in a few words, can be explained as simultaneous learning, test design, and test execution.
It has always been done by expert testers. I can picture testing circumstances where efficiency and repeatability are so important that we should script or automate them.
Allen J. Scott says
James, Allen from STeP / Per Scholas here. I’m at work now and came across this post. Being newly trained in RST and CDT, I constantly go back to my training materials and exercises to keep the fundamental concepts fresh. One of the driving themes in our class teachings is credibility. I believe this post provides sharp focus and understanding for the new tester. It prevents the new tester from disseminating misinformation and displaying a lack of understanding. I believe your 8 examples should be their own document, available to print for new testers and managers.
Kiran says
Hi James
The company where I work is open to exploratory testing and to different testing techniques for finding more bugs. Could you refer me to some of your blog posts where I can start applying each of the techniques (with examples) to my web application?
[James’ Reply: Exploratory testing is not a technique. You just need to learn how to test, then test. It might help to see my book: Lessons Learned in Software Testing.]