Stuart Reid is planning to do a talk on how we should use “evidence” in our debates about what works and doesn’t work in testing.
A funny thing about that is Stuart once spent 30 minutes trying to convince me that the number “35,000” was evidence of how great the ISEB certification is, as in “35,000 happy customers can’t be wrong.” Such a concept of “evidence” wouldn’t pass muster in a freshman course in logic and rhetoric. How does he know that the 35,000 people are happy? How does he know that they are qualified to judge the quality of the certification? How does he explain the easily checked fact that you can pick out any three ISEB or ISTQB certified testers, ask them if they think the certification has made them better testers or indicates that they are better testers, and at least two of them will do the equivalent of rolling their eyes and smirking? (Don’t believe me? I understand. So TRY IT, as I do on a regular basis in my classes.)
You might think Stuart is attempting a bold and classic rhetorical move: trying to control the terms of the debate. The problem he has is that he will lose the debate even faster if he actually engages on the question of evidence. This is because there is plenty of evidence from other fields and the history of thought itself to justify the positions of the Context-Driven School of testing. We are winning the debates because we are better informed and better educated than the Factory Schoolers, represented here by Reid. For instance, Rikard Edgren (who says he’s not in the Context-Driven School, but looks like a duck to me) wrote about applying Grounded Theory to testing. I wonder if Stuart Reid has ever heard of Grounded Theory. He probably has, because I probably mentioned it at least once in the hours of debate that Stuart and I have had. He didn’t respond or react. My impression was that he wasn’t listening.
There’s something far more important than evidence that we need in our industry: engagement. People need to listen to and respond to the arguments and evidence that are already out there.
Here’s one sort of evidence I put in front of Stuart, in a debate. I claimed that my school of testing represents a different paradigm of thinking about testing than his does. After I gave him examples of specific words that we define differently and concepts that we arrange differently, it became clear that the deeper problem was that he thought I was pretending to believe things that I don’t believe, just to be difficult. He actually said that to me!
This is the last resort of the determined ideologue: poke your own eyes out so that you don’t risk seeing contrary evidence. Stuart’s case rests on pretending that no one else is making a case! His demand for evidence is meant to give the impression that the evidence is not already sitting in front of him, being ignored.
Cem Kaner, Michael Bolton, and I have been marshaling evidence, pointing out the lack of evidence against our ideas, and demonstrating our methods for many years. Next week it will be exactly 23 years since I first became a full-time software tester, and nearly 17 years since the first time I stood up at a conference and pointed out the absurdity of “traditional” testing methods.
BTW, here are some of the kinds of evidence I offer when challenged about my work:
- The Sciences of the Artificial, by Herbert Simon (this establishes, based on a body of research for which he won the Nobel Prize in 1978, the heuristic nature of engineering)
- Collaborative Discovery in a Scientific Domain, Takeshi Okada, Herbert Simon, 1997 (this is an experiment that observed the behaviors of scientists attempting to create and perform experiments together in an exploratory way)
- The Processes of Scientific Discovery: The Strategy of Experimentation, Deepak Kulkarni, Herbert Simon, 1988 (this study analyzes the basic exploratory processes of science)
The first item here is a book, the next two are papers published in the journal Cognitive Science. See, if Stuart wants evidence, he has to look beyond the desert that is Computer Science. He needs to get serious about his scholarship. That will require him to find, in his heart, a passion to learn about testing.
Jon Bach says
After looking at Reid’s proposed talk above (see link), it baffles me how he can answer a question like “Is a certified tester more effective than those without industry certification?” without framing the context of his conclusions.
If he doesn’t reveal that, someone in the audience (a tester, no doubt) would point out another context in 3 seconds where an alternate conclusion could be reached.
The list of questions he proposes to discuss strikes me this way: the amount of preparation, experimentation, and testing that would have to go into answering them would require so many asterisks and caveats that it would reveal the CONTEXT he would have had to consider in framing his experiments. If there are testers in the audience, whatever “evidence” he plans to show will be questioned.
It would be interesting to see how he faces those questions (*other* than shutting his eyes, putting his hands to his ears, and humming a loud song to himself).
Michel Kraaij says
Well, James, at least we know ISEB is good at doing something: making money out of ignorance. His “35.000” could prove that much.
Last week I attended a presentation by Stuart Reid called “Improving Testing – with or without standards”. Halfway through the presentation I really thought I had travelled back in time. His slides were full of graphs from the stone age, while he talked about how badly the majority of projects were run and how badly the majority of testers were doing their jobs. (Well, in my opinion at least the last part is mostly true. The majority is copying “stuff” from books without thinking about what they are doing.) But the goal of this presentation was the importance of the “new” standard ISO 29119 (which is taking an astonishing 5 years to develop) and the importance of being certified. One of the testers asked why his graphs were outdated. His answer: “I couldn’t find more recent info on short notice”. Funny; it seems to me he can’t get his current facts straight and make a compelling story. That was the moment I totally lost interest in his presentation…
This presentation goes into my top 5 worst-spent-hours, next to watching paint dry.
Simon Morley says
James, thanks for the post.
I have one of the ISTQB certificates – and maybe I need to jot down more of my thoughts and experiences. As an ‘experienced’ tester I didn’t agree with parts of the teaching and terminology – but it was a multiple choice exam, i.e. only one message and one right answer – so I suspended disagreement until after the exam…
I think I need to write about those experiences… However, in the meantime if someone asks about whether I’m certified or not I’ll just show them my testing balls.
Jared says
I remember also quizzing Stuart at a conference when he made claims about optimum method/function/procedure lengths for code quality. When I quizzed him about whether the evidence he presented applied to object-oriented languages (which is the majority of my work), he didn’t know.
I googled later. There wasn’t much research on the topic, but what I could find suggested his statements didn’t hold true for OO. The studies he cited referred to procedural languages. When I spoke to Stuart, he suggested that almost none of his work at the time was in OO environments. So context doesn’t seem to be his strong point.
Michael Bolton says
I believe in science. I don’t believe in bogus science. And I believe that it is unethical for marketers to hide behind other people’s caps and gowns for the purpose of bullying testers and hiring managers into buying what the marketers are selling.
Therefore, I believe that it’s very important for our community to confront this presentation on its own grounds, and to act in accordance with our ethics and our testing skills. In order to disarm the bullies, we must respond to bad science with the critique that science demands. I immodestly present one example here:
http://www.developsense.com/blog/2010/01/defect-detection-efficiency-evaluation/
Note that I have no reason to believe that the authors of the paper acted in bad faith; people make erroneous conclusions all the time. Even Ph.Ds who are closely linked to the marketing certification business may not be acting in bad faith; they may simply be spectacularly ignorant of the craft; incurious, if you will. But the kind of incuriosity that allows people to accept and promote this kind of research to accomplish their commercial goals is of the same nature as the incuriosity that leads people to invade and demolish other countries. That is, it’s a major political and social problem.
Thus I urge our community to find the research papers and raise the relevant questions about them. That is, let’s cry havoc and let slip the dogs of… testing.
—Michael B.
Aaron G says
I’m blown away by the list of questions he proposes to answer with “evidence.” They sound like the sort of thing you’d expect to hear from a politician or marketing manager, not a scientist.
“Will exploratory testing detect more bugs than scripted approaches?”
What kind of exploratory testing? Who is performing it, and with what resources? What is a bug defined as? What is “more” defined as? (Does severity matter? Do design flaws count?) Which “scripted approaches” are we talking about, and why are they considered mutually exclusive to exploratory testing? What business domain does this apply to? What kind of software? What about software designed by engineers with no experience in automated testing?
“Does using standards reduce the probability of project failure?”
Which projects? Which standards? Does “using” mean following them to the letter or just borrowing their concepts, and is the definition consistent across samples? What is project failure/success defined as? (Shipping? Popularity? Profits? Lifespan?) How is this “probability” calculated, and how much of a reduction is statistically significant?
“Is a certified tester more effective than those without industry certification?”
Certified by whom? Effective at what, and how? Does “industry” mean the “testing industry” or the actual project domain? If the former, who is considered to be in the “industry” and why? As above, what is the quantum of this measurement, how is it taken, and what are the mitigating factors?
“Are formal reviews an efficient way of finding defects or are there better alternatives?”
What does “formal” mean? What is the review process, and what specifically is being reviewed? What is “efficient” defined as? (Time? Dollar cost? Scale?) What kinds of defects are being referred to? Is the statistic adjusted for precision (i.e. what if one “alternative” fails to find a critical defect)? And what does “better” mean anyway? Does it mean the same thing that “efficient” means? Is it consistent and valid for all teams and projects? And does this all presuppose that formal reviews cannot be combined with other approaches, that it must be one or the other?
“Do testing tools save you money, cost you money or are they simply a distraction?”
How is this measured, where is it applicable, and over what time frame? What controls are used in these trials in order to account for typical productivity improvements and economies of scale developed over time in any business/team? Don’t distractions cost you money, and if so, isn’t the last part of the question redundant?
“Finally, the talk shall consider the metrics that are used to provide the available evidence and their level of trustworthiness, and suggest more reliable approaches to gathering and presenting the evidence in the future.”
Well, I suppose it’s good that this is being given a passing nod, but shouldn’t this have been the focus of the talk, instead of an afterthought? Isn’t the reliability and applicability of the evidence a front-and-centre concern in a discussion about *evidence?* If the metrics are not trustworthy or do not form sufficient evidence on which to base a conclusion, then what is the point of the preceding discussion?
AdamPo says
Statements like “35,000 happy customers can’t be wrong” are complete BS, and can be a lie through omission of the truth. How many total customers are there? How many *unhappy* customers are there? 100,000 unhappy customers can’t be wrong either. Maybe they’re so unhappy they don’t waste their time on your survey, so you don’t know they’re unhappy.
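To put numbers on that omission, here is a quick sketch; every figure in it is invented for the sake of the arithmetic, not a real ISEB statistic:

```python
# Hypothetical numbers, purely to illustrate why "35,000 happy
# customers" is meaningless without a denominator and response rate.
certified_total = 150_000   # assumed total certificate holders
reported_happy = 35_000     # the only number the marketing quotes

# Even if every reported-happy person truly is happy, the claim says
# nothing about the rest: non-respondents and unhappy respondents.
unknown = certified_total - reported_happy
print(f"Happy (claimed): {reported_happy / certified_total:.0%}")
print(f"Unknown or unhappy: {unknown / certified_total:.0%}")
# -> Happy (claimed): 23%; the other 77% could all be unhappy.
```

Under those invented figures, the very same “35,000” would be evidence of a 77% silence rate rather than of satisfaction.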
Anne-Marie says
I like the fact that someone is standing up and talking about this. I think it will make an interesting debate. It creates a platform from which many testers can learn and decide for themselves what they agree or disagree with on the topic.
It’s important for all testers to question and not blindly follow any school of testing. Rather, exploring and discovering new ways of testing ought to be on every tester’s path.
Most savvy testers will see through anything that does not stand up for itself.
I’m looking forward to it.
[James’ Reply: Unfortunately, I don’t think it will be a debate at all. I can say that on the other occasions I have publicly debated Stuart Reid at conferences, he seemed not to debate me back. I would make points, then he would speak as if I hadn’t said anything. The same thing happened when Cem debated him.
I’ve debated Stuart on stage and in a pub, just one-on-one. I have the impression that he believes he is making sense, but his statements just don’t add up. For instance, he took pains to explain to me that he did NOT create the ISTQB and does not endorse the ISTQB foundation syllabus (because I tried to get him to defend some of its more absurd bits). But lo! Look at his bio in the Eurostar brochure. It appears he did have a role in the ISTQB. WTF, Stuart?!
BTW, I recently had a good experience debating an ISTQB representative. It was an experience that left me with a healthy respect for that particular man. I don’t think he’s made a wise choice in supporting the ISTQB, but he struck me as a man of honor, and a good listener. So, Stuart has no excuse for running and hiding. It is possible to engage with your opponents.]
Elaine Conway says
THANK YOU for confirming what I always knew to be true! I have been testing for 21 years and have never been certified; many coworkers in my company have ‘played the game’ and have been certified. I asked many of them if they learned anything new or if they thought it was a valuable indicator of whether someone was qualified or not. The answer was ‘No’ to both questions. They wanted the certificate because it would look good on a resume and/or it would raise their salary.
It wasn’t until I found AST a couple of years ago that I even cared about being in an organization. Pioneers such as James Bach, Cem Kaner, Douglas Hoffman, Michael Bolton, and Scott Barber are making a difference. They know what testing is and tell it like it is. Thanks!
Cheryl says
My concern here, James, is that your and Stuart’s ‘messages’ are starting to look like a personal agenda and rivalry rather than the healthy debate it should be.
[James’ Reply: That’s a reasonable concern, but I can’t control what things look like to you. I’m like a dog that’s barking at an intruder. You can decide whether to put a pillow over your head or get out of bed and investigate for yourself.
Some people, like Cem Kaner and Jerry Weinberg, also have contempt for what people like Stuart are doing, but believe that speaking publicly about it doesn’t help much. They may be right. But I don’t agree. In any case, my nature is to tell the truth about important things, and trust that the truth will resonate on some level.]
As testers, we have spent many years trying to be recognised as professional career testers, and now that we have recognised ‘ambassadors’ (aka glitterati) in testing, i.e. yourself and Stuart, you both spend too much time arguing (very publicly) about your polarised ideas.
[James’ Reply: I’m sorry but there’s an ideological war going on. You can participate in it or ignore it, but it’s a true struggle for the soul of the craft.
However, look carefully and you will see that this is not a dispute over what testing is. It’s a dispute over whether freedom to innovate, learn and reason will be maintained in our craft, or whether instead a small group of bullies are going to be permitted to stop the debate and make the testing field into their own little dairy farm that they will milk for cash. Stuart is not creating a standard for testing because testing has reached a point where thoughtful people agree on what testing is and how it should be done. He’s doing it for the business benefit it brings to him and his friends.
If you really care about being a testing professional, then it should also matter to you a great deal what the basis of that profession is. I want a basis that we can be proud of. I insist upon it. And I am rallying like-minded people to that cause.]
Surely you both know that this will dilute the message of the tester, and I fear the impact may fall on the humble tester on the ground, who has to defend which camp they are in and why.
When did ‘making a point’ become more important than helping the tester and why oh why has it all become so political?
[James’ Reply: There are bad people in the world. I wish there weren’t. It’s my mission to oppose them, at least until I feel I can do no more good by trying.
Making a point is not my goal. Saving the craft is my goal.
Do you have a specific suggestion for what I should do differently?]
Mark Waite says
I have a long standing hope that we will find ways to create “evidence based software testing” using the same types of principles as are espoused by practitioners of evidence based medicine. I find so many things to admire in the attempts of evidence based medicine practitioners to apply their best analytical skills to their profession.
They also seem to acknowledge that mistakes are frequently made and that we are best served by openly reviewing mistakes. Medical studies are published on treatment results, and skilled practitioners examine the studies with a critical eye, seeking to decide whether the treatment is likely to help, likely to harm, or whether there is insufficient information to decide. I’m sure those critical reviews are uncomfortable for the authors of the studies, but they seem to refine the practice of medicine and improve treatment results.
The topics proposed for Stuart’s presentation seem quite broad (as noted by Aaron G). I suspect those broad topics will (of necessity) have shallow support from hard data. Part of that is the nature of a keynote speech (first speaker needs to excite and enthuse, not graph and bore), part of that will probably be due to the great difficulty in gathering hard data from software testing experience.
I’ve been fascinated by the evidence based management writing of Bob Sutton and Jeff Pfeffer and would like to apply their ideas to the craft of testing as well. Unfortunately, I’ve still not found ways to measure testing (or programming) results that were any better (to me) than not measuring. Yes, I watch the number of open bugs in the bug database. Yes, I watch the status reports in various tracking tools. Yes, I listen to the people around me when they talk about their perceptions of the software. None of those items has great reliability, and all of them can be gamed in ways I haven’t yet found how to counteract.
Lanette says
I wrote a comment and deleted it 3 times, thinking it was pointless to continue anything on this topic. Then I realized I do have some questions that might help me understand.
1. Considering their goal is to earn money, how can this discussion at the level of the test practitioners ever harm them in that area? As a tester I am already not certified. I’m planning to get more education in my own way. What else can be done? They have reason to promote their certification.
[James’ Reply: They are earning money by harming our craft. I want to rally people to oppose that.]
2. Why do these jerkofskis get so much of your time and attention? You are the best testing teacher we have. I seriously go read http://www.context-driven-testing.com/ often to remind myself there are other people like me and that it isn’t hopeless to stay in testing. What can we do as testers who care about the craft to make the value of testing understood, so that companies can judge for themselves better rather than rely on some untrustworthy third party? I think a paper on evaluating testers in interviews might help some people. My suggestion is to offer an alternative for companies and promote the hell out of it. I feel like they actually win in these fights because they pull focus away from what is important and get all of this free publicity. Let’s face it: your blog is a million times more popular, your talks are better attended, and you are more respected than they will ever be. Are they worth the time & drama? I don’t get it.
[James’ Reply: I spend time fighting them because if they get their way, smart, committed people will be driven out of the industry, and testing will be a wasteland dominated by their racket. It will be like “Pottersville” in It’s a Wonderful Life.
These bullies are exploiting the ignorance of managers, worldwide. I’m offering an alternative, yes. I’ve developed it and I’m offering it.]
3. I assume that the reason this injustice can’t stand unopposed is that they are ripping people off and harming the profession. I don’t see that as ranking near the top risks to testing, though. Companies see testers as optional. They don’t get what we do. The feedback loop is too long and the work we do isn’t respected. I see that trying to put that into a certificate is insulting, but even if they were gone tomorrow, the problem remains and the craft isn’t saved. What are you trying to accomplish? When does this end? Is there anything that we can do, those of us who care about the craft of testing (and we are many), to help?
I’m not trying to pick a fight, I just don’t understand the purpose. I don’t see how this can save the craft and I worry that this makes them more relevant than they are in the US. Do I just not understand the sway they have or do I not get what you are doing here?
[James’ Reply: The agenda of Stuart, Rex, and their ilk is to take over the craft of testing so that their friends and associates control who gets work. Today it’s a little certification, tomorrow it will be an ISO testing standard and licensing that prevents thinking testers from working in the European Union. I have already lost some opportunities solely because I oppose certification. Many other people are denied work because they aren’t certified (or get certified solely to get work, which is essentially paying “protection money” to the certifiers).
Our craft will innovate more quickly and raise itself higher by staying free and open.]
Mathew says
I’m new to the discussions (I’ve so much to read and learn about). My suggestion is to frame your arguments at a level beyond craft and emphasize that your software testing methods and training can also benefit a tester’s financial picture over the long haul. And do so with one’s reputation intact.
People who have learned from your methods and applied them to their jobs need to take that message to their management. I think it’s better for testers who have learned from you, and applied what they’ve learned, to start by teaching their bosses and management the good approaches to testing through the results they produce for their products (i.e. applying what they learn from folks like you, Mr. Kaner, Mr. Bolton, Mr. Weinberg, and others to help deliver value for their companies’ shareholders and customers).
I’m of the opinion that a good and smart management team will recognize this in the form of strong employee performance reviews, which I believe help testers build solid reliable reputations within their companies, as well as their industries. It’s that reputation more than any certification that will drive the success of that person over the long-term.
(A management team that does not, means you should be looking elsewhere anyways).
The argument I would use with anyone who has doubts would be something like this:
Mr. Reid and his certifications ‘can’ make you more money – good luck with that, I hope that works out for you.
But training, applying what one learns using this approach, and in turn teaching and implementing it within your own organization, will yield not only better long-term financial results but proceeds on the basis of integrity, honesty, observation, and fact.
John McConda says
I, too, used to fall into the camp of wondering why we needed to fracture the testing community by fighting out the war of ideas in such a public manner. What I eventually came to see for myself is that there is too much at stake for us to sit by and let our field stay stuck in the ’70s, as Kaner contends here: http://www.testingeducation.org/a/testingRevolution.pdf.
This becomes literally a matter of life and death when life-critical systems are tested in such a scripted and documentation-heavy way that most of the best testers don’t even apply for these jobs. The status quo has to change for our field to be taken seriously and catch up with the other parts of software development that have evolved while we have stood still conforming to standards and finding best practices. We need more mentors who teach testing as a skill, not a list of vocabulary words, or an exercise in minutely detailed documentation that anyone can repeat.
I don’t enjoy confrontation, so I appreciate people like James and Michael who will call out something that they see as hurting our field. I prefer to help advance context-driven testing ideas through other means, like the BBST online courses, and there are plenty of other ways to get involved.
And as far as the humble tester on the ground defending their camp, why is that a bad thing? Far too many testers simply don’t care. They are hired to a new job, and follow whatever process is handed down to them without question. I hope this debate forces more of us to think about what we actually do every day and how much of it really contributes to the value of the products we test.
Farid Vaswani says
James,
I’d say your courses, the content and ideas that you share with others, and your passion for the craft are surely doing a great deal by themselves to make your point and save the craft.
Farid
Chad Patrick says
James,
I’m having trouble with your statement about “the absurdity of traditional testing”. Are you talking about traditional methodologies like scripting?
[James’ Reply: I’m talking about the package of unhelpful beliefs about what testing is, should be, and how it should be done, that dominated the industry from 1972 to about 1990, before exploratory and agile approaches began to challenge it.
Heavy procedural scripting is one part of that package.]
A script is a means to an end. Where I am from, that end is providing traceability, education, maintainability and accountability among other things.
[James’ Reply: I notice you didn’t say anything about finding problems in the product. Is that because you don’t feel that finding problems is a priority? Where is it exactly that you ARE from?
There are many ways of scripting and many aspects of scripting, however, so to have a coherent conversation about it, you should define what you mean by scripting. Exploratory and scripted behavior operate on a continuum. They are not mutually exclusive. The kind of scripting I complain about (writing detailed step-by-step instructions for performing checks) is useless for education, terrible at bug finding, and extremely expensive to produce. It’s a hallmark of bad testing.]
Are there bad script based testers? Sure. I would wager that someone that writes a bad script probably doesn’t understand why the script should be written and probably would make a bad tester regardless of what methodology they follow.
[James’ Reply: I don’t think so. Good (heavy) scripted testing requires special skills that aren’t required at all in exploratory testing. Almost all of it I have seen is terrible, partly because almost no testers have training in how to do it. It’s a demanding form of technical writing.]
The methodology we follow doesn’t force us to write scripts if the need doesn’t arise. It’s a time/cost investment that allows us to document ways to test the parts of the application that are known to us via the requirements, design documentation, etc. If the benefit doesn’t exceed the cost, we don’t do it. Our scripts aren’t written in stone. If during mid execution we deem a script is wrong, we assess why it’s wrong, change it or add new ones as necessary.
[James’ Reply: I’m happy to hear that. Except that scripting doesn’t “allow you” to document ways to test– it is by *definition* documenting ways to test. The question is whether such documentation is necessary. I find that the need for it is exceedingly rare.]
Some things I would like to see coming from the exploratory camp that would help me understand the benefits of it and how to effectively use it with what I do are:
1. How do you provide traceability to known requirements?
2. Does it allow for a seasoned tester to write the charter/approach and that be followed by a junior tester? Or is the idea that the person performing the execution figure it out on their own?
3. How does it facilitate replication of a known issue? (This to me sounds like it would have to be a script)
Perhaps I misunderstood your definition of ‘traditional testing’ and if that’s the case, please elaborate.
Thank you for your time.
[James’ Reply: The answers to these questions have been coming from the exploratory camp for years. They have been published, republished, and I teach them, too. They are also, frankly, intuitively and experientially obvious.
I think it is your responsibility to develop yourself into a productive and educated tester. That means if you have not received a methodology of doing exploratory testing from someone else, you should have invented it for yourself. It is not MY responsibility to spoon-feed you the basics of the craft or force you to read all that I have written or watch my videos. However, let’s just take the traceability issue:
1. Traceability is not an issue in much of the industry.
2. Scripting does not solve the traceability problem. (tracing your scripts to requirements does not prove that your scripts were properly executed, nor does it show that your requirements were well tested even if they were properly executed. In fact, the scripts will generally demonstrate that only shallow testing was done, since scripts are inherently shallow.)
3. We can address traceability at least as well as scripts do by making a report of our testing. Exploratory testing does not mean undocumented testing!
4. We can also use automatic function level logging and use the logs to trace to requirements. We even film the testing. There are lots of ways to preserve data.
5. We can use session-based test management to provide a middle level of tracing from test charters to requirements (a minimal sketch follows below).]
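To make point 5 concrete, here is a minimal sketch of charter-to-requirement tracing in session-based test management. All names and data are hypothetical, chosen only to illustrate the idea; this is not a transcription of any real SBTM tool:

```python
# A minimal sketch of charter-to-requirement tracing in session-based
# test management (SBTM). All names and data are hypothetical and for
# illustration only.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Session:
    charter: str        # the mission of this exploratory session
    requirements: list  # requirement IDs the charter touches
    bugs: list = field(default_factory=list)  # issues found
    notes: str = ""     # what was actually done, in the tester's words

sessions = [
    Session("Explore search-result accuracy with odd queries",
            ["REQ-12", "REQ-14"], bugs=["BUG-301"]),
    Session("Probe login timeout and session-expiry handling",
            ["REQ-7"], notes="Covered expiry; not concurrent logins."),
]

# Traceability view: requirement -> the sessions that touched it.
coverage = defaultdict(list)
for s in sessions:
    for req in s.requirements:
        coverage[req].append(s)

for req, touched in sorted(coverage.items()):
    print(f"{req}: {len(touched)} session(s), "
          f"{sum(len(s.bugs) for s in touched)} bug(s) found")
```

The point is only that a session record can carry the same tracing data a script header would, while leaving the tester free to explore.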
Chad Patrick says
I didn’t explicitly mention finding problems in the product because I do not feel that is a function of a script. A script only details steps on how to perform a specific task. Finding problems lies in the observations of the person (or perhaps tool) performing the execution. However, I typically accompany my scripts with notes to the tester which detail potential problem areas.
[James’ Reply: Then you must understand why I avoid scripts. My job is to find problems, not miss them. Scripts impair my ability to find problems. It’s a little bit amazing to me that anyone defends a method of testing that obviously doesn’t work. I understand that you think accountability, etc. is important. So, why not do testing in a way that IS accountable, etc. and still finds problems? Finding problems is probably our most important task, unless of course we were hired to FAKE testing (which some people are indeed hired to do).]
I find it interesting that you say traceability isn’t much of an issue. As a consultant, most of my projects deal with third party software vendors. The contracts are built around requirements and the clients tend to want to see how we plan on addressing those. I’m not advocating that is the best model, but that’s the hand we’re dealt.
[James’ Reply: Okay, well I find it interesting, as a consultant, that you would not be aware of the normal practices across the industry. I’ve been at this for 26 years, myself. The last 11 years I’ve been a traveling consultant. Traceability is an issue in regulated software testing. It’s mildly interesting in contract software testing. It’s nearly unknown outside of that, such as in Silicon Valley innovation culture. I never heard the word during 8 years as a test manager in San Jose, except in textbooks. It’s also not a word you find used in Agile projects.]
Your statement, “The kind of scripting I complain about (writing detailed step-by-step instructions for performing checks) is useless for education, terrible at bug finding, and extremely expensive to produce.” does provide some common ground for discussion. I can relate to this frustration but I still feel a large part of this is due to poor execution and not an inherent flaw in the methodology.
[James’ Reply: What I’m talking about is an inherent flaw. Of course, you need to understand something about the alternatives to scripting in order to recognize the flaw. If you’ve always been taught to eat standing on your head, then maybe it will be a revelation to try eating standing up. If you do nothing else, read Cem Kaner’s Value of Checklists for more on that.]
A blog response probably isn’t the best forum for continuing that discussion so I may follow up with an email after I’ve had time to do some additional reading and chew on what you’ve said.
[James’ Reply: I bet I sound like I’m short tempered about this. That’s because I really am. (I know it’s unproductive to be short-tempered… but I feel that honesty is important and that I shouldn’t pretend to be sanguine when in fact I want to beat my head against the table and sob for the future of humanity…). I don’t know where you got the idea that scripts are good, but it’s not from ANY scientific research, I’ll warrant. I’ve been fighting this battle for years. I can cite books and papers supporting my side of this debate (such as the entire body of work of Herbert Simon and the rat studies of Cem Kaner, not to mention most of the work of Jerry Weinberg, including his PhD thesis). What evidence is offered in favor of the superiority of scripting? Consistently nothing at all. I’m sick of it. Please do some reading about Cognitive Science. Consider reading Things that Make Us Smart, or Exploring Science, or Science as a Questioning Process, or Introduction to General Systems Thinking, or The Black Swan.]
Kranthi says
The article was questioning what a *certified tester* brings to the desk that a non-certified *yet sapient* tester does not. But how come many of us are digressing from the point and getting into a debate on “what a script can do and what a script cannot”? While I totally agree on the point that “certified testers” are not really better always, though they come with strong skills and practices in enforcing “process”, I definitely disagree on a script’s inability to keep the product healthy.
[James’ Reply: “Certified” testers don’t have “strong” skills in enforcing “process” any more than anyone else. In fact, I think the act of seeking stupid certification is evidence of less skill in terms of process, since apparently the tester doesn’t know that the process of certification is a sham.]
The primary intention behind any script is to “certify the product testable.”
[James’ Reply: I think you could say that about smoke tests, whether or not they are scripted. But I don’t see how that is true for all scripted tests, or even most.]
I doubt if anyone wants to script a scenario which is not going to be affected at all by the changes that are to come in the product. Even if it is a “functionally safe” scenario, who would want to risk a page displaying a wrong set of results? The script, if it is at all intelligent, should catch the sin. But what if your script verifies just whether you have a result set or not? Can anyone tell me – out of 10 situations, how many times are you willing to match the search criteria and the result set? At least how many times when a “certified tester” is involved? And the point of which scenarios have to be scripted totally depends on the decision maker of the testing team. And certainly, whether he is a *certified tester* or not, whether he comes with a *process orientation* or not, on top of everything, he needs to have a *testing mindset.* So what if he is *certified* or not? But James, what is the yardstick to measure the non-certified testers?
[James’ Reply: In a sense, there can be no yardstick. We don’t measure important and complex things with yardsticks. We assess a tester by *testing* that tester. We have to know what skills a tester should have, and we observe or create situations where those skills are demonstrated.
This is what I do in my Skype coaching sessions.]
Chris says
By way of a different (and inexperienced) viewpoint, I offer this…
I have found myself, by coincidence, reading the ISTQB Foundation Text book at the same time as being introduced to this blog.
My manager (the man responsible for introducing me) is not a big fan of ISTQB, and is rather vocal about it.
From what I’ve read so far, I find myself agreeing. Though I do not have much experience in testing, I question a lot of the content.
But, to move up in this industry, it seems qualifications are what need to be included in your CV.
What does a guy starting out in testing do, when he doesn’t agree with a qualification, but knows he needs it to get somewhere?
Do I ‘Play the game’ or not?
[James’ Reply: What game are you referring to? I play the competence and radical honesty game. That means I say brutal things that I believe are true and useful and relevant, but I’m so good at what I do that people forgive me for bursting their fantasy balloons. I was playing this game when I was 20. I’m better at it now.
I avoid the “pretend to be competent by seeking fake certification” game. Read Abe Heward’s latest blog post about how he got his job.]
Chris says
I don’t feel I have to explain which game I am referring to now, since Abe’s blog has answered my question very well.
Thanks for directing me to it James. Now I just need to build on my skills so that I can be competent and not have any need to attain these qualifications.
Thanks for the reply, I’ll continue to read your blog with great interest. I may start following Abe’s too.
[James’ Reply: When you take the competence and integrity road, lots of my friends will help you, and so will I. Stay in touch, man.]
Daytona says
James
I am certified by QAI as a CSTE, and by ASTQB as a CTAL (full advanced level). I make no apologies for these achievements. In my case, it did make me a better tester. Let me explain.
The QAI Common Body of Knowledge by itself will not allow you to pass that exam. It is also a very boring read, so I actually couldn’t “study” it. The syllabus from ASTQB may or may not be sufficient to pass their exams. So when I embarked on this, I read through each one time, and then went to the web. I read everything I could on Bach, Kaner, Bolton, and many others, Rex Black included. I took no publicly offered course, relying on public blogs instead. I learned a lot just preparing for the exams this way.
One special benefit was finding a reference to a book called Lessons Learned in Software Testing. Two years ago, my manager set up a monthly 2-hour brainstorming session where we discussed one topic from that book each time. This was done at my behest. Had I not wanted to get certified initially, I most likely would never have found the book. While some may not admit it, that team has gained significantly from this exercise over the last two years. I also find the book a constant companion now, looking up a do or don’t on my own at least once a month, and checking the web for the same subject.
Based on what I accomplished, I do agree that any tester worth his/her salt does not need a $2000.00 course to pass these exams. Not just because I did it, but because I wanted the IIST certification as well – that is, until I checked it out and found out I had to take nine more courses at $495.00 per course. This seems to prove your money point.
I am not going to bad-mouth QAI any more than the little bit I have done here, nor am I going to bad-mouth ASTQB. Those getting rich off of the certificate-based courses, though, are a different story. They should be training for testing and not for their pet certification.
Si Spellman says
I’m an engineer and have recently moved across from electronics into the world of testing. Immediately I recognised that they are most certainly different crafts. Everyone will happily agree that testing exhaustively is neither feasible nor possible in most cases. Because of this I struggle with the idea of rules and structure that the likes of the ISTQB teach. As James continually pushes, we must be creative at times; structured testing also has its place, but context is key.
[James’ Reply: The ISTQB is not an advocate of structured testing. They are an advocate of ignorant testing. I’m sorry to say they are too incompetent to understand this. Their incompetence is protected by a thick blubber-like layer of indifference and greed. In any case, pretty much ALL testing is structured.
I do not say that you must be creative “at times”, I say you must know how to test, and use your mind to test well. That means creativity is a core part of testing.]
With the consultancy firm I am now working for, the concept of context-driven testing is never mentioned; instead there is the belief that we can simply imagine all we need to test from reading a requirement and calculating risk. (The thought that we can always do this is almost as bad as the idea of testing exhaustively; we simply cannot think of everything that might go wrong.)
The issue with all of this, though, I feel doesn’t come down to just the people training this factory testing. The industry (in the UK at least) already demands these ‘qualifications’, and that shows the level to which they have already infiltrated testing. If the consultancy firms and trainers do not get people qualified, those people will not get jobs.
[James’ Reply: I think that’s just another collective delusion. Yes, stupid people make demands. But there are still many smarter people who want skilled testers, rather than certified idiots.
Besides, you have to fight stupidity. It won’t go away on its own.]
For me personally this is something I am already familiar with: having joined the military at a young age, I do not have a degree. This in itself cripples me when I go to the job market; even though I may be perfect for a role, with experience and evidence to prove it, without a degree I will not get a chance.
[James’ Reply: No no no. I also don’t have a degree. It’s not a problem. You can’t use that excuse, man. If you study testing, and you learn how to talk about it and also do it well, and you can show examples of your work, then you will get jobs.]
I truly hope that testing does not become yet another field where a piece of paper holds more weight than ability, but I think it has already started. (I was recently asked in an interview what score I got in my ISTQB exam.) 🙁
[James’ Reply: How come I don’t know who you are? Have you tried developing a network of colleagues to learn from and to help learn? Come on, man. Use your imagination, apply yourself, learn your craft, and fight back.]
Simon Spellman says
You don’t know me yet..
As I said previously, I’m very new to testing (we’re talking months) and as yet I don’t have significant experience to display my ability, although that will come in time, I know. In the meantime I’m reading a lot and trying to learn the craft.
Honestly, I’m not someone who would usually post on forums and blogs, but this is something that winds me up, and I think there is a need to show support for the intelligent people out there who will challenge it.
As for the degree thing, I personally don’t think I’m any lesser for not having one; I think the experience I have had is in most ways superior. However, employers don’t always see past the bits of paper you have, and it was one of the reasons for me to join the consultancy/training firm that I did (FDM Group).
I’m planning on getting Lessons Learned in Software Testing soon and in the meantime will continue reading what I can from the likes of you and the others involved in the CDT school. Right now I’m somewhat limited in my access to the internet, but hopefully I will get the chance to pick your brains on these matters soon.
Hussain Maheryar says
Hi Simon,
At the speed technology is changing, your first year of education will become obsolete before you graduate. Even in testing we see automation tools come and go. Keep learning new things and try to diversify your talents. Don’t bet your life on one technology. Learn, unlearn and relearn.
Tessa Benzie says
Hi James
I read your blogs with interest and I just wanted to offer a thought on your comment:
“…you can pick out any three ISEB or ISTQB certified testers, ask them if they think the certification has made them better testers or indicates that they are better testers, and at least two of them will do the equivalent of rolling their eyes and smirking? (Don’t believe me? I understand. So TRY IT, as I do on a regular basis in my classes)”
How likely do you think it is that people may respond differently depending on the person asking the question? Your reputation as “anti-cert” is widespread and well known. I have been someone who rolls their eyes when asked that question by you. Do you think I rolled my eyes when my boss asked me a similar question?
People are context-driven beings, and I would liken this to the old adage “don’t bite the hand that feeds you”. I don’t deny that this may at times mean we are also weak and spineless beings, pandering to the favours of our audience. I just think it is an interesting observation on human behaviour. I wonder how many of your respondents truly believe it is a complete waste of time and how many are merely avoiding the inevitable conflict of engaging in that debate with you. For those who heartily and genuinely are of the eye-rolling school, did they have the same reaction when asked the question by their sponsor?
[James’ Reply: Anyone who is certified has already demonstrated his ability to tell perceived authority figures what they want to hear, regardless of his true opinion– because that’s what passing the test is really all about. I could not pass the sample tests that I’ve seen from the ISTQB, because I’m not willing to lie about something important like testing. I would get zero questions correct, because I reject the premises of the questions. No testing problem worth solving can be expressed in a multiple choice format.
But do you REALLY think that someone who drops his convictions and runs when I (who am not his boss) challenge him about ISTQB is a reliable poster boy for certification if he says to his boss “Yes sir! Certification is great sir! Please don’t fire me sir!”??]
Hussain Maheryar says
I think some of you have the wrong expectations of certification. I have the CSTE certification, and one benefit I tell everyone about is reduced miscommunication. Most testers learn terminology at work, which honestly is a bad place for it. This results in miscommunication when they move to different companies. We especially need to be very careful with miscommunication when dealing with the offshore model.
I feel sorry for those who got certified thinking their resume would look good. As a hiring manager, it did catch my attention, but at the same time my expectations increased. The sad part was they thought they didn’t need to pick up books again because they were certified.
[James’ Reply: That argument only holds water if you respect the terminology you have learned. My question is, why are your standards so low? My second question is, even if they started so low that you think that some random organization has good terminology, why haven’t you raised your standards since then? Of course, I speak as one who’s been studying testing for 33 years, as of last week. I’m telling you there is no official terminology of testing. Anyone who claims that they know the right terminology is wrong. Even my own terminology is a work in progress.
Of course, I think my terminology is better than anyone else’s, but that doesn’t mean I would want to force anyone else to use it.
The way to deal with miscommunication is not to standardize, it’s to improve our communication skills, and let the marketplace of ideas take care of the rest.]
Hussain Maheryar says
IEEE does have standards for software testing terminology.
[James’ Reply: I don’t know what this has to do with my post, but I’ll respond anyway. IEEE terminology standards that apply to testing are inconsistent, outdated, and even when they were created they were objectively unrepresentative of how the field uses language and finally, in my expert opinion, the people who created the standards were not experts in testing. Experts take care to do good work. This is sloppy work.]
If I want to prove that a certain English word that I am using is correct, then I will use the Oxford dictionary.
[James’ Reply: That’s a pretty good move, because the OED is a tremendous achievement of scholarship by people who are actually experts. The methodology behind the OED is firmly grounded in following the usage of English speakers. It’s not a paternalistic standard.]
In the testing world, if someone disagrees with any terminology, I would use IEEE standards to prove it.
[James’ Reply: Since IEEE terminology is both incoherent and out of touch with reality, that’s a bad strategy. However, I agree that it’s good to be aware of IEEE standards.]
I would avoid using my work experience or previous companies I worked for as a source to prove that I am right.
[James’ Reply: That makes no sense to me. It’s the opposite of how language works. Language is not given to us; it is created by us. People who construct technical terminology are doing two things: interpreting usage and creating a helpful mental structure. IEEE has failed on both counts. At least if you base your ideas on your personal experience you are doing one of those things.]