Subha asks:
A tester is usually bound by the constraints of specifications when he does functional testing. But what about usability? How much should the tester’s imagination be allowed to flow?
Hello Subha,
Read carefully – this is important:
The specification does not bind you, as a tester. The specification provokes you. In fact, the spec, the product, the things people say on the project – all of it is provocative to the tester. It might be where we start, but not where we end. If the product behaves in some important way (important either positively or negatively), then it is generally the role of the tester to test it, even if there is nothing in the spec about that behavior.
Doing testing well requires a great deal of imagination.
I think a tester is actually bound by six things that come to mind as I write this: the mission, the project culture, the particular constraints of the project, the skill and knowledge of the tester, ethical standards, and legal standards. The first three of these are to some extent negotiable:
Mission: This is the problem that your clients want you to solve for them; the outcome they want you to achieve. If you don’t honor your mission, you will not gain credibility or retain respect. Be sure that you negotiate a mission you are capable of fulfilling; and be sure that the story of your testing features the mission as its primary plot point. Is usability testing a part of your mission?
Project Culture: The other people on your project have expectations about what you will or will not do. These bind you. You can challenge those expectations and suggest alternatives, but you have to be careful about that. If usability is part of your mission, what methods of usability testing are acceptable or expected within your organization?
Project Constraints: On your project, you don’t have all the time and money to do everything you might find useful or interesting. You may need to find inexpensive ways to do the testing that your strategy calls for. You may need to acquire special tools, or use tools that don’t do everything you wish they did. What kind of usability testing is it even possible to do on your project?
Tester Skill & Knowledge: Even if you were granted permission and resources to do everything you want to do, you would still be limited to the things that you know how to do. If you want to do testing well, you need enough command of testing practices and tools to make that possible. One common problem with testers is that they don’t do enough to educate themselves. Do you know how to do usability testing?
Ethical Standards: A tester is bound by ethical standards not to, for instance, lie about the results of the tests, or to misrepresent his ability to do the work. Are you suggesting usability testing for selfish reasons, or do you really believe it will help your client?
Legal Standards: A tester is bound by legal standards. In some cases, there are laws, such as Sarbanes-Oxley or HIPAA, which guide how you must test. Is there any legal reason why you must or must not perform usability testing?
I realize that this is not a detailed answer to your question. What I’m trying to do is frame a way for you to think the issue through for yourself.
Subha says
Thanks Mr. James & Mr. Jonathan,
I feel I have some clear views & ideas about testing our websites for usability.
David Gilbert says
James — I like the response, but I want to go back to the question for a second and look at it from perhaps a slightly different perspective. Subha says: “A tester is usually bound by the constraints of specifications,” and you begin your response by challenging that basic premise. But in many cases, that is true to a large degree. In highly structured corporate/government organizations, where the testing to be done is dictated and documented to a high degree, it is incumbent upon the tester to do and document exactly what they are tasked with.
[James’ Reply: Let’s untangle some issues that you may have confused. The specification is not the mission. The specification is not the test plan. Do not conflate the specification with these other things. The specification does not tell you what to test or how to test. The specification merely raises important questions. If you want to find important problems quickly in that product, you must perform testing that addresses those questions.
To fulfill my mission as a tester, I might not test what is referred to in a particular spec, or I might test everything, or I might test other things not in the spec. I do what fulfills the mission. If the mission is merely to touch the things that are mentioned in the spec, and not to draw inferences about them so as to address the risks that my clients are concerned about, then I will probably miss some important problems. My client may come to regret that they insisted on such a limited mission.]
If, within the freedom of the constraints you listed, you then have the luxury of time to go back and test more than you were tasked (I realize this is one of the constraints you described), then that is great, and exactly what I would recommend.
[James’ Reply: I don’t know what you mean by “tasked”. If it’s covered by the mission, then we do it. If it’s not covered by the mission, then we probably don’t worry about it. However, a tester’s mission often includes such ideas as “find important problems, please.” This will require you to go beyond, and sometimes far beyond, the handful of things mentioned in most specifications I’ve seen.]
So the spec is the starting point, but in some cases, even after considering your response, it may also be the ending point, and I believe your response shortchanges that a bit.
[James’ Reply: A specification document is not, and cannot ever be, an ending point. It is a simple impossibility. As soon as you read a specification; as soon as you think about it at all; you extend it. That’s because every specification document involves assumptions and requires background knowledge to interpret. It evokes your imagination.]
This is where I was 3 years ago, and in that time, we have managed to expand the constraints you outline, with the biggest change being in Project Culture. The real problem was that our Mission was badly defined, but we had no base of power to change that. By simply finding things we weren’t necessarily asked to find, but that were important, we were able to change the cultural perceptions of our role, and this led to a redefining of our mission. To your point about being careful, we did make many people unhappy along the way, but ultimately their unhappiness was based on the fact that our success was due to their not really supporting the organization’s mission properly, and you could write books on that all by itself.
All of this is interwoven very tightly, but at the bottom of the food chain where most of us testers get to spend a lot of time, sometimes life does indeed begin and end with the spec and those other constraints are simply outside of our sphere of influence.
[James’ Reply: One thing that is never outside your influence is your ability to misunderstand a specification. And just as no one can prevent you from misunderstanding the spec, no one can prevent you from understanding it excellently. No one can prevent you from having great ideas about testing a product based on implications you see in the spec, or from any other source.
I’m not talking about flouting your mission. Your mission binds you. The specification is just food for thought.]
David Gilbert says
James — you say “If the mission is merely to touch the things that are mentioned in the spec, and not to draw inferences about them so as to address the risks that my clients are concerned about, then I will probably miss some important problems. My client may come to regret that they insisted on such a limited mission.”
I believe that is exactly the point. In the circumstances I am thinking of, which I get to deal with fairly routinely at the moment, the dev team gives the BA the spec, which says “The product shall…”. The BA writes the test cases based on the spec, which read “The tester shall test that the product shall…”, and then hands the test cases to the tester, whose mission is to validate the product relative to the spec vis-à-vis the test cases. Anything beyond that is seen as wasting your time and not doing your job.
What was really going on here (IMHO) was that a spec was being reverse engineered to address a set of features already known to work well, and the dev team was defining the mission so narrowly as to avoid any possible delay to their release that we might cause. Quality was not the overriding concern; schedule was. In the process of changing that, relationships have become adversarial, and that is the source of some of the frustration that I am sure can be sensed in my posts…I don’t like it when testers and developers talk to each other with suspicion and disdain. But as long as the test leadership chooses the moral high ground of “Quality for quality’s sake” (a whole different discussion), and the development leadership gets whipped for missing delivery dates and deadlines, we continue to muddle along in our little rut.
The more I observe this situation, the more I believe the problem is one of leadership…the true mission has not been defined at the proper level and communicated down to those tasked with carrying it out. This leads to much frustration, which, by the time it gets to the individual tester’s level, at least LOOKS like we are being told to test strictly to the spec, which we inherently know is not good testing.
Sincerely,
David
[James’ Reply: I believe it is possible to pretend not to draw inferences about the spec, but I think that would just be pretense.]
Rachel Silber says
A provocation gets a response by making you uncomfortable. This made me think about the role of discomfort in inspiring testers. I’ve often said to myself, “I’m done with the planned testing, and I’m still not comfortable with saying this is working well. I wonder what I haven’t accounted for in my testing yet?”
[James’ Reply: Well said!]
Michael M. Butler says
It’s also possible, to some degree, to pretend to be following the spec, and actually in part test according to one’s hunches, higher sense of ethics, or duty. One needs to be able to provide some degree of cover in such circumstances, and this can include “misunderstanding” the spec in interesting ways — but beware of this “misunderstanding” being interpreted as bloody-mindedness by the writers of the specification.
Michael Bolton says
To David Gilbert:
In the circumstances I am thinking of, which I get to deal with fairly routinely at the moment, the dev team gives the BA the spec, which says “The product shall…”. The BA writes the test cases based on the spec, which read “The tester shall test that the product shall…”, and then hands the test cases to the tester, whose mission is to validate the product relative to the spec vis-à-vis the test cases. Anything beyond that is seen as wasting your time and not doing your job.
I do not feel in any way bound by the limitations of that mission. First of all, in my experience, when a B.A. says “the tester shall”, that’s just a suggestion (and usually an inadequate one) of what I shall really do. From most B.A.’s, that stuff is on the level of “the tester shall show up for work”. My job, as I see it, is to take the suggestion and run with it, since (without considerable experience) they have no idea of what I can do; they’re bound by their imaginations. I don’t have to be bound by their imaginations.
Now: I’d bet you have this attitude too, and I’ll bet I understand the reaction that you may have got. I got that kind of reaction from one client. I discovered a fairly spectacular problem, and the in-house tester was indignant. “You’re supposed to be testing [something else]!” What he really meant was “I wouldn’t have thought to do that, and I’m surprised that you did.” My response was twofold. First, I had exposed the bug by throwing the program some extreme data on the way to my primary destination, so in fact I was testing the stuff I was supposed to. But in addition, I said, “Okay, I can see how you might believe that. But tell me: if I happened to find a problem like that at some point while I am testing the stuff I’m supposed to be testing, would you be interested?” Long pause. “Well…” Long pause. “I guess.”
What you were experiencing was that a spec was being reverse engineered to address a set of features already believed to work well, not known to work well. If I’m in such a circumstance, and I reveal some information that might lead to a delay, I remember this: I neither put the problem in, nor am I responsible for the decision to delay or not. Gathering information about the product is my responsibility; what the team decides to do with that information–delay or release–is the decision of the project owner. This doesn’t have to be an adversarial process. Would the developers prefer to release the product with bugs that they don’t know about? I haven’t yet met a developer who can’t be convinced that I’m trying to make him look good.
samuel says
I do not agree with David Gilbert. Even in a very controlled environment, a tester is bound by his own imagination, creativity, ability to grasp what he has read (the specs), and his state of mind.