When I first joined ST Labs, years ago, we faced a dilemma. We had clients telling us what kind of bugs we should not report. “Don’t worry about the installer. Don’t report bugs on that. We have that covered.” No problem, dear customer, we cheerfully replied. Then after the project we would hear complaints about all the installation bugs we “missed”.
So, we developed a protocol called Mention In Passing, or “mipping”. All bugs shall be reported, without exception. Any bug that seems questionable or prohibited we will “mention in passing” in our status reports or emails. In an extreme case we mention it by voice, but I generally want to have a written record. That way we are not accused of wasting time investigating and reporting the bug formally, but we also can’t be accused of missing it entirely.
If a client tells me to stop bothering him about those bugs, even in passing, I might switch to batching them, or I might write a memo to all involved that I will henceforth not report that kind of problem. But if there is reasonable doubt in my mind that my client and I have a strong common understanding of what should and should not be reported, I simply tell them that I “mip” bugs to check periodically whether I have accidentally misconstrued the standard for reporting, or whether the standard has changed.
James Bullock says
That’s very interesting to me, more the clients and their direction than your response to it.
Contrast that with something I did a couple of years ago. Wrapping up a maintenance release of a product, I asked the developers, QA folks, and other product-touching people to put into the issue system everything they knew of or could think of: problems, issues, missing information, or stuff that maybe wasn’t in scope as defined but probably should have been. (And by the way, not being crystal clear about what is and isn’t in scope is itself a problem.) That’s one of my favorite non-standard defects: “don’t know.” If anything is “don’t know” without also being “don’t care”, we’re missing something.
I wonder if your client wasn’t looking to focus attention for a team smaller than the mind-space of the problem. (That’s usually the case, actually – the system is usually bigger than our heads.) Maybe it was “get the workflow going at all” time. Or maybe it was “wrapping our brains around our screwed-up architecture” time. Or maybe it was “installer guy is off having a baby” time.
So, I’m curious: first, about why the information offered isn’t seen as good and valuable right then; second, about why grabbing this stuff and having it lying around isn’t OK. What a multitude of sins has been committed in the name of “defect close rates” and “aging reports”! We throw out information that good smart people generated simply because we won’t bother to write a “not” clause in our defect extracts and reports (see the sketch after this list). Worse, we throw out more powerful insights:
– Defects tend to cluster. Any time you have a hot-spot of questions, issues, misbehaviors, missing information, abandoned coding conventions, whatever, it is more likely that the rest of that chunk is in trouble and you just don’t know it. Throw away information and these hot-spots are that much harder to find.
– We’re not reporting installer issues right now. But, we have lots and lots of them. So, along with the installer probably being hosed in general, we know one thing and may reasonably suspect another. We know the state of the installer is getting in our way doing work. We’re chewing up time every time we hit an installer problem while we’re developing or testing. We may reasonably suspect that we’re bad at making installers. So, if installers are a necessary part of our product, well . . .
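To make that “not” clause concrete, here is a minimal hypothetical sketch in Python; the field names (area, tags, “mip”) are invented for illustration, not taken from any real tracker:

```python
# Hypothetical sketch: keep every defect recorded and put the "not"
# clause in the report, instead of throwing the data away.
from collections import Counter

defects = [
    {"id": 101, "area": "installer", "tags": {"mip"}, "summary": "Setup hangs on retry"},
    {"id": 102, "area": "reports",   "tags": set(),   "summary": "Totals off by one"},
    {"id": 103, "area": "installer", "tags": {"mip"}, "summary": "Uninstall leaves files behind"},
]

# The day-to-day report: one "not" clause, nothing deleted.
active_report = [d for d in defects if "mip" not in d["tags"]]

# The hot-spot question is still answerable, because the data is still there.
hot_spots = Counter(d["area"] for d in defects)
print(hot_spots.most_common())  # the installer shows up as a cluster
```

The point of the sketch is only that excluding a category from a report costs one line, while deleting the data forecloses the hot-spot analysis entirely.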
“Mention in passing” is an excellent strategy. There’s a need for focus sometimes, and many, many testers are unable to let their pet thing of the moment pass in aid of a larger, immediate problem. I do a lot of triage and recovery work. Picking what you are going to deal with first is part of that game. Often the disaster has been caused in part by folks all arguing that “My Thing is Important” and all needing to be first.
Yet, I think someone should want to keep the data around. First off, we paid for it. Second, it may be useful later, when we can pay attention. So, as a client, boss, or “engineering guy”, while I might ask you not to make a big deal about installer bugs *right now*, for example while we’re whacking the code base into any kind of run-time stability at all, I’d also ask you to record everything you find, well enough that we can pick it up later. I’d be inclined to ask everyone, actually, to work with that kind of perspective and focus. Get it recorded, but let it go for now. Maybe call it “record in passing”, or RIP, which seems a bit ominous.
On my better days I hope I’d remember to thank you, both for noticing broadly, and for being willing to be the caretaker of this information that we can’t address this instant, because our brains are full right now.
[James’ Reply: Excellent points Mr. Bullock. One reason I will MIP is when I’m worried that the effort to report the problem formally is too much. In some bug tracking regimes, some processophiliac has decreed that there shall be 38 fields filled out, just so, for every problem. Then again, sometimes programmers feel judged and oppressed by metrics and/or sheer paperwork, and I’m trying to get them on my side (or have them accept me as on their side).
But I am definitely an advocate for letting all the data sit in one place, nicely searchable and browsable, if that can be done reasonably economically.]
Tracy says
I can’t tell you how glad I am to hear that someone has worked out SOME way to deal with the odious statement: “We don’t want to hear about [insert name here] bugs.”
I think it’s kind of reprehensible to hire people to test your software but then tell them to just ignore certain kinds of bugs. You want alert testers, testers who automatically notice when things are wrong, and you want them to have a kneejerk response to tell you if something is broken. (Yes, yes, you may have a requirement that they search for a duplicate report before they file. Fine. And they must assign a severity and use their judgment. Good.) But I think it’s a bad precedent to tell testers to start ignoring things. If you make your category of “ignorable” things a bit too mushy, they won’t report stuff you wanted to know about. And if you start training their brains to ignore some things… they may subconsciously decide to ignore other things.
Now I need to write a memo to my boss about that new suggestion the Upstairs Folks had about “usability bugs”….
James Bullock says
Ah, Mr. Bach, once again we are in violent agreement. One day we must work together more closely than we have. The exercise of day after day allowing space for us both to be right would do me good, for sure.
Mr. Bach wrote: “One reason I will MIP is when I’m worried that the effort to report the problem formally is too much. In some bug tracking regimes, some processophiliac has decreed that there shall be 38 fields filled out, just so, for every problem.”
Yep. I tend to have a number of arguments with the issue-system police – clearly people with time on their hands. More honestly, I tend to have arguments with folks who have had a free hand to optimize the system for *one* set of concerns. A system so “complete” that nothing is missing often ends up such a PITA to use that *everything* is missing from it. Situations like the one you describe go on my stuff-to-fix list. Sometimes they get fixed first. Sometimes they wait while I take care of bigger problems first. I can think of at least these possible fixes:
– allocate more time so testers can enter all 38 fields for each ah-ha,
– make some of those entries default / automated so that only the exceptions require hand work (see the sketch after this list),
– lighten the heck up on the 38 mandatory fields,
– adjust the perception of how useful and necessary these fields are, so that *merely* entering 38 fields (in Swahili, engraved on granite with a file) seems like a small amount of work for something so important.
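As an illustration of the second fix, here is a hypothetical Python sketch; every field name is invented, and a real tracker would pull its defaults from its own environment:

```python
# Hypothetical sketch: default or automate most of the 38 fields so
# that a tester only types the exceptions.
import datetime
import getpass

def new_defect(summary, **overrides):
    """Pre-fill everything that can be computed; the tester supplies the rest."""
    record = {
        "reporter": getpass.getuser(),              # taken from the environment
        "date": datetime.date.today().isoformat(),  # automated
        "build": "unknown",                         # could be read from the CI system
        "component": "unclassified",                # triage can fill this in later
        "severity": "untriaged",
        "summary": summary,                         # the one field that must be typed
    }
    record.update(overrides)  # hand-entered exceptions override the defaults
    return record

# One field typed, one exception noted, the rest defaulted.
print(new_defect("Installer hangs on retry", severity="high"))
```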
The problem isn’t so much the 38 fields; it’s the insane mismatch between the agenda embodied in the automation and the working reality. What really leaves them stunned is when I go for this solution *as the QA guy* brought in to fix the mess in QA.
Mr. Bach wrote: “Then again, sometimes programmers feel judged and oppressed by metrics and/or sheer paperwork, and I’m trying to get them on my side (or have them accept me as on their side).”
You shouldn’t be alone in this. The engineering management needs to frame the relationship between the testing activity and the developers, for the developers. “They found buckets of stuff” is oppressive in some contexts, less so in others.
The engineering management also needs to have coaching available for the developers so they can work through their responses to receiving this kind of information, or work on improving how they do their work, or maybe both. Or the engineering management might need to be available for a chat that goes: “OK, guys, this looks like a lot of stuff. Are we making it harder than it has to be to do this work? And by ‘we’ I particularly mean: is there something I can do to help you guys do your jobs?”
Whatever the mechanism, you shouldn’t be alone in this. Not that I’m surprised.
Anurag says
Hello Mr. Bach,
I liked the idea of MIP for handling the issues I often observe while testing but that no one on the development team is willing to listen to. MIP can serve as a place to record all such issues for future reference.
In fact, I have received a lot of appreciation for bugs that the developers thought were very minor; I practically had to fight to convince them. Later, those bugs contributed to build crashes at the client site.
I will certainly try this idea on my project and share my experience with you.
Thanks
Dwain says
Hi Mr. Bach,
I couldn’t agree with you more; “mipping” is a necessary strategy in all projects. As software testers, we are obligated to report all bugs/defects (repeatable or not) and let the business/stakeholders decide their ultimate severity/priority, based on accurate and detailed information provided by the tester about the issue.
Michael Bolton’s recent StickyMinds article, “An Arsenal of Answers,” states the following: “I used to worry about not having enough time to test. Those worries disappeared when I recognized that as testers, we serve the project, rather than drive it. We keep providing service until the client is satisfied.”
My interpretation of serving the project is to identify all bugs and communicate them within the agreed-upon structure of the project (email, defect-reporting database/software, verbal). I believe it is also the QA practitioner’s responsibility to clearly define the potential impacts on the related components of the application/system/environment (based on the practitioner’s knowledge, experience, or additional exploratory testing). This gives the stakeholder the details required to make an informed decision.
Many projects may feel threatened by the many ‘small’ bugs that get reported, because of their impact on the reporting metrics, and be tempted to eliminate, filter, or hide them. But really, if the reporting is done right, these “mippings” should be documented for future reference (warranty, support, or future projects) and used by testers to expand their knowledge of the product and perform better tests.
Jason says
My team often has to strike a balance between two situations: one where the customer just doesn’t want us to test this or that, as in your examples, and another where we have coaxed the development team into letting us see a module early, even if integration has not yet been achieved.
In the former case, we prefer to log everything, and in cases where the dev team has some mystical issue with open defects (or, more likely, their managers have issues with holding open defects over their heads) we can assign the defect a status that keeps it handy but not “Open” (a sketch of that idea follows).
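Here is a minimal sketch of that idea in Python, with an invented status set; a “parked” state keeps the defect recorded and searchable without showing up in the open-defect count:

```python
# Hypothetical sketch: a "parked" status keeps a defect handy but not "Open".
from enum import Enum

class Status(Enum):
    OPEN = "open"
    PARKED = "parked"  # logged and visible, deliberately not in the open count
    CLOSED = "closed"

defects = [
    {"id": 1, "status": Status.OPEN,   "summary": "Crash on save"},
    {"id": 2, "status": Status.PARKED, "summary": "Installer icon misaligned"},
]

open_count = sum(1 for d in defects if d["status"] is Status.OPEN)
parked = [d for d in defects if d["status"] is Status.PARKED]
print(open_count, [d["summary"] for d in parked])  # 1 ['Installer icon misaligned']
```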
In the latter case, we don’t record bugs that are outside the scope of the module/part we are testing. Those things are incomplete and unready, so we don’t spend our time and theirs on things unfinished and outside our target. My customers view this as a service to them.
I wanted to mention one case where we do not record everything, even though in every other case I can think of, we would.