When a programmer builds a product, should he release it to the testers right away? Or should he test it himself to make sure that it is free of obvious bugs?
Many testers would advise the programmer to test the product himself, first. I have a different answer. My answer is: send me the product the moment it exists. I want to avoid creating barriers between testing and programming. I worry that anything that may cause the programmers to avoid working with me is toxic to rapid, excellent testing.
Of course, it’s possible to test the product without waiting to send it to the testers. For instance, a good set of automated unit tests as part of the build process would make the whole issue moot. Also, I wouldn’t mind if the programmer tested the product in parallel with me, if he wants to. But I don’t demand either of those things. They are a lot of work.
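A minimal sketch of what such a build-time unit test might look like (JUnit 4; the class and the function under test are hypothetical examples, not anyone's real code):

```java
import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical example: a unit test that runs automatically on every build.
public class PriceCalculatorTest {

    // Stand-in for real production code: a simple percentage discount.
    static int discounted(int cents, int percentOff) {
        return cents - (cents * percentOff) / 100;
    }

    @Test public void tenPercentOffADollar()   { assertEquals(90, discounted(100, 10)); }
    @Test public void zeroDiscountIsIdentity() { assertEquals(100, discounted(100, 0)); }
    @Test public void fullDiscountIsFree()     { assertEquals(0, discounted(100, 100)); }
}
```

If tests like these run on every build, and a failure breaks the build, then the build that reaches the tester has already passed that gate, with no manual step delaying the release.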
As a tester I understand that I am providing a service to a customer. One of my customers is the programmer. I try to present a customer service interface that makes the programmers happy I’m on the project.
I didn’t always feel this way. I came to this attitude after experiencing a few projects where I drew sharp lines in the sand, made lots of demands, and then discovered how difficult it is to do great testing without the enthusiastic cooperation of the people who create the product.
It wasn’t just malicious behavior, though. Some programmers, with the best of intentions, were delaying my test process by trying to test the product themselves, and fix every bug, before I even got my first look at it (like those people who hire house cleaners, and then clean their own houses before the professionals arrive).
Sometimes a product is so buggy that I can’t make much progress testing it. Even then, I want to have it. Every look I get at it helps me get better ideas for testing it, later on.
Sometimes the programmer already knows about the bugs that I find. Even then, I want to have it. I just make a deal with the programmers that I will report bugs informally until we reach an agreed upon milestone. Any bugs not fixed by that time get formally reported and tracked.
Sometimes the product is completely inoperable. Even then, I want to have it. Just by looking at its files and structures I might begin to get better ideas for testing it.
My basic heuristic is: if it exists, I want to test it. (The only exception is if I have something more important to do.)
My colleague Doug Hoffman has raised a concern about what management expects from testing. The earlier you get a product, the less likely you can make visible progress testing it, and testing may then be blamed for the apparently slow progress. Yes, that is a concern, but it’s a question of managing expectations. Hence, I manage them.
So, send me your huddled masses of code, yearning to be tested. I’ll take it from there.
Pascal says
Nice article, James.
How do you deal with bug reporting? One thing I’ve noticed is that developers don’t really like to read bug reports on areas they know are not quite yet ready for prime time.
insectivorous says
But, but, but…coders can’t test! Oh, sure, they’re cute and fuzzy and fun to watch when they try, but it’s kinda like watching somebody else’s kid do a finger-painting. It gets hung on the fridge, but nobody mistakes it for Warhol, let alone Van Gogh. “Looky, I made a test!” “That’s sweet, dear, thank you for testing that for me!”
Umm, is that too cynical? Nah. I’m even more cynical than that. When a coder finds a bug, and the spec isn’t crystal clear (stop laughing!), then it will claim that’s the correct behaviour and anything else would be a spec change, thus your fault, not its.
Coders don’t have the chops. When you get right down to it, a coder is basically demonstrating that something works. They’re trying to make it work. I’m trying to break it. Coders hate it when stuff breaks. I live to break stuff. That’s why coders can’t understand that testing is not boring. They think it’s all about doing what they think is testing, and they don’t know any better, and what they do IS boring, not least because it’s so ineffective.
But the code is their baby. I can appreciate, in a distant and intellectual sense, that they might (for some strange reason) not appreciate seeing their baby gutted like a trout, mangled, and hung out to dry.
Alas.
Alackaday.
How lamentable.
We come from utterly different places, with utterly different attitudes and skills and approaches. (And we have fangs). How to explain that to a cute & fuzzy coder?
Once upon a time, a company built a big, complicated and expensive widget. It broke, and the king despaired. So the company built another BC&EW, and sent it to the Knave of Hearts for testing. The Knave found many defects in many different parts of the BC&EW, so the company had to toss it out. They started to build yet another BC&EW, but their venture capital ran out and they were placed in Ch.11, and everybody’s stock options turned into Kleenex. But at the creditors’ meeting, the Knave said, “If you let me test it while it’s big, but not complicated, I can find a lot of problems early. If you let me test it again when it’s big and complicated but not expensive yet, I can find even more problems. Then you can fix those problems before it gets expensive, and maybe you’ll stay out of Ch.7.” The bigwigs and the midwigs hemmed and hawed while they met and conferred, and they decided to let the Knave test early and often and whenever he wanted to test something. So he tested the big pieces, and the complicated pieces, and even the expensive pieces. Then he tested the big pieces with the complicated pieces, and the expensive pieces with the big pieces, and the complicated pieces with the expensive ones, and then he tested the BC&EW all together, and it worked per spec. The king was pleased.
(Coders like stories like that just before bedtime, especially if they have happy endings and the company gets discharged so their stock options are actually worth something again.)
There are two morals to this story.
The first moral is, “everything’s always ready to test when the Knave wants to test it, especially when it’s not finished yet”.
The second moral is, “the documentation still sucks.”
Dumitru Corobceanu says
Hello,
In my opinion, a programmer should test his work before releasing a build. Maybe not for obvious bugs, but a small “smoke test,” I think, is great before shipping the product to testers. Deploying a product takes valuable time, and this time is pretty much wasted if the application cannot even be started.
I agree with the statement “send me the product the moment it exists” for the first product release. But when I get a second or third version, when I know the product and the build contains just bug fixes, I would rather test the existing one than waste valuable time on deploying/installing the new product.
P.S. I think you covered this with the statement “The only exception is if I have something more important to do”; am I right?
james says
Hi Pascal,
I use a protocol called “MIP’ing” a bug. MIP stands for “mention in passing”. When I MIP a bug I report it informally, and track it informally. If I’m testing very early, I tend to MIP everything. I just mention it to the programmer, perhaps by email.
This protocol allows me to report problems without creating paperwork and resentment.
— James
james says
Hi Dumitru,
I think it’s great when programmers test their own work, too. But I don’t want to make that a requirement, because it just makes me look like I want the programmers to do my work for me. The distinction between smoke testing and all other testing is not clear enough.
We see this confusion in Agile projects, where many programmers believe that their unit tests completely remove the need for independent testing.
— James
Kirk Halgren says
In my first job after college, I worked for an aviation instrument manufacturer, where I learned a lot about testing. Their quality system assumed that everyone has off days and blind spots, so they had the production-floor technician run the same test that was then rerun by the QC tester, the idea being that the odds of both people making the same mistake in running the test were extremely low. Of course, with expensive military aircraft and pilots’ lives in the balance, the cost/benefit equation makes lots of testing economical.
I loved your book, Secrets of a Buccaneer Scholar.
[James’ Reply: Thanks, Kirk.]
Chris Miller says
I think you need to find a happy medium. The programmer should do the basic sanity checking; otherwise you waste a lot of cycles where the tester is finding obvious bugs that should have been caught first. I like the idea of “MIP’ing”; we do that when we give our testers code that’s not fully baked. You also have to factor in personalities. I have worked with testers who want the code “fresh out of the oven”, and I have worked with some who don’t want it until all of the “i’s” have been dotted.
As a developer, I prefer to give the testers frequent builds where the code works. I want the testers to validate that it works and find where it breaks.
[James’ Reply: I don’t know how to find that medium without incurring the unacceptable risk that I will discourage the programmer from working wholeheartedly with me. In my experience, most developers I’ve worked with hold on to the product too long, and the medicine they need is that I don’t mind getting a half-baked product.]
Kevin White says
Having done development work, I can’t imagine not trying to test what I’d done. How else am I going to know it worked right? Wait for someone else to prove it? That really isn’t going to fly.
Being a tester (although I don’t think I’m anywhere near James’ level), I have to say I love getting my hands on ‘pre-alpha’ software. I’m endlessly curious about new things, and there’s sublime satisfaction in being handed a piece of software and finding five bugs in five minutes, while just trying to click ‘File -> Open’. I find myself becoming relentlessly paranoid when I work with a program for hours and I can’t find anything wrong.
More eyeballs are always good. “You forgot a comma in this dialog box.” “Am I the only one who thinks it’s a bad idea to put ‘Delete’ next to ‘Rename’?” Etc.
[James’ Reply: If you can indulge your desire to test your own work without delaying the work of the independent tester downstream from you, then by all means do so. I’m speaking from my point of view as that tester. I just want to see your product, please.
This is kind of a dilemma, isn’t it? Television doctors will tell you that you should see a doctor when you have this or that symptom, but mostly when I have done so, the doctor at the hospital seems to think I’m a hypochondriac. Therefore, I now do what most people do: I go to the doctor when I am not one fraction less than half-dead. This is not conducive to good health management.
I do not want to send a mixed signal. I’m a tester. Let me test. It’s my responsibility not to ridicule you or in any way discourage you from showing me your work.]
Jamie says
How important is “end-to-end” testing, testing that simulates “actual” usage (as if you can ever accurately characterize the way an app is going to be used in production 🙂 )?
Does anybody look at an application’s performance from a “holistic” perspective in QA/Test? That is, how do the constituent parts conspire to produce some eventual performance/scalability limitation, or are things (code functionality/performance, app server performance, db server performance, etc.) just considered on an individual basis?
[James’ Reply: End-to-end testing is a staple of the independent testing process. Few indeed are the contexts in which end-to-end testing is not helpful.]
Rhys23 says
>>We come from utterly different places, with utterly different attitudes and skills and approaches. (And we have fangs). How to explain that to a cute & fuzzy coder?
LOL. As a ‘cute & fuzzy coder’ I couldn’t agree more. I write my unit tests. I click the ui. I monitor, I debug and I profile. I am not a tester. I suck at testing. I’m just not mean enough to find fault in all others (Are good testers Borderline?). IMHO a good tester is indispensable to a good developer.
As far as letting the tester have it early on and often... absolutely. This is the approach that we took on our latest project, and it has paid off tenfold. In fact, we have (and some may cringe here) set up the testers with an Eclipse install and all the source, unit tests, etc. Testers can rebuild their environment with the simple click of an Ant task.
Also, we (the developers) have integrated the QA Ruby scripts into our own environment. Testers are able to record scenarios, and then developers can run these scenarios to have their environment set up correctly...
My 2C
Rhys
Jeffrey Fredrick says
It seems easy to give both Chris and James what they want: the developer is allowed to do all the testing they want before giving the build to the tester... as long as that testing is automated and part of the build.
If it is really important to you, then create the automated test; otherwise don’t get in the way of the testers.
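A minimal sketch of Jeffrey’s idea: the developer’s sanity check captured as an automated test that gates every build (JUnit 4 again; the startup check is a hypothetical stand-in, not any real API):

```java
import org.junit.Test;
import static org.junit.Assert.*;

// A build-gating smoke test: whatever checking the developer insists on
// happens here, automatically, before the build ever reaches a tester.
public class SmokeTest {

    // Hypothetical stand-in for booting the application headlessly
    // and reporting whether it came up at all.
    static boolean applicationStarts() {
        return true; // e.g., launch the app and poll a readiness flag
    }

    @Test
    public void productStartsAtAll() {
        assertTrue("do not ship the build to test if the app cannot even start",
                applicationStarts());
    }
}
```

Wired into the build, this gives the developer the pre-release checking he wants without adding a manual step that delays the tester.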
Close says
It all comes down to the plan for the project.
Let’s assume C (the Coder) discusses with T (the Tester) over a cuppa cafe latte that C will give T the half-baked cake to taste (pun intended). T takes it happily and tells C that the sugar content is not enough for the sweet-toothed customer. They come up with the idea of adding more sweetness to the icing cream. The chief baker didn’t want the icing to be sweeter than he prepared; this icing cream had to be used for other cakes, after all. The chief scolded C and T, not for less sugar but for wasting time over the coffee machine.
Later the chief baker sold it as a low-cal cake, but that’s a different story.
It all comes down to the plan for the project.
Nguyen says
Developers need to write unit tests because that is the only way (s)he can improve the design via refactoring. Without comprehensive unit test cases, (s)he cannot observe the effect of a code change immediately, instead having to wait until later, and that creates a burden on the design improvement process. Developers need to perform a test over his newly written code (by executing it) in order to make sure that it works as the code intends to (which may not be what the end-users would like it to be), not to mention things that testers can hardly think of without looking at the code.
[James’ Reply: As a programmer myself, I am confused about why you believe this. I’m seeing words like “only” and “cannot” and “comprehensive”, etc., used in such a way as to imply that there is one and only one way to write high quality software. Have you tried doing it in other ways? Are you aware of the many styles and approaches that are available to you? Here are some honest assertions of my own:
My understanding of refactoring is adjusting software design without changing software functionality. I can refactor code without any testing at all. Testing is certainly not a prerequisite for refactoring. (I’m not saying I would prefer to do it without testing, but neither is there any specific testing requirement for the process.)
Show me a “comprehensive” set of unit tests, and I’ll show you ten more tests, off the top of my head, that aren’t in your test suite. I’ll show you a hundred more.
A developer doesn’t need to do any testing, at all, if someone like me is working directly with him. Again, I’m not saying that developer testing is bad. Personally, I appreciate developer testing, but I consider it optional. To make it a demand or expectation endangers the very idea of skilled independent testing.]
Testers are people who ensure that the functionality written by coders really meets end-users’ needs (which may be different from what coders think they have to code). In addition, testers have to ensure that all the non-functional requirements, such as security, performance, etc., are met properly.
[James’ Reply: That looks to me like a weak idea of what testers are. I prefer a more potent idea.]
That said, IMO, in good software projects, both coders and testers must be responsible for testing.
[James’ Reply: Are you also prepared to say that coders and testers are both responsible for coding? If not, why are coders supposed to encroach on my job when I’m not allowed to encroach on their job? If so, aren’t you really saying that you want everyone to be a programmer, and some programmers do more testing than other programmers?
The trap, here, Nguyen, and it’s a pretty big trap, is that expecting coders to test either requires coders to develop testing skills, or it requires coders to test incompetently. In my experience so far, coders who test do so without much competence. I don’t mind that, as long as they don’t start lecturing those of us who study testing about how it should be done.]
Nick says
“Many testers would advise the programmer to test the product himself, first. I have a different answer. My answer is: send me the product the moment it exists.”
So what, you want your developers to email you each time they write a line of code?
[James’ Reply: I love a comment that gets to the meat of the argument! Thank you!
The answer to your question is probably not. (Of course, if you write one line of code per day and you do one build per day, then I would like that build. You don’t necessarily have to write an email.) What I mean by “exists” is that it exists in a form that I could reasonably work with and that you can reasonably part with. For one thing, it should compile and link. For another, the process of releasing to me and working with me should not cause you to interrupt your flow of thinking. You and I would work together to figure out what is reasonable. You will not hear me ask you to delay until you’ve tested it, however.]
Aside from wasting the developer’s time sending out emails every few minutes, there is no use ‘testing’ code which has nothing in common with what will be shipped (which is what many first builds are).
[James’ Reply: I’m not suggesting that you waste your time. Please don’t waste your time.
However, there is considerable use to “testing” code that has nothing in common with what will be shipped. I suggest that you should not try to decide for me what is useful to me as a tester, any more than I would presume to second guess your development process. But I’m happy to discuss with you the value that I get out of seeing things early. Our specific discussions, on a specific project, would trump any heuristic.]
If you really have nothing better to do, find some education to go through or test other parts of the program (even if you think you have found all the bugs, I’m certain you missed some considering how many I’ve found that our test teams have missed). In fact, if developers send off code without running the most basic tests first, the most likely consequence is that they will break the entire product and you won’t be able to test a thing. Plus it is much more economical for developers to find bugs during development than for them to wait for the testers to find it.
[James’ Reply: I don’t find that it is necessarily more economical for developers to test, unless you are talking only about the simplest kinds of tests. One of the problems with developers testing is that many of them are bad at it, and most of them are uninterested. When I worked as a production coder, in the early 80’s, I certainly was both of those things. Furthermore, I want to recommend, if you are a developer working with me, that you and I test the product together, at the earliest reasonable time (which will be very early indeed), on your own system. When I have done that with a developer, it has been fabulously productive.]
I’m not saying developers should shoulder the sole responsibility for testing; I don’t think anyone is arguing that (at least for a commercial product; if I’m doing something on my own, I’m not going to go out and hire a test team to help me). Sure, a second set of eyes is needed to make sure the damn thing works like it is supposed to, and not just how the developer thinks it should (in fact, that includes other developers as well; that is why we have code reviews). And no, developers should not pass off their code to the test team the day before the product’s release and expect the test team to just rubber-stamp it. But that doesn’t mean the developer should pass off his code without first testing it at all. He shares some responsibility for getting it to work as well.
[James’ Reply: You can take on all the responsibility you want. However, if you and I are working together, I am going to make it my mission to charm, cajole, bribe, and otherwise do whatever I can do to relax your iron grip on your baby so that I can do my job and help you do yours.
My suggestion may not work for you if you are working with low competence testers. Frankly, I wasn’t aiming my post at developers, so much as at other testers. I’m trying to convince other testers to change their attitudes.]
Chetty says
I am a strong believer in the idea that programmers should test their code before releasing it to a tester. The question is what you consider a “test”.
Clearly, the context of a test done by a developer and a tester is different.
What I expect of a programmer is to do unit testing. By that I mean the program should, of course, link and compile successfully, and do what his/her change is supposed to do. That is when I would consider that the code “exists” and that I can “reasonably” work with it.
If all that a programmer does is ensure that the code compiles successfully and hand it over to the tester, well, the tester’s life will be a nightmare.
[James’ Reply: If my life is a nightmare, then I will talk it over with the programmer and we’ll fix the problem. Working together on specifics trumps any general heuristic I can utter. However, I find myself more worried about the nightmare of a programmer whose reluctance to share his work leaves me with too little time to do my work.]
To me, a tester should look at testing in a bigger picture…as a system with all the relationships and integrations with other systems. I would not expect a developer to do this kind of testing for sure.
I guess it will help if you define what you mean by a “test”.
You have mentioned being of service to your customer, the developer. How exactly are you helping the developer? For all you know, the programmer will get even more frustrated when a tester starts looking at a half-baked product. In fact, I have done this myself, out of curiosity to see what the product looks like before it is handed over to me. The moment I communicate my observations (informal or not), more often than not I get “Can’t you just wait till I hand it over to you?”.
[James’ Reply: I appreciate this line of thinking. It’s an important consideration. Part of the answer lies in the fact that finding bugs is not the only thing I do with a product. As I wrote in my original post, even a completely inoperable product can help me. It can help me prepare for highly productive testing, later on. Another part of the answer is that I adjust the way I work so that I don’t frustrate the programmer. You may not realize that I also am a programmer. I have some empathy for the difficulties of software development. My focus is on serving the programmer and my other clients, as well, by working in such a way that the programmer’s productivity is maximized. So, if what I’m doing isn’t helping, then I adjust what I’m doing.]
Of course it is going to be difficult without enthusiastic cooperation of the people who developed the product. However offering your services to test it before they have a chance will not get you what you want.
In my experience, I most often get into confrontations with developers when the testers, for whatever reason, do not test a product the way it should be tested, and as a result create a bunch of defect reports that a developer has to analyse, only to find that there is nothing wrong with the code.
No, the developer would not prefer a tester to test before he/she is “done” with it.
So what if your testing process is delayed? You can always do something else until a “reasonable” product is delivered to you. If not, inform management of the reasons why your testing is being delayed.
Oh well…just my 2 cents.
Chetty
p.s.: BTW, I did see your video on becoming testing experts. It was a good presentation, and I am going to recommend that my team members watch it as well.
[James’ Reply: Thanks for the plug. Bear in mind that what I wrote is not speculation; it’s the way that I can and have worked with programmers, for a number of years. I push to get the software as early as reasonable, and that is generally earlier than most programmers are comfortable with, at first. As I work with them, I generally can make them increasingly comfortable with earlier deliveries. This has to do with charm and diplomacy and establishing my technical credentials.]
Dave Churchville says
Interesting post, and one I happen to resonate with.
I’ve posted a further exploration of this from an Agile development perspective at:
http://www.extremeplanner.com/blog
kumar says
It’s true that testers need to test their product, but again, we should also achieve enough isolation so that the real-world bugs are exposed. I have personally, as a test engineer, found the following benefits from completely isolating testing from the development environment:
1. The real-world bugs are exposed.
2. Unseen and unvisualised dimensions of the product are exposed; this can lead to reverting to the design stage to make better design decisions.
3. The product gets good marketing.
4. It leads to healthy criticism and discussion.
5. It gives developers a chance to think from a wider and wiser perspective.
At the same time, unit testing should be left to developers; the team and management together must be able to decide what needs to go to developers and what must be left to test engineers (a substantial part of end-user testing must be left to test engineers).
In the case of parallel testing, it’s a good idea for both test engineers and developers to test the product together, as this saves a lot of time, effort, and cost.
[James’ Reply: I don’t follow your reasoning on this. But anyway, have you considered the downside of total independence? There’s always a tradeoff, don’t you think?]
Alejandro Ramirez says
I agree with you James.
This topic creates a gray area for delimiting the end of development testing and the start of system testing.
We need to make sure that the acceptance criteria is clearly defined for everyone throughout our development effort so that the developers not only test the technicalities and architectural side of code (conformance to specifications), but also that their code implements the desired functionality for the end users (conformance to user needs and wants).
[James’ Reply: I’m interested in where this need comes from, for you, since I have not experienced a context where this need exists. I find that acceptance criteria are not a prerequisite for testing, but rather that testing is a process of refining acceptance criteria. By the end of testing, ideally, we know what we are accepting and why we are accepting it.
Also, as I’ve already written, I don’t particularly need the programmers to do any testing. I appreciate it when they do, just as a paramedic appreciates when people on the scene render CPR to a heart attack victim, but don’t delay calling me in because you feel like you should test. Don’t delay. That’s my message to the programmers I work with.]
This knowledge can be leveraged by jointly reviewing all documents directly derived from requirements: use cases, design specifications, functional specifications, technical specifications, test cases, and of course, the requirements specification document itself.
Developers must be completely aware of what is expected of their code, and to achieve this, cross-functional meetings can be organized before and after every phase of the SDLC to: prevent defect injection, identify defect predictability patterns, and minimize heavy reliance on quality control to find bugs.
By doing this, we are empowering the creators of every deliverable in the SDLC with the knowledge to produce the best quality interim products in terms of compliance to standards, and customer satisfaction (and yes, that includes code).
[James’ Reply: In general, I don’t find that quality software comes from extensive planning and documentation. I think that’s because so much of the extensive planning is bad planning, and so much documentation is bad documentation. I suppose, like cocaine, it’s possible to indulge in that stuff and not get in trouble, but mostly I just see people get in trouble with it. I prefer a more agile and incremental approach to achieve excellent quality software.]
We are together in this software development thing; let everybody do what they do best and remember that everybody else can learn something from it.
SB says
I was spoiled by the last company I worked for, as we had great testers there. They were great to work with and very understanding. Unfortunately, now we have testers who are so bad it is better to test it myself. They are too busy playing solitaire to find even the obvious bugs….
[James’ Reply: On behalf of skilled and diligent testers everywhere, please accept my apologies. Indeed, a skill-free tester is not able to do much with a partly-baked product. My heuristic of instant release applies to me, testers who work for me, and testers who are like me, but not to every tester. It is something I’m suggesting that testers say to their programmer counterparts; something I hope we aspire to, in any case. I’d like to see no distinction between the testing phase and the programming phase. The whole concept of separate phases seems to me based on a myopic and defensive idea of what programming and testing can be.]
Toby says
James
The catch with your way of working is that it will only work with highly skilled testers who at the same time know a lot about programming. That leaves out some 99 percent of the testers I have come in contact with, including myself. Yes, I did some programming a long time ago, but I do not know enough to throw myself at half-baked products. That would, for me, be spending my time badly. However, I would like to share what works for me. “My process:”
Early involvement in the project: I start reading any piece of information that exists so far on the project and start analysing. Sometimes I visualise it by creating some sort of model. Then I start asking questions about what is unclear to me. This way a lot of missing or wrong requirements are noticed and updated. Sometimes I read through the program spec. It really depends on how readable it is for me. Formulas and logic I can understand; language specifics I leave alone. I do find some stuff, mostly due to sloppy work.
The next thing is to volunteer to create test cases for each module, program, or whatever piece of code there is, on a level that I can understand. Few programmers do any “real testing” other than the most obvious things. Then I hand these test cases over to the programmers so they can run them in THEIR own environment when THEY feel ready to do so. This way nobody has to know whatever embarrassing bugs they put in the code; no defect reports are written, no statistics saved. However, I know that they have actually run some pretty good tests AND corrected the defects they found, which means that when they hand over their code to the test team, we will find fewer simple bugs and are allowed to concentrate on higher-level tests. They look good, testers are happy, management is thrilled.
Then I create test cases for higher levels, such as system and acceptance test. Some are fairly detailed, but I tend to write down less information nowadays. Fully detailed test cases can be handed over to customers, testers with low skills, and also the developers themselves. Test cases with less detail are run by me or other highly skilled testers who do not want or need too much control. I think they are closer to what you call “charters” than the way they are described in the books you dislike. Developers often run some of the system test cases just to ensure that they will continue looking good, not creating broken builds, etc. Like a smoke test on each build. How does it work? Great! The test team finds fewer bugs, but since we know it is because there are fewer to be found, we are still happy. Customers find almost no bugs at all in their acceptance test, and if they find any, they are usually due to bad requirements…
So I too feel the need to cooperate with developers, using my superior skills in testing together with their superior skills in programming. Like you say yourself, “testers must add value”. I think that your way of working works for you and a few other “bilingual” tester-developers, but my way will work for a larger part (though not all) of the current tester community. Or maybe it can be described on a scale like exploratory-scripted, where you are on the far end of the scale, I am closer to the middle, and the certification people and other famous authors are on the other end?
/Toby
PS: How did you spend your spare time before you had comments turned on for your blog? 🙂
[James’ Reply: If all comments were like this, I would have more spare time. Good points.]
Dave Nicolette says
Lots of interesting perspectives here. Seems to me some of the discussion overlooks the context of testing. Let’s say you are running an agile development project. There’s going to be some up front planning before the iterative development starts. As part of the release process, there’s going to be (in any corporate IT environment of appreciable size, anyway) a formal testing phase. Iterative development occurs in between those phases.
Some of the discussion seems to mix up the kind of testing you do during the development phase and the kind of testing you do during the formal testing phase. Agile development teams may include testing specialists, but part of the idea of agile development is that everyone on the team is just “the team”, and you don’t want to subdivide the work along the lines of professional specializations. Having a testing specialist on the team is good because he/she can infuse the tester’s perspective into the development process. Developers can learn to write more-testable code, and to think a little bit more like testers. Testers, for their part, can pick up some development skills and gain an appreciation for what, how, and why certain things can go wrong, giving them good information for creating more effective tests. But none of that is the same thing as the formal, after-the-fact testing phase.
[James’ Reply: I’m not convinced that what you are describing is an agile project, Dave. You may be describing an “Agile” project, but that isn’t the same thing. Agility (I’m talking about the English word, not the word made up by some programming consultants that is spelled and pronounced the same) may be served by many approaches and ideas. Personally, I find the “no specialists” rule bizarre. It’s just another way of saying that a homogenous workforce is better than a diversified workforce. It denies education and temperament. It denies experience. I urge you to rethink it.
I like the idea of working as a team. “No specialists” doesn’t mean that. My experience discussing this with Agilists is that “no specialists”, in practice, means that only programming skills are valued.]
At our company, the real value-add of the testing group comes during the formal testing phase that precedes a production release. They are equipped to carry out testing at a level the developers are not. For example, they can test the new or changed solution in the context of a shared server environment alongside the other applications that “live” in that environment. A development team usually is not set up to perform that kind of testing. Their development environment doesn’t mirror the production environment completely, and they don’t have other applications installed there. The testing group is also best equipped to perform comprehensive system testing for performance, maximum load, security, and other factors. Testers also have an objective perspective about the application and won’t make any assumptions about what “should” work.
[James’ Reply: I wonder what role testing skill plays in this. You don’t seem to be talking about it.]
To enable the testing group to do its job, it’s incumbent on development teams to deliver code that will, at least, run. When development teams deliver code that hasn’t even been unit tested, quite often the testing group has to spend all the time allocated in the project schedule to running unit, integration, and functional tests to expose defects that really should not exist by the time they receive the code. They need to spend their time doing the kind of testing the development team just can’t do. Otherwise they’ll run out of time before they get around to the level of testing they’re really supposed to be doing.
[James’ Reply: It’s great when it runs. However, I don’t need running code to do certain aspects of my work. I don’t want you to delay giving me access to the product because you are concerned about running code. I think I have a different view of my job than you do. If we worked together, I would try to open your eyes to the things I can do with a product that don’t involve running it. I don’t expect you to know much about testing. But I would expect you not to presume to tell me, a tester, what I need in order to do my job. It’s for me to tell you that.]
However you want to divvy up the work is fine, as long as everyone is engaged in a constructive way and doesn’t have a confrontational attitude. I think this is far more important than specific procedures or techniques for development or testing. James’ original post states, “I worry that anything that may cause the programmers to avoid working with me is toxic to rapid, excellent testing.” A negative attitude toward others who are working in different roles than yourself certainly qualifies as “toxic.” Insectivorous’ comments illustrate this point very succinctly. He doesn’t sound like a person I would want to work with, whether as a tester or a developer or in any other role. People with that sort of attitude can destroy a project, whether they are working as developers, testers, managers, or whatever.
[James’ Reply: I think Insectivorous was not talking to you, a programmer. He was talking to his fellow testers. The spirit of Insectivorous’ words, I think, has to do with recognizing what a testing specialist brings to the table. You seem like a nice guy, Dave, so I’d like you to consider that your views on testing may come across as patronizing to those of us who like being testers and take our testing skills seriously. I bet that is not your intention.
In case you didn’t know, I also am a programmer. I am not a programmer-philosopher, though, and I don’t write about how programmers should do their work. I don’t feel qualified to do that. I’m surprised at how many programmers feel qualified to tell me how to do testing, not having ever studied the subject, and having no experience beyond intuitive plunking. I’m not sure if you are one of those; I hope you’re not.]
Jos Berends says
James,
I agree that there is little use in postponing a product release to test. As long as there is agreement over how to work on or work around unfinished parts.
However, I do wonder what kind of testing you expect to be performed in a project.
[James’ Reply: Guess!]
You say that you do not demand that a programmer do any unit testing. Does this mean that it is sufficient to perform high-level functional testing (both black- and white-box testing)?
[James’ Reply: What is sufficient depends on the situation. It may be sufficient to do no testing at all. I don’t prejudge what is needed. I read the situation and solve the problems that I find there.]
Or should code-level testing of e.g. modules, interfaces etc. also be performed, though not by the programmer but by the tester?
The latter would probably demand quite advanced programming skills from a tester.
[James’ Reply: I have experimented with independent unit-level whitebox testing. The experiment convinced me that it would be very hard to do that productively. It isn’t just a programming skills issue, although there is that. It’s also an issue of keeping the tests up to date with the code, learning the code, and staying coordinated with the programmer. In general, I wouldn’t recommend independent unit-level whitebox testing, but perhaps there are contexts where someone has made it work.]
Stan James says
I’m late to this thread, but just found your site. I love your attitude. I find it challenging some ideas that I held. For example, I’ve been known to say some set of requirements and some set of test cases contain exactly the same information, so we could define a development task by the tests it must pass. I wouldn’t claim that’s a sufficient set of test cases, and you’ve emphasized that testing beyond or without the requirements or any test cases is something humans can do with great effect. Still, is that set of tests useful? Something you might want the developer to do?
[James’ Reply: Thanks, Jim (or is it Stan? James?). You raise an interesting issue that has a strong role in the philosophy of knowledge: How to distinguish, if at all, between the instrument by which we recognize a “truth” and truth itself?
A short answer is: it gets easier when you consider evidence and risk, rather than truth.
When sorting this out, I find it useful to imagine testing a square root function. Let’s say you have a square root function for a 32-bit floating point value. The specification for square roots is clear, and you even have a heuristic oracle, in the form of re-multiplying the square root to arrive at the original value (you still have to deal with rounding errors, but there are algorithms for handling that).
Does a “test suite” that consists of a single test of, say, taking the square root of 4, contain exactly the same information as the definition of square root itself? Clearly it doesn’t, unless square roots are just the same as dividing by 2. Will passing that one test be equivalent to satisfying the requirement of performing square roots? Clearly it won’t. Passing that test suite only tells us that the product is capable of performing square roots in some situation. It’s a question of can vs. will. We will know it can (in some situation); we cannot know it will (in every situation that matters).
If you imagine adding tests to that test suite, the evidence of capability that we are collecting speaks more and more to the question of reliability. The trick in testing is to gather enough of the right kind of evidence to make a well grounded leap of inference from capability to reliability. In the square root case, we must run 2 to the 32 square root cases to have done nearly exhaustive testing (not really exhaustive, but pretty close in some respects). We will then have a pretty strong idea of the reliability of that algorithm, at least with respect to direct input data. We might get away with running fewer tests, depending on the risks we care about.
Problems with specification-by-example and subsequent confusion of examples with tests include accidental inclusion of irrelevant details in the spec and accidental omission of important details. I believe examples are powerful and in most cases necessary, but it’s important to remember that an example is an instance, whereas our software exists to serve vast families of instances.
Quality is equivalent to passed tests only when the tests represent the only input, the only sequence of input, the only combinations of states, the only platform, the only user, the only…{add every other factor here} that you or anyone else who matters cares about.
Coming back to your question: an excellent test process can conceivably stand in for a requirements spec. However, we must always recognize that our testing is focusing on some things and not others. There is never a perfect alignment. The excellent tester strives to understand the real requirements and seek out the real risk, while remaining wary of complacency and lax assumptions.]
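To make the square-root discussion concrete, here is a sketch of the re-multiplication oracle James describes, enumerating all 2^32 bit patterns of a 32-bit float. The tolerance bound and the special-case handling are my assumptions, not part of his reply, and the full loop takes a while to run:

```java
public class SqrtOracleCheck {

    // Assumed relative tolerance for rounding error; the right bound is a judgment call.
    static final float REL_TOL = 1e-6f;

    static boolean sqrtLooksRight(float x) {
        float r = (float) Math.sqrt(x);
        if (Float.isNaN(x) || x < 0f) return Float.isNaN(r);          // sqrt of NaN/negative is NaN
        if (Float.isInfinite(x)) return r == Float.POSITIVE_INFINITY; // sqrt(+inf) is +inf
        float back = r * r;                      // the heuristic oracle: re-multiply
        return Math.abs(back - x) <= REL_TOL * Math.max(x, 1.0f);
    }

    public static void main(String[] args) {
        long disagreements = 0;
        // "Nearly exhaustive" testing: every 32-bit pattern, about 4 billion cases.
        for (long bits = 0; bits <= 0xFFFFFFFFL; bits++) {
            if (!sqrtLooksRight(Float.intBitsToFloat((int) bits))) disagreements++;
        }
        System.out.println("oracle disagreements: " + disagreements);
    }
}
```

Even this near-exhaustive run only speaks to direct input data; sequences, states, and platforms remain untested, which is James’ point about capability versus reliability.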
Stan James says
Thanks, that makes great sense. I have found the “by example” technique to shine a light on some ambiguities in the customer’s textual description of the requirement. A field that “shall be 8 characters” was really a max of 8 and the test example of “7 characters fails” got the right people’s attention. There are many ways to seek out such ambiguities, none of them 100% reliable. We can hope the combination of several will get most of them.
There is certainly a need for judgement in probing the risks without making an overwhelming number of tests. I read your posts to say those are important human tester skills, along with an expectation that we likely won’t define all the most interesting tests up front.
[James’ Reply: Yes, you seem to get it. Thanks. BTW, the best book on requirements I have seen is Exploring Requirements: Quality Before Design, by Gause and Weinberg. It deals with the need to use a variety of methods to get at the real issues.]
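A sketch of the boundary probing behind Stan’s “8 characters” example; the validation rule here is a hypothetical stand-in for the real field, written the way the customer finally clarified it (at most 8):

```java
import org.junit.Test;
import static org.junit.Assert.*;

// Boundary tests around the ambiguous "shall be 8 characters" requirement.
public class FieldLengthBoundaryTest {

    // Hypothetical rule under test, per the clarified reading: at most 8 characters.
    static boolean validate(String s) {
        return s != null && s.length() <= 8;
    }

    @Test public void sevenCharsAccepted() { assertTrue(validate("abcdefg")); }    // max - 1
    @Test public void eightCharsAccepted() { assertTrue(validate("abcdefgh")); }   // exactly max
    @Test public void nineCharsRejected()  { assertFalse(validate("abcdefghi")); } // max + 1
}
```

Had the requirement meant “exactly 8,” the seven-character case would flip from pass to fail, which is precisely the ambiguity the example exposed.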
Chris says
I agree, all things considered, I probably want the software sooner rather than later.
However, with some developers, this can encourage an attitude that the testers are there to “clean up” after the developers. Any bugs that make it through to customers are the tester’s fault because they didn’t catch them. It seems to erode the shared responsibility for quality by skewing the responsibility toward testing rather than balancing it with development. I’ve seen this happen a number of times.
Every time I get a build with new functionality, say a new dialog window, and I press the one button on the dialog and it crashes the software, I wonder about this issue. Am I encouraging the developer in a way that diminishes their responsibility for bugs and hurts the overall quality effort?
[James’ Reply: These are common problems, and they matter. But I find that they can be managed easily without denying me what I need to get my own work done.
Blaming testers for bugs is literally laughable. I don’t take it seriously. It doesn’t happen, though, because I don’t accept the role of quality gatekeeper. Testers must avoid that trap by being clear with everyone what their mission is: discover and report relevant information, NOT make the product work or prove that the product works.
We want to encourage programmers to have a high standard in their work by the time that it gets out the door. This doesn’t mean we isolate them until they guarantee it’s a clean product. We are working with them to help them make it clean. Of course working with a team means that people will lean on each other. But we also learn from each other. Leaning and learning are not bad things.]
Bart Piotrowski says
“should he test it himself to make sure that it is free of obvious bugs?”
The answer to this is a resounding YES, in the form of UNIT TESTING. As far as how much unit testing is enough unit testing… I have no clue. I can say with 100% confidence that the answer is greater than zero.
And this is coming from a developer who finds the author’s eagerness to take responsibility for quality highly appealing.
Sagar Shende says
James, what will you suggest in 2020 (the era of DevOps), when people are looking to cut down testing teams and expecting developers to do their own testing?
[James’ Reply: I suggest learning about testing and getting good at it. People can look to cut down testing without knowing anything about it, based on hand waving about automation. Obviously I think that’s a foolish path, but fools can get rich before their companies implode, so I’m not entirely surprised that foolishness is popular.]