[Note: This post is here only to serve as a historical example of how I used to speak about “automated testing.” My language has evolved. The sentiment of this post is still valid, but I have become more careful– and I think more professional– in my use of terms.]
I enjoy using tools to support my testing. As a former production coder, automated tests can be a refreshing respite from the relatively imponderable world of product analysis and heuristic test design (I solve sudoku puzzles for the same reason). You know, the first tests I ever wrote were automated. I didn’t even distinguish between automated and manual tests for the first couple of years of my career.
Also for the first six years, or so, I had no way to articulate the role of skill in testing. Looking back, I remember making a lot of notes, reading a lot of books, and having a feeling of struggling to wake up. Not until 1993 did my eyes start to open.
My understanding of cognitive skills of testing and my understanding of test automation are linked, so it was some years before I came to understand what I now propose as the first rule of test automation:
Test Automation Rule #1: A good manual test cannot be automated.
No good manual test has ever been automated, nor ever will be, unless and until the technology to duplicate human brains becomes available. Well, wait, let me check the Wired magazine newsfeed… Nope, still no human brain scanner/emulators.
(Please, before you all write comments about the importance and power of automated testing, read a little bit further.)
It is certainly possible to create a powerful and useful automated test. That test, however, will never have been a good manual test. If you then read and hand-execute the code– if you do exactly what it tells you– then congratulations, you will have performed a poor manual test.
Automation rule #1 is based on the fact that humans have the ability to do things, notice things, and analyze things that computers cannot. This is true even of “unskilled” testers. We all know this, but just in case, I sprinkle exercises to demonstrate this fact throughout my testing classes. I give students products to test that have no specifications. They are able to report many interesting bugs in these products without any instructions from me, or any other “programmer.”
A classic approach to process improvement is to dumb down humans to make them behave like machines. This is done because process improvement people generally don’t have the training or inclination to observe, describe, or evaluate what people actually do. Human behavior is frightening to such process specialists, whereas machines are predictable and lawful. Someone more comfortable with machines sees manual tests as just badly written algorithms performed ineptly by sugar-carbon blobs wearing contractor badges who drift about like slightly-more-motivated-than-average jellyfish.
Rather than banishing human qualities, another approach to process improvement is to harness them. I train testers to take control of their mental models and devise powerful questions to probe the technology in front of them. This is a process of self-programming. In this way of working, test automation is seen as an extension of the human mind, not a substitute.
A quick image of this paradigm might be the Mars Rover program. Note that the Mars Rovers are completely automated, in the sense that no human is on Mars. Yet they are completely directed by humans. Another example would be a deep sea research submarine. Without the submarine, we couldn’t explore the deep ocean. But without humans, the submarines wouldn’t be exploring at all.
I love test automation, but I rarely approach it by looking at manual tests and asking myself “how can I make the computer do that?” Instead, I ask myself how I can use tools to augment and improve the human testing activity. I also consider what things the computers can do without humans around, but again, that is not automating good manual tests, it is creating something new.
I have seen bad manual tests be automated. This is depressingly common, in my experience. Just let me suggest some corollaries to Rule #1:
Rule #1B: If you can truly automate a manual test, it couldn’t have been a good manual test.
Rule #1C: If you have a great automated test, it’s not the same as the manual test that you believe you were automating.
My fellow sugar blobs, reclaim your heritage and rejoice in your nature. You can conceive of questions; ask them. You are wonderfully distractible creatures; let yourselves be distracted by unexpected bugs. Your fingers are fumbly; press the wrong keys once in a while. Your minds have the capacity to notice hundreds of patterns at once; turn the many eyes of your minds toward the computer screen and evaluate what you see.
Dave Churchville says
Speaking as a Sugar Blob (or carbon-unit, if you prefer), I think you’re right on the money here.
From the developer perspective, since much of what testers do is hidden until a bug report appears, many developers think all that a tester does is follow a set of steps that are extracted directly from specifications (that they asked the developer to write).
To be frank, I’ve worked in some environments where this was *exactly* what the testers did. I wouldn’t consider that a good use of anyone’s time, as these were basically Sugar Blobs performing poor manual tests.
Thus the excitement around developer test automation, especially the kind that appears to REPLACE those tedious specification documents.
The reality is that automated tests replace neither testers nor specifications, but can change the role of both to be more focused on adding value. What that value add is exactly is something that needs to be better communicated to the development team.
–Dave
Brad C says
I don’t see the point in constantly comparing manual and automated tests. You make the point quite clearly that manual and automated tests are very different, but you seem to be trying to devalue the automated test. Unlike manual tests, ANY test you can automate has great value in that the cost for future execution of that test is now very low.
[James’ Reply: If you overvalue automated tests, then I suppose I would like you to stop doing that, for your own good. In a way, I guess that is “devaluing” overrated automated tests. However, I have a deep appreciation for automated testing, as evidenced by the articles I’ve written about it and the automated tests I have created. If I showed them to you, would you believe me then?
Please don’t confuse the lower cost of certain aspects of automation with the value of the tests. It’s a common mistake, and yes, it’s one that I once made on a regular basis. I got burned, and now I know better. The bug finding value of tests written to mimic a fixed set of human actions is almost always low. Famously and notoriously low. The reason for this is that, contrary to popular belief, there are many many more places for bugs to hide in typical software than any fixed test can look, and many more kinds of bugs than any automated test is capable of detecting.
I use automation despite its dangers. Still, I can lecture on its dangers, and demonstrate its dangers. Can you? Or have you not yet had the experience of discovering that a test suite you’ve labored long upon is almost useless at discovering important bugs?]
“A good manual test cannot be automated.”
I think it’s more accurate to say that a good manual TESTER cannot be automated. The human will pick things up that the machine will not, but some literal interpretation of any test can almost always be automated.
[James’ Reply: The literal aspects of a test, if by that you mean the observable physical aspects, are not the most interesting aspects of the test. The test is what the tester thinks and does, in my view. Of course you can try to automate anything you want. You can use a text generator to write a love letter. You might even fool your girlfriend with it, but that is not automated human love. Similarly, by saying you are automating a manual test, I worry that you are implying there is nothing important about testing that humans bring to it.]
“If you can truly automate a manual test, it couldn’t have been a good manual test.”
I don’t care if it’s a “good manual test”, I care if it’s a good test. If 50 steps of a 200 step manual test can be automated then I think it’s a great idea to do so. Just don’t stop doing the other 150 steps.
[James’ Reply: Well, the point I’m making is that the automated test, however good, does not reproduce the manual test, nor the entirety of the value the manual test provides. I would hope that you meditate on that and bear it in mind. Or you could do what I did and just experience several major failures of your automation to find critical bugs. Maybe that’s the path you need to take. Pain can be a good teacher.]
“If you have a great automated test, it’s not the same as the manual test that you believe you were automating.”
True, but you can’t say for sure if the manual test was better or worse than the automated one. Just like humans will find things machines will not they will also miss things that a machine never would. Both methods are equally useful and sometimes it makes sense to automate a test and still continue running the same test manually.
[James’ Reply: I can’t say much of anything for absolute certainty, and I wouldn’t try. I think what we’re up to, here, is talking about dynamics that influence rational decision-making. If you choose to approach this rationally then it may pay to look closely at the specific kinds of things humans are good at, and the specific kinds of things that computers are good at, and then to think about how to use computers and humans together in a way that makes for a powerful test project. Anyway, that’s what I’m up to.]
Do you think traditional manual testing can be replaced with a combination of automated and exploratory testing? What do you think would be lost by doing this?
[James’ Reply: I don’t know what you mean by traditional testing. To me, it is traditional for testers to be untrained in what I consider the basic cognitive skills of excellent testing. It is traditional, I suppose, for testers to have almost no ability to defend what they do, except through the use of cliches and appeals to authority and folklore. That tradition has poorly served our industry, in my opinion. I’d like to replace that with a tradition of context-driven, skilled testing. This naturally will include automated testing as a consideration, at least. We may not use automation. I can also imagine a context where purely automated testing is sufficient. I can imagine a context where no testing at all is sufficient. Context-driven methodologists are wary of prejudging these things.]
Patrick Lightbody says
James,
Good post – I agree that we can’t get rid of manual tests. I would, however, claim that (using made up estimates) roughly 80-90% of today’s QA efforts could be automated. That is, we should be able to then use that free time to really focus on the hard problems that require manual work.
[James’ Reply: I don’t know which QA group you are talking about, or how you are calculating. If you are talking about a QA group that behaves like machines, then I suppose you could be right.]
I really like your point about how automated tools can _augment_ manual testing. Our product, HostedQA (http://www.hostedqa.com), takes screenshots of every step that is automated in a web browser. We store those screenshots in the test reports. Right now, they can be viewed quickly by QA engineers.
However, we’re thinking about adding a feature that will analyze those screenshots “horizontally” and “vertically”. That is, we’ll highlight the change in the UI over time (from test #1 to test #2) as well as during each test (from step #1 to step #2). A simple process like this (such as a “red marker” circling the changed pixels) would make the manual work of looking for visual goofs that computers can’t easily detect _much_ easier for humans.
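A rough sketch of that pixel-diff idea – just an illustration, assuming the Pillow imaging library and hypothetical file names, not any product’s actual implementation:

    # Compare two screenshots and draw a red box around the changed region.
    # Assumes both images are the same size; uses Pillow (pip install Pillow).
    from PIL import Image, ImageChops, ImageDraw

    def mark_changes(path_a, path_b, out_path):
        a = Image.open(path_a).convert("RGB")
        b = Image.open(path_b).convert("RGB")
        diff = ImageChops.difference(a, b)
        bbox = diff.getbbox()  # bounding box of all changed pixels, or None
        marked = b.copy()
        if bbox:
            ImageDraw.Draw(marked).rectangle(bbox, outline="red", width=3)
        marked.save(out_path)
        return bbox

    # e.g. mark_changes("step1.png", "step2.png", "step2_marked.png")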
Of course, we haven’t done this yet, but we hope to soon. This is exactly the type of stuff that guys like Mercury/HP and Segue/Borland have never done well, and why there are opportunities for smaller vendors such as myself!
[James’ Reply: Sounds interesting.]
Nitya Swarup Mishra says
James,
Nice article, I completely agree with your view. In my last 5 years in QA I have seen many automation project failures where an SOS went out to the manual test team to pull things together.
Automating a project is something like adding another project inside the development schedule, one which has its own SDLC (documentation, choosing the right tool, scripting, issues, bug fixing, regression, etc.), and hence the schedule of the project gets affected.
A good manual test team is always the best bet over automation.
Thanks and keep posting.
Nitya
Alexey Nikolayev says
Considering Test Automation Rule #1, does it mean that a good manual test cannot even be written down as a sequence of steps with fixed expected results? I don’t mean steps like “Do <something>, then check that everything went correctly”, that might be too “exploratory” 😉
[James’ Reply: Yes! That is corollary 1C. Just as no part of the source code of a software product is the same as the entire codebase; just as the Perl code I write is nothing without the Perl kernel and libraries; so it is also true that whatever is written down about a manual test is not the whole test. If you have ever tried to follow instructions that you don’t understand, you know all about this.]
Chris Meisenzahl says
Great sentiments James. I presume that your comments imply that a manual test contains some implied exploratory testing (something I’ve long been a proponent of)?
That is, if I see something awry on the periphery of a manual test, I can note it and deal with it. An automated test wouldn’t notice it unless specifically coded to.
Chris
http://amateureconblog.blogspot.com/
[James’ Reply: Yes. That’s a big part of what I’m saying.]
Ainars Galvans says
I believe my observations are similar, however the conclusions slightly differ. But first of all – regarding A.I. (brain emulators), the best I believe is Neural Networks, http://www.cs.stir.ac.uk/~lss/NNIntro/InvSlides.html An introduction says that this could be used when “we can’t formulate an algorithmic solution.”
[James’ Reply: Neural nets and Bayesian analyzers are simply pattern matchers. With them I can simulate part of what humans do, but not nearly the whole spectrum. They can recognize a signal, if we train them. But what if the specification changes? What about unanticipated signals?
Pattern matchers have no ability to inquire, or to re-model themselves spontaneously based on information overheard at the project meeting today. With humans you get lots of automatic testing behaviors.]
While trying to understand “the specific kinds of things humans are good at, and the specific kinds of things that computers are good at”, I have come to the conclusion that in one word this is what I call intuition (which you don’t like as a term associated with zero credibility).
[James’ Reply: It’s just that when you say intuition I have no idea what mechanism you’re talking about. I suspect you don’t have any idea, either. I think you can do better than to reduce it to one word.]
Algorithmic (which I believe includes logical, structured, etc.) solutions are what computers are good at: executing pre-scripted steps and doing predicted validation. If we replace at least the validation, or even the test execution step (input values) selection, from following “test techniques and strategy” or pre-defined heuristics, with short act-observe-analyse-act cycles maximally utilizing “human information processing”… this is what I call a good manual test.
Shrini Kulkarni says
James – this is indeed a great new direction for test automation. As I have discussed with you, I have been struggling all these days trying to retrofit sets of manual test cases into the automation bucket, and to a certain degree claimed that I have “automated” a set of manual tests. I will work on coming up with good examples of good manual tests and good automated tests that are designed as such. I am already seeing a lot of eyebrows rising in my company about this new concept …
One question comes up, however, related to “Regression Testing”. Will this blog post change if you were to talk about “Test automation of Regression Testing”? Often we hear arguments like
“These are regression tests that need to be run on every build of the product – to make sure that (to the extent those tests can do it) existing features of the products are still working the same” and
“Since these are repetitive and hence consume time – we should automate them to save time – and in that saved time (having run all mundane regression tests to prove something on the lines of 1+4 is still 5) we can do some good manual testing”
What do you have to say about these?
[James’ Reply: It may be useful to create automated tests for regression purposes. However, it would be misleading to say that those tests duplicated the value that humans provided with manual tests, unless the humans were pretty bad testers.]
As you have rightly mentioned – a good manual test can never be automated, and a good automated test is born as such – whereas so-called regression tests’ sole purpose is to check that what was there before is still the same.
I think there is a total lack of understanding of the world of automation in the testing community. Your blog post can create a new era in automation … I would like to see you writing more on this, especially addressing the problems or misconceptions surrounding “Regression tests and commercial GUI regression tools”
Shrini
Nick Olivo says
I think we need to consider the examples James gave here, specifically the Mars Rover and unmanned submarine. These don’t replace the scientists that study Mars or the bottom of the ocean, rather they give them a new tool that allows them to do something they otherwise would be unable to do. That’s the attitude I’ve adopted on automation – not to replace manual testers but to give them the ability to test things that would otherwise be extremely difficult, or perhaps even impossible. For example, previous employers would often ask me to write scripts to verify certain bugs had been fixed. These were simple tests that a manual tester could validate in a few minutes, and human eyes could easily pick up any obvious deficiencies in the fix that a script may not have been coded for (e.g. yes, the button displays the correct text, but it’s blazing pink now. Was that expected?). This isn’t an effective use of automation, but it seems to be the most common approach to it.
However, this doesn’t mean automation should be abandoned in favor of exclusive manual testing. Instead, look to where you can create automation that will augment your manual testers’ ability to do their job. For example, if you have to perform boundary testing, where fields can accept strings of up to X thousand characters, write an app to generate those strings for them and then put that string right on the clipboard. The manual tester could then just paste it in. That’s automation enhancing manual testing, and I think that’s the model to move toward.
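A minimal sketch of that boundary-string helper, assuming the third-party pyperclip library for clipboard access (the length and fill character are just examples):

    # Generate a boundary-length string and put it on the clipboard so a
    # manual tester can paste it straight into the field under test.
    # Assumes pyperclip (pip install pyperclip); 5000 is a hypothetical limit.
    import pyperclip

    def clip_boundary_string(length=5000, fill="x"):
        s = fill * length
        pyperclip.copy(s)  # now on the clipboard, ready to paste
        print(f"Copied {len(s)} characters to the clipboard.")

    clip_boundary_string()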
Jerrad anderson says
Well, I wasn’t going to chime in on this old automated vs. manual testing debate, but I don’t know how to keep my mouth closed 🙂 We should all agree that automation is a tool and should not replace manual testing; we should also agree that good automation, while difficult to achieve, will at the very least be helpful over manual testing alone. Where these benefits are seen are questions for other places and times.
I believe the point of view in test automation is key. From a manager’s point of view it’s seen as a way to reduce manual tester salary overhead. From a manual tester’s point of view it’s a way to eliminate uninteresting testing. I think automation is a computer-world-driven task. Automation speaks to code utilization and it does it well. It’s measurable in lines of code covered, whereas manual testing is measured in features covered and bugs uncovered. We all need to realize this huge difference before attempting to do automated or manual testing.
[James’ Reply: I think you’re missing something important when you talk as if automation is about making computers do the “uninteresting” things. Yes, that is one aspect of automation, but another aspect is that test tools can enable testers to do MORE interesting things. That’s why I mentioned deep sea submarines and the Mars Rover. Those are tools to extend people, not to remove people.
I often speak of test automation as “tool-supported testing.” It’s a more accurate term that encompasses classic automated test execution without giving the impression that an automated test is necessarily not one that involves a person.]
Toby says
James –
You have written an excellent paper on agile automation. I think that paper explains very clearly what kind of automation you prescribe: “how to use tools to enhance your testing”. I call them quick wins. The only successful automation projects I have ever participated in are small efforts that solve a relatively simple problem that would take lots of time to do manually. Like adding 200 users to a database through the GUI, or analyzing log files of thousands of rows to find errors.
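A minimal sketch of that second kind of quick win – the error markers and log format here are hypothetical:

    # Scan a large log file and report only the lines that look like errors,
    # so a human reviews a handful of lines instead of thousands.
    # The patterns are hypothetical; adjust them for the real log format.
    import re
    import sys

    ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL|Exception|Traceback)\b")

    def scan_log(path):
        with open(path, errors="replace") as f:
            for lineno, line in enumerate(f, start=1):
                if ERROR_PATTERN.search(line):
                    yield lineno, line.rstrip()

    if __name__ == "__main__":
        for lineno, line in scan_log(sys.argv[1]):
            print(f"{lineno}: {line}")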
Most books on automation as well as sales people for automation tools are focused on automating all your manual test cases in order to be able to run regression tests. The only chance of really saving time and money is to run those scripts at least five times and hope you don’t have to rewrite them with every new release. This is, I think, the “common” view on automation and is totally different from agile automation, which gives you success right here and now. Thus enhancing your testing – not replacing your manual testing.
I see one big reason why this is the case – money! Tool vendors earn loads of money on their tools, and to continue this they advertise a lot and sponsor our own testing conferences. This might stop too-critical presentations?! We believe what they say because few of our own people talk (or even know?) about the alternative view. As true testers we should question BOTH views and see for ourselves which one works the best. Many people talk about failed automation projects, yet we seem to keep trying hard in the same way over and over again… and fail.
Therefore I encourage you, James, to let the agile auto paper enlighten the testing community.
[James’ Reply: I wrote a blog entry about it. The paper is at https://www.satisfice.com/articles/agileauto-paper.pdf]
Sha says
James,
Your rules bring forth a very interesting aspect of automated testing. But I don’t think these are comprehensive enough to cover all aspects of tests that are automated. Please consider the following:-
1. I do not use tests that are automated to uncover “important defects”. I simply use them to make sure that there are no major failures in existing functionality (Regression). There may even be defects that the test missed, but I wasn’t planning to catch them anyway with tests that were automated. So there are certain types of manual test that can be automated. Ex. Verify application screens all have “Cancel” buttons that close the window.
[James’ Reply: There are two problems I see with your idea.
First, you are not automating the manual test, you are automating some aspects of the manual test while ignoring other aspects. That will be, in important ways, a different test. The whole point I’m trying to make with my post is that automation does not capture or duplicate human activity, as a whole, just little bits and pieces of it. That may be enough for you. It occasionally is enough for me. But I am careful to acknowledge what my automation does not and cannot do.
Second, your test cannot “make sure” there are no major failures. What any test does is collect evidence about the risk of failure. You can run all the tests you want, and yet there may be simple tests not yet run that would have detected major failures. Or you could re-run the same tests on a slightly different hardware or software configuration and then discover those major bugs. From the evidence you collect you draw an inference about risk. I think what you mean to say is that you are content with the evidence revealed by your tests. That’s fine. But “making sure” oversimplifies the case.]
2. Tool assisted testing and test automation are two different things in my opinion. The former being probing tools that assist a human in monitoring/inspecting something he/she cannot. An example of this would be using Windows Task Manager to check resource utilization while performing a test. The term “Automated Tests” is usually used in the context of a program that replicates the action of a tester/user. You can argue that the definition is wrong, but that’s as good a definition as yours.
There were a few other things I wanted to add, but I am out of time, there is a weekly Sugar Blob congregation that I gotta rush to. 🙂
Later..
[James’ Reply: I don’t prefer your definition, because for one thing it promotes an unhelpful myth about test automation, and because my definition opens up many more possibilities.
You simply aren’t replicating what humans do. That you believe you are replicating what humans do is evidence that you have succumbed to the myths promoted by tool companies, and implies to me that you are uninterested in most of what humans actually do– such as, for instance, thinking of better tests while performing tests. As a tester, my job is to look at the whole picture.
I’ve been working with test automation for a long time. I’m suggesting that a powerful (as opposed to weak) way of thinking about test automation is that it is any tool-supported testing. I will assume that you wish to be powerful instead of weak. I agree that most people don’t think of test automation as any tool-supported testing, but I’m convinced that they will, eventually, since thinking that way leads to far better testing, dollar for dollar and hour for hour.]
Alexey Nikolayev says
James, I was thinking about “Test Automation Rule #1” and came to the conclusion that it can be expanded and detailed so that it doesn’t consider automation in particular. Taking into account your response to my comment above, I can propose the following statement: “No good test can be formalized”, where by “formalization” we understand the process of noting the test down so that any further interpretation of the note (by human or computer) can restore the test (or test idea) with 100% authenticity.
Is it worth living? 😉
[James’ Reply: There can be good tests, I think, and there can be good formal tests. There can even be formal manual tests (by some reasonable definition of formal).
I don’t believe any test can ever be repeated– if to repeat means to repeat EXACTLY and in all dimensions. Even approximate repetition can be a challenge, depending on what you’re trying to repeat. So, if you write down a test for someone else to perform, it can be an interesting challenge to make sure the right thing is repeated.]
Shrini Kulkarni says
Sha brings out a typical Automation Regression Testing scenario –
Note the core theme of the notion – such tests are not expected to uncover any bug other than what is coded. Check if the Cancel button exists in 100 different windows?
The test passes if yes, and fails otherwise.
Are these tests powerful? Maybe not.
Are these tests good at finding bugs? Yes – but only those bugs (deviations from expected behavior) coded into the automated tests.
Are these tests important? Yes, from the perspective of someone who matters.
Is it a good way to test? Maybe not – some of these tests could be executed as part of unit tests, in a cheaper way.
Let me make a statement: the way you state (communicate) your manual test will decide whether it could be automated or not. The same thing will also decide whether a test case is good or bad.
This communication (hence documentation) is a key aspect of test design, right?
For example – let us consider following test case
Test step or Action : launch Add Member Wizard Screen
Expected Result: Check that Cancel button is present on the screen and is in enabled state.
It is a manual test that can be automated – but is it a good manual test? Maybe not.
I would be over-simplifying and overselling the value of the above test if I claimed that “through this automated test I am making sure that there are no major failures in the screen”. What I am checking here is the existence and state of a control on the screen – nothing more and nothing less.
Shrini
Chris J says
Hi James,
Thanks for the great article. I support Test Automation Rule #1, but with one caveat. Some organizations will have a hard time swallowing it whole!
The company I work for is firmly entrenched in Bret Pettichord’s Factory School of testing. The idea is that if we write down all of the requirements, the coders code from them, the testers test from them, and we trace everything together. Once all of the requirements are tested, we’re done! Easy as pie, right? (Wrong!) Our automation efforts, therefore, consist entirely of automating really bad manual tests. Since the community is convinced that all of these tests are necessary to ensure Quality, it is difficult to help them understand that there are more effective and more efficient ways to test a system than these rigid requirements-based scripts.
Right now I’m trying to help them quickly automate their manual scripts first, so we have some breathing room to think about other strategies. I’m also working on a model-based testing solution, and researching some more API-level automated testing approaches.
I’m afraid that helping to dig the community out of the hole of manual testing will be the easy part. The hard part will be convincing the community that the whole idea of how we do testing could be drastically improved…
[James’ Reply: Good point, Chris. This calls for a new rule. Rule #0: All rules for humans shall be treated as heuristics (including this one, of course).]
Michael Bolton says
I think it might be useful to point out a key difference between a “manual” test and an “automated” test. (This difference has little to do with whether machines are used in the test, since we use machines in every test of computer software in which we operate the software. All tests in which a computer is involved are “automatic” to some degree.)
When people say “manual test”, what they really mean is “human eyes, ears, hands, and brains are engaged in the execution of the test.” When they say “automated test”, what they really mean is “human eyes, ears, hands, and brains are not (necessarily or usually) involved in the execution of the test.” That reframe points to the fact that “automated” tests are entirely scripted tests, whereas “manual” tests are unscripted or exploratory tests. Automation can be used to aid exploration, but the exploratory act requires that human engagement–until we hear something different from WIRED.
Would any of us willingly get into an airplane that had only been tested by automation, and never by human test pilots? Would it be possible to test the software on a plane without assistance from automation? For most sane people, the answer to both questions would be No. So in evaluating the value of automation, we need to think about how we’re using it to assist and extend human capabilities.
I think it’s also important to underscore James’ point that low cost does not necessarily mean high value. This seems to be a source of some confusion for some testers and programmers in particular.
—Michael B.
[James’ Reply: Good points, Michael. Thank you. I would just point out that a manual test is not necessarily an exploratory test (not on purpose, anyway, and maybe not otherwise). However, I agree that most manual tests are at least a small bit exploratory and probably more than that. A good manual test, even if partly scripted, will make use of unscriptable human faculties, to be sure.]
Shrini says
Continuing with what Michael said about the manual+exploratory+scripted vs. automated+scripted test scenario — how about the following rules?
# 3 All automated tests (by design, nature, definition) are scripted (any exceptions?)
[James’ Reply: This depends on your specific definition of automation and script. If to be automated means that the action is carried out by a machine made by humans, instead of by humans themselves, and if to be scripted means to act in a way that is determined in part by a set of instructions, then some automated tests may not be scripted, because some machines do not operate using instructions (I can use a hammer to hit a keyboard, which is a machine that is operating by direct manipulation, instead of by a script). I can also use automation to *design* manual tests, in which case it would be a partially-scripted partially automated test. There are many other examples.
In general, I would say that most people who point at test automation are going to be pointing at a fairly scripted test.]
# 4 (converse of #3) All scripted tests can be automated (if tool and Oracle willing)
So aren’t we limited by tool and oracle to satisfy rule #4?
Shrini
[James’ Reply: It is not true that all scripted tests can be automated, because of the limits of what machines can do. There are straightforward computability limits, such as those that would require a computer to spend 10,000 years to solve the problem. There are also scripted commands that no one could perform, under any circumstances, including the instruction “run all possible tests” or “notice all possible bugs.”
There are many possible actions that we can script to some degree, and which a human could do in an acceptable way, but which would be very hard or impossible for a computer to do. These include “check to see if something went wrong” which is almost always an implicit part of a test script, and “record any new test ideas that occur to you while performing this test.”]
Scott Sehlhorst says
Thanks James for an inspiring idea. We’ve expanded on it some at Tyner Blain (linked to my name). Hope you like it. Here’s the excerpt:
There’s a piece of North American folklore about John Henry, who was a manual laborer during the expansion of the railroads in our country. His job was being replaced by steam-driven heavy equipment, as the railroad industry applied technology to become more efficient. The same dynamics are happening today with manual testers. We need to make sure that manual testers avoid John Henry’s fate – read on to see why.
Ken Sommerville says
Maybe I oversimplify things, but I tend to think of test automation not only as a means to exercising a system under test, but as way of completing many of the daily tasks that I do as a tester. For example, generating test data, analyzing test results, preparing defect reports, retrieving information about system configurations, scouring log files, etc. All of these are tasks that are routinely done but seldom automated.
Why does it seem that many testers rule out these kinds of activities when talking about “test automation”? Many people seem to only focus on the “test” in “test automation”.
Another challenge I have in working with my non-technical colleagues is that they focus on the “point, click, record, playback” type of automation, which I despise and hesitate to even call testing. I have used record/playback as a means to understand a commercial tool, but have rarely relied on it to actually exercise the system under test, mainly because I find it inflexible as far as responding to changes in the system.
Any suggestions on handling these types of “automation evangelists”?
[James’ Reply: I’m of the same mind, Ken. I define test automation as “tool-supported testing.” I think of the typical scripted regression testing automation as one facet of that. I usually pull out my agile test automation slides to try and expand the imagination of people who think tools should replace human minds.]
Rick Fisk says
Great article. The way I see it, automated tests provide us with two important things:
1. A general comfort level – presuming the tests are valid – in areas where many different combinations must be exercised.
2. Time to explore and fumble the keys.
JenK says
Maybe this is a dumb thought, but I tend to think of automating first those things that cannot be effectively tested manually.
Examples: A web service that clients will access to upload information to your DB; a console application that will transfer information between DBs; or evaluating how a site performs under load. None of these can be effectively tested by manual means.
Better yet, if management is pushing automated UI testing, a dose of reality – in terms of the work and/or programming expertise involved in creating effective automation for the above – may put the discussion on firmer ground.
[James’ Reply: For each of your examples, I can think of ways to test manually. Ultimately, the human users of those functions must take an action or see some kind of result. You wisely used the word “effectively”, however. I would certainly consider using tools to test those things better than I otherwise would be able.
One thing I like to do is use tools to create an interface that allows me to manually and interactively test. Then I can fluidly develop test ideas.]
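A minimal sketch of such an interface: a tiny interactive loop wrapped around a function under test (parse_amount here is a hypothetical stand-in for whatever the tool exposes):

    # A tiny REPL wrapped around a function under test, so a tester can
    # fluidly try inputs and watch outputs. parse_amount() is hypothetical.
    def parse_amount(text):
        return round(float(text.replace(",", "")), 2)

    while True:
        raw = input("input (blank to quit)> ")
        if not raw:
            break
        try:
            print("result:", parse_amount(raw))
        except Exception as e:  # surface failures instead of crashing the loop
            print("error:", type(e).__name__, e)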
Mohan Raj says
I totally agree with your comments; it’s very clear that the complete manual testing process cannot be automated by using any tool. I have also observed, in my 4 years of QA experience, that only 10% of the automation we do is reliable. Automation will help where a normal verification has to be done on the same application or functionality after some time. Manual tests will always produce better results, and here we can find lots of bugs which the automation process cannot. In the manual process the human brain is involved each time the tester starts testing, whereas in automation the human brain is involved only once, at the development stage of the scripts.
Mohan Raj says
Automation is useful when we have to test an application for load, stress and performance. It’s very clear that if I take the example of an ATM where 10 users have to access it concurrently for some kind of operation, this is not possible by a manual process, and even if we try to do that we cannot get accurate results with which we can set a benchmark that up to this particular number the application will behave without any issues.
[James’ Reply: I think what you’re really saying is that you have a particular test in mind that requires a specific kind of operation that you don’t know how to get using only people working through the ordinary user interfaces. I accept that. Now, I’d like you to accept that the very test process you are referring to also may reveal problems that you don’t know how to detect except by humans like you looking over the test while it is in progress. While witnessing this test, you may get ideas about how to redesign it– your software certainly won’t re-design itself.
There is no fundamental controversy in our industry about test automation, itself. Of course we can and we will use tools to test our software. The controversy I’m discussing has to do with the relationship between humans and their tools. I am warning against the uncritical belief that to automate a test is to reproduce everything interesting and important that a skilled human tester does when he tests. Automation simply can’t do that.]
Jon Ward says
What do you think of the idea of automating only the execution of the workflow of some more complex “end-to-end” regression test cases, leaving the checking of the detailed results covered by that workflow to the testers? By capturing a dublog (movie/screenshots) of the workflow as executed by the automated tool, the manual tester can then quickly scan the automated execution visual log for aberrations, and check the specific complex outcomes expected at various points, for all the automated tests where the workflow has completed to the expected end-point. (If the automated workflow fails, that’s the usual auto debug process.)
Overall, because the automation takes care of the test data setup and the detailed steps execution, with both of these at lower long term cost (because we have quarterly regression requirements ad infinitum for our massive legacy system), I wonder if this could be an efficient and hopefully effective way to best use automation to free up time for more manual “change” testing whilst minimising the “automated quality loss” that we are discussing here.
[James’ Reply: Hybrid automation like this can work nicely. Bear in mind what you give up, however. By having test execution computer controlled, the tester will not be injecting variations into the test coverage, which will lessen the bug finding power of the tests. The tester will also not be as mentally engaged, I bet, leading to fewer new test ideas during test execution. However, there may be a valuable tradeoff gained in terms of the ease or speed of doing complex tests.]
Vijay Murugan B says
Hi James,
That was a very nice article and it makes everyone proud of being a Manual Tester including me. I have been in the QA field for around 6 years performing both Manual and Automated Testing. I completely agree with your comments, but you know there is nothing in the world which is useless.
Also from a manager’s perspective automated testing looks to be useful and cost effective in a scenario where there are a lot of regression test cases to be run. For example: we have a product with different modules and say there is a change in only some part of one module; however, we are not sure if this is going to affect the other modules. In this particular scenario automation seems to be more useful for both the tester and the manager.
A tester gets bored by testing the same product when there are no changes or bugs in it. Also it would be cost effective for the manager to run an automated test instead of tying up a manual resource, whom he can employ in other areas which have undergone many changes.
Your comments please…
Regards,
Vijay
[James’ Reply: I think automation is potentially a wonderful thing. But your automation simply does not replicate all that a good human does while testing. Your automation is doing something different.
BTW, if I ever find that testing is boring, I just change how I test. It isn’t testing the same product that causes boredom, but doing the same things.]
Emmet Healy says
I agree with your premise. From experience of both manual and automated testing I can agree that most manual tests cannot and should not be automated, even for regression testing. There is no substitute for the human eye in spotting defects and identifying something that doesn’t look right.
An automated regression test may tell you something has been fixed or is still broken. It won’t tell you if a new problem has been uncovered which indeed has not been covered in any test cases.
Writing automated test cases is often a fixation with managers in their quest to cut time and budget and get a quick release. But it invites leaving a myriad of holes in the testing process and unrevealed bugs which a quick look over by an experienced manual tester would quickly spot.
Dmitriy says
James,
I’ve found your point very interesting but also pretty much self-explanatory. Here are some definitions of the word “automatic” from thefreedictionary.com:
“Acting or done without volition or conscious control; involuntary”,
“resembling the unthinking functioning of a machine”, etc.
Thus, all the dis- and advantages of test automation. In my case, to confirm your point, I’ve found more of the latter not in GUI but in performance test automation.
Best regards,
Dmitriy
srinivas says
Hi James,
This is with regard to the following statement in the article
“A classic approach to process improvement is to dumb down humans to make them behave like machines. This is done because process improvement people generally don’t have the training or inclination to observe, describe, or evaluate what people actually do”
I thought that by way of observation and investigation, the very suggestion (question) will bring out the knowledge already inside.
By documenting the findings in the form of processes, the knowledge can be reused, reviewed and improved by measurement and feedback. Please correct me if I am wrong.
[James’ Reply: I’m not sure what you are trying to say, but as I understand what a process is, it is not a document. A process is how things happen. What you mean is probably a “process description.”
And yes, you are wrong if you believe that only through documents can we improve our processes. (Especially if you aren’t very good at writing.) Or even if you think that is necessarily a good way to do it. Please see The Social Life of Information about this. It’s a wonderful book.
The main thing I want you to think about is that you do not now and you have never written down what your process actually and completely is for doing ANYTHING. You have only written partial descriptions, at best.]
srinivas says
Thank you for providing the information on The Social Life of Information. I agree tacit knowledge is required; however, for process improvement I think there is no need for all tacit knowledge to be made completely explicit and represented in the process description. I thought vital information, when captured, is sufficient to reproduce and improve.
[James’ Reply: My point is that there may be no “vital” information to capture. There may be information, but it may not be worth capturing, or it may be harmful to capture.
On the other hand, some information might be worth capturing.]
Venkat says
Hi James,
While I agree with all of what you’re saying, I am curious to know what your experience has been in using test automation to run a gamut of combinations of test conditions and using a technique such as genetic algorithms (of course with well-defined validity criteria for the system under test) as an effective tool that a manual tester cannot possibly accomplish in the same time it takes to execute?
Thanks
[James’ Reply: I have never used genetic algorithms for testing. I have not seen that used. I don’t think I’ve ever been in a situation where it seemed reasonable to consider doing so. But it seems like a cool idea and I would like to see it.
I have done a lot with tools that *create* interesting coverage. Sometimes that’s all-pairs coverage, sometimes all combinations, sometimes random testing.
I am constantly applying tools to my testing, but that is not to say that I have automated the testing. Testing is a sapient activity, but machines can help.]
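A minimal sketch of tool-created coverage along those lines – the parameters and values are made up, and all-pairs would need a dedicated tool:

    # Enumerate all combinations of a few test conditions, or sample them
    # at random. The parameters and values here are hypothetical.
    import itertools
    import random

    browsers = ["Firefox", "IE", "Opera"]
    locales = ["en-US", "de-DE", "ja-JP"]
    networks = ["LAN", "dialup", "offline"]

    all_combos = list(itertools.product(browsers, locales, networks))  # 27 cases
    todays_sample = random.sample(all_combos, 5)  # random testing: a subset

    for combo in todays_sample:
        print(combo)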
Brian Brumfield says
Hmmm. All tests, before there was a means to automate them, were performed manually (assuming they were performed at all). I think it’s fair to say that a test that requires intuition or powers of observation is likely not a good automation candidate, but to say that any manual test cannot be automated!? I think it’s a stretch. I am a huge fan James, but I can’t defend this one. 😉
[James’ Reply: How is it a stretch? Is it because you think a test is just the explicit part of the test, and not the part where you notice something that no one told you to notice and you were not in any way conscious of before the moment where it occurred to you? You don’t think that is part of the test? Or if you do think it’s part of the test, you believe that you can AUTOMATE that? Not only CAN you defend the point I made in this blog post, but to defend anything else forces you to account explicitly for every single thing you ever notice and to show how you could have written a program to do the same thing EVEN IF YOU HAD NOT BEEN THERE TO NOTICE IT.
We do an exercise in my class where I ask students to tell me all their expectations for a situation. No one can do it. Some people say more and some people say less, but no one can say everything that I then go on to tell them– every one of which, as soon as I say it, they tell me “oh yeah, I also expect that.”
In a world where you aren’t even aware of your own expectations, you can’t conceivably write a tool that will check the product against these latent notions. And yet, whenever such a latent expectation is violated, you immediately notice it. Your brain is on the job, even though the other part of your brain can’t write it all down.
In short, ALL TESTING INVOLVES TACIT KNOWLEDGE TO SOME DEGREE. Anything else is simply fact checking. There is no such thing as a good test that can be automated. What can be automated are specific fact checks (which may, of course, be components of a test).
Hey, I understand that people sometimes don’t see what value you, as a skilled and knowledgeable human, bring to the table of testing. That’s sad. What’s much sadder, to me, is when you don’t even give YOURSELF credit for what you do.]
TestingWhiz says
From your comparison it seems that you are giving more value to manual testing. But according to me, automated testing tools can be more useful, avoiding the tedious process of test cycles done manually. Moreover, with a testing tool you need not be a core programmer; any novice tester can test applications.
[James’ Reply: Are you sure you’re a testing “whiz?” You don’t seem to know much about testing. Or reading a blog. Please re-read the first paragraph of my post. Then read the rest.
But first, please learn how to test.]
Neethu says
When you are testing a new feature, under what circumstances would you deviate from a script while performing manual testing?
[James’ Reply: Why would I have a script at all?]