These thoughts have become better because of these specific commenters on part 1: Jeff Nyman, James Huggett, Sean McErlean, Liza Ivinskaia, Jokin Aspiazu, Maxim Mikhailov, Anita Gujarathi, Mike Talks, Amit Wertheimer, Simon Morley, Dimitar Dimitrov, John Stevenson. Additionally, thank you Michael Bolton and thanks to the student whose productive confusion helped me discover a blind spot in my work, Anita Gujarathi.
Integration testing is a term I don’t use much– not because it doesn’t matter, but because it is so fundamental that it is already baked into many of the other working concepts and techniques of testing. Still, in the past week, I decided to upgrade my ability to quickly explain integration, integration risk, and integration testing. This is part of a process I recommend for all serious testers. I call it: reinventing testing. Each of us may reinvent testing concepts for ourselves, and engage in vigorous debates about them (see the comments on part 1, which is now the most commented of any post I have ever done).
For those of you interested in getting to a common language for testing, this is what I believe is the best way we have available to us. As each of us works to clarify his own thinking, a de facto consensus about reasonable testing ontology will form over time, community by community.
So here we go…
There are several kinds of testing that involve, overlap with, or may even be synonymous with integration testing, including: regression testing, system testing, field testing, interoperability testing, compatibility testing, platform testing, and risk-based testing. Most testing, in fact, no matter what it’s called, is also integration testing.
Here is my definition of integration testing, based on my own analysis, conversations with RST instructors (mainly Michael Bolton), and stimulated by the many commenters from part 1. All of my assertions and definitions are true within the Rapid Software Testing methodology namespace, which means that you don’t have to agree with me unless you claim to be using RST.
What is integration testing?
Integration testing is:
1. Testing motivated by potential risk related to integration.
2. Tests designed specifically to assess risk related to integration.
Notes:
1. “Motivated by” and “designed specifically to” overlap but are not the same. For instance, if you know that a dangerous criminal is on the loose in your neighborhood you may behave in a generally cautious or vigilant way even if you don’t know where the criminal is or what he looks like. But if you know what he looks like, what he is wearing, how he behaves or where he is, you can take more specific measures to find him or avoid him. Similarly, a newly integrated product may create a situation where any kind of testing may be worth doing, even if that testing is not specifically aimed at uncovering integration bugs, as such; OR you can perform tests aimed at exposing just the sort of bugs that integration typically causes, such as by performing operations that maximize the interaction of components.
The phrase “integration testing” may therefore refer to ANY testing performed in an “integration context”, or to the application of a specific “integration test technique” in ANY context.
This is a special case of the difference between risk-based test management and risk-based test design. The former assigns resources to places where there is potential risk but does not dictate the testing to be performed; whereas the latter crafts specific tests to examine the product for specific kinds of problems.
2. “Potential risk” is not the same as “risk.” Risk is the danger of something bad happening, and it can be viewed from at least three perspectives: probability of a bad event occurring, the impact of that event if it occurs, and our uncertainty about either of those things. A potential risk is a risk about which there is substantial uncertainty (in other words, you don’t know how likely the bug is to be in the product or you don’t know how bad it could be if it were present). The main point of testing is to eliminate uncertainty about risk, so this often begins with guessing about potential risk (in other words, making wild guesses, educated guesses, or highly informed analyses about where bugs are likely to be).
Example: I am testing something for the first time. I don’t know how it will deal with stressful input, but stress often causes failure, so that’s a potential risk. If I were to perform stress testing, I would learn a lot about how the product really handles stress, and the potential risk would be transformed into a high risk (if I found serious bugs related to stress) or a low risk (if the product handled stress in a consistently graceful way).
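As a concrete (and deliberately trivial) illustration of turning a potential risk into knowledge about risk, here is a minimal Python sketch of a stress probe; the function, the input sizes, and the notion of “graceful” are invented for illustration.

```python
# A toy "stress probe" in the spirit of the example above: feed progressively
# larger inputs to the product and note where (or whether) it stops coping.
# The parse_record() stand-in, the sizes, and the idea of "graceful" are all
# invented purely for illustration.

def parse_record(text: str) -> dict:
    """Stand-in for the real product code under test."""
    key, _, value = text.partition("=")
    if not key or not value:
        raise ValueError("malformed record")
    return {key: value}

def stress_probe():
    for size in (10, 10_000, 10_000_000):                 # escalating stress
        payload = "key=" + "x" * size
        try:
            parse_record(payload)
            print(f"size {size:>12,}: handled")            # evidence toward low risk
        except MemoryError:
            print(f"size {size:>12,}: hard failure")       # evidence toward high risk
        except ValueError as err:
            print(f"size {size:>12,}: graceful rejection ({err})")

if __name__ == "__main__":
    stress_probe()
```

If the probe keeps surfacing hard failures, the potential risk has become a high risk; if everything is handled or rejected gracefully, it has become a low one.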
What is integration?
General definition from the Oxford English Dictionary: “The making up or composition of a whole by adding together or combining the separate parts or elements; combination into an integral whole: a making whole or entire.”
Based on this, we can make a simple technical definition related to products:
Integration is:
v. the process of constructing a product from parts.
n. a product constructed from parts.
Now, based on General Systems Theory, we make these assertions:
An integration, in some way and to some degree:
- Is composed of parts:
- …that come from differing sources.
- …that were produced for differing purposes.
- …that were produced at different times.
- …that have differing attributes.
- Creates or represents an internal environment for its parts:
- …in which its parts interact among themselves.
- …in which its parts depend on each other.
- …in which its parts interact with or depend on an external environment.
- …in which these things are not visible from the outside.
- Possesses attributes relative to its parts:
- …that depend on them.
- …that differ from them.
Therefore, you might not be able to discern everything you want to know about an integration just by looking at its parts.
This is why integration risk exists. In complex or important systems, integration testing will be critically important, especially after changes have been made.
It may be possible to gain enough knowledge about an integration to characterize the risk (or to speak more plainly: it may be possible to find all the important integration bugs) without doing integration testing. You might be able to do it with unit testing. However, that process, although possible in some cases, might be impractical. This is partly because the parts may have been produced by different people with different assumptions, partly because it is difficult to simulate the environment of an integration prior to actual integration, and partly because unit testing tends to focus on what the units CAN do and not on what they ACTUALLY NEED to do. (If you unit test a calculator, that’s a lot of work. But if that calculator will only ever be asked to add numbers under 50, you don’t need to do all that work.)
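To make the calculator remark concrete, here is a rough sketch (the Calculator class and the “only ever adds numbers under 50” contract are invented for illustration): unit tests explore everything the unit CAN do, while a test of what the integration ACTUALLY NEEDS is far narrower.

```python
# Toy illustration of the calculator remark above. The class and the
# "only ever adds numbers under 50" contract are invented for this sketch.

class Calculator:
    def add(self, a, b): return a + b
    def subtract(self, a, b): return a - b
    def multiply(self, a, b): return a * b
    def divide(self, a, b): return a / b          # may raise ZeroDivisionError

def test_everything_the_unit_can_do():
    # The unit-testing view: lots of work, because the unit can do a lot.
    c = Calculator()
    assert c.add(2, 3) == 5
    assert c.subtract(2, 3) == -1
    assert c.multiply(1e308, 10) == float("inf")  # overflow behavior
    try:
        c.divide(1, 0)
    except ZeroDivisionError:
        pass
    # ...and so on, for every operation and every edge case.

def test_what_the_product_actually_needs():
    # The integration view: the surrounding product only ever adds
    # numbers under 50, so this is the behavior that actually matters.
    c = Calculator()
    for a in range(50):
        for b in range(50):
            assert c.add(a, b) == a + b
```

The point is only about scope: the second test is narrower, not better; which effort is worthwhile depends on what the integration will actually ask of the part.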
Integration testing, although in some senses complex, may actually simplify your testing, since some parts mask the behavior of other parts, and perhaps all you need to care about is the final outputs.
Notes:
1. “In some way and to some degree” means that these assertions are to be interpreted heuristically. In any specific situation, these assertions are highly likely to apply in some interesting or important way, but might not. An obvious example is where I wrote above that the parts “interact among themselves.” The stricter truth is that the parts within an integration probably do not EACH directly interact with ALL the other ones, and probably do not interact to the same degree and in the same ways. To think of it heuristically, interpret it as a gentle warning such as “if you integrate something, make it your business to know how the parts might interact or depend on each other, because that knowledge is probably important.”
By using the phrase “in some way and to some degree” as a blanket qualifier, I can simplify the rest of the text, since I don’t have to embed other qualifiers.
2. “Constructing from parts” does not necessarily mean that the parts pre-existed the product, or have a separate existence outside the product, or are unchanged by the process of integration. It just means that we can think productively about pieces of the product and how they interact with other pieces.
3. A product may possess attributes that none of its parts possess, or that differ from them in unanticipated or unknown ways. A simple example is the stability of a tripod, which is not found in any of its individual legs, but in all the legs working together. (A small code sketch of this idea appears after these notes.)
4. Disintegration also creates integration risk. When you take things away, or take things apart, you end up with a new integration, and that is subject to much the same risk as putting things together.
5. The attributes of a product and all its behaviors obviously depend largely on the parts that comprise it, but also on other factors such as the state of those parts, the configurations and states of external and internal environments, and the underlying rules by which those things operate (ultimately, physics, but more immediately, the communication and processing protocols of the computing environment).
6. Environment refers to the outside of some object (an object being a product or a part of a product), comprising factors that may interact with that object. A particular environment might be internal in some respects and external in other respects at the same time.
- An internal environment is an environment controlled by the product and accessible only to its parts. It is inside the product, but from the vantage point of some of its parts, it’s outside of them. For instance, to a spark plug the inside of an engine cylinder is an environment, but since it is not outside the car as a whole, it’s an internal environment. Technology often consists of deeply nested environments.
- An external environment is an environment inhabited but not controlled by the product.
- Control is not an all-or-nothing thing. There are different levels and types of control. For this reason it is not always possible to strictly identify the exact scope of a product or its various and possibly overlapping environments. This fact is much of what makes testing– and especially security testing– such a challenging problem. A lot of malicious hacking is based on the discovery that something that the developers thought was outside the product is sometimes inside it.
7. An interaction occurs when one thing influences another thing. (A “thing” can be a part, an environment, a whole product, or anything else.)
8. A dependency occurs when one thing requires another thing to perform an action or possess an attribute (or not to) in order for the first thing to behave in a certain way or fulfill a certain requirement. See connascence and coupling.
9. Integration is not all or nothing– there are differing degrees and kinds. A product may be accidentally integrated, in that it works using parts that no one realizes it has. It may be loosely integrated, such as a gecko that can jettison its tail, or a browser with a plugin. It may be tightly integrated, such as when we take the code from one product and add it to another product in different places, editing as we go. (Or when you digest food.) It may preserve the existing interfaces of its parts, or violate them, or re-design them, or eliminate them. The integration definition and assertions, above, form a heuristic pattern– a sort of lens– by which we can make better sense of the product and how it might fail. Different people may identify different things as parts, environments, or products. That’s okay. We are free to move the lens around and try out different perspectives, too.
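Here is the small code sketch promised in note 3: a toy rendering of the tripod, in which the attribute worth caring about (stability) belongs to the assembled whole rather than to any single part. The Leg and Tripod classes and the stability rule are invented for illustration.

```python
# A toy rendering of note 3: "stable" is an attribute of the assembled
# whole, not of any single part. All names and the stability rule here
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class Leg:
    length_cm: float       # a leg has length, but no notion of "stability"

class Tripod:
    def __init__(self, legs):
        self.legs = list(legs)

    def is_stable(self) -> bool:
        # Stability emerges from the integration: it requires three legs
        # of roughly equal length, a property no individual leg possesses.
        if len(self.legs) != 3:
            return False
        lengths = [leg.length_cm for leg in self.legs]
        return max(lengths) - min(lengths) < 0.5

legs = [Leg(100.0), Leg(100.1), Leg(99.9)]
assert Tripod(legs).is_stable()
assert not Tripod(legs[:2]).is_stable()                             # two legs: no stability
assert not Tripod([Leg(100.0), Leg(100.0), Leg(60.0)]).is_stable()  # mismatched legs
```

A unit test of Leg can never report on stability; only a test of the integration can.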
Example of an Integration Problem
This diagram shows a classic integration bug: dueling dependencies. In the top two panels, two components are happy to work within their own environments. Neither is aware of the other while they work on, let’s say, separate computers.
But when they are installed together on the same machine, it may turn out that each depends on factors that exclude the other, even though the components themselves don’t clash (the blue A box and the blue B box don’t overlap). Such dependencies are often poorly documented, and may be entirely unknown to the developer before integration time.
It is possible to discover this through unit testing… but it is so much easier, and probably cheaper, just to integrate sooner rather than later and test in that context.
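For anyone who wants the dueling-dependencies picture in executable form, here is a deliberately tiny model of it (the Environment class, the component functions, and the version strings are all invented): each component quietly requires an environment setting that excludes the other, so the clash appears only once they share a machine.

```python
# Toy model of the diagram above. The whole "environment" is reduced to a
# single shared setting: the one version of a library that can be installed
# on a machine. Names and versions are made up for illustration.

class Environment:
    def __init__(self):
        self.installed_lib_version = None     # only one version fits per machine

    def require_lib(self, version: str):
        if self.installed_lib_version is None:
            self.installed_lib_version = version
        elif self.installed_lib_version != version:
            raise RuntimeError(
                f"conflict: lib {self.installed_lib_version} already installed, "
                f"but {version} was requested"
            )

def component_a(env: Environment):
    env.require_lib("1.x")       # A silently depends on the old library
    return "A ran"

def component_b(env: Environment):
    env.require_lib("2.x")       # B silently depends on the new library
    return "B ran"

# On separate machines, both components are perfectly happy:
assert component_a(Environment()) == "A ran"
assert component_b(Environment()) == "B ran"

# Installed together, the hidden dependencies duel:
shared = Environment()
component_a(shared)
try:
    component_b(shared)
except RuntimeError as err:
    print("integration bug surfaces only now:", err)
```

Neither component is wrong on its own; the bug lives in the integration, which is why testing in the integrated context finds it so much more cheaply.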
Roger Foden says
James, I was wondering if you had come across the phrase “emergent behaviour”, in the context of systems? It seems relevant.
From Wikipedia: “An emergent property of a system, in this context, is one that is not a property of any component of that system, but is still a feature of the system as a whole”.
If I understand your thoughts, integration testing is especially concerned with finding undesired behaviours that arise in a system that is itself an integration.
Thanks for all your enlightening thoughts on testing, Roger.
[James’ Reply: Yes. I tried to express the idea of emergent behaviors by saying that products depend upon their parts but have attributes that differ from their parts. The tripod example is an example of emergent properties: unstable legs + structure = stable tripod]
Adam White says
Just (re)reading this post a few years later. Roger – you may find this book interesting.
“Linked: How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life” by Albert-Laszlo Barabasi
Lauren Braid says
James, using the tripod example from the blog, you give an example of separate parts coming together to do something that they cannot do separately. You define the legs as parts. However, there is also a chance that the legs themselves are the product of multiple parts. There may be two elements to the legs that allow them to extend, which in turn could be controlled by a screw or bolt. The bottom of the legs may have a base. Each of those parts is in turn made up of parts that would have been tested. Stripping it back even further, the metal that the tripod is made from is a composition of different elements mixed together, tested, and repackaged as a new material. All of these are parts, each tested.
So by this thinking, isn’t all testing integration testing (just with different perspectives of integration at different layers)?
[James’ Reply: Yes. Think of these ideas as a lens. They work on any level you want to apply them. Since I am always thinking in these terms, I don’t find it necessary to SAY “integration testing” very often.]
Oliver says
Hi James,
My 2c…
I haven’t yet had the time to read all the responses from #1, so I’m not sure if anyone has touched on this. When I started contemplating how to define when an integration occurs, it bothered me that there was this talk about complex and simple integrations and such. That seems like a difficult boundary to define. So I looked at it from the top-down view, the view of the “user”.
When an integration happens (i.e. I combine two or more things), the “user” (or the consumer of functionality) changes, and with it the scope of the functionality and its usefulness. That means my expectations change, risks change, value changes… and because of all that, my testing will change, and so will the purpose of my testing.
So an example: a search engine has (simplified) two parts: the frontend and the backend. Both can be tested exhaustively on their own. That would go a long way toward proving that, theoretically, they will work together (if you ignore hardware, protocols, and other technical detail). But what it would ignore is that the user has changed.
The user for the backend is the frontend (maybe also an API of sorts). So the questions answered by testing are of the type “I have sent a valid request. Do I get a valid answer?”, “If I send request X can I assert I get the answer Y?”,… This is all fine and dandy but now comes integration. The user changes.
[James’ Reply: This applies for one kind of integration, but not for all. In an integration, the parts may be changed beyond recognition, not just the user. And then there is integration where the user does not change at all, such as a change to a unit that depends upon a corresponding change in a totally different unit. Here there is integration risk (will they work together) but no change of user.]
The questions from above might still be valid in some way but my user now is the actual customer. He/she is really interested in “When I request something from the frontend do I get all the data displayed that I care about?”, “Does it contain the information I wanted/expected?”, “Is the style/design one that I can efficiently digest?”,…
I think looking at integration from a user perspective makes it easy to know when an integration has occurred. If you have a very loosely (or not) coupled integration you would notice because the user would not change or rather the questions to the system would not change. They would remain on the same level as before integration.
[James’ Reply: If I change the server hardware, isn’t that an integration issue? How has the end user changed in that case?]
If we look at the phases of testing that are bandied about in most projects, like unit testing, functional testing, system testing, and integration testing (and others), they are just different levels of integration, and testers take on the roles of different users. I think what we colloquially tend to mean by integration testing is that we have reached a stage where we see a system coming together as a “whole”. This “whole” is relatively arbitrary and is only defined in the project context. It is more a certain point that carries importance or value for some reason.
Thing is that the example you give above doesn’t quite fit my view here. The user didn’t change. The environment did. Still thinking on that one….
Back to responses on #1.
Cheers Oliver
Oliver says
Hi,
Sorry I have trouble with the below.
[… the parts may be changed beyond recognition, not just the user. And then there is integration where the user does not change at all, such as a change to a unit that depends upon a corresponding change in a totally different unit.]
Can you expand a bit or note an example? I’m drawing a blank.
[James’ Reply: Let’s say I want to join two parts together, and I do that by moving some code from B into A and some code from A into B, and then changing the interface. What if I integrate by completely rewriting A and B into a brand new C, with bits and pieces of code intermingled from the original pieces? That’s integration, but where is the “new user”?
And how does the “user” change when two parts are changed without changing any interface or orientation of either of the parts?]
As for your second comment, a hardware change is an integration feature and would be a test that is part of the user scope at that integration level. But it would not be for the level below/before. The user doesn’t change in that scenario. “User” here is just a word for a mode. There might be different users per integration, with different scope focus. But for each, the scope would change from one integration level to the next, or even become irrelevant.
[James’ Reply: I don’t understand why you are thinking of integration this way. Why not think of it in a simple way: putting stuff together. What you are saying looks like a special case of my more general view. If something changes at a low level, we may test that at a high level, and that may fairly be called integration testing (although I would be more likely to call it system testing).]
So in my example that would be the question “Can I access and use the search engine on a Mac and on Windows?” or “Can the whole system run under a 100Mbit network as well as under a 1Gb network?”. That is the same integration level, but it might be irrelevant to the integration level before.
My current view is that if the “user”/”scope” doesn’t change you don’t have an integration.
[James’ Reply: Okay, but how can you defend that definition? It doesn’t seem to be consistent with the dictionary. I am not necessarily opposed to being inconsistent with the dictionary, but there should be a good reason for that.]
Oliver says
“[James: Let’s say I want to join two parts together, and I do that by moving some code from B into A and some code from A into B, and then changing the interface. What if I integrate by completely rewriting A and B into a brand new C, with bits and pieces of code intermingled from the original pieces? That’s integration, but where is the “new user”?]”
In my model what you describe would cease to be an integration. To have an integration you need communication. Now you would argue that there is communication within a module.
[James’ Reply: No, I would argue that to have integration you have to put things together– a dynamic event– which may result in a profound change to those parts. I’m concerned about the risk that comes from building things by the process of accretion and refactoring. It sounds like you are tackling integration from a static perspective: analyzing a product as a set of communicating parts. From my point of view that is part of integration but not the only part.]
If you go to that level you again have users with different scopes at different integration levels. But you have made a mode change in your example. You changed the communication from a protocol between two programs to intra-program communication. This is a fundamental change and will bring an equally fundamental change in what is meant by a “user”. (Not saying that it isn’t a valid example.)
[James’ Reply: So you’re saying you define integration differently than I do. That just leaves me wondering why you choose to see it that way. One reason I see it the way I do is so that when a developer says he “integrated” some things together, I can productively test in that situation, and that means being aware of all of the things he might have done to effect what he calls “integration.” In my experience, people who say the word “integration” may mean lots of different things, so I’m looking for an umbrella concept that is reasonably compatible with common usage, and would not require people to change their usage much in order to adopt my proposal, yet has enough sharpness and structure to be a useful heuristic.]
“And how does the “user” change when two parts are changed without changing any interface or orientation of either of the parts?]”
The user doesn’t change at the integration level. The user scope at the part level would.
By the comments on article #1 I see that the issue is defining where integration takes place and how to define what an integration is. I sidestep that by defining it by the change in user scope. I now only have to define what the scope of the user is at each level. You can slice and dice levels as seems practical, but if the user doesn’t change you don’t have an integration.
Say you’re testing a SOAP API, and you “integrate”, and you test the same SOAP API with the same or similar data (i.e. the same scope); then you haven’t integrated anything (or you’re doing it wrong 😉).
But maybe I am barking up the wrong tree.
Idea… what if I replace the word “user” with the word “risk”? I think risk is a bit restrictive, but that’s just gut feel. I’d need to think about that a bit longer, i.e. define more clearly what I mean by “user”.
Sean McErlean says
Is it necessarily possible to determine the clash in your example by unit testing? If the dependency is that A and B rely on differing but incompatible versions of a library or runtime, they will happily run forever on separate computers. It is only if you move them to run on the same environment that there will be a problem. It’s not a problem for the unit that you can’t have two versions of the same library on a box, just for the integrated software.
[James’ Reply: I don’t know if it is necessarily possible, but I don’t see that it is definitely impossible. More that it seems expensive to try.]
This fits in with something else that is nagging at me. It’s not quite emergent behaviour. It’s the idea that in some real sense the product is the integration and not the parts. It’s not just that stability emerges from how you arrange the legs of the tripod, it’s that the tripod *is* the arrangement of the legs. That may be the most appropriate place to reason about and assess the product.
Flaws or improvements might only become apparent when considering it at this level, or there might be some defect or weakness in the arrangement itself. Take the two units on different library versions that clash – assume it prevents startup. You could modify the units to solve the problem. Or you could just move the units into different environments. In my experience, that tends to be considered a workaround. But who’s to say it is? It might be the most sensible option for a given product at a given point in time. In both cases you have modified the product in response to testing so that you have a working overall system.
Sean McErlean says
[James’ Reply: I don’t know if it is necessarily possible, but I don’t see that it is definitely impossible. More that it seems expensive to try.]
I think it may actually be impossible, at least under some circumstances. It’s this bit:
“1. Is composed of parts:
…that come from differing sources.
…that were produced for differing purposes.
…that were produced at different times.
…that have differing attributes.”
The answer to those questions defines the context a part was written for. If that context never considered the integration, then no amount of unit testing will ever uncover the problem.
[James’ Reply: Ah, now I get it.]
The consequence, I think, is that units that have never been integrated before, or units that are being integrated in new ways, become your highest source of risk when thinking about integration tests. That’s a fairly standard heuristic – something new is more likely to have problems than something old – but I think it applies more so here, since you might be actively discovering new requirements.
Not really a brilliant insight, but I was interested in the question of whether unit testing was theoretically sufficient, if you had infinite tests. I think I’m leaning towards no.
Connor Roberts says
James, are there any good time-management heuristics that we can use to determine how much time we should spend on a risk (actual/realized/has bitten us in the past) vs potential risks (possible but unrealized/we lack perspective on severity)?
[James’ Reply: We spend no time on an ACTUAL risk, strictly speaking. Once the risk is known, it’s not our problem. We spend all our time on potential risks (to understand if they are actual risks) or on non-risk-oriented activities that may help us uncover potential risks.
When you say something has bitten you in the past, that just means it is a strong potential risk for the present. If you know for sure this is a bug in the product, though, it is now a definite risk to occur in the field, rather than a potential.
So, I would reframe your question to be: how much time to spend on specific potential risks vs. time spent in general testing to uncover new and heretofore unknown potential risks. My general policy is 80/20.]
The best I have come up with so far is to show the test strategy to Product Management in advance. Each parent node within that strategy (e.g. in a mind-map format) will have a testing priority, and the PO can suggest time reallocation or approve at that time. Some would say this is for the purpose of CYA, but I am more interested in providing better testing.
Eric says
I’m not sure I agree with your definition of integration: “the process of constructing a product from parts”.
[James’ Reply: I wrote a long and detailed post about this, Eric. If you want to question it, then do me the courtesy of explaining your objection.]
What if I am integrating two products by using middleware? Perhaps system A needs information from system B and this is achieved in some form of messaging solution in a service broker. Is this integration or not by your definition?
[James’ Reply: Sounds like the two products and the middleware are a set of parts. Sounds like you are making one product out of them. You say you are integrating them, too. So, of course that is integration.]
I guess one could argue that the above example would create a new product C that consists of products A and B (now seen as parts), but that seems to me highly theoretical and would probably not be used in practice.
[James’ Reply: Would you like to make an argument to that effect, or did you write in just to register an arbitrary concern?]