Have you ever asked yourself what the most important quality of a good test engineer is? If you google the job requirements for a software tester, you’ll probably find that a tester should be proficient in tool X and technology Y. Are these the fundamental qualities of a tester? What happens two years from now, when these technologies and tools are replaced by newer ones? What happens if you change companies and your new company uses other tools and technologies? How proficient will you be as a tester if you can only work with certain tools and technologies? The fundamental qualities of a software tester, the skills that form the basis of their ability to test a software solution proficiently, are not even technology related. The objective of this article is to detail one of these skills: the ability to ask good questions based on an analysis of the thinking / reasoning process.
For the sake of example, let’s assume the system under test is a new feature, called newFeature. As a software tester, it is your responsibility to plan the test strategy for newFeature, discover as many relevant problems as possible and ensure that, when newFeature is deployed to the live environment, it has the best quality possible under the current conditions.
What you test and how you test newFeature depends very much on your reasoning process. This is the key idea here: thinking, the reasoning process. Whenever you develop the steps of your testing strategy or your test case development plan, you think, you reason. The way you think will eventually lead either to a high-quality test strategy with high-quality test cases that uncover the most relevant issues of newFeature, or to a low-quality test strategy with low-quality test cases that uncover only trivial issues.
To become very good at testing software, you have to learn to analyze your thinking process and to ask the most useful questions, so that your test strategy results in a meaningful and useful point of view.
All of your testing activity should have a purpose. Your purpose is your goal: what you are trying to accomplish with your testing activity. Take the time to define your purpose clearly, based on your motives and intentions. Even after defining it, check it periodically to be sure you are not deviating from what you intended to do. Sometimes (especially in iterative development workflows) your purpose might even change. Define your purpose by asking questions like:
- What is the purpose of testing the newFeature?
- What is the newFeature supposed to do?
- What is the final objective of developing tests for the newFeature?
- Should the process be reusable after testing of the newFeature is finished?
Testing as a process is an attempt to settle some questions. These questions can be general, like “Is the newFeature working without serious issues?”, or more specific, like “Will the newFeature also support Armenian characters, or do we have to build a transliteration service on top of it?”. For each test case or test scenario that you develop, state the questions clearly and precisely; maybe even express them in several ways to clarify their meaning. Go as far as breaking each question into sub-questions, until each question has a definite answer (this is where you will have to distinguish which questions are objective and which are a matter of opinion or require other points of view).
Define your questions by questioning them:
- What is the question I am trying to answer?
- What is/are the important question/questions embedded in this newFeature?
- Is my question complex? Can it be split in sub-questions?
- Am I asking this question based on technical reasons or ethical reasons?
- Can I further explain this question?
- How can I answer these questions?
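To make this concrete, the Armenian-characters question from earlier can be broken into sub-questions that each have a definite, testable answer. The sketch below is only an illustration: `process_text` is a hypothetical stand-in for whatever text-handling entry point newFeature exposes (here it simply echoes its input so the example is runnable), and each sub-question becomes one pytest-style test function:

```python
# Sketch: turning one broad question into sub-questions with definite answers.
# `process_text` is a hypothetical stand-in for newFeature's text handling;
# here it echoes its input so the example is self-contained and runnable.
def process_text(text: str) -> str:
    return text


def test_accepts_armenian_input():
    # Sub-question 1: does newFeature accept Armenian input without error?
    sample = "Բարեւ աշխարհ"  # "Hello world" in Armenian
    assert process_text(sample) == sample


def test_preserves_mixed_scripts():
    # Sub-question 2: does mixing Armenian and Latin corrupt either script?
    sample = "newFeature: Բարեւ"
    assert process_text(sample) == sample
```

Each function now answers yes or no; the broad question is settled by the set of answers, not by opinion.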
All testing activities are based on data and information. In order to develop useful test cases, restrict your claims to those based on actual information that you have. At the same time, search for information that opposes your suppositions as well as information that supports them (and make sure that this information is accurate and important). The test cases that uncover valuable holes in a system are those based not on assumption, but on real and supported information. This information is not necessarily documented or written in stone, so to speak. Information can include facts, data and evidence, but also experience, as long as it is accurate.
Gather your information by asking questions like:
- What information do I need to test the newFeature?
- What data are relevant to test the newFeature?
- Do I have enough or do I need to gather more information?
- Is the information that I have relevant to my previously defined purpose?
- What is the experience that convinced me of this? Can my experience be distorted?
- How can I know this data is accurate?
- Am I missing something? Have I left something out?
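Part of the data-accuracy question can even be automated: sanity-checking test data before building test cases on it. The sketch below is a minimal illustration; the field names (`user_id`, `age`) and the validation rules are hypothetical placeholders, not part of newFeature:

```python
# Sketch: sanity-checking test data before basing test cases on it.
# Field names and rules are hypothetical placeholders for newFeature's data.
def validate_record(record: dict) -> list[str]:
    """Return the problems found in a single test-data record."""
    problems = []
    if not record.get("user_id"):
        problems.append("missing user_id")
    age = record.get("age")
    if age is not None and not (0 <= age <= 130):
        problems.append(f"implausible age: {age}")
    return problems


def usable_records(records: list[dict]) -> list[dict]:
    """Keep only records that pass validation; the rest need investigation."""
    return [r for r in records if not validate_record(r)]
```

A record that fails validation is not thrown away silently: the returned problem list is itself information about the accuracy of your data source.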
The quality of your test cases and the efficiency of your test strategy rely on your ability to interpret the information that you have. All testing activities contain interpretations by which you draw conclusions and define your test cases. For this reason, take care to interpret only what the data implies, and always identify the assumptions that underlie your interpretations. In this way you can ensure that your interpretations logically follow from the data you already have.
Analyze your interpretations by asking questions like:
- What conclusion am I coming to?
- Did I interpret the information in a logical way?
- What other conclusions should I consider?
- Does my interpretation of the newFeature data make sense?
- How did I reach this conclusion?
- Can there be an alternative plausible conclusion?
- Considering the information available for the newFeature, is this the best possible conclusion?
Every test case that you develop is shaped by a concept or an idea that you have. After you have interpreted the available data, you have to identify the key points of newFeature and explain them clearly. By doing this you can identify test cases and alternative test cases for your testing framework. Be extremely clear about which concepts you are using, and use them in a justifiable and meaningful way.
Your concepts will define your testing scope, so make sure you ask the right questions:
- What idea am I using in my thinking? Is this idea bringing value to the newFeature?
- Is the test strategy complete? Can I make it even more specific?
- What is the main scope I am using in my test plan development?
- Are the principles used for the test strategy creation aligned with the testing scope for newFeature?
Until you have tested it, you can only assume that something is working (no matter the information received from documentation or developers); therefore, all testing is based on assumptions. Define your assumptions clearly and consider how they are going to shape your test cases. Make sure these assumptions are justified by data or information.
When defining your assumptions, ask questions like:
- Why am I assuming this?
- Am I assuming something that I shouldn’t?
- What assumption is leading to this test case?
- What is the documentation/regulation/developer assuming?
- Do I have predefined assumptions that determine the creation of this test case?
- Are my assumptions biased in any way?
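One lightweight way to keep assumptions from silently biasing a test case is to encode them as explicit precondition checks, so a broken assumption fails loudly instead of quietly shaping the result. In the sketch below, the configuration keys and values are hypothetical placeholders for whatever newFeature actually assumes:

```python
# Sketch: recording the assumptions behind a test case as explicit checks.
# The configuration keys and values are hypothetical placeholders.
ASSUMED_CONFIG = {
    "encoding": "utf-8",       # assumption: newFeature stores text as UTF-8
    "max_name_length": 255,    # assumption: names longer than 255 are rejected
}


def check_assumptions(actual_config: dict) -> list[str]:
    """Return the assumptions that no longer hold in this environment."""
    violations = []
    for key, assumed in ASSUMED_CONFIG.items():
        actual = actual_config.get(key)
        if actual != assumed:
            violations.append(f"assumed {key}={assumed!r}, but found {actual!r}")
    return violations


def test_name_length_limit():
    # Make the test's assumptions explicit before relying on them.
    actual_config = {"encoding": "utf-8", "max_name_length": 255}  # stand-in
    violations = check_assumptions(actual_config)
    assert not violations, f"test assumptions broken: {violations}"
    # ... the actual test of newFeature would follow here ...
```

When the environment drifts, the failure message names the broken assumption instead of producing a mysteriously wrong test result.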
Defining and implementing test cases has implications for your test coverage and for the quality of the newFeature, and of course consequences as well. Trace these implications by documenting test coverage and newFeature quality reports (from users, for example), and monitor the positive and negative consequences of your test suite (by logging newFeature performance, for example).
In order to effectively determine the implications of your test suite, ask questions like:
- If I decide to test “X”, what implications does this have on the quality of newFeature?
- If I decide not to test “X”, what consequences should I expect?
- What is likely to happen if I test “X” instead of “Y”?
- With test_case_123, am I suggesting that this functionality is problematic?
- How significant is this test case for the purpose of the test suite?
- What do I imply by the fact that there are N times more tests implemented for this functionality of newFeature?
The Results (Tester’s point of view)
All testing activity is finished from some point of view (the tester’s) at a specific time. Having a point of view basically means viewing the newFeature in a certain way (the newFeature is unstable / stable / good enough / satisfying requirements). It is also important to understand the limitations of your point of view and to consider other points of view. Only in this way will the quality of the newFeature continuously improve over time.
When defining your point of view related to the newFeature, ask yourself the following type of questions:
- How am I analyzing the newFeature test results?
- Can I analyze the results in any other way?
- What exactly was the focus of the test suite?
- Is my point of view ignoring something?
- Have I considered the way developers/product managers/users will view the newFeature?
- Is any other viewpoint making more sense than the one I have?
- What is the viewpoint of the person who developed newFeature? Did I consider it?
Analyzing the way you reason is a habit, and with time it becomes automatic and not at all time consuming. Although it requires repetition, in time the ability to reason logically will pay off, and the test cases you develop will be efficient, precise and accurate.