This section describes the results of the survey. Section 4.1 first defines a set of profiles, each representing a group of subjects, based on the experience reported in the profile characterisation form. Section 4.2 then presents the results with respect to the level of importance of TMMi practices according to each profile.
4.1 Profile definition
Figure 6 summarises the level of knowledge of both the subjects and their institutions according to the profile characterisation questions. The charts are described below.
a) Experience: this chart shows that 46 % of subjects (17 out of 37) have more than three years of experience in testing, either in industry or in academia; only 11 % (4 out of 37) have less than one year of experience.
b) Testing Process: this chart shows that 65 % of subjects (24 out of 37) work (or have worked) in a company with an officially implemented (i.e. explicit) testing process. Of the remaining subjects, 22 % (8 out of 37) have never worked in a company with an explicit testing process, while around 14 % (5 out of 37) did not answer this question.
c) Certification: this chart shows that 59 % of subjects (22 out of 37) work (or have worked) in a company certified against a software process maturity model (e.g. CMMI, MR-MPS). The remaining subjects have never worked in a certified company (24 %) or did not answer this question (16 %).
d) Type of Certification: of the subjects who reported working (or having worked) in a certified company – chart (c) of Fig. 6 – half (i.e. 11 subjects) are (or were) in a CMMI-certified company, while the other half are (or were) in an MR-MPS-certified company.
e) TMMi: this chart reveals that only 8 % of subjects (3 out of 37) have had practical experience with TMMi. In addition, 59 % of subjects (22 out of 37) stated that they have only theoretical knowledge of TMMi, whereas 32 % (12 out of 37) do not know this reference model.
Based on the results depicted in Fig. 6, we concluded that the sample is relevant with respect to the goals established for this work. This conclusion relies on the fact that, amongst the 37 subjects who fully answered the questionnaire, (i) 89 % have good to high knowledge of software testing (i.e. more than one year of experience); (ii) 65 % work (or have worked) in companies that officially have a software testing process; (iii) 59 % work (or have worked) in a CMMI- or MR-MPS-certified company; and (iv) 67 % are knowledgeable of TMMi, at least in theory. For the CMMI-certified companies, the maturity levels vary from 2 to 5 (i.e. from Managed to Optimising); for the MR-MPS-certified companies, they range from G to E (i.e. from Partially Managed to Partially Defined).
To analyse the results regarding the level of importance of TMMi practices according to the subjects’ personal opinion, we defined three different profiles as follows:
- Profile-Specialist: composed of 12 subjects who have at least three years of experience with software testing and work (or have worked) in a company with a formally implemented software testing process.
- Profile-MR-MPS: composed of 20 subjects who are knowledgeable of MR-MPS and use this reference model in practice.
- Profile-TMMi: composed of 25 subjects who are knowledgeable of TMMi.
The choice of an MPS.BR-related profile was motivated by the close relationship between this reference model and the context of Brazilian software companies. Furthermore, these three specific profiles were defined because we believe the associated subjects' tacit knowledge is highly representative. Note that the opinion of CMMI experts was not overlooked; rather, their opinions are spread over the analysed profiles. Finally, we also considered the answers of all subjects, in a group named Complete Set.
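For concreteness, the sketch below shows how such profiles can be derived from the characterisation answers. The record fields (e.g. years_testing, explicit_process) are hypothetical, since the actual encoding of the questionnaire data is not part of this text; only the selection criteria come from the profile definitions above.

```python
# Hypothetical subject records; one dict per respondent (37 in the survey).
subjects = [
    {"id": 1, "years_testing": 5.0, "explicit_process": True,
     "knows_mr_mps": True, "uses_mr_mps": True, "knows_tmmi": True},
    {"id": 2, "years_testing": 0.5, "explicit_process": False,
     "knows_mr_mps": False, "uses_mr_mps": False, "knows_tmmi": False},
    # ... remaining respondents ...
]

# Profile-Specialist: >= 3 years of testing experience and an explicit process.
profile_specialist = [s for s in subjects
                      if s["years_testing"] >= 3 and s["explicit_process"]]
# Profile-MR-MPS: knows MR-MPS and uses it in practice.
profile_mr_mps = [s for s in subjects
                  if s["knows_mr_mps"] and s["uses_mr_mps"]]
# Profile-TMMi: knows TMMi (theoretical knowledge suffices).
profile_tmmi = [s for s in subjects if s["knows_tmmi"]]
# Complete Set: every respondent.
complete_set = subjects
```

Note that the profiles are not mutually exclusive: since 12 + 20 + 25 exceeds the sample size of 37, the same respondent may satisfy the criteria of two or even all three profiles.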
4.2 Characterising the importance of TMMi practices
As previously mentioned, the results described herein are based on the three profiles (namely, Profile-Specialist, Profile-MR-MPS and Profile-TMMi) as well as on the whole survey sample. Within each profile, we identified the practices that were ranked as mandatory by most subjects (a sketch of this selection criterion is shown below). The Venn diagram depicted in Fig. 7 includes all mandatory practices according to each profile. The practices are represented by numbers and are listed in the table shown together with the diagram.
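A minimal sketch of the selection criterion, under the assumption that it is a simple majority rule (the text does not fix the exact threshold):

```python
def mandatory_practices(rankings):
    # `rankings` maps a practice number to the list of importance levels
    # (1 to 4) assigned by the subjects of one profile. A practice is
    # selected when more than half of the subjects ranked it level 4.
    return {practice for practice, levels in rankings.items()
            if sum(1 for level in levels if level == 4) > len(levels) / 2}

# Illustrative data only: practice #3 is selected, practice #5 is not.
example = {3: [4, 4, 4, 2], 5: [2, 3, 4, 2]}
print(mandatory_practices(example))  # -> {3}
```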
In Fig. 7, the practices with a grey background are also present in the set obtained solely from the statistical analysis described in Section 3.3. As the reader can notice, this set of practices appears in the intersection of all profiles. Furthermore, practices with bold labels (e.g. practices 5, 7, 22 and 31) are present in the set intended to compose a lean testing process (this is analysed in detail in Section 4.3). Next we describe the results depicted in Fig. 7.
- Complete Set: taking the full sample into account, 31 practices were assigned importance level 4 (i.e. ranked as mandatory) by most of the subjects. The majority of them are also present in the profile-specific sets, as shown in Fig. 7. The reduced set of practices to compose a lean testing process includes these 31 items, complemented with practices #5 and #7 (the justification is presented in Section 4.3).
- Profile-Specialist: 49 practices were ranked as mandatory by most subjects within this profile. Of these, 27 practices appear in the intersection with at least one other set.
- Profile-MR-MPS: subjects of this profile ranked 33 practices as mandatory, 30 of which appear in intersections with the other profiles; only 3 practices are considered mandatory exclusively by subjects of this profile.
- Profile-TMMi: for those who know TMMi, 42 practices are mandatory, 41 of which appear in intersections with the other profiles.
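The groupings in Fig. 7 amount to plain set operations over the four per-profile sets, as the sketch below illustrates. The practice numbers here are placeholders; the real assignments are those listed in the figure.

```python
# Placeholder contents; in the paper these sets hold 31, 49, 33 and 42
# practice numbers, respectively.
complete_set = {1, 2, 3}
specialist = {1, 2, 4}
mr_mps = {1, 3, 5}
tmmi = {1, 2, 3, 4}

# Core of the Venn diagram: practices mandatory for every group.
core = complete_set & specialist & mr_mps & tmmi
# Practices mandatory exclusively for Profile-MR-MPS (3 in the paper).
only_mr_mps = mr_mps - (complete_set | specialist | tmmi)
print(sorted(core), sorted(only_mr_mps))
```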
4.3 The obtained streamlined process
Before defining the intended reduced set of practices, we analysed the results of the second questionnaire, which was designed to resolve some dependencies observed in the initial dataset (i.e. the 37 analysed answers). The dependencies were identified by Höhn (2011), who pointed out practices that must be implemented before others. Based on the feedback of 14 subjects, all included in the initial sample, we were able to resolve the observed dependencies, which relate to the following practices: Analyse product risks, Define the test approach, and Define exit criteria.
Regarding Analyse product risks (practice #3 in Fig. 7), the subjects were asked whether this task should be done as part of the testing process. We received 12 positive answers, indicating that this practice is relevant, for example, to support the prioritisation of test cases. In fact, the Analyse product risks practice was already present in the reduced set of practices identified from the first part of the survey. Nevertheless, we wanted to make sure the subjects clearly understood that it should be performed as part of the testing process.
The subjects were also asked whether a testing approach could be considered fully defined once the product risks had been analysed and the items and features to be tested had been defined. This question was motivated by the fact that Define the test approach (practice #5 in Fig. 7) was not present in the reduced set of practices derived from the initial questionnaire. For this question, we received 10 negative answers; that is, the testing approach cannot be considered fully defined merely by analysing product risks and defining the items and features to be tested. Therefore, we included the Define the test approach practice in the final set, thus resolving a dependency reported by Höhn (2011).
The third question of the second questionnaire addressed the Define exit criteria practice (#7 in Fig. 7), since it was not identified as mandatory after the first data analysis. Subjects were asked whether it is possible to run a testing process without explicit exit criteria (i.e. information about when testing should stop). Based on 9 negative answers (i.e. 65 %), this practice was also included in the reduced set.
This second analysis helped us to either clarify or resolve the aforementioned dependencies amongst TMMi practices. In the next sections we analyse and discuss the survey results. For this, we adapted Höhn’s mind map (Höhn 2011) (Figs. 8, 9, 10, 11 and 12), according to each phase of a generic testing process. Practices highlighted in grey are identified as mandatory and should be implemented in any testing process.
Our analysis was also supported by the IEEE-829 Standard for Software and System Test Documentation (IEEE 2008). This standard presents a model for a test plan and clearly indicates what the plan should contain. Maturity models state what should be done to complete a phase, but do not indicate what must be included in the documentation.
4.3.1 Planning
Planning the testing activity is definitely one of the most important process phases. It comprises the definition of how testing will be performed and what will be tested; it enables proper activity monitoring, control and measurement. The derived test plan includes details of the schedule, team, items to be tested, and the approach to be applied (IEEE 2008). In TMMi, planning-related practices also comprise non-functional testing, definition of the test environment and peer reviews. In total, 29 practices are related to planning (see Fig. 8), spread over the nine specific goals (labelled with SG in the figure).
To achieve these goals, the organisation must fulfil all the practices shown in Fig. 8. However, our results show that only 8 of these 29 practices are mandatory according to the Complete Set group. Moreover, according to Höhn's analysis, TMMi has internal dependencies amongst practices, some of which relate to the Planning phase; 2 further practices are therefore necessary to resolve such dependencies (this is discussed below). The final set of 10 mandatory practices for the Planning phase is shown with a grey background in Fig. 8.
Amongst these practices, Identify product risks and Analyse product risks demonstrate the relevance of evaluating product risks. Their output plays a key role in the definition of the testing approach and in test case prioritisation. The product risks consist of a list of potential problems that should be considered while defining the test plan. Figure 7 shows that these two practices were ranked as mandatory across all profiles.
According to the IEEE-829 Standard for Software and System Test Documentation (IEEE 2008), a test plan should include: a list of what will and will not be tested; the approach to be used; the schedule; the testing team; test classes and conditions; exit criteria; etc. In our survey, the Identify items and features to be tested, Establish the test schedule and Plan for test staffing practices were ranked as mandatory by most subjects. They are directly related to Establish the test plan, and address the definition of most of the items listed in the IEEE-829 Standard. This is complemented by Define exit criteria, selected after the dependency resolution. This evinces the coherence of the subjects' choices of mandatory practices with respect to the Planning phase.
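As an illustration, the IEEE-829 items above map naturally onto a simple document skeleton. The sketch below is ours, not part of the standard: field names and values are illustrative, since IEEE-829 prescribes what a test plan contains, not how it is represented.

```python
# Minimal test plan skeleton covering the IEEE-829 items cited above,
# plus the environment needs handled by the Planning-phase practices.
test_plan = {
    "items_to_test": ["login module", "report generator"],
    "items_not_tested": ["legacy data import"],
    "approach": "risk-based; test cases prioritised by product risk",
    "schedule": {"start": "2015-03-01", "end": "2015-03-20"},
    "staffing": ["test lead", "tester A", "tester B"],
    "test_classes_and_conditions": ["functional", "boundary values"],
    "exit_criteria": "all high-risk test cases executed; no open blockers",
    "environment_needs": ["staging server", "anonymised test data"],
}
```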
The Planning phase also includes practices that address the definition of the test environment. In this regard, Elicit test environment needs and Analyse the test environment requirements were ranked as mandatory and are clearly inter-related.
To conclude the analysis of the Planning phase, note that not all TMMi specific goals are achieved with the execution of this selection of mandatory practices alone. Nevertheless, the selected practices are able to yield a feasible test plan and make the process clear, managed and measurable.
After Planning, the next phase is Test Case Design. The input to this phase is the test plan, which includes some essential definitions such as the risk analysis, the items to be tested and the adopted approach.
4.3.2 Test case design
Figure 9 summarises the results of our survey for this phase, based on the set of TMMi practices identified by Höhn (2011). As the reader can notice, only two practices were ranked as mandatory by most subjects of the Complete Set group: Identify and prioritise test cases and Identify necessary specific test data (both shown with a grey background in Fig. 9).
According to the IEEE-829 Standard, the test plan already encompasses some items related to test case design, such as the definition of test classes and conditions (IEEE 2008). Because of this, it is likely that some subjects consider that the test plan itself fulfils the needs of test case design, so most of the practices in this phase are not strictly necessary. For instance, if we considered solely Profile-MR-MPS, none of the practices within this phase would appear in the results (see Fig. 7 to double-check this finding). On the other hand, subjects of the other profiles consider that further practices of this phase should be explicitly performed in a testing process. For instance, subjects of the Profile-Specialist profile ranked Identify and prioritise test conditions, Identify necessary specific test data and Maintain horizontal traceability with requirements as mandatory. For Profile-TMMi subjects, Identify and prioritise test cases and Maintain horizontal traceability with requirements should be mandatory.
From these results, we conclude that there is uncertainty about what should actually be done during the test case design phase. Moreover, this uncertainty may also indicate that test cases are not always documented separately from the test plan. From our observations in industry, a common practice is not to have a distinct phase for test case design, generally due to time constraints; the planning phase usually includes the design of tests. It is therefore reasonable that the test plan itself includes the test cases, the testing approach (and its underlying conditions) and the exit criteria. Thus, the two selected practices for this phase complement what is needed to compose a feasible, streamlined testing process.
4.3.3 Setup of test environment and data
As discussed in Section 4.3.1, in the Planning phase test environment requirements are identified and described. The Setup of Test Environment and Data phase addresses the prioritisation and implementation of such requirements. Figure 10 shows the TMMi specific goals and practices for this phase.
According to TMMi, Develop and prioritise test procedures consists in determining the order in which test cases will be executed. This order is defined in accordance with the product risks. The classification of this practice as mandatory is aligned with the practices selected for the Planning phase, some of which relate to risk analysis. Another practice ranked as mandatory is Develop test execution schedule, which is directly related to the prioritisation of test case execution. The other two practices (i.e. Implement the test environment and Perform test environment intake test) address the implementation of the environment and the verification that it is operational, respectively. The conclusion regarding this phase is that these four practices are sufficient to create an adequate environment to run the tests.
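A minimal sketch of the risk-driven ordering described above; the test case names and risk scores are invented for illustration:

```python
# Each test case carries the risk score of the product area it covers.
test_cases = [
    {"name": "TC-01", "risk": 0.9},  # high-risk feature: run first
    {"name": "TC-02", "risk": 0.3},
    {"name": "TC-03", "risk": 0.7},
]

# Develop test execution schedule: higher-risk cases come first.
schedule = sorted(test_cases, key=lambda tc: tc["risk"], reverse=True)
print([tc["name"] for tc in schedule])  # -> ['TC-01', 'TC-03', 'TC-02']
```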
4.3.4 Execution and evaluation
The next phase of a generic testing process consists of test case execution and evaluation. At this point, the team runs the tests and, when applicable, creates the defect reports. The evaluation aims to ensure the test goals were achieved and to report the results to stakeholders (Hass 2008). For this phase, Höhn (2011) identified 13 TMMi practices, which are related to test execution goals, management of incidents, non-functional test execution and peer reviews. This can be seen in Fig. 11. As the reader can notice, only four practices were not ranked as mandatory. This makes evident the relevance of this phase, since it encompasses activities related to test execution and the management of incidents.
The results summarised in Fig. 11 include practices that concern the execution of non-functional tests. However, in the Planning and Test Case Design phases, the selected practices do not address the definition of such tests. Although this may sound incoherent, it may indicate that, from the planning and design viewpoints, there is no clear separation between functional and non-functional testing. The separation is a characteristic of the TMMi structure, but for the testing community these two types of testing are performed in conjunction, since the associated practices as described in TMMi are very similar in both cases.
4.3.5 Monitoring and control
The execution of the four phases of a generic testing process yields a substantial amount of information. Such information needs to be organised and consolidated to enable rapid status checking and, if necessary, corrective actions. This is addressed during the Monitoring and Control phase (Crespo et al. 2010).
Figure 12 depicts the TMMi practices with respect to this phase. Again, the practices ranked as mandatory by most of the subjects are highlighted in grey. Note that there is consensus amongst all profile groups (i.e. Profile-Specialist, Profile-MR-MPS, Profile-TMMi and the Complete Set) about what is mandatory regarding Monitoring and Control. This can be cross-checked in Fig. 7.
Performing the Conduct test progress reviews and Conduct product quality reviews practices means keeping track of the testing process status and of the product quality, respectively. Monitor defects addresses the gathering of metrics concerning incidents (also referred to as issues), while Analyse issues, Take corrective action and Manage corrective action are clearly inter-related practices. The two other practices considered mandatory within this phase are Co-ordinate the availability and usage of the test environments and Report and manage test environment incidents. Both are important since either unavailability of, or incidents in, the test environment may compromise the activity as a whole.
As a final note with respect to the survey results, we emphasise that the subjects were not provided with any information about dependencies amongst TMMi practices. Moreover, we were aware that the inclusion of practices not ranked as mandatory by the majority might have created new broken dependencies. Nevertheless, the analysis of the final set of mandatory practices shows that all dependencies are resolved; the sketch below illustrates such a check.
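A minimal sketch of this consistency check, assuming the dependencies are stored as prerequisite pairs. The pairs below involve only the three practices discussed in Section 4.3; the complete list comes from Höhn (2011) and is not reproduced here.

```python
# practice -> practices that must be implemented before it (placeholders).
dependencies = {
    "Define the test approach": ["Analyse product risks"],
    "Define exit criteria": ["Define the test approach"],
}

final_set = {"Analyse product risks", "Define the test approach",
             "Define exit criteria"}

# A dependency is broken when a selected practice requires one not selected.
broken = {(practice, prerequisite)
          for practice, prerequisites in dependencies.items()
          if practice in final_set
          for prerequisite in prerequisites
          if prerequisite not in final_set}
assert not broken, f"unresolved dependencies: {broken}"
```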