Similarity testing for role-based access control systems
Journal of Software Engineering Research and Development, volume 6, Article number: 1 (2018)
Abstract
Context
Access control systems demand rigorous verification and validation approaches; otherwise, they can end up with security breaches. Finite state machine (FSM)-based testing has been successfully applied to RBAC systems and yields effective test suites, but at a high cost. To deal with the cost of these test suites, test prioritization techniques can be applied to improve fault detection along test execution. Recent studies have shown that similarity functions can be very efficient at prioritizing test cases. This technique is named similarity testing and relies on the hypothesis that resembling test cases tend to have similar fault detection capabilities. Thus, there is no gain from executing similar test cases, and the fault detection ratio can be improved if test diversity increases.
Objective
In this paper, we propose a similarity testing approach for RBAC systems named RBAC similarity and compare it to simple dissimilarity and random prioritization. RBAC similarity combines the dissimilarity degree of pairs of test cases with their relevance to the RBAC policy under test to maximize test diversity and the coverage of the policy's constraints.
Method
Five RBAC policies and fifteen test suites were prioritized using each of the three test prioritization techniques and compared using the Average Percentage Faults Detected metric.
Results
Our results show that combining the dissimilarity degree with the relevance of a test case to the RBAC policy, as done in RBAC similarity, can be more effective than random prioritization and simple dissimilarity by themselves in most cases.
Conclusion
The RBAC similarity criterion is suitable for prioritizing test suites generated from finite state machine models specifying RBAC systems.
Introduction
Access control is one of the major pillars of software security. It is responsible for ensuring that only intended users can access data and that users receive only the permissions required to accomplish a task (Ferraiolo et al. 2007). In this context, the Role-Based Access Control (RBAC) model has been established as one of the most significant access control paradigms. In RBAC, users receive privileges through role assignments and activate them during sessions (ANSI 2004). Despite its simplicity, mistakes can occur during development and lead to faults or even security breaches. Therefore, software verification and validation become necessary.
Finite State Machines (FSMs) have been widely used for model-based testing (MBT) of reactive systems (Broy et al. 2005). Previous investigations using random FSMs have shown that recent test generation methods (e.g., SPY (Simão et al. 2009)), compared to traditional methods (e.g., W (Chow 1978) and HSI (Petrenko and Bochmann 1995)), tend to rely on fewer and longer test cases, reducing the overall test cost without impacting test effectiveness (Endo and Simao 2013). In the RBAC domain, although very effective and less costly, recent test generation methods still tend to output large numbers of test cases (Damasceno et al. 2016). Thus, there is a need for additional steps during software testing, such as test prioritization (Mouelhi et al. 2015).
Test case prioritization aims at finding an ideal ordering of test cases so that maximum benefits can be obtained, even if test execution is prematurely halted at some arbitrary point (Yoo and Harman 2012). A test prioritization criterion that has recently shown very promising results is similarity testing (Cartaxo et al. 2011; Bertolino et al. 2015). In similarity testing, we assume that resembling test cases tend to cover identical parts of an SUT and have equivalent fault detection capabilities, so no additional gain can be expected from executing them together. This concept has been investigated in the MBT (Cartaxo et al. 2011), access control testing (Bertolino et al. 2015), and software product line (SPL) testing (Henard et al. 2014) domains, but it has never been applied to RBAC. Moreover, since the fault detection effectiveness of a test criterion is strongly related to its ability to represent faults of a specific domain (Felderer et al. 2015), similarity testing is not necessarily effective in the RBAC domain.
In this paper, we investigate similarity testing for RBAC systems. A similarity testing criterion named RBAC similarity is introduced and compared to the random prioritization and simple dissimilarity criteria using the Average Percentage Faults Detected (APFD) metric, five RBAC policies, and three FSM-based testing methods. Our results show that RBAC similarity makes test prioritization more suitable to the specificities of the RBAC model and achieves higher APFD values than simple dissimilarity and random prioritization in most cases.
This paper is organized as follows: Section 2 presents the theoretical background of our investigation. Sections 2.1 to 2.3 give a brief introduction to FSM-based testing. The RBAC model and an FSM-based testing approach for RBAC systems are introduced in Sections 2.4 and 2.5. The test case prioritization problem and similarity testing are discussed in Section 2.6. Section 3 details our proposed similarity testing criterion, named RBAC similarity. Section 4 describes the experiment we performed to compare RBAC similarity to the simple dissimilarity and random prioritization techniques. Section 5 analyzes and discusses the results obtained from our experiments. The threats to validity and final remarks are presented in Sections 6 and 7, respectively.
Background
This section introduces the background behind our similarity testing approach for RBAC systems. First, we present the concept of FSM-based testing and the three test generation methods (i.e., W, HSI, and SPY) considered in this study. Second, the RBAC model and an FSM-based testing approach for RBAC systems are described. Finally, the test case prioritization problem and the specificities of similarity testing are detailed.
Finite state machine based testing
A Finite State Machine (FSM) is a hypothetical machine composed of states and transitions (Gill 1962). Formally, an FSM can be defined as a tuple M=(S,s_{0},I,O,D,δ,λ), where S is a finite set of states, s_{0}∈S is the initial state, I is the set of input symbols, O is the set of output symbols, D⊆S×I is the specification domain, δ:D→S is the transition function, and λ:D→O is the output function. An FSM always has a single current (origin) state s_{ i }∈S, which changes to a destination (tail) state s_{ j }∈S by applying an input x∈I, where s_{ j }=δ(s_{ i },x), returning an output y=λ(s_{ i },x). An input x is defined for state s if there is a transition consuming x in state s (i.e., (s,x)∈D); such a transition is said to be defined. An FSM is complete if all inputs are defined for all states; otherwise, it is partial. Figure 1 depicts an example of a complete FSM with three states {q_{0},q_{1},q_{2}}.
A sequence α=x_{1}x_{2}...x_{ n }∈I^{*} is defined for state s∈S if there are states s_{1},s_{2},...,s_{n+1} such that s=s_{1} and δ(s_{ i },x_{ i })=s_{i+1} for all 1≤i≤n. The concatenation of two sequences α and ω is denoted by αω. A sequence α is a prefix of a sequence β, denoted by α≤β, if β=αω for some input sequence ω. The empty sequence is denoted by ε, and a sequence α is a proper prefix of β, denoted by α<β, if β=αω for some ω≠ε. The set of prefixes of a set T is defined as pref(T)={α ∣ ∃β∈T such that α≤β}; if T=pref(T), then T is prefix-closed.
The transition and output functions can be lifted to input sequences as usual; for the empty sequence ε, we have that δ(s,ε)=s and λ(s,ε)=ε. For a sequence αx defined for state s, we have that δ(s,αx)=δ(δ(s,α),x) and λ(s,αx)=λ(s,α)λ(δ(s,α),x). A sequence α=x_{1}x_{2}...x_{ n }∈I is a transfer sequence from s to s_{n+1} if δ(s,α)=s_{n+1}, thus s_{n+1} is reachable from s. If every state of an FSM is reachable from s_{0} then it is initially connected and if every state is reachable from all states, it is strongly connected.
The symbol Ω(s) denotes all input sequences defined for a state s and Ω_{ M } abbreviates Ω(s_{0}), which refers to all defined input sequences for an FSM M. A separating sequence for two states s_{ i } and s_{ j } is a sequence γ such that γ∈Ω(s_{ i })∩Ω(s_{ j }) and λ(s_{ i },γ)≠λ(s_{ j },γ). In addition, if γ is able to distinguish every pair of states of an FSM, it is a distinguishing sequence. Considering the FSM presented in Fig. 1, the sequence a is a separating sequence for states q_{0} and q_{1} since λ(q_{0},a)=0 and λ(q_{1},a)=1.
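As an illustration, the FSM of Fig. 1 and the lifted transition and output functions can be sketched in Python. Note that only δ(q0,a)=q1, δ(q0,b)=q2, λ(q0,a)=0, and λ(q1,a)=1 are fixed by the text; the entries marked "assumed" are arbitrary placeholders for illustration.

```python
# Sketch of the complete FSM of Fig. 1 as two transition tables.
# Only delta(q0,a)=q1, delta(q0,b)=q2, lambda(q0,a)=0 and lambda(q1,a)=1
# are stated in the text; entries marked "assumed" are placeholders.
delta = {("q0", "a"): "q1", ("q0", "b"): "q2",
         ("q1", "a"): "q0", ("q1", "b"): "q2",   # assumed
         ("q2", "a"): "q2", ("q2", "b"): "q0"}   # assumed
lam = {("q0", "a"): "0", ("q0", "b"): "0",       # lambda(q0,b) assumed
       ("q1", "a"): "1", ("q1", "b"): "0",       # assumed
       ("q2", "a"): "0", ("q2", "b"): "1"}       # assumed

def delta_seq(s, seq):
    """Transition function lifted to input sequences: delta(s, eps) = s."""
    for x in seq:
        s = delta[(s, x)]
    return s

def lam_seq(s, seq):
    """Output function lifted to input sequences: lambda(s, eps) = eps."""
    out = []
    for x in seq:
        out.append(lam[(s, x)])
        s = delta[(s, x)]
    return "".join(out)
```

With this table, lam_seq("q0", "a") returns "0" while lam_seq("q1", "a") returns "1", so a is a separating sequence for q0 and q1, as stated above.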
Two FSMs M_{ S }=(S,s_{0},I,O,D,δ,λ) and M_{ I }=(S^{′},s_{0}^{′},I,O^{′},D^{′},δ^{′},λ^{′}) are equivalent if their initial states are equivalent. Two states s_{ i },s_{ j } are equivalent if ∀ α∈Ω(s_{ i })∩Ω(s_{ j }), λ(s_{ i },α)=λ^{′}(s_{ j },α). An FSM M may have a reset operation, denoted by r, which takes the machine to s_{0} regardless of the current state. An input sequence α∈Ω_{ M } starting with a reset symbol r is a test case of M. A test suite T consists of a finite set of test cases of M such that there are no α,β∈T where α<β. Prefixes α<β are excluded from a test suite since the execution of β implies the execution of α. The length of a test case α is represented by |α| and describes the cost of executing α plus the reset operation. The number of test cases of a test suite T also describes the number of resets of T and is denoted by |T|.
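The prefix-exclusion rule above (no α,β∈T with α<β) can be implemented with a simple quadratic sketch over string-encoded test cases:

```python
def remove_prefixes(test_suite):
    """Keep only maximal test cases: drop any test case that is a proper
    prefix of another one, since executing the longer test case also
    executes the prefix."""
    ordered = sorted(set(test_suite), key=len, reverse=True)
    kept = []
    for t in ordered:
        # t is redundant if some already-kept (longer) test starts with it
        if not any(k.startswith(t) for k in kept):
            kept.append(t)
    return kept
```

For example, in {aaa, aa, aba, ab}, the test cases aa and ab are proper prefixes of aaa and aba, respectively, and are removed.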
Mutation analysis in FSM-based testing
In FSM-based testing, given a specification M, the symbol I(M) denotes the set of all deterministic FSMs, variants of M, with the same inputs as M and for which all sequences in Ω_{ M } are defined. The set I(M) is called the fault domain for M; these variants of M are named mutants and can be obtained either manually or automatically by performing simple syntactic changes using mutation operators (Andrews et al. 2006). Given m≥1, I_{ m }(M) denotes all FSMs of I(M) with at most m states. Given a specification M with n states, a test suite T⊆Ω_{ M } is m-complete if for each N∈I_{ m }(M) distinguishable from M, there is a test case t∈T that distinguishes M from N. The following mutation operators are often used in FSM-based testing (Chow 1978): change initial state (CIS), which changes the s_{0} of an FSM to s_{ k }, such that s_{0}≠s_{ k }; change output (CO), which modifies the output of a transition (s,x), using a different function Λ(s,x) instead of λ(s,x); change tail state (CTS), which modifies the destination state of a transition (s,x), using a different function Δ(s,x) instead of δ(s,x); and add extra state (AES), which inserts a new state such that mutant N is equivalent to M. Figure 2 shows examples of mutants of the FSM shown in Fig. 1 using the CIS, CO, CTS, and AES operators. Changes are marked with an asterisk (*).
If the output of a mutant differs from that of the original FSM for some test case, the mutant is distinguished (or killed) and the seeded fault denoted by the mutant is detected. Moreover, some mutants can be syntactically different but functionally equivalent to the original model; these are called equivalent mutants. The process of analyzing whether test cases trigger failures and kill mutants is called mutation analysis and is often used in software testing research (Jia and Harman 2011; Fabbri et al. 1994).
The main outcome of mutation analysis is the mutation score, which indicates the effectiveness of a test suite. Given a test suite T, the mutation score (or effectiveness) can be calculated as \(T_{\text {eff}}=\tfrac {\#km}{\#tm-\#em}\), where #km is the number of killed mutants, #tm is the total number of generated mutants, and #em is the number of mutants equivalent to the original SUT. Thus, the mutation score is the ratio of the number of detected faults to the total number of non-equivalent mutants. An m-complete test suite has full fault coverage for a given fault domain I_{ m }(M) and can detect all faults in any FSM implementation with at most m states; thus, it scores 1.0 by definition.
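A minimal sketch of this mutation analysis loop, treating the original model and each mutant as callables mapping an input sequence to an output (a hypothetical interface chosen for illustration):

```python
def mutation_analysis(run_original, mutants, test_suite):
    """Count killed mutants: a mutant is killed when some test case makes
    its output differ from the original model's output."""
    return sum(
        1 for run_mutant in mutants
        if any(run_mutant(t) != run_original(t) for t in test_suite)
    )

def mutation_score(killed, total, equivalent):
    """T_eff = #km / (#tm - #em): killed over non-equivalent mutants."""
    return killed / (total - equivalent)
```

An equivalent mutant produces the same output as the original for every test case, so it never contributes to #km and is subtracted from the denominator.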
FSM-based testing methods
FSM-based testing relies on FSM models to derive test cases and evaluate if the behavior of an SUT conforms to its specification (Utting et al. 2012). To check this behavioral conformance, two basic sets of sequences are often used: the state cover (Q) and transition cover (P) sets (Broy et al. 2005).
A set of input sequences Q is a state cover set of M if for each state s_{ i }∈S there exists an α∈Q such that δ(s_{0},α)=s_{ i }, with ε∈Q to reach the initial state. A set of input sequences P is named a transition cover set of M if for each transition (s,x)∈D there are sequences α,αx∈P such that δ(s_{0},α)=s, with ε∈P to reach the initial state. The transition cover set of an FSM is obtained by generating the testing tree of this FSM (Broy et al. 2005). The state and transition cover sets of the FSM depicted in Fig. 1 are Q={ε,a,b} and P={ε,a,aa,ba,b,ab,bb}, respectively. Besides state and transition coverage, FSM-based testing methods require some predefined sets to identify the reached parts of an FSM: the characterization set and separating families.
A characterization set (W set) contains at least one input sequence which distinguishes each pair of states of an FSM. Formally, it means that for all pairs of states s_{ i },s_{ j }∈S,i≠j, ∃α∈W such that λ(s_{ i },α)≠λ(s_{ j },α).
A separating family, or harmonized state identifiers, is a set of sequences H_{ i } for each state s_{ i }∈S that satisfies the following condition: for all s_{ i },s_{ j }∈S with s_{ i }≠s_{ j }, there exist β∈H_{ i } and γ∈H_{ j } having a common prefix α such that α∈Ω(s_{ i })∩Ω(s_{ j }) and λ(s_{ i },α)≠λ(s_{ j },α). In the worst case, the separating family is the W set itself.
The characterization set of the FSM shown in Fig. 1 is W={a,b}, and the state identifiers of q_{0},q_{1},q_{2} are H_{0}={a,b}, H_{1}={a}, and H_{2}={b}, respectively. These sets are building blocks for most traditional and recent testing methods, such as W (Chow 1978; Vasilevskii 1973), HSI (Petrenko and Bochmann 1995), and SPY (Simão et al. 2009).
W method
The W method is the classic FSM-based test generation algorithm (Chow 1978; Vasilevskii 1973). It concatenates the P set, to traverse all transitions, with the W set, for state identification. Moreover, it can also detect an estimated number of extra states using a traversal set \(\bigcup ^{m-n}_{i=0}(I^{i})\), where (m−n) is the number of extra states and I^{i} contains all sequences of length i over I. Thus, by concatenating P, the traversal set, and W, the W method can detect (m−n) extra states (e.g., AES mutants). Assuming the FSM in Fig. 1, no extra states (m=n), and excluding proper prefixes, the W method can generate T_{ W }={aaa,aab,aba,abb,baa,bab,bba,bbb}, with |T_{ W }|=8.
HSI method
The Harmonized State Identifiers (HSI) method (Petrenko and Bochmann 1995) uses state identifiers H_{ i } to distinguish each state s_{ i }∈S of an FSM model. The HSI test suite is obtained by concatenating the transition cover set P with H_{ i }, such that δ(s_{0},α)=s_{ i }, s_{ i }∈S, and α∈P. The HSI method can be applied to both complete and partial FSMs. Assuming the FSM in Fig. 1, no extra states, and excluding proper prefixes, the HSI method can generate T_{ HSI }={aaa,aba,abb,baa,bba,bbb}, with |T_{ HSI }|=6, which is 75% of the size of T_{ W }.
SPY method
The SPY method (Simão et al. 2009) is a recent test generation method able to generate m-complete test suites on-the-fly. First, the state cover set Q is concatenated with the state identifiers H_{ i }. Afterwards, differently from traditional methods such as W and HSI, the traversal set is distributed over the set containing Q concatenated with H_{ i }, based on sufficient conditions (Simão et al. 2009). Thus, by avoiding testing tree branching, the test suite length and the number of resets can be reduced.
Experimental studies have indicated that SPY can generate test suites on average 40% shorter than traditional methods (Simão et al. 2009). Moreover, it can achieve higher fault detection effectiveness even if the number of extra states is underestimated (Endo and Simao 2013). Assuming the FSM in Fig. 1, no extra states, and excluding proper prefixes, the SPY method can generate T_{ SPY }={aaaba,abbb,baa,bba}, with |T_{ SPY }|=4, which is 50% of the size of T_{ W }.
Role-based access control
Access Control (AC) is one of the most important security mechanisms (Jang-Jaccard and Nepal 2014). Essentially, it ensures that only allowed users have access to protected system resources, based on a set of rules, named security policies, that specify authorizations and access restrictions (Samarati and de Vimercati 2001). In this context, the Role-Based Access Control (RBAC) model has been established as one of the most significant access control paradigms (Ferraiolo et al. 2007). It uses the concept of grouping privileges to reduce the complexity of security management tasks (Samarati and de Vimercati 2001).
In RBAC, roles describe organizational figures (e.g., functions or jobs) which own a set of responsibilities (e.g., permissions). Roles can be assigned to or revoked from users via role assignments and exercised within sessions through role activations. Role hierarchies can be specified as inheritance relationships between senior and junior roles (e.g., a sales director inherits permissions from a sales manager); thus, the mapping between security policies and the organizational structure becomes more natural. These elements compose the ANSI RBAC model (ANSI 2004), which can also be extended with groups of administrative roles and permissions (Ben Fadhel et al. 2015). Figure 3 depicts the ANSI RBAC model and, within dashed lines, the Administrative RBAC model.
Masood et al. (2009) define an RBAC policy as a 16-tuple P=(U,R,Pr,UR,PR,≤_{ A },≤_{ I },I,S_{ u },D_{ u },S_{ r },D_{ r },SSoD,DSoD,S_{ s },D_{ s }), where:

U and R are the finite sets of users and roles;

Pr is the finite set of permissions;

UR⊆U×R is the set of user-role assignments;

PR⊆Pr×R is the set of permission-role assignments;

≤_{ A }⊆R×R and ≤_{ I }⊆R×R are the role activation and inheritance hierarchy relationships, respectively;

I={AS,DS,AC,DC,AP,DP} is the finite set of types of RBAC requests, which stand for user-role assignments (AS), deassignments (DS), activations (AC), and deactivations (DC), and permission-role assignments (AP) and deassignments (DP);

\(S_{u},D_{u}: U \rightarrow \mathbb {Z}^{+}\) are static and dynamic cardinality constraints on users;

\(S_{r},D_{r}: R \rightarrow \mathbb {Z}^{+}\) are static and dynamic cardinality constraints on roles;

SSoD,DSoD⊆2^{R} are the Static and Dynamic Separation of Duty (SoD) sets, respectively;

\(S_{s}: SSoD \rightarrow \mathbb {Z}^{+}\) specifies the cardinality of SSoD sets;

\(D_{s}: DSoD \rightarrow \mathbb {Z}^{+}\) specifies the cardinality of DSoD sets.
Role inheritance hierarchy is a role-to-role relationship (e.g., r_{ j }≤_{ I }r_{ s }) that enables users assigned to a senior role (r_{ s }) to access all permissions of junior roles (r_{ j }). Role activation hierarchy is a variant of role hierarchy (e.g., r_{ j }≤_{ A }r_{ s }) that enables users assigned to a senior role (r_{ s }) to activate junior roles (r_{ j }) without being directly assigned to them (Masood et al. 2009). Cardinality constraints specify bounds on the user-role assignment and role activation relationships (Ben Fadhel et al. 2015): static cardinality constraints (S_{ u } and S_{ r }) bound user-role assignments, and dynamic cardinality constraints (D_{ u } and D_{ r }) limit role activations; both can be specified from a user (S_{ u } and D_{ u }, respectively) or role (S_{ r } and D_{ r }, respectively) perspective. Separation of Duty (SoD) constraints define static and dynamic (SSoD and DSoD, respectively) mutual exclusion relationships among roles based on a positive integer n≥2 to avoid simultaneous assignments or activations of conflicting roles (ANSI 2004). For example, given SSoD={staff, accountant, director} and n=2, S_{ s }(SSoD)=2 defines that no user can be assigned to more than two roles of this SSoD set. Listing 1 shows an example of an RBAC policy with two users (line 1), one role (line 2), and two permissions (line 3).
User u1 is assigned to role r1 (line 4), which is assigned the permissions pr1 and pr2 (line 5). Each user can be assigned to and activate at most one role (lines 6-7). Role r1 can be assigned to at most two users (line 8); however, it can be activated by only one user at a time (line 9).
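The policy of Listing 1, as described above, can be transcribed into a plain Python structure; the field names mirror the components of the 16-tuple constrained by this policy and are our own illustrative choice:

```python
# Listing 1 as a Python dict (field names are illustrative, mirroring
# the components of Masood et al.'s 16-tuple that this policy uses).
policy = {
    "U":  {"u1", "u2"},                     # line 1: users
    "R":  {"r1"},                           # line 2: role
    "Pr": {"pr1", "pr2"},                   # line 3: permissions
    "UR": {("u1", "r1")},                   # line 4: u1 assigned to r1
    "PR": {("pr1", "r1"), ("pr2", "r1")},   # line 5
    "Su": {"u1": 1, "u2": 1},               # lines 6-7: at most 1 assigned role
    "Du": {"u1": 1, "u2": 1},               # lines 6-7: at most 1 active role
    "Sr": {"r1": 2},                        # line 8: r1 assignable to 2 users
    "Dr": {"r1": 1},                        # line 9: r1 active for 1 user at a time
}
```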
FSM-based testing of RBAC systems
Masood et al. (2009) propose an approach based on FSMs to specify and test the behavior of RBAC systems. Given an RBAC policy P, an FSM(P) consists of a complete FSM modeling all access control decisions that an RBAC mechanism must enforce. Formally, an FSM(P) is a tuple FSM(P)=(S_{ P },s_{0},I_{ P },O,D,δ_{ P },λ_{ P }) where

S_{ P } is the set of states that P can reach given its mutable elements;

s_{0}∈S_{ P } is the initial state, where P currently stands given UR and PR;

I_{ P } is the input domain, where I_{ P }={(rq,up,r)} for all rq∈I, up∈U∪Pr, and r∈R;

O is the output domain formed by granted and denied;

D=S_{ P }×I_{ P } is the specification domain;

δ_{ P }:D→S_{ P } is the state transition function; and

λ_{ P }:D→O is the output function.
Each state s∈S_{ P } is labeled using a sequence of pairs of bits containing one pair for each user-role and permission-role combination. A user-role pair can be assigned (10), activated (11), or not assigned (00); a permission-role pair can be assigned (10) or not assigned (00). The maximum number of states of FSM(P) is bounded by 3^{|U|×|R|}, and the number of reachable states depends on the constraints of P. The set of input symbols I_{ P } contains all combinations of users, roles, permissions, and types of RBAC requests which can be applied to P. Formally, I_{ P }={(rq,up,r)} for all rq∈I, up∈U∪Pr, and r∈R.
Transitions of FSM(P) denote access control decisions: given an origin state (s_{ i }∈S_{ P }), an input symbol ((rq,up,r)∈I_{ P }), and the constraints of P, they define a destination state (s_{ j }∈S_{ P }) and an output symbol (granted or denied); the specification domain is complete (Masood et al. 2009). Given the constraints of P, an origin state s_{ i }, and an input symbol (rq,up,r), the destination state s_{ j }=δ_{ P }(s_{ i },(rq,up,r)) is defined by flipping the bits of the s_{ i } label related to the user (or permission) up and role r, if the constraints of P allow such a request. This procedure denotes how the state transition function δ_{ P } operates.
Regarding the output function λ_{ P }, a denied symbol is returned for inputs (requests) that do not change the state of P, such as user-role assignments already performed or requests denied due to some cardinality constraint. Thus, denied is returned only on self-loops, and transitions with different origin and destination states always return granted. The generation of FSM(P) can be performed iteratively by evaluating all defined inputs (Ω_{FSM(P)}) from state s_{0}, given the constraints of P.
Figure 4 shows the FSM(P) of the RBAC policy presented in Listing 1. Self-loop transitions, which correspond to requests returning denied, and transitions related to permissions are not shown to keep the figure uncluttered. The initial state 1000 depicts line 4 of Listing 1, where u1 is assigned to r1. From state 1000, all defined inputs are applied once to reach states 1100, 1010, and 0000, where, respectively, user u1 activates r1, both u1 and u2 are assigned to r1, and no user is assigned to r1. This procedure is repeated iteratively over all reached states until no new state is obtained. The resulting FSM(P) has a total of eight states, rather than the maximum 9=3^{|U|×|R|}, because Dr(r1)=1 makes state 1111 unreachable.
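The iterative generation of FSM(P) described above can be reproduced for the policy of Listing 1 with a small breadth-first exploration. The request semantics below are a sketch of Masood et al.'s construction under the constraints stated in the text (S_u=D_u=1, S_r=2, D_r=1); permission bits are omitted, as in Fig. 4.

```python
from collections import deque

# Status of each user-role pair: "00" not assigned, "10" assigned, "11" activated.
USERS = ["u1", "u2"]
SR, DR = 2, 1  # r1: at most 2 assigned users, 1 active user (Su=Du=1 hold trivially)

def apply_request(state, rq, u):
    """Destination state for request rq(u, r1), or None when denied (self-loop)."""
    i = USERS.index(u)
    assigned = sum(1 for st in state if st != "00")
    active = sum(1 for st in state if st == "11")
    s = list(state)
    if rq == "AS" and state[i] == "00" and assigned < SR:
        s[i] = "10"                      # Su=1 holds: u has no other role
    elif rq == "DS" and state[i] != "00":
        s[i] = "00"
    elif rq == "AC" and state[i] == "10" and active < DR:
        s[i] = "11"                      # Du=1 holds: single role in the policy
    elif rq == "DC" and state[i] == "11":
        s[i] = "10"
    else:
        return None                      # denied: state unchanged
    return tuple(s)

# Breadth-first exploration from the initial state 1000 (u1 assigned to r1).
initial = ("10", "00")
reachable, frontier = {initial}, deque([initial])
while frontier:
    st = frontier.popleft()
    for rq in ("AS", "DS", "AC", "DC"):
        for u in USERS:
            nxt = apply_request(st, rq, u)
            if nxt is not None and nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
```

Running this exploration yields exactly eight reachable states, and the state 1111 (both users with r1 active) is excluded by Dr(r1)=1, matching the text.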
Test generation from FSM(P)
Given an RBAC system implementing a policy P, FSMbased testing can verify if the behavior of such system conforms to P using its respective FSM(P) and some test generation method, such as W or transition cover (Masood et al. 2009).
Let \(\mathcal {R}\) denote the set of all RBAC policies. Given a policy \(P \in \mathcal {R}\), the set \(\mathcal {R}\) can be partitioned into two subsets: policies equivalent (conforming) to P (\(\mathcal {R}^{P}_{conf}\)) and faulty policies (\(\mathcal {R}^{P}_{fault}\)). Since \(\mathcal {R}\) is infinitely large, Masood et al. (2009) proposed a mutation analysis technique that measures the effectiveness of a test suite as its ability to detect whether an RBAC system behaves as some faulty policy \(P' \in \mathcal {R}^{P}_{fault}\).
The RBAC mutation analysis restricts \( \mathcal {R}^{P}_{fault}\) to be finite by considering only policy mutants P^{′}=(U,R,Pr,UR^{′},PR^{′},≤_{ A }^{′},≤_{ I }^{′},I,S_{ u }^{′},D_{ u }^{′},S_{ r }^{′},D_{ r }^{′},SSoD^{′},DSoD^{′},S_{ s }^{′},D_{ s }^{′}) generated by making simple changes to the policy P=(U,R,Pr,UR,PR,≤_{ A },≤_{ I },I,S_{ u },D_{ u },S_{ r },D_{ r },SSoD,DSoD,S_{ s },D_{ s }). Note that all mutants share the same sets of users (U), roles (R), permissions (Pr), and inputs (I) as the original policy P. The set \( \mathcal {R}^{P}_{fault}\) of faulty policies is generated using two kinds of operators: mutation operators and element modification operators.
The mutation operators generate RBAC mutants by adding, modifying and removing elements from UR, PR, ≤_{ A }, ≤_{ I }, SSoD, and DSoD sets (e.g. add role to SSoD set). The element modification operators mutate policies by incrementing or decrementing the cardinality constraints S_{ u },D_{ u },S_{ r },D_{ r },S_{ s }, and D_{ s }. Each of these RBAC faults has corresponding faults on the FSM domain (Chow 1978), and FSMbased testing methods are also able to detect them (Masood et al. 2009). Figure 5 illustrates a part of one testing tree generated from four test cases and the FSM(P) in Fig. 4.
By executing this test suite, an RBAC mutant generated from the policy shown in Listing 1 by applying the element modification operator that increments Dr(r1)=1 to Dr(r1)=2 can be detected. In the FSM of this variant, state 1111 is reachable and, since test case t3 covers the transition 1110 −AC(u2,r1)→ 1110, it can detect this fault.
Test case prioritization
Although very effective, FSM-based testing of RBAC systems tends to generate a large number of test cases regardless of the generation method used (Damasceno et al. 2016). Thus, development processes of RBAC systems with time and resource constraints may demand improvements in test execution. To cope with this issue, different techniques have been proposed to improve the cost-effectiveness of test suites, such as Test Suite Minimization (also called test suite reduction), where redundant test cases are permanently removed, and Test Case Selection, which selects test cases based on the changed parts of a System Under Test (SUT) (Yoo and Harman 2012). These techniques reduce time and effort, but they may not work effectively, since they may omit important test cases able to detect certain faults (Ouriques 2015).
Test Case Prioritization improves test execution without filtering out any test case. It aims at identifying an efficient test execution ordering so that maximum benefits can be obtained, even if test execution is prematurely halted at some arbitrary point (Ouriques 2015). To this end, it uses a function f that quantitatively describes the quality of an ordering according to some test criterion (e.g., fault detection effectiveness, code coverage). To illustrate test prioritization, consider a hypothetical SUT with 10 faults and five test cases A, B, C, D, E, as shown in Table 1.
In this example, all faults can be detected by running test cases C and E, which have 70% and 30% fault-detection effectiveness, respectively. Test case A, on the other hand, detects only 20% of the faults, so it can slow down fault detection along test execution if placed at the beginning of the test suite. Thus, fault detection can be sped up by placing C and E at the beginning of the test suite.
After test prioritization, the quality of an ordering can be measured using the Average Percentage Faults Detected (APFD) metric. The APFD is commonly used in test prioritization research (Elbaum et al. 2002) and is defined as follows:

\(APFD = 1 - \frac{TF_{1}+TF_{2}+...+TF_{l}}{n \cdot l} + \frac{1}{2n}\)  (1)

In Eq. 1, the parameter n describes the total number of test cases, l defines the number of faults under consideration, and TF_{ i } specifies the position of the first test case that detects fault i. The APFD value depicts the detection of faults (i.e., test effectiveness) along test execution given a test case ordering. This value ranges from 0 to 1, and the greater the APFD, the better the ordering. Table 2 shows the APFD for three prioritized test suites, T1, T2, and T3, obtained from the test cases in Table 1. In this example, the APFD indicates that T3 performs better than T2 and T1.
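A minimal implementation of the APFD computation, assuming every fault is detected by at least one test case; the fault matrix in the usage below is hypothetical, loosely mirroring Table 1 (C detects 7 of the 10 faults and E the remaining 3):

```python
def apfd(ordering, faults_of):
    """APFD = 1 - (TF_1 + ... + TF_l) / (n * l) + 1 / (2n), where TF_i is
    the 1-based position of the first test case in `ordering` that detects
    fault i (every fault is assumed to be detected by some test case)."""
    n = len(ordering)
    all_faults = set().union(*(faults_of[t] for t in ordering))
    l = len(all_faults)
    tf_sum = sum(
        next(pos for pos, t in enumerate(ordering, 1) if f in faults_of[t])
        for f in all_faults
    )
    return 1 - tf_sum / (n * l) + 1 / (2 * n)

# Hypothetical fault matrix: which of the 10 faults each test case detects.
faults_of = {"C": set(range(1, 8)), "E": {8, 9, 10},
             "A": {1, 2}, "B": {3}, "D": {4}}
```

Under this matrix, the ordering [C, E, A, B, D] yields an APFD of about 0.84, higher than orderings that postpone C and E, which illustrates why placing the most effective test cases first pays off.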
Similarity testing
Similarity testing is a promising test case prioritization approach that uses similarity functions to calculate the degree of similarity between pairs of tests and define test ordering (Cartaxo et al. 2011; Bertolino et al. 2015; Coutinho et al. 2014). It is an all-to-all comparison problem (Zhang et al. 2017) and, like most test prioritization algorithms (Elbaum et al. 2002), it has complexity O(n^{2}). It assumes that resembling test cases are redundant in the sense that they cover the same features of an SUT and tend to have equivalent fault detection capabilities (Bertolino et al. 2015).
To run similarity testing, a similarity matrix describing the resemblance between all pairs of test cases of a test suite T must be calculated using a similarity function d_{ x }. The similarity matrix SM of a test suite T with n test cases is a matrix where each element SM_{ ij }=d_{ x }(t_{ i },t_{ j }) describes the similarity degree between two test cases t_{ i } and t_{ j }, such that 1≤i<j≤n. Equation 2 presents an illustrative example of a similarity matrix.
After calculating the similarity matrix, test ordering is defined based on similarity degrees (Cartaxo et al. 2011; Bertolino et al. 2015; Henard et al. 2014; Coutinho et al. 2014). According to Elbaum et al. (2002), the ordering process can use total or additional information. Test prioritization based on total information uses only pairwise similarity for ordering test cases, whereas prioritization based on additional information also takes into account the similarity to previously selected test cases (i.e., it picks the test case most distinct from all previously selected ones).
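A greedy prioritization sketch based on additional information, as described above: it seeds the ordering with the most dissimilar pair and then repeatedly selects the test case farthest from everything already chosen. The `dist` function stands for any dissimilarity measure (an assumption: higher means more diverse), and test cases are assumed to be distinct.

```python
def prioritize(tests, dist):
    """Greedy similarity-based prioritization using additional information:
    seed with the most dissimilar pair, then repeatedly pick the test case
    whose minimum distance to the already-selected tests is largest."""
    remaining = list(tests)
    # Seed the ordering with the most dissimilar pair of test cases.
    a, b = max(((x, y) for x in remaining for y in remaining if x != y),
               key=lambda p: dist(*p))
    order = [a, b]
    remaining.remove(a)
    remaining.remove(b)
    while remaining:
        nxt = max(remaining, key=lambda t: min(dist(t, s) for s in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

Using a toy distance on numbers shows the effect: from [1, 2, 3, 10] with absolute difference as dissimilarity, the ordering starts with the extremes and postpones the near-duplicates 2 and 3.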
Cartaxo et al. (2011) showed that similarity testing can be more effective than random prioritization when applied to test sequences automatically generated from Labelled Transition Systems (LTS). In their study, the similarity degree (d_{ sd }) between two test cases was calculated as the number of identical transitions (nit) divided by the average test case length. The average length was used to avoid small (large) similarity degrees due to similar short (long) test sequences. An extensive investigation of similarity testing for LTS can be found in Coutinho et al. (2014).
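Cartaxo et al.'s similarity degree can be sketched as follows; representing a test case as a list of transition identifiers is our own encoding assumption.

```python
def similarity_degree(t1, t2):
    """d_sd of Cartaxo et al.: number of identical transitions divided by
    the average test case length. A test case is assumed to be encoded as
    a list of transition identifiers (e.g., "q0-a/0->q1")."""
    nit = len(set(t1) & set(t2))          # number of identical transitions
    avg_len = (len(t1) + len(t2)) / 2     # average length, to normalize
    return nit / avg_len
```

For instance, two length-4 test cases sharing two transitions get a similarity degree of 2 / 4 = 0.5.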
Bertolino et al. (2015) also investigated the application of similarity testing to XACML systems. XACML is an XML-based declarative notation for specifying access control policies and evaluating access requests (OASIS 2013). Essentially, they proposed a test prioritization approach named XACML similarity (d_{ xs }) which combines three values for test prioritization: (i) a simple similarity (d_{ ss }), which describes how much two test cases (t_{ i },t_{ j }) resemble each other based on their lexical distance; (ii) an applicability degree (AppValue), which indicates the percentage of the parts of an XACML policy affected by a test case; and (iii) a priority value (PriorityValue), which weights pairs of test cases based on their applicability degree. Although investigations have shown that simple similarity d_{ ss } is merely comparable to random prioritization, XACML similarity enabled significant improvements over both simple similarity and random prioritization.
It should be noted that the XACML standard can be used to specify and implement RBAC policies (OASIS 2014). However, its current version (OASIS 2014) does not support the specification of SSoD and DSoD constraints. Moreover, since the effectiveness of a test criterion is strongly related to its ability to represent specific domain faults (Felderer et al. 2015), there is no guarantee that similarity testing can be as effective on RBAC as it was on XACML and LTS.
Similarity testing for RBAC systems
In this section, we introduce our similarity testing approach specific to RBAC systems, named RBAC similarity. RBAC similarity builds on the approaches of Cartaxo et al. (2011) and Bertolino et al. (2015) and is suitable for FSM-based testing of RBAC systems. A prioritization algorithm used to order test cases based on similarity criteria is also discussed.
RBAC similarity
In XACML similarity, applicability is the relation between an access request and an XACML policy which quantitatively describes the impact of this request (i.e., test case) on the rules of the policy (Bertolino et al. 2015). In our work, we extend the concept of XACML applicability to the RBAC domain and propose RBAC similarity, a similarity testing approach specific to RBAC systems.
Essentially, the RBAC similarity (d_{ r s }) takes an RBAC policy P and a test suite T generated from an FSM(P) and evaluates the degree of resemblance between all pairs of test cases t_{ i },t_{ j }∈T. To this end, it uses a dissimilarity function and the applicability of each pair of test cases to the policy P under test. Given this information, a test case prioritization algorithm orders the tests from the most distinct and relevant ones to the least diverse and suitable ones. To support similarity testing for RBAC, we propose the concept of RBAC applicability, which quantitatively describes the relevance of a test case to one RBAC policy. The dissimilarity function and the RBAC applicability are detailed in the following sections.
Simple dissimilarity:
The simple dissimilarity between test cases is measured based on the number of distinct transitions (ndt). Given two test cases t_{ i } and t_{ j }, the degree of simple dissimilarity (d_{ s d }) is calculated as presented in Eq. 3.
The number of distinct transitions (ndt) between two test cases (t_{ i },t_{ j }) is counted and then divided by the average length of t_{ i } and t_{ j }. Transitions are considered distinct when there is a mismatch between their origin states, input or output symbols, or destination (tail) states. The average test case length is used to avoid small (or large) similarity degrees merely due to short (or long) test case lengths. Listing 2 shows an example of four test cases and the transitions and states they cover, given the FSM(P) previously shown in Fig. 4. The number of distinct transitions, the average length, and the simple dissimilarity d_{ s d } for each pair of test cases are shown in Table 3.
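As an illustration, the simple dissimilarity can be sketched in Python as follows. The tuple encoding of transitions and the use of a symmetric set difference for ndt are assumptions for illustration; the paper only states that distinct transitions are counted and divided by the average test case length.

```python
def simple_dissimilarity(t_i, t_j):
    """Simple dissimilarity d_sd between two test cases (Eq. 3).

    Each test case is modelled as a sequence of transitions, i.e.,
    (origin_state, input, output, destination_state) tuples; two
    transitions are distinct when any of these four parts differ.
    Taking ndt as the size of the symmetric set difference of the
    transition sets is an assumed interpretation.
    """
    ndt = len(set(t_i) ^ set(t_j))          # number of distinct transitions
    avg_len = (len(t_i) + len(t_j)) / 2.0   # average test case length
    return ndt / avg_len if avg_len else 0.0
```

Under this reading, two test cases covering exactly the same transitions get d_sd = 0, and longer shared prefixes lower the dissimilarity as intended.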
RBAC applicability:
The idea of RBAC applicability is to quantitatively describe the relevance of a test case to the RBAC policy under test. An RBAC constraint is applicable to a test case if there is a match between the users, roles, or permissions of any input of this test case and the attributes of the constraint. For example, if an RBAC policy contains a static cardinality constraint S_{ u }(u1)=1, this constraint must regulate (i.e., apply some regulation to) all test cases with user u1 as test input (e.g., AS(u1,r2)). This makes it possible to measure how much a test case t may impact a given policy P, without considering dynamic (behavioral) aspects of the RBAC model (e.g., FSM(P) states/transitions). Thus, it describes the structural, or static, coverage of a test case t over one policy P.
However, since an RBAC system is essentially reactive, a behavioral view of a test case is also necessary. To satisfy this requirement, we also propose the concept of behavioral, or dynamic, coverage. An RBAC constraint of a policy P reacts to a test case when this constraint is applicable to any input symbol and it influences (enforces) the access control decision. As an example, the test case t3, shown in Fig. 5, depicts a scenario of an RBAC policy containing a dynamic cardinality constraint D_{ r }(r1)=1 and two users u1 and u2 attempting to activate r1. This constraint is applicable (and reacts) to the last input requesting the second activation of r1, and enforces a denied response. This information is associated with many transitions of the FSM(P) and used as a requirements-based coverage criterion (Utting et al. 2012). Thus, by quantifying the number of RBAC constraints reacting to the inputs of a test case, the dynamic coverage of a policy P can be measured to support test prioritization.
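The static applicability check can be illustrated with a minimal Python sketch. The tuple encoding of test inputs (e.g., ("AS", "u1", "r2") for "assign role r2 to user u1") and the helper name are hypothetical and not taken from the paper.

```python
def is_applicable(constraint_attrs, test_input):
    """Static applicability: a constraint applies to a test input when
    any of the constraint's users/roles/permissions appears among the
    input's arguments.  Inputs are modelled as tuples whose first
    element is the operation and the rest are arguments; this encoding
    is an assumption made for illustration.
    """
    return any(attr in test_input[1:] for attr in constraint_attrs)

# A static cardinality constraint S_u(u1) = 1 constrains user u1:
su_u1_attrs = {"u1"}
```

For instance, S_u(u1)=1 is applicable to AS(u1,r2) because u1 appears in the input, but not to AS(u2,r2).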
Based on the concepts of static and dynamic coverage, we proposed the RBAC Applicability Degree (AD), which is an array of four values defined as shown in Eq. 4.
The RBAC Applicability Degree (AD) of a test case t to a given policy P consists of four values:

Policy Applicability Degree (pad_{P(t)}), which shows the ratio of test inputs applicable to any RBAC constraint over the test case length;

Assignment Applicability Degree (asad_{P(t)}), which shows the number of RBAC constraints related to assignment faults reacting to t;

Activation Applicability Degree (acad_{P(t)}), which shows the number of RBAC constraints related to activation faults reacting to t; and

Permission Applicability Degree (prad_{P(t)}), which shows the number of RBAC constraints related to permission faults reacting to t.
The pad_{P(t)} measures how applicable a test case t is to a given policy, based on all RBAC constraints applicable to t. The asad_{P(t)} quantifies how many RBAC constraints related to assignment faults (i.e., UR, S_{ u }, S_{ r }, SSoD, and S_{ s }) react to t. The acad_{P(t)} quantifies how many RBAC constraints related to activation faults (i.e., ≤_{ A }, D_{ u }, D_{ r }, DSoD, and D_{ s }) react to t. Finally, the prad_{P(t)} quantifies how many RBAC constraints related to permission faults (i.e., PR, ≤_{ I }) react to t.
Based on the values of AD, the RBAC applicability RA_{P(t)} is calculated. The RA_{P(t)} value is a single quantitative attribute which summarizes the relevance of a single test case t to one policy P by summing the four applicability degrees.
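The computation of AD and RA can be sketched as follows. Passing the per-class reaction counts in precomputed form is an assumption: the paper derives reactions from the transitions of FSM(P), which this sketch does not model.

```python
def applicability_degree(test_case, constraints, reacting):
    """RBAC Applicability Degree AD = (pad, asad, acad, prad) (Eq. 4).

    `constraints` is an iterable of predicates, each deciding whether
    one RBAC constraint is applicable to a single test input, and
    `reacting` maps the fault classes 'assignment', 'activation' and
    'permission' to the number of constraints of that class reacting
    to the test case.  Both representations are hypothetical.
    """
    applicable = sum(1 for inp in test_case
                     if any(c(inp) for c in constraints))
    pad = applicable / len(test_case) if test_case else 0.0
    return (pad,
            reacting.get("assignment", 0),
            reacting.get("activation", 0),
            reacting.get("permission", 0))


def rbac_applicability(ad):
    """RA_P(t): a single value summarising the relevance of a test
    case, obtained by summing the four applicability degrees."""
    return sum(ad)
```

In this sketch, a test case whose every input is regulated by at least one constraint gets pad = 1, and RA grows with the number of reacting constraints.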
However, since test similarity is calculated for pairs of test cases, we also defined the RBAC Applicability Value (AppValue) which sums the applicability degrees of test cases (Eq. 6).
A priority value (PriorityValue) is calculated to weight the pairwise relevance of two test cases. This PriorityValue is a constant α, β, γ, or δ defined based on the \(pad_{P(t_{i})}\) and \(pad_{P(t_{j})}\) values. The constants α, β, γ, and δ are defined by the user, such that α>β>γ>δ. The value α is given to pairs of test cases where all test inputs are applicable, and δ is given if none of the test inputs are applicable to the constraints of the RBAC policy P. The values 3, 2, 1, and 0 are suggested by Bertolino et al. (2015). Equation 7 shows the formula which derives the PriorityValue.
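A possible implementation of the PriorityValue is sketched below. The text fixes only the extreme cases (α when all inputs of both test cases are applicable, δ when none is); the two middle cases (β when exactly one test case is fully applicable, γ otherwise) are an assumed reading of Eq. 7.

```python
def priority_value(pad_i, pad_j, alpha=3, beta=2, gamma=1, delta=0):
    """PriorityValue of a pair of test cases (Eq. 7, assumed reading).

    alpha: all inputs of both test cases are applicable (pad = 1);
    delta: no input of either test case is applicable (pad = 0);
    beta and gamma cover the intermediate cases and are assumptions.
    The default weights 3, 2, 1, 0 follow Bertolino et al. (2015).
    """
    if pad_i == 1.0 and pad_j == 1.0:
        return alpha
    if pad_i == 0.0 and pad_j == 0.0:
        return delta
    if pad_i == 1.0 or pad_j == 1.0:
        return beta
    return gamma
```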
The RBAC similarity (d_{ r s }) of a pair of test cases consists of the sum of the d_{ s d }, AppValue, and PriorityValue values if d_{ s d }(t_{ i },t_{ j })≠0, as shown in Eq. 8. The RBAC similarity was designed based on the approach of Bertolino et al. (2015) for similarity testing of XACML policies.
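Putting the pieces together, a sketch of the pairwise score follows. AppValue is taken as the sum of the two test cases' applicability values (Eq. 6); scoring pairs with d_sd = 0 as 0 is an assumption about the "otherwise" branch of Eq. 8.

```python
def rbac_similarity_score(ra_i, ra_j, d_sd, prio):
    """RBAC similarity d_rs of a pair of test cases (Eq. 8), sketched:
    AppValue (Eq. 6) sums the RBAC applicability values of the two
    test cases, and d_rs adds d_sd, AppValue and PriorityValue
    whenever d_sd != 0.  Assigning 0 to identical pairs (d_sd == 0)
    is an assumed reading of Eq. 8.
    """
    app_value = ra_i + ra_j
    return d_sd + app_value + prio if d_sd != 0 else 0.0
```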
As an example, the applicability degrees of each test case presented in Listing 2, given the RBAC policy in Listing 1, are presented in Table 4.
As shown in Table 4, all test inputs of t_{3} are applicable to at least one RBAC constraint, and test case t_{3} has the greatest RBAC applicability degree. Test case t_{2} has the second greatest value, followed by t_{1} and t_{0} with the same applicability degree. Afterwards, the simple dissimilarity, RBAC applicability value, and priority value are calculated for all pairs of test cases. These values are combined into the RBAC similarity (d_{ r s }), which is calculated for each pair of test cases, as presented in Table 5.
Test prioritization algorithm
Given the similarity of all pairs of test cases, a test prioritization algorithm has to be used to schedule test case execution. The pseudocode of the test prioritization algorithm used in this study is presented in Algorithm 1. Essentially, the algorithm iterates over a similarity matrix calculated using a similarity function d_{ x }, from the most distinct pairs of test cases of a test suite S to the least dissimilar ones. For each pair, the longest test case is included in the list of prioritized test cases; otherwise, the shortest is included, if not previously included. This process is repeated until all test cases of S are included in L, which stands for the prioritized test suite.
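This greedy scheme can be sketched in Python as below. The function and parameter names are hypothetical; `dissim(i, j)` stands for any pairwise dissimilarity function over test indices (e.g., d_rs or d_sd).

```python
def prioritize(tests, dissim):
    """Sketch of the test prioritization procedure (Algorithm 1).

    Pairs of test cases are visited from the most dissimilar to the
    least dissimilar; for each pair, the longer test case is scheduled
    if still missing, otherwise the shorter one, until every test of
    the suite appears in the prioritized list L.
    """
    n = len(tests)
    pairs = sorted(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda p: dissim(*p), reverse=True)
    scheduled, order = set(), []
    for i, j in pairs:
        longer, shorter = (i, j) if len(tests[i]) >= len(tests[j]) else (j, i)
        if longer not in scheduled:
            scheduled.add(longer)
            order.append(tests[longer])
        elif shorter not in scheduled:
            scheduled.add(shorter)
            order.append(tests[shorter])
        if len(order) == n:
            break
    return order
```

On a toy matrix where the pair ranking mirrors the worked example of this section, the sketch reproduces the order t3, t0, t1, t2.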
Using the RBAC similarity and the test suite shown in Listing 2, the similarity matrix shown in Eq. 9 is obtained.
Using Algorithm 1, the most dissimilar pair of test cases (t_{2},t_{3}) is selected first and the longest test case, t_{3}, is added to L. Afterwards, test case t_{0} is included, since t_{3}, the longest test case of the next most dissimilar pair (t_{0},t_{3}), is already in L. The next pair considered is (t_{1},t_{3}), and t_{1} is the next to be included. The prioritization ends with test case t_{2}, from pair (t_{0},t_{2}), scheduled at the end of the test execution. Listing 3 shows the resulting test suite L prioritized according to RBAC similarity.
Experimental evaluation
According to Damasceno et al. (2016), large numbers of test cases tend to be generated regardless of the FSM-based testing method applied to RBAC systems: the more states and transitions an FSM(P) has, the larger the test suites become in terms of number of resets, total test suite length, and average test case length. Thus, additional steps become necessary to make software testing more cost-effective.
We proposed RBAC similarity to fill this research gap and designed an experiment to evaluate the cumulative effectiveness and the APFD of RBAC similarity, comparing it to simple dissimilarity and random prioritization using test suites generated by FSM-based testing methods on RBAC systems. A schematic overview of this experiment is presented in Fig. 6.
Fifteen test suites were taken from a previous study (Damasceno et al. 2016) where test characteristics (i.e., number of resets, test suite length, and avg. test case length) and effectiveness were analyzed based on the FSM(P) characteristics (i.e., numbers of states, and transitions). These test suites were generated from five RBAC policies specified as FSM(P) models using the RBACBT software (Damasceno et al. 2016) and implementations of the W (Chow 1978), HSI (Petrenko and Bochmann 1995), and SPY (Simão et al. 2009) methods. Table 6 shows a summary of the five RBAC policies and the total number of RBAC mutants.
RBACBT^{Footnote 1} is an FSM-based testing tool designed by Damasceno et al. (2016) to support FSM-based testing of RBAC systems and the automatic generation of FSM(P) models and RBAC mutants. RBACBT was extended to support test prioritization using RBAC similarity and simple dissimilarity. Due to the high number of pairwise comparisons required to perform test prioritization, a time limit of 24 hours was defined for each test prioritization procedure. Procedures exceeding this limit were canceled, and random subsets of the complete test suites, named sub-test suites, were taken for prioritization.
In preliminary experiments, the prioritization of the test suites of policies P03, P04, and P05 took more than 24 hours. Thus, sub-test suites of these policies containing 2528 test cases were randomly generated 30 times. The number 2528 corresponds to the largest complete test suite whose prioritization finished below the 24-hour threshold, the W test suite of policy P02. Table 7 shows the characteristics of the FSM(P) models and their respective complete test suites.
The six complete test suites were prioritized using each test prioritization technique, and the cumulative effectiveness of these test suites was measured in twenty-one parts. Afterwards, the cumulative effectiveness was used to calculate the APFD of each scenario. The APFD value was calculated using Eq. 1, with F_{ i } as the number of faults detected by one test fragment i and l as the number of RBAC mutants. Random prioritization was performed 10 times on each of the 30 random sub-test suites of P03, P04, and P05.
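For reference, the classic per-test APFD can be sketched as follows. Note that the paper's Eq. 1 is a fragment-based variant (with F_i faults per test fragment); the sketch below shows the standard formulation of Elbaum et al. (2002), not necessarily the exact form the authors computed.

```python
def apfd(order, faults_of):
    """Classic APFD: 1 - (sum of TF_i)/(n*m) + 1/(2n), where TF_i is
    the 1-based position of the first test detecting fault i, n the
    number of tests, and m the number of detected faults (mutants).
    `faults_of(t)` returns the set of faults that test t detects.
    """
    first_kill = {}
    for pos, t in enumerate(order, start=1):
        for fault in faults_of(t):
            first_kill.setdefault(fault, pos)
    n, m = len(order), len(first_kill)
    if n == 0 or m == 0:
        return 0.0
    return 1.0 - sum(first_kill.values()) / (n * m) + 1.0 / (2 * n)
```

A higher APFD indicates that faults are detected earlier in the execution order, which is exactly what the prioritization criteria aim to maximize.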
Using the R statistical package, we calculated the mean APFD with a confidence interval (CI) of 95% for all test scenarios and performed the non-parametric Wilcoxon matched-pairs signed-ranks test to verify whether RBAC similarity reached APFDs different from those of simple dissimilarity and random prioritization. As the alternative hypothesis, we considered that RBAC similarity performed better (i.e., greater mean cumulative effectiveness) than the other criteria.
To complement the hypothesis tests, we analyzed the effect size by computing unstandardized (i.e., median and mean differences) and standardized measures (i.e., Cohen’s d, Hedges’ g (Kampenes et al. 2007), and Vargha-Delaney’s Â_{ 12 } (Arcuri and Briand 2011)) using R and the effsize package (Torchiano 2017).
Analysis of the complete test suites
In this section, we discuss the results of the experiments comparing RBAC similarity, simple dissimilarity, and random prioritization on the complete test suites. The mean cumulative effectiveness for P01 and P02 is shown in Tables 8 and 9 and in Figs. 7 and 8, respectively, with error bars calculated with a confidence interval of 95%. At the end of this section, we also show the mean APFD and the results of the Wilcoxon matched-pairs signed-ranks test.
In most of the cases, there was no statistically significant difference between the prioritization algorithms in the P01 and P02 scenarios. The P01 + HSI scenario was the only exception, where RBAC similarity reached an APFD higher than simple dissimilarity and random prioritization. In the five remaining scenarios, RBAC similarity performed without significant difference compared to at least one of the methods. The mean APFD values for each scenario are shown in Table 10 with their respective 95% confidence intervals subscripted.
Table 11 shows the results of the Wilcoxon matched-pairs signed-ranks test applied to the mean cumulative effectiveness using a confidence interval of 95%. In this case, we compared RBAC similarity to simple dissimilarity and random prioritization, and random prioritization to simple dissimilarity. Significant results are highlighted in bold.
Table 11 corroborates the findings of Fig. 7 and Table 10: RBAC similarity had a statistically significant difference compared to the other criteria in the P01 + HSI scenario, and random prioritization reached significantly different APFDs compared to simple dissimilarity in all scenarios.
Analysis of the sub-test suites
Since test prioritization for P03, P04, and P05 was too expensive, we considered 30 random sub-test suites with 2528 test cases. Random prioritization was run 10 times for each of the 30 sub-test suites.
The mean cumulative effectiveness of P03, P04, and P05 is presented in Tables 12, 13, and 14, respectively. Figures 9, 10, and 11 show the mean cumulative effectiveness with error bars calculated using a confidence interval of 95%.
In the P03 test scenarios, the first 5 to 10% of the W, HSI, and SPY sub-test suites (i.e., a subset of 125 to 250 test cases) was sufficient to reach the maximum effectiveness. All test prioritization approaches presented similar results, and no statistically significant difference was found between RBAC similarity and the other approaches. In scenarios like this, test minimization techniques may be more cost-effective than test prioritization, given its O(n^{2}) complexity.
In the P04 scenario, the benefits of RBAC similarity started to become more visible and statistically significant, as shown in Fig. 10 and Table 13. There was one exception where no significant difference was obtained. In the P04 + W scenario, the W method generated an extremely large test suite and, to enable test prioritization, we selected random sub-test suites containing 2528 test cases. This random selection may have reduced test diversity. In the other scenarios, P04 + HSI and P04 + SPY, we found that the cumulative effectiveness of RBAC similarity had a statistically significant difference compared to the other methods.
The mean cumulative effectiveness for the P05 test scenarios is presented in Fig. 11 and Table 14. In the P05 scenario, RBAC similarity, simple dissimilarity, and random prioritization clearly had statistically different cumulative effectiveness. Respectively, 65% of the W and HSI sub-test suites, and 80% of the SPY sub-test suites, prioritized using RBAC similarity were sufficient to reach the highest effectiveness. RBAC similarity presented a significantly greater cumulative effectiveness compared to random prioritization and simple dissimilarity.
For P03, P04, and P05, we also calculated the mean APFD based on the cumulative effectiveness of all runs of the 30 random sub-test suites. The mean APFD of each test scenario with a confidence interval of 95% is shown in Table 15. The highest APFD values are highlighted in bold.
In the P03 scenario, the fault distribution along the FSM(P03) may have benefited fault detection, and all methods performed similarly. In the P04 scenario, there was only one case where RBAC similarity did not work well and no statistically significant difference was found (i.e., P04 + W). Simple dissimilarity, in turn, did not reach an APFD higher than random prioritization. Finally, in all P05 scenarios, we found statistically significant differences between RBAC, simple, and random prioritization. Table 16 shows the results of the Wilcoxon matched-pairs signed-ranks test in the test scenarios of policies P03, P04, and P05. Significant results are highlighted in bold.
The analysis of the mean APFD and the confidence intervals of the sub-test suites indicated that RBAC similarity performed better than simple dissimilarity and random prioritization in some scenarios. In addition to assessing whether an algorithm performs statistically better than another, it is crucial to measure the magnitude of such improvement. To analyze this aspect, effect size measures are required (Kampenes et al. 2007; Arcuri and Briand 2011; Wohlin et al. 2012).
Effect size for the sub-test suites
Effect size measures allow for quantifying the difference (i.e., magnitude of the improvement) between two groups (Wohlin et al. 2012). Kampenes et al. (2007) found that only 29% of software engineering experiments report some effect size measure. Thus, to improve our analysis, we also evaluated the effect that one test prioritization method had on the APFD compared with the other methods.
There are two main classes of effect size measures: (i) unstandardized, which depend on the unit of measurement; and (ii) standardized, which are independent of the measurement units of the evaluation criteria. For each pair of prioritization methods, we computed five different measures: two unstandardized, (i) mean and (ii) median differences; and three standardized, (iii) Cohen’s d (Cohen 1977), (iv) Hedges’ g (Hedges 1981), and (v) Vargha-Delaney’s Â_{ 12 } (Vargha and Delaney 2000).
Mean and median differences, Cohen’s d, and Hedges’ g are included as metrics often reported in the software engineering literature (Kampenes et al. 2007). Cohen’s d and Hedges’ g are computed based on the mean difference and an estimate of the population standard deviation σ_{ p o p }, and interpreted using standard conventions (Cohen 1992).
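The standard definitions of these two measures can be sketched as follows; this is a sketch of the textbook formulas, not necessarily the exact variants computed by the effsize package.

```python
import statistics


def cohens_d(a, b):
    """Cohen's d: the mean difference between two samples divided by
    the pooled standard deviation (an estimate of sigma_pop)."""
    na, nb = len(a), len(b)
    pooled_var = (((na - 1) * statistics.variance(a)
                   + (nb - 1) * statistics.variance(b))
                  / (na + nb - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5


def hedges_g(a, b):
    """Hedges' g: Cohen's d with a small-sample bias correction."""
    correction = 1.0 - 3.0 / (4 * (len(a) + len(b)) - 9)
    return correction * cohens_d(a, b)
```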
Vargha-Delaney (VD) Â_{ 12 } is an effect size measure based on stochastic superiority that denotes the probability of one method outperforming another (Vargha and Delaney 2000). If both methods are equivalent, then Â_{ 12 }=0.5. An effect size Â_{ 12 }>0.5 means that the treatment method has a higher probability of achieving a better performance than the control method, and vice versa. Vargha-Delaney’s Â_{ 12 } is recommended by Arcuri and Briand (2011) as a simple and intuitive effect size measure for assessing randomized algorithms in software engineering research. Table 17 shows the pairwise comparison of the three test prioritization methods. The metrics presented can also be used in future research (e.g., meta-analysis (Kampenes et al. 2007)).
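The Â_{ 12 } statistic has a direct sketch as a pairwise win count, with ties counted as half a win:

```python
def vargha_delaney_a12(treatment, control):
    """Vargha-Delaney A^_12: the probability that a value drawn from
    the treatment sample exceeds one drawn from the control sample,
    counting ties as half; 0.5 denotes stochastic equivalence."""
    wins = sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in treatment for y in control)
    return wins / (len(treatment) * len(control))
```

For instance, two identical samples yield 0.5, while a treatment sample that always exceeds the control yields 1.0.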
We did not compute the effect size for P01 and P02 due to the deterministic nature of the RBAC and simple prioritizations and the consequent σ_{ p o p }=0. The analysis of effect size corroborated the mean APFDs and the Wilcoxon matched-pairs signed-ranks tests: RBAC similarity had good results in P04+HSI, P04+SPY, and all P05 scenarios.
We found differences of medium magnitude between RBAC and the simple and random prioritizations in P04+HSI, and of large magnitude in P04+SPY and all P05 scenarios. There was only one case (i.e., P03+HSI) where RBAC prioritization did not outperform the other methods. In the other scenarios, we found negligible to medium differences between the techniques. Thus, the following order was observed, from the method with the lowest APFDs to the highest: Simple ≺ Random ≺ RBAC.
Discussion
Recently, Cartaxo et al. (2011) and Bertolino et al. (2015) showed that similarity functions can be helpful when it is necessary to prioritize exhaustive test suites automatically generated for LTS models and XACML policies, respectively. In our previous study (Damasceno et al. 2016), we found that, no matter which FSM-based testing method is applied to RBAC systems, larger test suites tend to be generated as the number of users and roles increases. Thus, domain-specific test criteria are required to optimize FSM-based testing for RBAC systems. To this end, there are three main approaches: (i) test minimization, (ii) test selection, and (iii) test prioritization.
Unlike (i) test minimization and (ii) test selection, which may compromise fault detection capability, (iii) test prioritization aims at finding an order of execution for an entire test suite (i.e., without filtering out any test case) based on some test criterion (Yoo and Harman 2012). In this paper, we investigated test prioritization for RBAC systems and proposed RBAC similarity.
RBAC similarity compared to the other criteria
Our results showed that RBAC similarity performed better than simple dissimilarity and random prioritization in some of the scenarios, especially those with large FSM(P) models. For policies P01 and P02, we did not find statistically significant differences between the test prioritization criteria in most of the scenarios. The only exception was P01 + HSI, where a statistically significant difference between RBAC similarity and the other criteria was found. The HSI method reduces test dimensions by using harmonized state identifiers instead of the characterization set (Petrenko and Bochmann 1995). In this scenario, the characteristics of the HSI method may have affected test diversity and, as a result, benefited RBAC similarity.
Due to the large number of test cases generated from policies P03, P04, and P05, prioritizing the complete test suites became infeasible. To overcome this issue, we opted to apply test prioritization to random sub-test suites.
For policy P03, all test prioritization approaches reached the maximum cumulative effectiveness within the first 5 to 10% of the test suites, and we did not find statistically significant differences between them. Thus, the fault distribution along the FSM(P03) model benefited fault detection and test prioritization. In scenarios like this, test minimization may be more suitable than test prioritization, which has an O(n^{2}) cost. However, as we highlighted earlier, there is a risk of reducing the capability of test suites to detect faults outside the RBAC domain.
The benefits of RBAC similarity became more evident in the P04 and P05 scenarios, the largest FSM(P) models. For policy P04, we found a statistically significant difference in favor of RBAC similarity for the sub-test suites generated by HSI and SPY. The only exception was the P04 + W scenario, where the random selection of sub-test suites may have compromised test diversity.
In the P05 scenario, RBAC similarity outperformed both other test prioritization criteria with statistically significant differences. The analysis of the mean APFD values and effect sizes corroborates the mean cumulative effectiveness depicted in Figs. 9 to 11.
Random prioritization vs. simple dissimilarity
Our results showed a statistically significant difference between random prioritization and simple dissimilarity. In ten out of 15 scenarios, random prioritization presented APFDs significantly higher than simple dissimilarity. RBAC faults can be exhibited across many different transitions of an FSM(P) (Masood et al. 2010); thus, test diversity alone may not imply a higher APFD.
Practical feasibility
We found that RBAC similarity may not be feasible for large complete test suites, as seen in scenarios P03, P04, and P05. The O(n^{2}) complexity is an inherent characteristic of most test prioritization approaches (Elbaum et al. 2002), and especially of similarity testing, which is also an all-to-all comparison problem (Zhang et al. 2017). However, RBAC similarity can still be improved through (i) test minimization and/or (ii) parallel programming.
RBAC applicability can be used in test minimization as a requirements coverage criterion to find the test cases relevant to the constraints (i.e., requirements) of RBAC policies. Afterwards, RBAC similarity can be applied as we proposed. Thus, a significant reduction of test costs can be achieved, but at the risk of reducing the fault-detection capability (Yoo and Harman 2012).
Recent studies have proposed parallel algorithms to efficiently calculate similarity matrices for mathematical modelling on heterogeneous hardware (Rawald et al. 2015) and for ontology mapping (Gîză-Belciug and Pentiuc 2015). However, to the best of our knowledge, they have never been investigated for similarity testing. RBAC similarity as a test minimization criterion and parallel algorithms for calculating similarity matrices could speed up similarity testing, but both are out of the scope of this study and left as future work.
Threats to validity
Conclusion Validity: Threats to conclusion validity relate to the ability to draw correct conclusions about the relation between the treatment and the outcomes. To mitigate them, we used the Wilcoxon matched-pairs signed-ranks test to verify whether RBAC similarity reached APFDs different from those of simple dissimilarity and random prioritization, with a confidence interval of 95%. We also computed the mean APFD with a confidence interval of 95% and five effect size measures to quantify the difference between the methods. The statistical analyses were performed using the R statistical package and the effsize package (Torchiano 2017). The R scripts and the input and output statistical data are included in the RBACBT repository.
Internal Validity: Threats to internal validity are related to influences that can affect independent variables with respect to causality. They threaten conclusions about a possible causal relationship between treatment and outcome. To mitigate this threat, random tasks (i.e., sub-test suite generation and random prioritization) were performed repeatedly to avoid results obtained by chance. Most of the artifacts used in this work were reused from the lab package of our previous study (Damasceno et al. 2016).
Construct Validity: Construct validity concerns generalizing the outcomes to the concept or theory behind the experiment. We used first-order mutants from the RBAC fault domain (Masood et al. 2009) to simulate simple faults and evaluate the effectiveness of each prioritization criterion. Mutation analysis is a common assessment approach in software testing investigations (Jia and Harman 2011). Other RBAC fault models could be used in this experiment, such as malicious faults (Masood et al. 2009) and probabilistic models of fault coverage (Masood et al. 2010). These fault models could be used to analyse RBAC similarity testing from the perspective of faults of a different nature, but they were left as future work. Moreover, despite the relatively low number of faults, the RBAC fault model is still representative of the functional faults of RBAC systems (Masood et al. 2009).
External Validity: External validity concerns the generalization of the outcomes to other scenarios. To mitigate this threat, we included test suites generated from three different test generation methods and RBAC policies with different characteristics.
Conclusions
Essentially, the RBAC model reduces the complexity of security management routines by grouping privileges into roles, which can be assigned to users and activated in sessions. Access control testing is an important activity during the development of RBAC systems, since implementation mistakes may lead to security breaches. In this context, previous studies have shown that FSM-based testing can be effective at detecting RBAC faults, but very expensive. Thus, additional steps become necessary to make RBAC testing more feasible and less costly.
Test case prioritization addresses this problem: it aims at finding an ordering for test case execution that maximizes some test criterion. Similarity testing is a variant of test case prioritization which has been investigated in the XACML and LTS domains and has enabled better orderings for test case execution. In this paper, we introduced a test prioritization technique named RBAC similarity which uses the dissimilarity between pairs of test cases and their pairwise applicability to the RBAC policy under test (i.e., the relevance of these test cases to the RBAC constraints) as test prioritization criteria.
Our RBAC similarity approach was experimentally evaluated against simple dissimilarity and random prioritization as baselines. The results pointed out that RBAC similarity improved the mean cumulative effectiveness and the APFD, enabling test suites to reach their maximum effectiveness at a faster rate, with significant differences in most of the cases. In some scenarios, prioritizing HSI and SPY test suites with RBAC similarity resulted in better APFD values than applying the technique to W test suites: the characteristics of the test cases generated by HSI and SPY favoured the similarity testing algorithms, while the random selection applied to the complete test suites generated by W negatively impacted test prioritization using similarity functions. Moreover, random prioritization also outperformed simple dissimilarity in most of the cases. We analyzed our data using the Wilcoxon matched-pairs signed-ranks test, error bars with CI=95%, and five effect size metrics (i.e., mean and median differences, Cohen’s d, Hedges’ g, and Vargha-Delaney’s Â_{ 12 }), and found statistically significant differences in some scenarios.
All test artifacts (i.e., the RBACBT tool, test suites, test results, RBAC policies, and statistical data) are available online^{Footnote 2} and can be used to replicate, verify, and validate this experiment. As future work, we want to investigate alternative algorithms for ordering test cases, such as algorithms using total information for test prioritization, and other fault models, such as simulated malicious faults and probabilistic fault models. We also intend to investigate the usage of RBAC similarity as a requirements coverage criterion for test minimization and as a fitness function in search-based software testing (McMinn 2004).
References
Andrews, JH, Briand LC, Labiche Y, Namin AS (2006) Using mutation analysis for assessing and comparing testing coverage criteria. IEEE Trans Softw Eng. 32(8):608–624. https://doi.org/10.1109/TSE.2006.83.
ANSI (2004) Role-based access control. Technical report, American National Standards Institute, Inc. ANSI/INCITS 359-2004.
Arcuri, A, Briand L (2011) A practical guide for using statistical tests to assess randomized algorithms in software engineering In: Proceedings of the 33rd International Conference on Software Engineering. ICSE ’11, 1–10. ACM, New York, NY, USA. https://doi.org/10.1145/1985793.1985795.
Ben Fadhel, A, Bianculli D, Briand L (2015) A comprehensive modeling framework for role-based access control policies. J Syst Softw. 107(C):110–126. https://doi.org/10.1016/j.jss.2015.05.015.
Bertolino, A, Daoudagh S, Kateb DE, Henard C, Traon YL, Lonetti F, Marchetti E, Mouelhi T, Papadakis M (2015) Similarity testing for access control. Inf Softw Technol. 58:355–372. https://doi.org/10.1016/j.infsof.2014.07.003.
Broy, M, Jonsson B, Katoen JP, Leucker M, Pretschner A (2005) Model-Based Testing of Reactive Systems: Advanced Lectures (Lecture Notes in Computer Science). Springer, Secaucus, NJ, USA.
Cartaxo, EG, Machado PDL, Neto FGO (2011) On the use of a similarity function for test case selection in the context of modelbased testing. Softw Test Verif Reliab. 21(2):75–100. https://doi.org/10.1002/stvr.413.
Chow, TS (1978) Testing software design modeled by finite-state machines. IEEE Trans Softw Eng. 4(3):178–187. https://doi.org/10.1109/TSE.1978.231496.
Cohen, J (1977) Statistical Power Analysis for the Behavioral Sciences. Revised edn. Academic Press, New York. https://doi.org/10.1016/B9780121790608.500013.
Cohen, J (1992) A power primer. Psychol Bull. 112(1):155–159. https://doi.org/10.1037/00332909.112.1.155.
Coutinho, AEVB, Cartaxo EG, Machado PDdL (2014) Analysis of distance functions for similarity-based test suite reduction in the context of model-based testing. Softw Qual J. 1–39. https://doi.org/10.1007/s11219-014-9265-z.
Damasceno, CDN, Masiero PC, Simao A (2016) Evaluating test characteristics and effectiveness of FSM-based testing methods on RBAC systems In: Proceedings of the 30th Brazilian Symposium on Software Engineering. SBES ’16, 83–92. ACM, New York, NY, USA. https://doi.org/10.1145/2973839.2973849.
Elbaum, S, Malishevsky AG, Rothermel G (2000) Prioritizing test cases for regression testing. SIGSOFT Softw Eng Notes. 25(5):102–112. https://doi.org/10.1145/347636.348910.
Elbaum, S, Malishevsky AG, Rothermel G (2002) Test case prioritization: A family of empirical studies. IEEE Trans Softw Eng. 28(2):159–182. https://doi.org/10.1109/32.988497.
Endo, AT, Simao A (2013) Evaluating test suite characteristics, cost, and effectiveness of FSM-based testing methods. Inf Softw Technol. 55(6):1045–1062. https://doi.org/10.1016/j.infsof.2013.01.001.
Fabbri, SCPF, Delamaro ME, Maldonado JC, Masiero PC (1994) Mutation analysis testing for finite state machines In: Proceedings of the 5th International Symposium on Software Reliability Engineering, 220–229. https://doi.org/10.1109/ISSRE.1994.341378.
Felderer, M, Zech P, Breu R, Büchler M, Pretschner A (2015) Model-based security testing: a taxonomy and systematic classification. Softw Test Verif Reliab. https://doi.org/10.1002/stvr.1580.
Ferraiolo, DF, Kuhn RD, Chandramouli R (2007) Role-Based Access Control. 2nd edn. Artech House, Inc., Norwood, MA, USA.
Gill, A (1962) Introduction to the Theory of Finite State Machines. McGraw-Hill, New York.
Gîză-Belciug, F, Pentiuc SG (2015) Parallelization of similarity matrix calculus in ontology mapping systems In: 2015 14th RoEduNet International Conference - Networking in Education and Research (RoEduNet NER), 50–55. https://doi.org/10.1109/RoEduNet.2015.7311827.
Hedges, LV (1981) Distribution theory for Glass’s estimator of effect size and related estimators. J Educ Stat. 6(2):107–128. https://doi.org/10.3102/10769986006002107.
Henard, C, Papadakis M, Perrouin G, Klein J, Heymans P, Traon YL (2014) Bypassing the combinatorial explosion: Using similarity to generate and prioritize t-wise test configurations for software product lines. IEEE Trans Softw Eng. 40(7):650–670. https://doi.org/10.1109/TSE.2014.2327020. arXiv:1211.5451v1.
Jang-Jaccard, J, Nepal S (2014) A survey of emerging threats in cybersecurity. J Comput Syst Sci 80(5):973–993. https://doi.org/10.1016/j.jcss.2014.02.005. Special Issue on Dependable and Secure Computing.
Jia, Y, Harman M (2011) An analysis and survey of the development of mutation testing. Softw Eng IEEE Trans. 37(5):649–678. https://doi.org/10.1109/TSE.2010.62.
Kampenes, VB, Dybå T, Hannay JE, Sjøberg DIK (2007) A systematic review of effect size in software engineering experiments. Inf Softw Technol. 49(11):1073–1086. https://doi.org/10.1016/j.infsof.2007.02.015.
Masood, A, Bhatti R, Ghafoor A, Mathur AP (2009) Scalable and effective test generation for role-based access control systems. IEEE Trans Softw Eng. 35(5):654–668. https://doi.org/10.1109/TSE.2009.35.
Masood, A, Ghafoor A, Mathur AP (2010) Fault coverage of constrained random test selection for access control: A formal analysis. J Syst Softw. 83(12):2607–2617. TAIC PART 2009 - Testing: Academic & Industrial Conference - Practice And Research Techniques.
McMinn, P (2004) Search-based software test data generation: A survey: Research articles. Softw Test Verif Reliab. 14(2):105–156. https://doi.org/10.1002/stvr.v14:2.
Mouelhi, T, Kateb DE, Traon YL (2015) Chapter five - inroads in testing access control, Advances in Computers, vol. 99. Elsevier. https://doi.org/10.1016/bs.adcom.2015.04.003.
OASIS (2013) eXtensible Access Control Markup Language (XACML) Version 3.0. Technical report, Organization for the Advancement of Structured Information Standards (OASIS). http://docs.oasis-open.org/xacml/3.0/xacml-3.0-core-spec-os-en.pdf.
OASIS (2014) XACML v3.0 Core and Hierarchical Role Based Access Control (RBAC) Profile Version 1.0. http://docs.oasis-open.org/xacml/3.0/rbac/v1.0/cs02/xacml-3.0-rbac-v1.0-cs02.pdf.
Ouriques, JFS (2015) Strategies for prioritizing test cases generated through model-based testing approaches In: Proceedings of the 37th International Conference on Software Engineering - Volume 2. ICSE ’15, 879–882. IEEE Press, Piscataway, NJ, USA. http://dl.acm.org/citation.cfm?id=2819009.2819204.
Petrenko, A, Bochmann GV (1995) Selecting test sequences for partially specified nondeterministic finite state machines. In: Luo G (ed) 7th IFIP WG 6.1 International Workshop on Protocol Test Systems. IWPTS ’94, 95–110. Chapman and Hall, Ltd., London, UK. http://dl.acm.org/citation.cfm?id=236187.233118.
Rawald, T, Sips M, Marwan N, Leser U (2015) Massively parallel analysis of similarity matrices on heterogeneous hardware In: Proceedings of the Workshops of the EDBT/ICDT 2015 Joint Conference (EDBT/ICDT), Brussels, Belgium, March 27th, 2015, 56–62. CEUR-WS, Brussels.
Samarati, P, de Vimercati SC (2001) Access Control: Policies, Models, and Mechanisms (Focardi R, Gorrieri R, eds.). Springer, Berlin, Heidelberg. http://dx.doi.org/10.1007/3540456082_3.
Simão, A, Petrenko A, Yevtushenko N (2009) Generating Reduced Tests for FSMs with Extra States. In: Núñez M, Baker P, Merayo MG (eds), 129–145. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-05031-2_9.
Torchiano, M (2017) Effsize: Efficient Effect Size Computation (v. 0.7.1). CRAN package repository. https://cran.r-project.org/web/packages/effsize/effsize.pdf. [Online; accessed 20 November 2017].
Utting, M, Pretschner A, Legeard B (2012) A taxonomy of modelbased testing approaches. Softw Test Verif Reliab. 22(5):297–312. https://doi.org/10.1002/stvr.456.
Vargha, A, Delaney HD (2000) A critique and improvement of the CL common language effect size statistics of McGraw and Wong. J Educ Behav Stat. 25(2):101–132. https://doi.org/10.3102/10769986025002101.
Vasilevskii, MP (1973) Failure diagnosis of automata. Cybernetics 9(4):653–665. https://doi.org/10.1007/BF01068590.
Wohlin, C, Runeson P, Höst M, Ohlsson MC, Regnell B, Wesslén A (2012) Measurement. Springer, Berlin, Heidelberg. https://doi.org/10.1007/9783642290442_3.
Yoo, S, Harman M (2012) Regression testing minimization, selection and prioritization: A survey. Softw Test Verif Reliab. 22(2):67–120. https://doi.org/10.1002/stvr.430.
Zhang, YF, Tian YC, Kelly W, Fidge C (2017) Scalable and efficient data distribution for distributed computing of all-to-all comparison problems. Futur Gener Comput Syst. 67:152–162.
Acknowledgements
We acknowledge the help of all members of LabES (Software Engineering Laboratory) at the University of São Paulo (USP) for their valuable comments. We also thank the reviewers for their valuable comments and suggestions on this study.
Funding
Carlos Diego Nascimento Damasceno’s research project was supported by the National Council for Scientific and Technological Development (CNPq), process number 132249/2014-6.
Author information
Contributions
CDND designed and conducted the experiment, adapted the RBACBT tool and analyzed the results. PCM and AS supported the validation of the experiment protocol and analysis of results. All authors read and approved the final manuscript.
Corresponding author
Correspondence to Carlos Diego N. Damasceno.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Keywords
 Finite state machines
 Role-Based Access Control (RBAC)
 Test prioritization
 Similarity testing