
A Hybrid Binary Multi-Objective Particle Swarm Optimization with Local Search for Test Case Selection

Luciano S. de Souza, Ricardo B. C. Prudencio, Flavia de A. Barros

    Center of Informatics (CIn),

    Federal University of Pernambuco (UFPE)

    Recife - PE - Brazil

{lss2, rbcp, fab}@cin.ufpe.br

Federal Institute of Education Science and Technology of the North of Minas Gerais (IFNMG)

    Pirapora - MG - Brazil

    [email protected]

Abstract—During the software testing process, a variety of test suites can be generated in order to evaluate and assure the quality of the products. However, in some contexts the execution of all suites does not fit the available resources (time, people, etc.). In such cases, Automatic Test Case (TC) selection can be used to reduce the suites based on some selection criterion. This process can be treated as an optimization problem, aiming to find a subset of TCs which optimizes one or more objective functions (i.e., selection criteria). In this light, we developed two new mechanisms for TC selection which consider two objectives simultaneously: maximize branch coverage while minimizing execution cost (time). These mechanisms were implemented using multi-objective techniques based on Particle Swarm Optimization (PSO). Additionally, we created hybrid multi-objective selection algorithms in order to improve the results. The experiments were performed on the space program from the SIR repository, attesting the feasibility of the proposed hybrid strategies.

    I. INTRODUCTION

During the software development process, software testing activities have grown in importance due to the need for high-quality products. These activities aim to assure the quality and reliability of the developed products. However, testing is a very expensive activity. In this scenario, automation seems to be the key solution for improving the efficiency and effectiveness of the testing process.

The related literature highlights two main approaches for testing: White Box (structural) or Black Box (functional) testing. In both approaches, the testing process relies on the (manual or automatic) generation and execution of Test Suites (TS), aiming to provide a suitable coverage of the adopted test adequacy criterion (e.g., code coverage, requirements coverage) in order to satisfy the test goals.

In general, the (manually or automatically generated) test suites tend to be large, aiming to provide a good coverage of the adopted test adequacy criterion (i.e., to cover all possible test scenarios). However, although it is desirable to fully satisfy the test goals, the execution of large suites is a very expensive task, demanding a great amount of resources (time and people) [1].

Fortunately, it is possible to identify redundancies in large test suites (i.e., two or more TCs covering the same requirement/piece of code). Hence, it is possible to reduce the suite in order to fit the available resources. This task of reducing a test suite based on a given selection criterion is known as Test Case selection.

TC selection is not easy or trivial, since there may be a large number of TC combinations to consider when searching for an adequate TC subset. A very promising approach to deal with the TC selection problem relies on the use of search optimization techniques (see [2], [3], [4], [5], [6]), which are the focus of our research. Here, the aim is to search for a subset of TCs which optimizes a given objective function (i.e., the given selection criterion).

Regarding multi-objective TC selection, we can cite the use of evolutionary approaches [4] and the use of Particle Swarm Optimization (PSO) techniques (our previous work) [7]. In [7], we investigated multi-objective TC selection considering both the functional requirements coverage (quality) and the execution effort (cost) of the selected subset of TCs as objectives of the selection process. Despite the good results obtained in a case study, further improvements could be made.

In the current work, we propose the use of hybrid multi-objective algorithms for TC selection. More specifically, we developed hybrid algorithms by adding local search capabilities to the Binary Multi-Objective Particle Swarm Optimization with Crowding Distance and Roulette Wheel (BMOPSO-CDR) algorithm (see [7]), based on the ideas presented in [8]. The local search procedure intends to explore the neighborhood of a solution to (possibly) obtain better solutions nearby. Two hybrid algorithms were created: (1) the Binary Multi-Objective Particle Swarm Optimization with Crowding Distance, Roulette Wheel and Local Search (BMOPSO-CDRLS), which combines the BMOPSO-CDR with the Forward Selection and Backward Elimination [9] local search methods; and (2) the BMOPSO-CDRLS2, which adopts the 1-opt method [10] as the local search procedure. The space program from the Software-artifact Infrastructure Repository (SIR) [11] was used in the experiments in order to compare the two hybrid algorithms with the BMOPSO-CDR and the Non-Dominated Sorting Genetic Algorithm (NSGA-II) [12].

2014 Brazilian Conference on Intelligent Systems

978-1-4799-5618-0/14 $31.00 2014 IEEE

DOI 10.1109/BRACIS.2014.80

Each implemented algorithm was applied to the multi-objective TC selection problem, providing to the user a set of solutions (test suites) with different values of code branch coverage versus execution cost. The user may then choose the solution that best fits the available resources. It is important to highlight that, although the focus of our research is the multi-objective TC selection problem, the proposed algorithms can also be applied to multi-objective optimization in other contexts.

The following section details our multi-objective PSO approach for TC selection, as well as the proposed algorithms. Section III presents the experiments and the obtained results. Section IV brings conclusions and future work.

II. BINARY MULTI-OBJECTIVE PSO FOR TEST CASE SELECTION

In our work, we propose the use of Particle Swarm Optimization (PSO) methods to solve multi-objective TC selection problems. In contrast to single-objective problems, Multi-Objective Optimization (MOO) aims to optimize more than one objective at the same time.

An MOO problem considers a set of k objective functions f1(x), f2(x), ..., fk(x), where x is an individual solution for the problem being solved. The output of an MOO algorithm is usually a population of non-dominated solutions considering the objective functions. Formally, let x and x' be two different solutions. We say that x dominates x' (denoted by x ≻ x') if x is better than x' for at least one objective function and x is not worse than x' for any objective function. x is said to be non-dominated if there is no other solution xi in the current population such that xi ≻ x. The set of non-dominated solutions in the objective space returned by an MOO algorithm is known as the Pareto frontier [13].
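The dominance relation above can be sketched directly. The helper below is illustrative (not code from the paper) and assumes every objective has been oriented for maximization; a minimized objective such as cost can be negated:

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives maximized):
    a is no worse on every objective and strictly better on at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(population):
    """Keep only the non-dominated objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Example: objectives are (coverage, -cost), both maximized.
pop = [(90, -10), (80, -5), (70, -20)]
print(pareto_front(pop))  # [(90, -10), (80, -5)]: (70, -20) is dominated
```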

PSO is a population-based search approach inspired by the behavior of bird flocks. PSO has shown to be a simple and efficient algorithm when compared to other search techniques, including, for instance, Genetic Algorithms [14]. The basic PSO algorithm starts its search process with a random population (also called swarm) of particles. Each particle represents a candidate solution for the problem being solved, and it has four main attributes: (1) the position (t) in the search space (each position represents an individual solution for the optimization problem), (2) the current velocity (v), indicating a direction of movement in the search space, (3) the best position (t̂) found by the particle (the particle's memory) and (4) the best position (ĝ) in the particle's neighborhood (the social guide).

For a number of iterations, the particles fly through the search space, being influenced by their own experience (t̂) and by the experience of their neighbors (ĝ). Particles change position and velocity continuously, aiming to reach better positions and to improve the considered objective functions.

In this work, we developed two hybrid algorithms by merging the BMOPSO-CDR algorithm (see [7]) with some well-known local search algorithms. Furthermore, the proposed hybrid algorithms were applied to select structural tests (whereas in [7] we worked with functional tests). The chosen objective functions to be optimized are the branch coverage (referring to code coverage) and the execution cost of the selected TCs, in such a way that the first function is maximized while the second is minimized1.

The proposed hybrid algorithms return a set of non-dominated solutions (a Pareto frontier) considering the aforementioned objectives. By receiving a set of diverse solutions, the user can choose the one that best matches his/her current goals (e.g., the available time to execute the TCs).

We would like to point out that the related literature lacks works using multi-objective PSO for the TC selection problem. For functional TC selection we could only identify our previous work [7], and for structural TC selection, to the best of our knowledge, there is no previous work. Hence, this is a promising research topic, and this work aims to explore it a little further by creating new hybrid multi-objective algorithms.

    A. Problem Formulation

In this work, the particles' positions were defined as binary vectors representing candidate subsets of TCs to be applied in the software testing process. Let T = {T1, ..., Tn} be a test suite with n test cases. A particle's position is defined as t = (t1, ..., tn), in which tj ∈ {0, 1} indicates the presence (1) or absence (0) of the test case Tj within the subset of selected TCs.

As said, two objective functions were adopted: branch coverage and execution cost. The branch coverage (function to be maximized) consists of the ratio (in percentage) between the number of code branches covered by a solution t and the number of branches covered by T. Formally, let B = {B1, ..., Bk} be the set of k branches covered by the original suite T. Let F(Tj) be a function that returns the subset of branches in B covered by the individual test case Tj. The branch coverage of a solution t is given by:

B_Coverage(t) = 100 · |∪_{j: tj=1} F(Tj)| / k    (1)

In eq. (1), ∪_{j: tj=1} F(Tj) is the union of the branch subsets covered by the selected test cases (i.e., the Tj for which tj = 1).

The execution cost (function to be minimized) represents the amount of time required to execute the selected suite. Formally, each test case Tj ∈ T has a cost score cj. The total cost of a solution t is given by:

Cost(t) = Σ_{j: tj=1} cj    (2)

Finally, the proposed algorithms are used to deliver a good Pareto frontier regarding the objective functions B_Coverage and Cost.
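The two objective functions can be sketched as follows; this is a minimal illustration of eqs. (1) and (2), with names of our choosing rather than the authors' code:

```python
def b_coverage(t, branch_sets, k):
    """Eq. (1): percentage of the k branches covered by the full suite T
    that are covered by the selected test cases (t[j] == 1).
    branch_sets[j] plays the role of F(Tj)."""
    covered = set()
    for j, tj in enumerate(t):
        if tj == 1:
            covered |= branch_sets[j]
    return 100.0 * len(covered) / k

def cost(t, costs):
    """Eq. (2): total execution cost of the selected test cases."""
    return sum(cj for tj, cj in zip(t, costs) if tj == 1)

# Example: 3 test cases; the full suite covers k = 4 branches.
F = [{"b1", "b2"}, {"b2", "b3"}, {"b4"}]
t = (1, 0, 1)                # select T1 and T3
print(b_coverage(t, F, 4))   # 75.0 (covers b1, b2, b4 out of 4)
print(cost(t, [5, 3, 2]))    # 7
```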

1In this paper we do not aim to discuss which objectives are more important for TC selection. Regardless of the arguments about their suitability, branch coverage is a likely candidate for assessing quality, and execution time represents a realistic measure of cost.


    B. The BMOPSO-CDR algorithm

The Binary Multi-Objective Particle Swarm Optimization with Crowding Distance and Roulette Wheel (BMOPSO-CDR) was first presented in [7]. It uses an External Archive (EA) to store the non-dominated solutions found by the particles during the search process. See [7] for more details of the BMOPSO-CDR algorithm.

    The following summarizes the BMOPSO-CDR:

1) Randomly initialize the swarm, evaluate each particle according to the considered objective functions, and then store in the EA the particles' positions that are non-dominated solutions;

    2) WHILE stop criterion is not verified DO

a) Compute the velocity v of each particle as:

v ← ω·v + C1·r1·(t̂ − t) + C2·r2·(ĝ − t)    (3)

where ω represents the inertia factor; r1 and r2 are random values in the interval [0,1]; C1 and C2 are constants. The social guide (ĝ) is defined as one of the non-dominated solutions stored in the current EA.

b) Compute the new position t of each particle for each dimension tj as:

tj = 1 if r3 < sig(vj), and tj = 0 otherwise    (4)

where r3 is a random number sampled in the interval [0,1] and sig(vj) is defined as:

sig(vj) = 1 / (1 + e^(−vj))    (5)

c) Apply the mutation operator proposed in [15];

d) Evaluate each particle of the swarm and update the solutions stored in the EA;

e) Update each particle's memory t̂;

3) END WHILE and return the current EA as the Pareto frontier.
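Steps 2a and 2b above can be sketched for a single particle as follows. This is a hedged illustration, not the authors' implementation: parameter names (omega, vmax) are ours, and the velocity clamp reflects the maximum-velocity setting mentioned later in the experiments.

```python
import math
import random

def update_particle(t, v, pbest, gbest, omega=0.9, c1=1.49, c2=1.49, vmax=4.0):
    """One velocity/position update (eqs. 3-5) for a binary PSO particle.
    t: current binary position; pbest: particle memory t̂; gbest: social guide ĝ."""
    new_t, new_v = [], []
    for j in range(len(t)):
        r1, r2 = random.random(), random.random()
        # eq. (3): inertia + cognitive + social components
        vj = omega * v[j] + c1 * r1 * (pbest[j] - t[j]) + c2 * r2 * (gbest[j] - t[j])
        vj = max(-vmax, min(vmax, vj))              # clamp to maximum velocity
        prob = 1.0 / (1.0 + math.exp(-vj))          # sigmoid, eq. (5)
        new_t.append(1 if random.random() < prob else 0)  # eq. (4)
        new_v.append(vj)
    return new_t, new_v

random.seed(0)
t, v = update_particle([0, 1, 0], [0.0, 0.0, 0.0], [1, 1, 0], [1, 0, 1])
print(t, v)
```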

    C. Local Search algorithms

Generally speaking, local search algorithms choose, at each step, the locally best node (the one which yields the best fitness). The local search algorithms used in this work are Forward Selection (FS), Backward Elimination (BE) and the 1-opt algorithm. For more details about these algorithms we suggest reading [9] and [10].

According to [9], the FS technique, also known as Sequential Forward Selection, is a bottom-up procedure which builds a solution by iteratively adding new features to an initially empty set, until a stopping criterion is reached. The final solution is then returned to the user. In turn, BE, or Sequential Backward Selection, is the top-down analogue of forward selection [9]. The BE algorithm starts with a complete set and iteratively removes features from the set until a stopping criterion is reached. Finally, the 1-opt local search algorithm is a special case of the k-opt algorithm [10]. The 1-opt algorithm tries to find better solutions by changing a single element at a time in the current solution.

In our work, we created hybrid algorithms by adding local search capabilities into our multi-objective global search BMOPSO-CDR algorithm. The local search procedures aim to explore the less-crowded areas of the external archive, to possibly obtain more non-dominated nearby solutions.

1) The BMOPSO-CDRLS algorithm: After step (e) of the main loop (2) of BMOPSO-CDR, we introduced the local search procedure, creating the BMOPSO-CDRLS, as follows: (1) select 10%2 of the solutions stored in the EA by using Roulette Wheel; (2) for each selected solution, randomly select one objective to improve: IF the selected objective is to be maximized, THEN use the FS local search algorithm; ELSE (if it is to be minimized) use the BE local search algorithm.
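This selection step can be sketched as follows. Everything here is a hypothetical outline: we assume the roulette wheel is weighted (e.g., by crowding distance), and `improve_max`/`improve_min` stand in for the FS and BE procedures.

```python
import math
import random

def local_search_step(ea, weights, improve_max, improve_min):
    """Roulette-wheel-select about 10% of the EA solutions (weights are
    assumed, e.g. crowding distances), then for each one randomly pick an
    objective and apply the corresponding local search."""
    k = max(1, math.ceil(0.10 * len(ea)))
    improved = []
    for sol in random.choices(ea, weights=weights, k=k):
        if random.random() < 0.5:
            improved.append(improve_max(sol))   # objective to maximize -> FS
        else:
            improved.append(improve_min(sol))   # objective to minimize -> BE
    return improved
```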

The FS algorithm takes the solution t and iterates as follows: for each test case Tj not yet present in the current solution t (i.e., for each tj = 0), a new candidate solution t' is produced by setting tj ← 1. For each candidate solution, the previously chosen objective function is computed. The candidate solution which yields the highest objective value and is not dominated by the EA is then adopted as the new current solution in the search process. The algorithm stops (1) when all candidate solutions found at the current iteration are dominated by the EA, or (2) when all test cases have already been added to the current solution.

Conversely, the BE algorithm takes the solution t and iterates as follows: for each TC present in the current solution (i.e., for each tj = 1), a candidate solution t' is produced by setting tj ← 0. The objective function is computed and the candidate solution which yields the lowest objective value and is not dominated by the EA is taken as the current solution for the next iteration. This process is repeated until (1) all candidate solutions found at the current iteration are dominated by the EA, or (2) all test cases have already been removed from the current solution.

The FS algorithm is used for maximization because, by adding test cases to a solution t, the objective value increases or, at least, remains the same. The inverse reasoning applies to the BE algorithm in the case of minimization. Additionally, it is important to highlight that, instead of only adding to the EA the final solution returned by the FS or BE algorithms, we always add all non-dominated candidate solutions found at each iteration. In preliminary experiments, this procedure was shown to improve the Pareto frontiers returned by the hybrid algorithms.
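The FS loop described above can be sketched as follows; `objective` and `is_dominated_by_ea` are stand-ins for the chosen objective function and the external-archive dominance check (BE is the mirror image, flipping bits from 1 to 0 and taking the minimum):

```python
def forward_selection(t, objective, is_dominated_by_ea):
    """Greedy FS sketch: repeatedly add the test case that yields the best
    objective value, skipping candidates dominated by the EA, until no
    candidate survives or everything is selected."""
    t = list(t)
    while 0 in t:
        candidates = []
        for j, tj in enumerate(t):
            if tj == 0:
                cand = t[:j] + [1] + t[j + 1:]   # set tj <- 1
                if not is_dominated_by_ea(cand):
                    candidates.append(cand)
        if not candidates:                        # stop: all dominated by EA
            break
        t = max(candidates, key=objective)        # locally best move
    return t

# With a toy objective (number of selected TCs) and no EA domination,
# FS fills in every test case.
print(forward_selection([0, 1, 0], sum, lambda c: False))  # [1, 1, 1]
```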

2) The BMOPSO-CDRLS2 algorithm: Based on the idea of the BMOPSO-CDRLS, we created the BMOPSO-CDRLS2 by using the well-known 1-opt algorithm as local search in the following way: select 10% of the solutions stored in the EA by using Roulette Wheel; then, for each test case tj ∈ t, flip the bit of tj (i.e., if tj = 0 then tj ← 1, and vice versa) in order to create the neighborhood solutions. Each non-dominated neighborhood solution is added to the EA.
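The 1-opt neighborhood used here is simply every solution at Hamming distance one from t; an illustrative sketch:

```python
def one_opt_neighbors(t):
    """Yield every solution obtained by flipping exactly one bit of t."""
    for j in range(len(t)):
        yield t[:j] + (1 - t[j],) + t[j + 1:]

print(list(one_opt_neighbors((1, 0, 1))))
# [(0, 0, 1), (1, 1, 1), (1, 0, 0)]
```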

2This 10% value is suggested by [8].


III. EXPERIMENTS AND RESULTS

This section presents the experiments performed in order to evaluate the search algorithms implemented in this work. In addition to the aforementioned algorithms, we also implemented the well-known NSGA-II algorithm [12] in order to verify whether the proposed algorithms are competitive as multi-objective search optimization techniques.

The experiments were performed using the space program from the Software-artifact Infrastructure Repository (SIR) [11], from the European Space Agency. This program is an interpreter for an array definition language (ADL). The program reads a file that contains several ADL statements and checks whether the content of the file adheres to the ADL grammar and to specific consistency rules. If the ADL file is correct, the program outputs an array file containing a list of array elements, positions, and excitations; otherwise, the program outputs error messages.

    A. Experiments Preparation

The space program is a commonly used benchmark and has 6,199 lines of code. Since the program has a large number of available test suites, we randomly selected 4 suites. These suites (referred to here as T1, T2, T3 and T4) have 160, 156, 151 and 162 test cases, respectively.

For each TC, the execution cost information was computed by using the Valgrind profiling tool [16]. TC execution time is hard to measure accurately: it involves many external parameters that can affect the execution time, such as different hardware, application software and operating systems. In order to circumvent these issues we used Valgrind, which executes the program binary code in an emulated, virtual CPU. The computational cost of each test case was measured by counting the number of virtual instruction codes executed by the emulated environment. These counts can be argued to be directly proportional to the cost of executing the TC.

    Additionally, the branch coverage information was measuredusing the profiling tool gcov from the GNU compiler gcc.

    B. Metrics

In order to evaluate the results (i.e., the Pareto frontiers) obtained by the algorithms for each test suite, we used four different quality metrics usually adopted in the multi-objective optimization literature: Hypervolume (HV) [17], Generational Distance (GD) [13], Inverted Generational Distance (IGD) [13] and Coverage (C) [17]. Each metric considers a different aspect of the Pareto frontier.

1) Hypervolume (HV) [17]: computes the size of the dominated space, which is also called the area under the curve. A high hypervolume value is desired in MOO problems.

2) Generational Distance (GD) [13]: reports how far, on average, one Pareto set (called PF_known) is from the true Pareto set (called PF_true).

3) Inverted Generational Distance (IGD) [13]: is the inverse of GD, measuring the distance from PF_true to PF_known. This metric is complementary to GD and aims to mitigate the problem that arises when PF_known has very few points that are all clustered together. Thus, this metric is affected by the distribution of the solutions of PF_known relative to PF_true.

4) Coverage (C) [17]: indicates the amount of solutions within the non-dominated set of the first algorithm which dominate solutions within the non-dominated set of the second algorithm.

Both GD and IGD metrics require that PF_true be known. Unfortunately, for more complex problems (with bigger search spaces), such as the space and flex programs, it is impossible to know PF_true a priori. In these cases, instead, a reference Pareto frontier (called here PF_reference) can be constructed and used to compare algorithms regarding the Pareto frontiers they produce [4]. The reference frontier represents the union of all found Pareto frontiers, resulting in a set of non-dominated solutions. Additionally, the C metric reported in this work refers to the coverage of the optimal set PF_reference over each algorithm, indicating the amount of solutions of those algorithms that are dominated, i.e., that are not optimal.
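As an illustration (not the authors' code), GD can be computed as the average Euclidean distance from each point of PF_known to its nearest neighbor in the reference frontier; IGD simply swaps the roles of the two sets:

```python
import math

def generational_distance(pf_known, pf_reference):
    """Average distance from each point of PF_known to its nearest point
    in the reference frontier (lower is better); assumes points are
    tuples in a normalized objective space."""
    def nearest(p):
        return min(math.dist(p, q) for q in pf_reference)
    return sum(nearest(p) for p in pf_known) / len(pf_known)

ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(round(generational_distance([(0.5, 0.5), (1.0, 0.1)], ref), 6))  # 0.05
```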

    C. Algorithms Settings

All the algorithms were run for a total of 200,000 objective function evaluations. The BMOPSO-CDR and the hybrid algorithms used 20 particles, a mutation rate of 0.5, an inertia factor ω linearly decreasing from 0.9 to 0.4, constants C1 and C2 set to 1.49, a maximum velocity of 4.0 and an EA size of 200 solutions. These values are the same used in [7] and represent commonly used values in the literature. The NSGA-II algorithm used a mutation rate of 1/population size, a crossover rate of 0.9 and a population size of 200 individuals. As the NSGA-II algorithm does not use an external archive to store solutions, we decided to use 200 individuals to permit a fair comparison. This way, all the algorithms are limited to a maximum of 200 non-dominated solutions.

    D. Results

After 30 executions of each TC selection algorithm, the values of branch coverage and execution cost observed in the Pareto frontiers were normalized, since they are measured using different scales. All the evaluation metrics were computed and statistically compared using the Wilcoxon rank-sum test, a nonparametric hypothesis test that does not require any assumption about the parametric distribution of the samples. All the values of each metric were statistically different at an α level of 0.95, and Table I presents the average values of the adopted metrics, as well as the observed standard deviations.

From the table, we can observe that the BMOPSO-CDRLS outperformed the other selection algorithms for all the collected metrics. It is possible to observe, from the HV metric, that the BMOPSO-CDRLS dominates bigger objective space areas when compared to the others. Furthermore, the GD values obtained by the algorithm show that its Pareto frontiers have better convergence to the optimal Pareto frontier (represented by PF_reference). Additionally, the IGD metric shows that its Pareto frontiers are also well distributed relative to the optimal Pareto set. Finally, the Coverage


TABLE I. MEAN VALUE AND STANDARD DEVIATION FOR THE METRICS

Algorithm       Suite  HV              GD               IGD              C
BMOPSO-CDR      T1     0.806 (0.007)   0.024 (0.004)    0.016 (0.001)    1.0 (0)
                T2     0.781 (0.010)   0.018 (0.003)    0.018 (7.3E-4)   1.0 (0)
                T3     0.814 (0.007)   0.021 (0.003)    0.015 (9.4E-4)   1.0 (0)
                T4     0.794 (0.008)   0.018 (0.003)    0.016 (9.4E-4)   1.0 (0)
BMOPSO-CDRLS    T1     0.977 (8.0E-5)  7.9E-5 (2.7E-5)  8.5E-5 (1.3E-5)  0.57 (0.08)
                T2     0.971 (7.1E-5)  1.0E-4 (1.1E-5)  1.0E-4 (7.3E-6)  0.56 (0.08)
                T3     0.978 (4.0E-5)  2.9E-5 (1.1E-5)  5.6E-5 (7.1E-6)  0.22 (0.11)
                T4     0.970 (1.2E-4)  7.7E-5 (1.6E-5)  9.0E-5 (1.2E-5)  0.64 (0.09)
BMOPSO-CDRLS2   T1     0.972 (0.002)   7.9E-4 (3.4E-4)  6.9E-4 (3.0E-4)  0.83 (0.01)
                T2     0.960 (0.002)   0.001 (4.1E-4)   0.001 (3.6E-4)   0.77 (0.01)
                T3     0.971 (0.001)   0.001 (3.1E-4)   9.4E-4 (2.4E-4)  0.80 (0.01)
                T4     0.964 (0.002)   7.9E-4 (2.6E-4)  6.9E-4 (2.4E-4)  0.79 (0.02)
NSGA-II         T1     0.876 (0.013)   0.002 (7.7E-4)   0.015 (0.001)    1.0 (0)
                T2     0.855 (0.012)   0.002 (7.5E-4)   0.016 (7.5E-4)   1.0 (0)
                T3     0.880 (0.014)   0.002 (7.0E-4)   0.013 (0.001)    1.0 (0)
                T4     0.858 (0.012)   0.002 (4.2E-4)   0.015 (0.001)    1.0 (0)

metric indicates that the BMOPSO-CDRLS was the algorithm least dominated by the optimal Pareto set; hence, several of its solutions are within the optimal frontier. Additionally, from these results, it is possible to see that the local search procedure used by the BMOPSO-CDRLS (FS and BE) outperformed the local search procedure of the BMOPSO-CDRLS2.

In addition to the aforementioned results, we can also see that both hybrid algorithms outperformed the NSGA-II and the BMOPSO-CDR for all metrics. This indicates that the addition of the local search procedure indeed improved the BMOPSO-CDR algorithm, and that these selection algorithms are competitive multi-objective algorithms. Additionally, we highlight that all the solutions found by NSGA-II and BMOPSO-CDR are dominated by the optimal Pareto frontier for the space program, indicating that no single solution found by them is a member of the optimal Pareto frontier.

    IV. CONCLUSION

In this work, we proposed the creation of hybrid algorithms by adding local search mechanisms into the binary multi-objective PSO for structural TC selection. The main contribution of the current work is to investigate whether the local search mechanism can improve the multi-objective PSO from [7] for selecting structural test cases considering both branch coverage and execution cost. We highlight that hybrid binary multi-objective PSO with local search had not yet been investigated in the context of TC selection. Besides, the developed selection algorithms can be adapted to other test selection criteria and are not limited to two objective functions. Furthermore, we expect that good results can also be obtained in other application domains.

In the performed experiments, both hybrid algorithms (BMOPSO-CDRLS and BMOPSO-CDRLS2) outperformed the BMOPSO-CDR and NSGA-II for all metrics. Hence, we can conclude that the local search mechanism indeed improved the BMOPSO-CDR algorithm and that the hybrid algorithms are competitive multi-objective search strategies. Additionally, the BMOPSO-CDRLS outperformed the BMOPSO-CDRLS2, indicating that the FS and BE strategy was better than 1-opt.

As future work, we point to the investigation of other strategies for performing local search, and to running the same experiments on a larger number of programs, in order to verify whether the obtained results are equivalent to those presented here and whether they can be extrapolated to testing scenarios other than those from the SIR repository. We will also investigate whether the performance of PSO on the TC selection problem is affected by changing its parameters.

    ACKNOWLEDGMENT

This work was partially supported by the National Institute of Science and Technology for Software Engineering (INES, www.ines.org.br), CNPq, CAPES, and FACEPE.

    REFERENCES

[1] M. J. Harrold, R. Gupta, and M. L. Soffa, "A methodology for controlling the size of a test suite," ACM Trans. Softw. Eng. Methodol., vol. 2, no. 3, pp. 270-285, 1993.

[2] N. Mansour and K. El-Fakih, "Simulated annealing and genetic algorithms for optimal regression testing," Journal of Software Maintenance, vol. 11, no. 1, pp. 19-34, January 1999.

[3] X.-Y. Ma, B.-K. Sheng, and C.-Q. Ye, "Test-suite reduction using genetic algorithm," Lecture Notes in Computer Science, vol. 3756, pp. 253-262, 2005.

[4] S. Yoo and M. Harman, "Using hybrid algorithm for pareto efficient multi-objective test suite minimisation," J. Syst. Softw., vol. 83, pp. 689-701, April 2010.

[5] L. S. Souza, R. B. C. Prudencio, and F. d. A. Barros, "A constrained particle swarm optimization approach for test case selection," in Proc. of the 22nd Int. Conf. on Soft. Eng. and Knowledge Eng. (SEKE 2010), Redwood City, CA, USA, 2010.

[6] L. S. de Souza, R. B. Prudencio, F. de A. Barros, and E. H. da S. Aranha, "Search based constrained test case selection using execution effort," Exp. Syst. with App., vol. 40, no. 12, pp. 4887-4896, 2013.

[7] L. S. Souza, P. B. C. Miranda, R. B. C. Prudencio, and F. d. A. Barros, "A multi-objective particle swarm optimization for test case selection based on functional requirements coverage and execution effort," in Proc. of the 23rd Int. Conf. on Tools with Art. Int. (ICTAI 2011), Boca Raton, FL, USA, 2011.

[8] C.-S. Tsou, S.-C. Chang, and P.-W. Lai, "Using crowding distance to improve multi-objective pso with local search," Swarm Intelligence: Focus on Ant and Particle Swarm Optimization, pp. 77-86, 2007.

[9] A. R. Webb, Statistical Pattern Recognition, 2nd Edition. John Wiley & Sons, Oct. 2002.

[10] C. Papadimitriou and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity, ser. Dover Books on Mathematics. Dover Publications, 1998.

[11] H. Do, S. Elbaum, and G. Rothermel, "Supporting controlled experimentation with testing techniques: An infrastructure and its potential impact," Emp. Softw. Engg., vol. 10, no. 4, pp. 405-435, Oct. 2005.

[12] K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan, "A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II," in Parallel Problem Solving from Nature PPSN VI, ser. LNCS. Springer Berlin Heidelberg, 2000, vol. 1917, pp. 849-858.

[13] C. A. C. Coello, G. B. Lamont, and D. A. van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems. Springer, 2007, vol. 5.

[14] R. C. Eberhart and Y. Shi, "Comparison between genetic algorithms and particle swarm optimization," LNCS, vol. 1447, pp. 611-616, 1998.

[15] C. Coello, G. Pulido, and M. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256-279, 2004.

[16] N. Nethercote and J. Seward, "Valgrind: A program supervision framework," in Third Workshop on Runtime Verification (RV'03), 2003.

[17] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, 1st ed. Wiley, Jun. 2001.
