Software Testing, with Elements of Verification and Validation, and Software Product Quality
G. Berio
Introductory Topics
• Definition(s) of testing
• Acceptance testing and defect testing
• Unit testing and testing in the large (integration testing and system testing) (testing strategies)
• Test results and reactions to them
• Testing, software quality, verification and validation
Testing: Definition(s)
Pressman writes: “Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user”. (Dijkstra, 1987)

Testing is an activity performed for evaluating product quality, and for improving it, by identifying defects and problems.

Better (SWEBOK): Software testing consists of the dynamic verification of the behavior of a program on a finite set of test cases, suitably selected from the usually infinite executions domain (inputs), against the expected behavior.

Pressman's Italian translation: Test = Collaudo
Selection of Test Cases
• Validation (acceptance) testing (*)
– To demonstrate to the developer and the customer that the software meets its requirements;
– A successful test shows that the software operates as intended.
• Defect testing
– To discover defects in the software where its behavior is incorrect or not in conformance with its specification;
– A successful test is a test that makes the software perform incorrectly and so exposes a defect in the software itself.
(*) Pressman's Italian translation: Validation testing = Collaudo di convalida or Collaudo di validazione
What is a “Good” Test Case?
• A good defect test case has a high probability of finding an error (failure), i.e. an unexpected and unwanted behavior
• A good test case is not redundant.
• A good test case should be neither too simple nor too complex
• A good test case should normally be repeatable (i.e. lead to the same results)
Elaborated from Pressman
Symptoms (Failure) & Causes (Fault, Defect)
• symptom and cause may be geographically separated
• symptom may disappear when another problem is fixed
• cause may be due to a combination of non-errors
• cause may be due to a system or compiler error
• cause may be due to assumptions that everyone believes
• symptom may be intermittent
fault → failure
Elaborated from Pressman
Testing and Lack of “Continuity”
• Testing examines the behavior of software code only on a set of test cases
• It is impossible to extrapolate the behavior of software code from a finite set of test cases
• There is no continuity of behavior
– software can exhibit correct behavior in infinitely many cases, but may still be incorrect in some other cases
When to Stop Defect Testing?
• If defect testing (on a set of test cases) does not detect failures, we cannot conclude that the software is defect-free
• Still, we need to do testing driven by sound and systematic principles…
What is testing performed on?
• On the code,
• But which code?
• The code of individual test units, or suitably aggregated sets of units, or the entire software installed in its own environment
A Test Unit is not (necessarily) a Component
[Diagram: a component containing several units, e.g. CalcolareCosto, ApplicaOpzioniCosto, ProssimaRichiesta?(), Creare()]
Testing Strategy
• We begin with ‘testing-in-the-small’ (or unit testing) and move to ‘testing-in-the-large’
• The software architecture is the typical way to incrementally drive ‘testing-in-the-large’
• For conventional components:
– The module or part of a module (unit) is the initial focus (in the small)
– Then integration of modules
• For object-oriented components:
– The OO class or part of a class (unit), which encompasses attributes and operations, is the initial focus (in the small)
– Integration is communication and collaboration among classes
Elaborated from Pressman
Software Testing Strategy
unit test → integration test → validation (acceptance) test → system test
(unit and integration testing: defect testing; unit testing is in the small, i.e. component testing; the remaining stages are in the large)
Elaborated from Pressman
Test Cases and Expected Behavior in Defect Testing
• The expected behavior for a unit should be obtained from:
– The requirements specification
• However, the components introduced during design have no direct link to the requirements specification; still, in the design they have their own, more or less precise, specification (as opposed to the code):
– Statecharts
– Pre/post conditions
– Descriptions of input and output types
– Sequence diagrams
– Etc.
• The component specification must include the specification of the unit on which the test is to be executed (or allow that unit specification to be derived); from the unit specification one should obtain the test cases and the expected behavior
• The expected behavior of a unit is found in the component specification only if that specification is well written, i.e. only if testability is guaranteed during design (indeed, in testing theory, the term ORACLE is often used to indicate that the expected behavior is simply known)
Who Performs Defect Testing?
• Developer: understands the system, but will test “gently”, and is driven by “delivery”
• Independent tester: must learn about the system, but will attempt to break it, and is driven by quality
From Pressman
Test Cases and Expected Behavior in Acceptance Testing
• To be performed by the customer or end users
• The expected behavior (of the whole software) should be fixed from the beginning: a special section of the requirements document should be explicitly devoted to “how to perform acceptance testing” (or “which are the test cases for acceptance”)
Example
[Use-case diagram: requirements-specification use cases “fermare” (stop) and “partire” (start); acceptance use case “Richiedi ed Ottieni Ascensore” (Request and Obtain Elevator), triggered by a request]
Pre: the user presses the button and it is the first request for the elevator (the test case)
Post: the elevator arrives within 1 minute (the expected behavior)
The acceptance use case describes both the test case and the expected behavior.
Effort to Repair Software
[Bar chart: relative effort to repair defects detected in different activities. Requirements: 0.15, Design: 0.5, Coding: 1, Unit test: 2, Acceptance test: 5, Maintenance: 20]
Effort to Repair Software
• Testing should only confirm that the performed work has been done correctly
• It remains easier to build correct software in the first place, the main issue of Software Engineering… eliciting and specifying requirements and moving systematically to design are ways towards building correct software as well
• So, because of the high effort required to repair defects, it is necessary to confirm at every step that the work is proceeding in the right direction
Context
• Communication
• Planning
• Modeling
– Requirements analysis
– Design
• Construction
– Code generation
– Testing
• Deployment
Prevent non-quality: quality forecasting on work products
Evaluate quality: quality assessment on deliverables
[Figure, from Pressman's process framework: framework activities, work tasks, work products, milestones & deliverables, QA checkpoints; umbrella activities; part of the software product (delivered to the customer) vs. intermediate products]
The Software Product
[Figure: system engineering produces the software product delivered to the customer (user manual, installed code, technical documentation, expected quality attributes) and the work products (software and system requirement specifications, design models, code, requirement document, development plan, project reports)]
Testing, Quality, Verification and Validation
• Quality assessment
– Software product:
• Produced code:
– Testing (oriented to correctness, reliability, robustness, safety, performance)
– Verification and Validation (oriented to correctness, reliability, robustness, safety, performance)
– …any other quality attribute
• Quality forecasting
– Intermediate products:
• Analysis and design models:
– Verification and validation (oriented to correctness, reliability, robustness, safety, performance)
– …any other quality attribute
Verification and Validation
• Verification generally applies to the models built during software development (for example, the requirements specification, the design model, etc.), and not necessarily (only) to the code; in general, it applies to intermediate products (e.g. whether the design model is equivalent to the analysis model, whether there are formal errors in the design model, whether the analysis model is consistent with the requirements document)
– (build the (work) product in the right way)
• Validation comprises assessing whether the requirements have been properly understood (called requirements validation by Pressman) and, at any moment, whether the intermediate products correspond to what the customer asked for
– (build the right (work) product)
Testing, V&V
• Testing the code implies that the code is executed, with the aim of demonstrating the existence of defects, or the absence of defects in certain predetermined cases
• Verification and validation (V&V) denote a set of activities carried out at various points of requirements engineering and design engineering, not only on the code
• Verification and validation may use static analysis techniques (automatic or manual) on the code (that is, the code is not executed), but more generally they are carried out on intermediate products; sometimes, the goal of verification is to prove correctness, i.e. to prove certain properties that formally translate the quality attributes considered
• Since testing normally requires executing the code, it can be considered to coincide with the dynamic techniques of software verification and validation (but V&V is more general…)
• Verification and validation are, in turn, part of product quality assurance, strongly focused on certain quality attributes (correctness, reliability, robustness, safety and performance)
Pressman's Italian translation: Convalida = Validazione
Verification and Validation Techniques
• Dynamic: these execute the software code, so they are also testing techniques, classified as:
– Black box
– White box (or glass box)
• Static: these do not execute the software code, so they are typical of V&V and, in turn, part of quality assurance; they are divided into:
– Automated
• Model checking
• Correctness proofs
• Symbolic execution
• Data flow analysis
– Manual (formal technical reviews)
• Inspection
• Walkthrough
• Static and dynamic techniques can be applied together (that is, they are not alternatives); sometimes, the result of a static technique can be used as the starting point for applying a dynamic technique
Summary
• Communication
• Planning
• Modeling
– Requirements analysis
– Design
• Construction
– Code generation
– Testing on the code
• Deployment
[Annotations on the process figure:]
• Testing of the code (white and black box)
• Verification and Validation on work products and deliverables (verification on code = static techniques)
• Quality forecasting on work products
• Quality assessment on deliverables
Oriented to: correctness, reliability, robustness, safety and performance, applied to work products or deliverables such as the analysis model, the design model, and the code
Any quality attribute of the code and of other deliverables
Oriented to: correctness, reliability, robustness, safety and performance related to the code
Foundations of Code Testing
Definitions (1)
• P (code), D (input domain), R (output domain):
– P: D → R; P may be a partial function
• The Expected Behavior (EB) is defined as EB ⊆ D × R:
– P(d) behaves as expected iff ⟨d, P(d)⟩ ∈ EB
– P behaves as expected iff every P(d) behaves as expected
Definitions (2)
• Test case t
– Any element of D
• Defect testing is successful if at least one of the planned test cases exhibits unexpected behavior
• Acceptance testing is successful if, for every planned test case t, P(t) behaves as expected
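The definitions above can be modeled directly with sets, as a minimal sketch (not from the slides; the buggy absolute-value program is an illustrative example):

```python
# P is a program mapping inputs (D) to outputs (R);
# EB is the expected behavior, a set of (input, output) pairs.

def behaves_as_expected(P, d, EB):
    """P(d) behaves as expected iff (d, P(d)) is in EB."""
    return (d, P(d)) in EB

def defect_test_successful(P, test_cases, EB):
    """Defect testing succeeds if at least one test case exposes
    unexpected behavior."""
    return any(not behaves_as_expected(P, t, EB) for t in test_cases)

def acceptance_test_successful(P, test_cases, EB):
    """Acceptance testing succeeds if every test case behaves as expected."""
    return all(behaves_as_expected(P, t, EB) for t in test_cases)

# Example: P should compute the absolute value, but is wrong for negatives.
P = lambda x: x                              # buggy: returns x unchanged
EB = {(x, abs(x)) for x in range(-5, 6)}     # expected behavior on -5..5

print(defect_test_successful(P, [2, -3], EB))     # True: -3 exposes the defect
print(acceptance_test_successful(P, [1, 2], EB))  # True: only positives tested
```

Note how the same buggy program "passes" acceptance testing when the chosen test cases never exercise a defective input: success depends entirely on the selected test cases.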
Definitions (3)
• Ideal set of test cases (defect testing)
– If P does not behave as expected, there is at least one test case t in the set such that P(t) does not behave as expected
– If P: D → R corresponds to the normal behavior of the algorithm programmed in P, no algorithm can exist that constructs an ideal set of test cases
• Nevertheless, a set of test cases approximating an ideal set should still be defined, by defining the test cases belonging to the set
Test Case Design for Defect Testing
“Bugs lurk in corners and congregate at boundaries ...” (Boris Beizer)
OBJECTIVE: to discover defects
COVERAGE: in a complete manner
CONSTRAINT: with a minimum of effort and time
From Pressman
Software Defect Testing Techniques
• White-box
• Black-box
Conventional Unit Defect Testing
Black box techniques derive test cases from the expected behavior on a given input domain
Black-Box Testing
• The unit is viewed as a black box, which accepts some inputs and produces some outputs
• Test cases are derived solely from the expected behavior, without knowledge of the internal unit code
• The main problem is to design (a minimal set of) test cases increasing the probability of finding failures, if any
Black Box Test-Case Design Techniques
• Equivalence class partitioning
• Boundary value analysis
• Cause-effect graphing
• Other
Equivalence Class Partitioning
• Identify the unit input domain and partition it into equivalence classes (i.e. assuming that data in a class are treated identically by the unit code)
– The basis of this technique is that a test of a representative value of each class is equivalent to a test of any other value of the same class.
• Identify valid as well as invalid equivalence classes (valid equivalence classes are usually defined in terms of a given unit specification, stating how certain inputs are treated by the unit code and for which the expected behavior is known), i.e. D = DV ∪ DIV
• For each equivalence class, generate one test case to exercise (execute) the unit with one input representative of that class
Example
• Possible input x of type INT, with the additional specification: 0 <= x <= max
valid equivalence class for x: 0 <= x <= max
invalid equivalence classes for x (w.r.t. the unit specification): x < 0, x > max
• 3 test cases can be generated
Note: the invalid classes might still be part of the Expected Behavior (parameter types usually give only an incomplete idea of the possible inputs); additionally, testing also evaluates robustness; finally, integration testing is simplified if we also know how a unit behaves in unexpected situations
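The three test cases of this example can be sketched in code. This is a minimal illustration (not from the slides), assuming max = 100 and picking arbitrary representatives for each class:

```python
MAX = 100  # assumed value for "max" in the specification 0 <= x <= max

def equivalence_class_test_cases(max_value):
    """One representative input per equivalence class -> 3 test cases."""
    return {
        "valid (0 <= x <= max)": max_value // 2,  # any value in range works
        "invalid (x < 0)":       -1,
        "invalid (x > max)":     max_value + 1,
    }

print(equivalence_class_test_cases(MAX))
# {'valid (0 <= x <= max)': 50, 'invalid (x < 0)': -1, 'invalid (x > max)': 101}
```

Any other representative in the same class (say 7 instead of 50) is, by the partitioning assumption, an equivalent test.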
Guidelines for Identifying Equivalence Classes
• Range of values (e.g. 1 - 200): one valid class (a value within the range); two invalid classes (one outside each end of the range)
• Number N of valid values: one valid class; two invalid classes (fewer than N, more than N)
• Set of input values, each handled differently by the program (e.g. A, B, C): one valid class per value; one invalid class (e.g. any value not in the valid input set)
Guidelines for Identifying Equivalence Classes (continued)
• Any other condition (e.g. ID_name must begin with a letter): one valid class (e.g. it is a letter); one invalid class (e.g. it is not a letter)
• If you know that elements in an equivalence class are not handled identically by the program, split the equivalence class into smaller equivalence classes.
• In very special cases, some of the generated classes cannot be tested (just because they are explicitly forbidden by the program)
Identifying Test Cases for Equivalence Classes
• Assign a unique identifier to each equivalence class
• Until all valid equivalence classes have been covered by test cases, write a new test case covering as many of the uncovered valid equivalence classes as possible
• Cover each invalid equivalence class with a separate test case
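The rule for valid classes is a greedy covering procedure. A minimal sketch (not from the slides; the candidate test cases and class identifiers are illustrative):

```python
def cover_valid_classes(candidates, valid_classes):
    """Greedily pick test cases covering as many still-uncovered valid
    equivalence classes as possible.
    candidates: dict test_case -> set of valid class ids it covers."""
    uncovered = set(valid_classes)
    chosen = []
    while uncovered:
        # pick the candidate covering the most uncovered classes
        best = max(candidates, key=lambda t: len(candidates[t] & uncovered))
        if not candidates[best] & uncovered:
            break  # remaining classes cannot be covered by any candidate
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

candidates = {"t1": {"V1", "V2"}, "t2": {"V2", "V3"}, "t3": {"V3"}}
print(cover_valid_classes(candidates, {"V1", "V2", "V3"}))  # ['t1', 't2']
```

Each invalid class would then get its own separate test case, so that one rejection does not mask another.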
Boundary Value Analysis
• Design test cases that exercise values lying at the boundaries of an input equivalence class, and for situations just beyond the ends.
• Also identify output equivalence classes, and write test cases to generate output at the boundaries of the output equivalence classes, and just beyond the ends.
• Example: input specification 0 <= x <= max
Test cases with values: 0, max (valid inputs); -1, max+1 (invalid inputs)
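The example above can be sketched as a small generator of boundary values for a class a <= x <= b (an illustrative helper, not from the slides):

```python
def boundary_values(a, b):
    """Boundary test values for the input class a <= x <= b:
    the two valid boundaries, and the two values just beyond them."""
    valid = [a, b]
    invalid = [a - 1, b + 1]
    return valid, invalid

print(boundary_values(0, 100))  # ([0, 100], [-1, 101])
```

Combined with one interior representative per class (from equivalence partitioning), this yields the usual small suite targeting "defects at the corners".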
Why Test Boundary Values?
• Equivalence class partitioning divides the input domain into classes, assuming that behavior is "similar" for all data within a class
• Some typical code defects, however, happen exactly at the boundary between different classes
Example (from the requirements/design)
[Form-based example applying equivalence class partitioning: several input fields with both valid and invalid classes; one field with only invalid classes left, the valid ones being already covered; special values such as 1995 and “special”; boundary value analysis is then applied to the same fields]
Black box test case design with pre/post conditions
(see Ghezzi et al. for more details)

Pre/post conditions of “insertion of an invoice record in a file”:
x: Invoice, f: Invoice_File
Pre: {sorted_by_date(f) and not exist j, k (j ≠ k and f(j) = f(k))}
insert(x, f)
Post: {sorted_by_date(f)
and for all k (old_f(k) = z implies exists j (f(j) = z))
and for all k ((f(k) = z and z ≠ x) implies exists j (old_f(j) = z))
and (exists j (f(j).date = x.date and f(j) ≠ x) implies j < pos(x, f))
and (result iff x.customer belongs_to customer_file and ...)
and (warning iff (x belongs_to old_f or x.date < current_date or ....))}

Rewriting the post-condition in a more convenient way:
TRUE implies sorted_by_date(f)
and for all k (old_f(k) = z implies exists j (f(j) = z))
and for all k ((f(k) = z and z ≠ x) implies exists j (old_f(j) = z))
and (x.customer belongs_to customer_file) implies result
and not (x.customer belongs_to customer_file and ...) implies not result
and x belongs_to old_f implies warning
and x.date < current_date implies warning
and ....

Apply partitioning to the post-conditions… rewrite them in a more convenient way…
Coverage in Black Box Testing
[Diagram: a partition of the possible inputs and the expected outputs, derived from the requirements specification, the component/unit specification, the analysis model and the design model]
EB = possible inputs + expected outputs
Black-box vs White-Box
• Black box testing can expose defects such as missing functionality, or functionality not behaving according to its expected behavior
– It tests what the unit is supposed to do
– It is less suitable for exposing defects such as unreachable code, hidden functionality (i.e. what is un-expected), or run-time errors raised by the code
• White box testing can expose defects in the unit code, even disregarding its expected behavior
– It tests what the unit does
– It is suitable for exposing (especially unexpected) defects in code (and sometimes in design), but cannot find missing or incomplete functionality
• Therefore, both techniques are required.
Black-box vs White-box
Black-box:
• Tests the unit code against its expected behavior (EB)
• Can find missing and incomplete behavior (EB - P)
• Needs the expected behavior (e.g. a module specification)
• Can't find unexpected behavior (P - EB)
White-box:
• Tests the (control) structure of the unit code (P)
• Can't find missing or incomplete behavior (EB - P)
• Does not necessarily need the expected behavior (tries to find unexpected behavior)
• Can find unexpected behavior (P - EB)
where P: D → R and EB ⊆ D × R
White box methods derive test cases from the unit code
White Box: Exhaustive Testing
[Flow graph with nested decisions inside a loop executed up to 20 times]
There are about 10^14 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program!!
Elaborated from Pressman
Coverage is still important, as in black-box testing; however, here we need to cover the executions of a program (control flow)
White Box: Selective Testing
[The same flow graph, with one selected path highlighted]
Coverage of control flow = executed (exercised) paths
Elaborated from Pressman
Coverage Types
• Statement coverage
• Decision coverage (edge coverage)
• Condition coverage
• Basis path coverage
• Path coverage (with loop coverage)
Statement Coverage
• Select test cases such that every statement in the unit (or whatever code) P is executed at least once by some test case
• A single input datum executes many statements, so try to minimise the number of test cases while still preserving the coverage
Example
read (x);
read (y);
if x > 0 then
  write ("1");
else
  write ("2");
end if;
if y > 0 then
  write ("3");
else
  write ("4");
end if;

{<x = 2, y = 3>, <x = -13, y = 51>, <x = 97, y = 17>, <x = -1, y = -1>} covers all statements
{<x = -13, y = 51>, <x = 2, y = -3>} is minimal
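The example can be checked mechanically. A minimal sketch (not from the slides): the same program translated to Python and instrumented to record which write statements each test case executes:

```python
def program(x, y, executed):
    """The example program; records each executed write statement."""
    if x > 0:
        executed.add("write 1")
    else:
        executed.add("write 2")
    if y > 0:
        executed.add("write 3")
    else:
        executed.add("write 4")

def statements_covered(test_cases):
    """Union of the statements executed by all test cases."""
    executed = set()
    for x, y in test_cases:
        program(x, y, executed)
    return executed

ALL = {"write 1", "write 2", "write 3", "write 4"}
# The minimal set from the slide really covers all four write statements:
print(statements_covered([(-13, 51), (2, -3)]) == ALL)  # True
```

In practice this bookkeeping is done by a coverage tool rather than by hand-inserted sets; the instrumentation here only makes the definition concrete.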
Weakness of Statement Coverage
if x < 0 then
  x := -x;
end if;
z := x;

{<x = -3>} covers all statements, but it does not exercise the case when x is positive and the then branch is not entered
Example
void function eval (int A, int B, int X) {
  if (A > 1) and (B = 0)
    then X = X / A;
  if (A = 2) or (X > 1)
    then X = X + 1;
}
Statement coverage test cases:
1) A = 2, B = 0, X = 3 (X can be assigned any value)
Decisions and Conditions
void function eval (int A, int B, int X) {
  if (A > 1) and (B = 0)      ← a decision; (A > 1) is a condition in the decision, (B = 0) is another condition in the decision
    then X = X / A;
  if (A = 2) or (X > 1)
    then X = X + 1;
}
Decisions are usually made of several conditions combined by logic operators
Decision, Condition Coverage
• Decision coverage
– Write test cases to exercise (cover) at least once every decision
• Condition coverage
– Write test cases to exercise (cover) at least once every condition
– These test cases may not always satisfy decision coverage
Construction Rules for a Control Graph with Decisions
[Diagram: graph-building patterns for an I/O, assignment, or procedure call; two sequential statements; if-then-else; if-then; and a while loop]
Slightly different from the rules in Pressman
Simplification
A sequence of edges n1 → n2 → n3 → … → nk-1 → nk can be collapsed into the single edge n1 → nk
Example
void function eval (int A, int B, int X) {
  if (A > 1) and (B = 0) then
    X = X / A;
  if (A = 2) or (X > 1) then
    X = X + 1;
}
[Control graph: entry edge a; the first decision has true branch c (leading to X = X / A) and false branch b; the second decision has true branch e (leading to X = X + 1) and false branch d]
Decision coverage test cases:
1) A = 3, B = 0, X = 3 (path acd)
2) A = 2, B = 1, X = 1 (path abe)
Note: flow chart ≠ control graph
Example (with the control graph)
[The same control graph, with the branch edges labeled by their conditions:
• c: A > 1 and B = 0 (then X = X / A)
• b: A <= 1 or B != 0
• e: A = 2 or X > 1 (then X = X + 1)
• d: A != 2 and X <= 1]
Decision coverage = exercising at least once each edge in the control graph
Weakness of Decision Coverage
if x ≠ 0 then
  y := 5;
else
  z := z - x;
end if;
if z > 1 then
  z := z / x;
else
  z := 0;
end if;

{<x = 0, z = 1>, <x = 1, z = 3>} causes the execution of all decision outcomes but fails to expose the risk of a division by zero
Control Graph with Conditions
[The first decision of the example shown in two forms: with decisions, a single branch on A > 1 and B = 0; with conditions, separate branches on A > 1 vs. A <= 1 and then on B = 0 vs. B != 0, reaching X = X / A only when both conditions hold]
Example
• Condition coverage test cases must cover the condition outcomes A > 1, A <= 1, B = 0, B != 0, A = 2, A != 2, X > 1, X <= 1
• Test cases (they do not satisfy decision coverage, and do not cover all edges of the control graph with decisions, nor of the one extended with conditions):
1) A = 1, B = 0, X = 3
2) A = 2, B = 1, X = 1
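The claim can be verified mechanically. A minimal sketch (not from the slides) recording which outcome each condition takes under the two test cases, and showing that the first decision is never true (so decision coverage is not achieved):

```python
def condition_outcomes(A, B, X):
    """Outcome of each individual condition of the eval example."""
    return {"A>1": A > 1, "B=0": B == 0, "A=2": A == 2, "X>1": X > 1}

cases = [(1, 0, 3), (2, 1, 1)]   # the two test cases from the slide

# every (condition, outcome) pair observed across the test cases
seen = {(name, value) for A, B, X in cases
        for name, value in condition_outcomes(A, B, X).items()}

print(len(seen))  # 8: each of the 4 conditions is seen both true and false
# ...but the first decision (A > 1 and B = 0) is false in both cases:
print(any(A > 1 and B == 0 for A, B, X in cases))  # False
```

So condition coverage holds while the true branch of the first decision (and the false branch of the second) is never exercised, exactly the weakness the slide points out.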
Decision-Condition and Multiple Condition Coverage
• Decision-condition coverage
– Write test cases such that each condition in a decision is exercised at least once and each decision is exercised at least once
– Write test cases such that each edge of the control graph with conditions is exercised at least once
• Multiple condition coverage
– Write test cases to exercise all possible combinations of conditions within every decision
Example
• Decision-condition coverage test cases must cover the conditions A > 1, A <= 1, B = 0, B != 0, A = 2, A != 2, X > 1, X <= 1, and the decisions (A > 1 and B = 0), (A <= 1 or B <> 0), (A = 2 or X > 1), (A <> 2 and X <= 1)
• Test cases:
1) A = 2, B = 0, X = 4
2) A = 1, B = 1, X = 1
Example
• Multiple condition coverage must cover the condition combinations:
1) A > 1 and B = 0      5) A = 2 and X > 1
2) A > 1 and B != 0     6) A = 2 and X <= 1
3) A <= 1 and B = 0     7) A != 2 and X > 1
4) A <= 1 and B != 0    8) A != 2 and X <= 1
• Test cases (these also cover the possible combinations of decisions, at least in this example, but this is not always the case):
1) A = 2, B = 0, X = 4 (covers 1, 5)
2) A = 2, B = 1, X = 1 (covers 2, 6)
3) A = 1, B = 0, X = 2 (covers 3, 7)
4) A = 1, B = 1, X = 1 (covers 4, 8)
For instance, A = 3, B = 0, X = 1 yields (A > 1 and B = 0) together with (A != 2 and X <= 1); in general, to cover the possible combinations of decisions you should combine the two decisions' outcomes as T,T; T,F; F,T and F,F, and introduce new test cases accordingly
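That the four test cases really hit all eight combinations can be checked with a few lines. A minimal sketch (not from the slides):

```python
def combos(A, B, X):
    """Condition combinations exercised within each decision of eval:
    d1 is (A > 1 and B = 0), d2 is (A = 2 or X > 1)."""
    return {("d1", A > 1, B == 0), ("d2", A == 2, X > 1)}

cases = [(2, 0, 4), (2, 1, 1), (1, 0, 2), (1, 1, 1)]
covered = set().union(*(combos(*c) for c in cases))

# all 8 combinations: 2 decisions x (T,T)/(T,F)/(F,T)/(F,F)
needed = {(d, p, q) for d in ("d1", "d2")
          for p in (True, False) for q in (True, False)}
print(covered == needed)  # True
```

The same enumeration, applied to the decision outcomes instead of the condition pairs, reveals which extra test case (such as A = 3, B = 0, X = 1) is needed for the decision combinations.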
Path Coverage
• Select test cases which traverse all paths from the initial to the final node of P's control graph
• However, the number of paths may be too large (in the case of loops)
– Additional constraints must be provided
Basis Path Testing
First, compute the cyclomatic complexity:
V(G) = number of simple decisions + 1, or
V(G) = number of enclosed regions + 1
[Figure: a control graph with enclosed regions r1, r2, r3; here V(G) = 4]
Cyclomatic Complexity
A number of industry studies have indicated that the higher V(G), the higher the probability of defects.
[Chart: number of modules vs. V(G); modules above a certain V(G) range are more defect prone]
From Pressman
Basis Path Testing
1. Draw the control graph of the program, from the program's detailed design or code.
2. Compute the cyclomatic complexity V(G) of the control graph using any of the formulas:
V(G) = #edges - #nodes + 2, or
V(G) = #regions in the control graph + 1
3. Write at least V(G) test cases
From Pressman
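As a small worked instance (not from the slides; node names are illustrative), V(G) = #edges - #nodes + 2 applied to the control graph of the earlier eval example, which has two decisions:

```python
# Control graph of eval: d1/d2 are the two decisions,
# s1/s2 the two assignments, "end" the exit node.
edges = [("d1", "s1"), ("d1", "d2"),   # first decision: true / false branch
         ("s1", "d2"),
         ("d2", "s2"), ("d2", "end"),  # second decision: true / false branch
         ("s2", "end")]
nodes = {n for edge in edges for n in edge}

V = len(edges) - len(nodes) + 2
print(V)  # 3 -> write at least 3 test cases for basis path coverage
```

The result agrees with the other formula: two simple decisions + 1 = 3.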
White Box Testing Review
White box testing is concerned with the degree to which test cases exercise or cover the logic (source code) of the program. This kind of testing is especially suitable for testing unexpected behavior!
White box test case design techniques:
• Statement coverage
• Decision coverage
• Condition coverage
• Decision-condition coverage
• Multiple condition coverage
• Basis path testing
• Loop testing
• Data flow testing
Loop Testing
[Figure: simple loops, nested loops, concatenated loops, unstructured loops]
From Pressman
Loop Testing: Simple Loops
Minimum conditions for simple loops:
1. Skip the loop entirely
2. Only one pass through the loop
3. Two passes through the loop
4. m (average) passes through the loop, with m < n
5. n (and n-1, too) passes through the loop
where n is the (expected) maximum number of allowable passes (if it exists)
Elaborated from Pressman
Changes from the textbook: the n+1 passes mentioned in the book have been left out; the average value has been added
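The list above can be sketched as a small generator of pass counts (not from the slides; the default choice of m as n // 2 is an illustrative "average"):

```python
def simple_loop_pass_counts(n, m=None):
    """Pass counts to exercise for a simple loop with at most n passes:
    0, 1, 2, m (typical), n-1 and n."""
    if m is None:
        m = n // 2                 # an assumed "average" number of passes
    counts = [0, 1, 2, m, n - 1, n]
    # drop duplicates and out-of-range values, keep ascending order
    return sorted(set(c for c in counts if 0 <= c <= n))

print(simple_loop_pass_counts(10))  # [0, 1, 2, 5, 9, 10]
```

For small n some of the six values coincide, which is why the duplicates are dropped rather than tested twice.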
Loop Testing: Nested Loops
1. Start at the innermost loop. Set all outer loops to their minimum iteration parameter values.
2. Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer loops at their minimum values.
3. Move out one loop and set it up as in step 2, holding all other inner loops at typical values. Continue this step until the outermost loop has been tested.

Concatenated loops:
if the loops are independent of one another, treat each as a simple loop; else treat them as nested loops (for example, when the final loop counter value of loop 1 is used to initialize loop 2).
From Pressman
Remember: white and black box testing are not alternatives!
• What if the code omits the implementation of some part of the expected behavior?
• White box test cases derived from the code will ignore that part of the expected behavior!
→ Perform black box testing as well
Further Problems in White Box Testing
• Unreachable statements, decisions, etc.

Easier case:
read(x);
if x > 0 then
  if x < 0 then ...   (the inner decision is unreachable)

Complex case (f(x) always assigns negative values to x):
read(x);
if x > 0 then
  x := f(x);
  if x > 0 then ...   (the inner decision is unreachable)