7/31/2019 mtp07
1/18
Master Test Plan
for Software Testing Course 8102020, Spring 2005
Tampere University of Technology
Version 0.7
Antti Kervinen
11th February 2005
Contents
References
Glossary

1 Introduction
  1.1 Overview
  1.2 Tasks and deadlines
  1.3 Structure of this document

I Test plan for system testing

2 Test items
3 Features to be tested
4 Features not to be tested
5 Approach
6 Item pass/fail criteria
7 Test deliverables
  7.1 Test design and test case specifications (the first task)
8 Testing tasks
  8.1 First task
9 Environmental needs

II Test plan for unit testing

10 Test items
11 Features to be tested
12 Features not to be tested
13 Approach
  13.1 Class hierarchy
  13.2 Stubs
  13.3 Driver
14 Test deliverables
  14.1 Test design (the second task)
15 Testing tasks
  15.1 Unit test design (the second task)
  15.2 Unit testing (the third task)
  15.3 Setting up the unit testing environment
  15.4 Setting up CTC++
  15.5 Using cppunit and CTC++
16 Environmental needs
References
[CPPUNIT] Feathers M., Lepilleur B.: CppUnit Cookbook.
http://cppunit.sourceforge.net/doc/lastest/cppunit_cookbook.html
[HTML] Korpela, J.: Documents about WWW written or recommended by Jukka Korpela.
http://www.cs.tut.fi/~jkorpela/www.html
[IEEE829] IEEE Standard for Software Test Documentation. IEEE Std 829-1998. September, 1998. Available from TUT addresses at
http://www.ieeexplore.ieee.org/xpl/tocresult.jsp?isNumber=16010
[Mozilla] Mozilla web pages. http://www.mozilla.org
[Mye04] Myers G. J., Sandler C., Badgett T., Thomas T. M.: The Art of Software Testing, 2nd edition. John Wiley & Sons, 2004.
[RFC2396] Uniform Resource Identifiers (URI): Generic Syntax.
http://www.ietf.org/rfc/rfc2396.txt
[RFC2732] Format for Literal IPv6 Addresses in URLs.
http://www.ietf.org/rfc/rfc2732.txt
Glossary
Mozilla  Web browser developed from the source code released by Netscape
RFC      Request for comments, a series of Internet informational documents and standards
URL      Uniform Resource Locator, a string that identifies a resource by its location
1 Introduction
This document describes the course project for the software testing course at Tampere University of Technology, Spring 2005.
1.1 Overview
In this project the URL parsers of Mozilla [Mozilla] web browser will be tested.
The tests are designed and executed according to the following V model:
System test design System test execution
Unit test design Unit test execution
In the system tests the parsers will be tested by running the real Mozilla, giving it URLs to parse and examining the results of the parser. In the unit testing a standard URL parser class is tested separately by calling its methods from test drivers written by the students.
The project must be done in pairs.
1.2 Tasks and deadlines
At first you should find a pair and register your team to the Kuha system. You can find a link to Kuha on the course project page:
http://www.cs.tut.fi/testaus/harjoitustyo/
The project is divided into four tasks:
1. System test design report due 9.2.2005 12:00.
2. Unit test design report due 9.3.2005 12:00.
3. Unit test report due 6.4.2005 12:00.
4. System test report due 27.4.2005 12:00.
In Kuha you can book times for preliminary reviews of your reports. The reviews are voluntary. The final version of a preliminarily reviewed and accepted report will give you 1-4 points. If not reviewed, you can get 0-4 points. Only the first two reports can be preliminarily reviewed.
1.3 Structure of this document
This document contains two parts: a test plan for system testing and a test plan
for unit testing. Both parts follow the template for test plans in IEEE standard for
software test documentation [IEEE829].
The first part includes the requirements for tasks 1 and 4, the second part for tasks 2 and 3.
Part I
Test plan for system testing
2 Test items
The URL parser of Mozilla 1.7.2 will be tested. It is the very same parser that is used in Mozilla 1.7.3 (which is the latest stable Mozilla release at the moment) and in Firefox 1.0.
The syntax of URLs is specified in [RFC2396]. The syntax is extended
in [RFC2732] to contain IPv6 addresses. Both documents should be used as a
specification in the test case design.
3 Features to be tested
Test how Mozilla parses valid URLs and what it does with invalid ones. The objective is to find correct URLs that are parsed incorrectly and incorrect URLs that are considered valid. Finding a URL that causes a crash or some other unexpected behaviour is, of course, even more desirable, and you may even get rewarded for that (see http://www.mozilla.org/security/bug-bounty.html).
We limit the testing to the URLs in the http scheme, that is,

- absolute URLs that begin with http:, for example http://www.cs.tut.fi/testaus/
- relative URLs in the cases where the base URL is in the http scheme, for example ../../images/foobar.jpg
4 Features not to be tested
URLs in schemes other than http (such as ftp://ftp.funet.fi/README, mailto:[email protected] etc.) will not be tested at this point.
Functionality other than the URL parser will not be tested.
5 Approach
The source code of Mozilla (in /share/tmp/testaus) has been altered to
support system testing in this project. Every time a URL beginning with http is
parsed, it is printed to standard output. Also, the result of the parsing (valid URL
or invalid URL) is printed, as well as some details if the URL was considered valid. In the details the given URL is separated into its components. It should be verified that the separation is correct.
In the system testing phase you should run Mozilla so that it parses the URLs in your test cases. All results of the parsing should be recorded.
You may decide how you give the URLs in your test cases to Mozilla.
There are several ways to do this. One possibility is to write an HTML [HTML]
file containing all the URLs to be tested. Here is one example of the file contents.
(Warning: this example contains way too few test cases to pass the course.)
Valid relative paths
    starts with /
    starts with //
    starts with *
Invalid relative paths
Valid absolute paths
Invalid absolute paths
Given that you have the file that contains your test cases, you can then run (in bash):

dist/bin/mozilla file:path&file 2>/dev/null | tee test.log
The tee program reads standard input and copies it to the given files and to standard output. 2>/dev/null throws away the stuff Mozilla writes to standard error.
Whenever your mouse cursor is over a link, the corresponding URL is parsed. This
is how you can check the parsing results URL by URL.
This is just one possibility. You are allowed to separate your test cases into many HTML files or feed the URLs one by one into the location text box. If it makes your life easier, you may write a program that, for example, reads a list of URLs, generates an HTML file that contains the URLs, runs Mozilla and checks that the results of parsing are what you expected.

After all, what you have to return with your report is the output of Mozilla when it parsed your URLs, no matter how you get there. (The other contents of the report are listed in Section 7.)
6 Item pass/fail criteria
The system test fails if and only if any (valid or invalid) URL causes Mozilla to
crash or stop responding, or if a valid URL is incorrectly parsed.
7 Test deliverables
7.1 Test design and test case specifications (the first task)
You should return a test design document due 9.2.2005 12:00. The document
should contain the following information. An example of wanted information is
written in italics below.
1. Features to be tested
Identify test items and describe the features and combinations of features that are going to be tested.
Numerical IPv4 addresses
User info (names and passwords) . . .
2. Approach refinement
Tell how you are going to run the tests and what you will need to do
that.
For example, list the files you create or modify for test runs and the
files that will be created during the test runs. Tell what the files are for
and what they contain. List the commands that will be executed; what
they do and why they need to be executed. What should the tester do
during test runs? How is the data going to be gathered and analysed?
Summarise common attributes of all test cases.
3. Test identification
Give an identifier and a brief description of each test case: what are the
features that are tested by this test case. Note that the same test case
can be used to test many features.
For example, you can present the identified features and test cases in a table like this:

            Username   Password   Hostname
Empty       TC1        TC1
Very long   TC2        TC2        TC2
Illegal

From the table it is easy to see that the empty username and the empty password are tested in test case TC1. It also shows that there is no test case that would test a URL with a somehow illegal hostname. (Your table should have more rows and columns.) However, do not forget features like handling too many ../../ in relative URLs, which might not fit in the table.
4. Feature pass/fail criteria
Specify the criteria to be used to determine whether a feature has
passed or failed.
5. Prioritisation
Imagine that, when a new version of the browser is finally given to
testers, there will be no time to execute all the test cases you have
created.
To prepare for the situation, divide your test cases into three classes based on their importance:
Critical. At least these test cases should be executed and passed
before the new browser can be released.
Important. These test cases should be run next, if there is still time
left.
Low.
Explain why you divided the test cases as you did. (A small-scale risk analysis is wanted.)
6. Test cases
At least the following information on every test case should be given:
- Test case identifier
- Priority
- Input specifications
- Output specifications
In the following example of test cases the first test case ok-allfields includes
all the fields that the modified Mozilla parser outputs when it is given a valid URL.
In the output specifications of test cases you need to specify only the values of the
fields that should be checked in the test case. In the output specifications of test
case err-IPv4 we show what the parser outputs in case of an invalid URL.
Test case id   ok-allfields
Priority       Critical
Input          http://usr:pwd@tut.fi/d1/f.txt;p1?q#r
Output         result      valid URL
               port        -1
               scheme      http
               authority   usr:pwd@tut.fi
               username    usr
               password    pwd
               hostname    tut.fi
               path        /d1/f.txt;p1?q#r
               filepath    /d1/f.txt
               directory   /d1/
               basename    f
               extension   txt
               param       p1
               query       q
               ref         r
Test case id   err-IPv4
Priority       Low
Input          http://127.0.0../999/foo.txt
Output         result      invalid URL
Special notes  Two dots in a row are not allowed in an IP address or in a host name
8 Testing tasks
8.1 First task
1. Familiarise yourself with the URL specifications [RFC2396] and
[RFC2732].
2. Based on the RFC documents, identify the features to be tested (empty user-
name, extremely long password, invalid IPv6 address. . . ).
3. Design test cases that test the features you identified. Specify inputs (URLs),
expected outputs (the results of the parsing).
4. Decide how you run the test cases. (Approach.)
5. Prioritise the test cases. (Risk analysis.)
6. Write the system test design document, outlined in Section 7.1. Finnish students should use the given template for test plans. Foreign students should use the cover page of the template and the outline in Section 7.1. All students are also advised to check out Sections 5 and 6 in [IEEE829].
7. Return the document due 9.2.2005 12:00.
9 Environmental needs
The source code of Mozilla 1.7.2 or 1.7.3 is needed. (Their URL parsers are exactly the same.)
A commercial tool called CTC++ is used to measure the code coverage. The tool
is installed in Lintula, and it currently works on Solaris only.
Part II
Test plan for unit testing
10 Test items
The class hierarchy that implements the URL parser of Mozilla 1.7.2 will be tested.
11 Features to be tested
The methods of the utility class nsStdURLParser should be tested. Most of the methods are implemented in its base classes. The methods are
ParseURL
ParseAfterScheme
ParseAuthority
ParseUserInfo
ParseServerInfo
ParsePath
ParseFilePath
ParseFileName
The code segments that are compiled only for the Windows and OS/2 operating systems (that is, when XP_WIN or XP_OS2 is defined) should also be tested.
The goal is to achieve as high a multicondition coverage (moniehtokattavuus in Finnish) of the code as possible [Mye04].
12 Features not to be tested
The methods of nsStdURLParser's base classes that are not inherited by nsStdURLParser are not tested. For example, ParseAuthority of the nsBaseURLParser class is not tested, but ParseAuthority of nsAuthURLParser is tested, because the former is not inherited by nsStdURLParser while the latter is.
Non-functional features of the methods are not tested.
13 Approach
13.1 Class hierarchy
The class hierarchy to be tested is located in nsURLParsers module. The code
inside nsURLParsers has not been touched, but the module has been extracted
from the Mozilla source tree and relocated to
/share/tmp/testaus/unittest-READONLY
The directory includes all the necessary header files (most of them are empty) needed for compiling the module. To run the tests, a driver and some stubs have to be implemented.
13.2 Stubs
The nsIURLParser module includes the skeletons of two functions (net_isValidScheme, IsAsciiAlpha), one method (ToInteger) and one class constructor (nsCAutoString). They must be implemented in order to test the nsStdURLParser class.
The stubs should pass the cppunit tests that are defined in the testCAutoString and testUtilityFunctions modules. You can run the tests by setting up the unit test environment, explained in Section 15.3, and running there
% make run_stubtest
You are allowed to extract the code for the stubs from the Mozilla source tree, or
you can write your own implementations. The latter may be easier because the
needed functionality of the stubs is very limited.
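As an illustration of how limited the needed functionality is, hand-written stubs for the two functions might look roughly like this. The simplified signatures (plain bool and char) are assumptions; the actual declarations in the provided nsIURLParser skeleton must be used instead.

```cpp
// Sketch of hand-written stubs; the signatures are simplified assumptions.

// True when the character is an ASCII letter (a-z or A-Z).
bool IsAsciiAlpha(char c) {
    return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z');
}

// A scheme is valid when it is non-empty, starts with a letter and
// contains only letters, digits, '+', '-' and '.' (RFC 2396 syntax).
bool net_isValidScheme(const char* s, int len) {
    if (len <= 0 || !IsAsciiAlpha(s[0]))
        return false;
    for (int i = 1; i < len; ++i) {
        char c = s[i];
        bool digit = (c >= '0' && c <= '9');
        if (!IsAsciiAlpha(c) && !digit && c != '+' && c != '-' && c != '.')
            return false;
    }
    return true;
}
```

Whatever you write, the cppunit tests in testCAutoString and testUtilityFunctions remain the authoritative check of the stubs' behaviour.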
13.3 Driver
cppunit will be used to run the tests, so you should write the test cases in cppunit classes. To find out how to do that, see the modules testCAutoString and testUtilityFunctions, and refer to the cppunit documentation [CPPUNIT].
Note that the same test cases should be immediately runnable also with the new
versions of the unit under test. Therefore, do not touch the code of the unit
under test.
14 Test deliverables
14.1 Test design (the second task)
The outline of the unit test design document is very similar to the system test design. The only difference is that prioritisation is not required here.
1. Features to be tested
In this phase the goal is high multicondition coverage of the code. Identify conditions in the source code of the unit under test.
For example, in the ParseURL method there is the row

for (p = spec; len && *p && !colon && !slash; ++p, --len) {
which has four conditions:
1. len != 0
2. *p != 0
3. colon == 0
4. slash == 0
These conditions form 16 combinations of truth values (false, false, false,
false; false, false, false, true; . . . ; true, true, true, true), but all of them might
not be possible.
In the features to be tested, try to describe, separately for each method to be tested, what kinds of inputs are needed to cover all possible truth value combinations. For example: the length of spec is 0, there is no colon nor slash before the terminating 0, slash comes before colon and the terminating 0, etc.
2. Approach refinement
Tell how you are going to run the tests and what you will need to do
that.
For example, list the files you create or modify for test runs and the
files that will be created during the test runs. Tell what the files are for
and what they contain. List the commands that will be executed; what
they do and why they need to be executed. What should the tester do
during test runs? How is the data going to be gathered and analysed?
Summarise common attributes of all test cases.
3. Test identification
Give an identifier and a brief description of each test case. List the features that are tested by this test case.
4. Feature pass/fail criteria
Specify the criteria to be used to determine whether a feature has
passed or failed.
5. Test cases
At least the following information on every test case should be given:
- Test case identifier
- Input specifications (which method is called and what the arguments are)
- Output specifications (what the return value should be, what should be returned in the arguments, what else has changed its value)
- Environmental needs (which stubs are required to execute this test case)
15 Testing tasks
What you should do in the second and the third tasks is described in the first two subsections. The tools that should be used are briefly introduced in the other subsections.
15.1 Unit test design (the second task)
1. Read the code of the methods that will be tested.
2. For each method, identify the features to be tested as described in Sec-
tion 14.1 item 1.
3. For each method, design test cases so that every identified feature will be
tested. Specify the parameters that are given to the method and what output
is expected (what is returned by the function and what is returned in the
parameters).
4. Describe how the test cases will be run. Specify which test cases will be
implemented in which cppunit test suite.
5. Write the unit test design document.
6. Return the document due 9.3.2005 12:00.
15.2 Unit testing (the third task)
1. Set up the unit testing environment.
2. Implement the stubs in the nsIURLParser.cpp file. The stubs should pass the tests in testCAutoString and testUtilityFunctions.
3. Implement the test cases you have specified to cppunit test suites.
4. Modify the Makefile so that the command gmake run_unittest runs the
cppunit test suites and measures the multicondition coverage of the code in
nsURLParsers (MON.sym and MON.dat files are generated).
5. Write the test report (the outline will be given later).
6. Return the test report due 6.4.2005 12:00. Enclose the unittest.tar.gz package with the electronic version of the submission. The package should contain the modified Makefile and all the source code of the cppunit tests and stubs that you have written. When the course robot executes

mkdir cleandir; cd cleandir
. /share/testwell/bin/testwell_environment
echo yes | /share/tmp/testaus/scripts/setup.unittest.sh
cd unittest
gtar xzf /submitted/by/you/unittest.tar.gz
rm -f MON.sym MON.dat *.o
gmake run_unittest
all your test suites should be compiled and run. Also files MON.sym and
MON.dat should be (re)created. Make sure this really happens before
submitting anything!
15.3 Setting up the unit testing environment
Execute
/share/tmp/testaus/scripts/setup.unittest.sh
in a directory where you want the unittest environment to be setup.
Less than 300 kB of space will be required. Therefore, it is recommended to use a location that is more reliable than /share/tmp/. Home directories are a good choice.
15.4 Setting up CTC++
CTC++ is used to instrument the code during the compilation. When the instru-
mented code is run, MON.dat and MON.sym files will be created. They contain
the coverage information of the instrumented code.
You can examine the results by executing ctcpost MON.dat. Whenever
CTC++ tools are used or instrumented code executed, you must have the CTC++
environment variables set. It can be done by running
. /share/testwell/bin/testwell_environment
(The dot and the space at the front of the line are important!) When the environment is set, you can see the manual page of CTC++ with the command man ctc. For
the purposes of this project it should be enough to understand the ctc commands
that are explained in Section 15.5.
15.5 Using cppunit and CTC++
Cppunit provides a means of dividing test cases into test suites. It provides macros for implementing test suites and stating assertions in the test cases. What you need is:
- the unit under test (object code is enough)
- the main program (object code is again enough)
- a class for each test suite (a test case in a test suite is a method in the corresponding class; the classes and the methods are what you need to implement)
Given that you have the unit test environment (see 15.3) and you also have CTC++
environment variables set (see 15.4), run
gmake clean
gmake stubtest
The GNU Make output is explained here:
ctc -i m g++ -o stubs.o -c nsIURLParser.cpp
This compiles nsIURLParser.cpp to stubs.o object file.
nsIURLParser.cpp contains the stubs that you will implement in task
three. The code is instrumented with CTC++, because in stubtest the stubs
are the unit under test. (When you test nsURLParsers.cpp, it will be
instrumented instead of the stubs.) The -i m switch tells CTC++ to instrument the code for measuring the multicondition coverage.
g++ -I. -Wall -pedantic -o testRunner.o -c testRunner.cpp
This compiles the main program. The main runs all the test cases that are writ-
ten in cppunit classes in the other test*.o files. You do not need to edit
testRunner.cpp in any task.
g++ -I. -Wall -pedantic -o testCAutoString.o -c testCAutoString.cpp
This compiles the cppunit test class that will test the implementation of CAuto-
String stub class.
g++ -I. -Wall -pedantic -o testUtilityFunctions.o -c testUtilityFunctions.cpp
Similarly to the previous command, this compiles another class containing test cases. These test cases will test the stubs of the utility functions.
ctc -i m g++ -o stubtest stubs.o testRunner.o testCAutoString.o testUtilityFunctions.o -L. -lcppunit
This links the object files into an executable binary. Note that the unit under test (in this case stubs.o) and the main program (testRunner) need neither changes nor recompilation when new test cases are created and added. Just relink the two object files with whatever test case object files you have, and you will get a new binary which executes all linked test cases.
Test cases are located in cppunit test classes (test suites). Every test suite has setUp and tearDown methods that are called before and after each test case in the suite. There should also be one method for each test case in the suite.
Test cases do the testing and then check the results with assertion macros, for example with the CPPUNIT_ASSERT_MESSAGE(assertion, message) macro. This macro checks the given assertion and, if it fails, causes the execution of the test case to return with verdict fail and the given message to be printed. A test case passes if its execution reaches the end of the method.
16 Environmental needs
Unit tests have to be executed on the Lintula Sparc/Solaris servers or workstations. Only the Sparc/Solaris version of CTC++ is currently installed in Lintula, and only the Sparc/Solaris version of cppunit is ready for use in the unit test environment.