Business Information Systems (Πληροφοριακά Συστήματα Επιχειρήσεων). Instructor: Κώστας Κοντογιάννης, Associate Professor, Ε.Μ.Π.

software-testing.ppt


  • 1. Business Information Systems (Πληροφοριακά Συστήματα Επιχειρήσεων). Instructor: Κώστας Κοντογιάννης, Associate Professor, Ε.Μ.Π.
  • 2. Course Outline: Introduction to Software Engineering; Requirements Engineering; Design Basics; Traditional Design; OO Design; Design Patterns; Software Architecture; Design Documentation; Verification & Validation; Software Process Management and Economics.
  • 3. These slides are based on lecture slides by Ian Sommerville; see http://www.comp.lancs.ac.uk/computing/resources/ser/
  • 4. Overview: Basics of Testing; Testing & Debugging Activities; Testing Strategies (Black-Box Testing, White-Box Testing); Testing in the Development Process (Unit Test, Integration Test, System Test, Acceptance Test, Regression Test); Practical Considerations.
  • 5. Static and dynamic V&V. [Diagram: static verification applies across the requirements specification, architectural design, detailed design, and implementation (V1); dynamic V&V exercises the prototype and implementation, e.g., via special cases, executable specifications, and animation of formal specs.]
  • 6. Program testing: testing can reveal the presence of errors, NOT their absence. Only exhaustive testing can show a program is free from defects; however, exhaustive testing of anything but a trivial program is impossible. A successful test is a test which discovers one or more errors. Testing should be used in conjunction with static verification. Run all tests after modifying a system. (Ian Sommerville 1995)
  • 7. Testing in the V-Model. [Diagram: each development phase on the left pairs with a test level on the right. Requirements pairs with acceptance test (run by the customer); architectural design with system test (run by the developer); detailed design with integration test; module implementation with unit test. The upper levels are functional (black-box) tests; unit testing is structural (white-box).]
  • 8. Testing stages: Unit testing (testing of individual components); Integration testing (testing to expose problems arising from the combination of components); System testing (testing the complete system prior to delivery); Acceptance testing (testing by users to check that the system satisfies requirements; sometimes called alpha testing).
  • 9. Types of testing: Statistical testing (tests designed to reflect the frequency of user inputs; used for reliability estimation; covered later in the section on software reliability). Defect testing (tests designed to discover system defects; a successful defect test is one which reveals the presence of defects in a system). (Ian Sommerville 1995)
  • 10. Some terminology. Failure: a failure is said to occur whenever the external behavior does not conform to the system spec. Error: an error is a state of the system which, in the absence of any corrective action, could lead to a failure. Fault: an adjudged cause of an error.
  • 11. Some terminology. [Diagram: the fault (also called bug or defect) is there in the program; it may corrupt the program state (error); the error may become externally visible as an observed failure.]
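To make the fault/error/failure chain concrete, here is a minimal Python sketch; the `average` function and its wrong operator are hypothetical, not from the slides:

```python
def average(values):
    """Intended behavior: return the arithmetic mean of a non-empty list."""
    total = 0
    for v in values:
        total -= v  # FAULT: wrong operator, should be `total += v`
    # At this point `total` holds a wrong value: the program is in an
    # erroneous state (ERROR), although nothing visible has gone wrong yet.
    return total / len(values)

if __name__ == "__main__":
    # The spec says the mean of 2, 4, 6 is 4; observing -4.0 instead is
    # the externally visible FAILURE caused by the fault above.
    print(average([2, 4, 6]))  # prints -4.0
```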
  • 12. Overview: Basics of Testing; Testing & Debugging Activities; Testing Strategies (Black-Box Testing, White-Box Testing); Testing in the Development Process (Unit Test, Integration Test, System Test, Acceptance Test, Regression Test); Practical Considerations.
  • 13. Testing and debugging: defect testing and debugging are distinct processes. Defect testing is concerned with confirming the presence of errors; debugging is concerned with locating and repairing these errors. Debugging involves formulating a hypothesis about program behaviour and then testing that hypothesis to find the system error. (Ian Sommerville 1995)
  • 14. Debugging activities: locate error & fault; design fault repair; repair fault; re-test program. (Ian Sommerville 1995)
  • 15. Testing activities: Identify test conditions (the what: an item or event to be verified). Design (how the what can be tested: realization). Build test cases (implement scripts, data). Execute (run the system). Compare the test case outcome with the expected outcome to produce the test result.
  • 16. Testing activities. Test condition (the what): a description of circumstances that could be examined (an event or item). Categories: functionality, performance, stress, robustness. Test conditions are derived using testing techniques (to be discussed); refer to the V-Model.
  • 17. Testing activities. Design test cases (the details): input values; expected outcomes (things created as output, things changed/updated, e.g., in a database, things deleted, timing). Expected outcomes may be known or unknown (examine the first actual outcome). Environment prerequisites: files, network connection, etc.
  • 18. Testing activities. Build test cases (implement): implement the preconditions (set up the environment); prepare test scripts (may use test automation tools). Structure of a test case: simple linear (I, EO), or a tree {(I1, EO1), (I2, EO2), ...} in which one input I can branch to alternative expected outcomes EO1, EO2; see the sketch below.
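A minimal Python sketch of the two test-case structures just described; the inputs, outcomes, and `outcome_matches` helper are hypothetical placeholders, not from the slides:

```python
# Simple linear test case: a single (input, expected outcome) pair.
linear_case = ("login alice", "welcome screen")              # (I, EO)

# Tree-structured test case: one input may branch to alternative
# acceptable outcomes, e.g., depending on prior system state.
tree_case = ("login alice", ["welcome screen",               # (I, EO1,
                             "password expired prompt"])     #     EO2)

def outcome_matches(actual, expected):
    """Pass if the actual outcome equals the expected outcome or, for a
    branching case, matches any of the alternative expected outcomes."""
    if isinstance(expected, list):
        return actual in expected
    return actual == expected
```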
  • 19. Testing activities. Scripts contain data and instructions for testing. Comparison information: what screen data to capture, when/where to read input. Control information: repeat a set of inputs, make a decision based on output, test concurrent activities.
  • 20. Testing activities. Compare (test outcomes, expected outcomes): comparisons can be simple or complex (known differences). Different types of outcomes: variable values (in memory); disk-based (textual, non-textual, database, binary); screen-based (characters, GUI, images); others (multimedia, communicating applications).
  • 21. Testing activities. Compare: actual output == expected output? Yes: Pass (assumption: the test case was correctly instrumented). No: Fail (assumption: no error in the test case or its preconditions).
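A sketch of the execute-and-compare step in Python; `system_under_test` is a hypothetical stand-in (here just absolute value), and the pass/fail rule is the simple equality check described on the slide:

```python
def system_under_test(x):
    """Hypothetical stand-in for the real system; here it computes |x|."""
    return abs(x)

# Test cases built as (input, expected outcome) pairs.
test_cases = [(3, 3), (-3, 3), (0, 0)]

def run_and_compare(cases):
    for inp, expected in cases:
        actual = system_under_test(inp)                      # execute
        verdict = "Pass" if actual == expected else "Fail"   # compare
        print(f"input={inp!r} expected={expected!r} actual={actual!r}: {verdict}")

if __name__ == "__main__":
    run_and_compare(test_cases)
```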
  • 22. Overview: Basics of Testing; Testing & Debugging Activities; Testing Strategies (Black-Box Testing, White-Box Testing); Testing in the Development Process (Unit Test, Integration Test, System Test, Acceptance Test, Regression Test); Practical Considerations.
  • 23. Goodness of test cases: execution of a test case against a program P covers certain requirements of P, certain parts of P's functionality, and certain parts of P's internal logic. The idea of coverage guides test case selection.
  • 24. Overview: Basics of Testing; Testing & Debugging Activities; Testing Strategies (Black-Box Testing, White-Box Testing); Testing in the Development Process (Unit Test, Integration Test, System Test, Acceptance Test, Regression Test); Practical Considerations.
  • 25. Black-box testing: an approach to testing where the program is considered as a black box. The program test cases are based on the system specification. Test planning can begin early in the software process. A possible test design strategy: equivalence partitioning. (Ian Sommerville 1995)
  • 26. Equivalence partitioning. [Diagram: invalid inputs and valid inputs feed into the system, which produces outputs.]
  • 27. Equivalence partitioning: partition system inputs and outputs into equivalence sets. If the input is a 5-digit integer between 10000 and 99999, the equivalence partitions are: less than 10000, between 10000 and 99999, and greater than 99999. If you can predict that a certain set of inputs will be treated differently in processing from another one, put them into separate partitions, e.g., Canadian and US addresses (state/province will be validated) versus addresses in other countries (no state validation). Two test case selection strategies: choose one random test case from each partition, or choose test cases at the boundaries of the partitions: 09999, 10000, 99999, 100000. (Ian Sommerville 1995)
  • 28. Equivalence partitions. [Diagram: a number line with values 09999, 10000, 50000, 99999, 100000 marking three partitions: less than 10000; between 10000 and 99999; more than 99999. (Ian Sommerville 1995)]
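A sketch of boundary-value tests for the 5-digit integer example above, using Python's standard unittest module; the validator `is_valid_input` is a hypothetical system under test:

```python
import unittest

def is_valid_input(n):
    """Hypothetical system under test: accepts 5-digit integers only."""
    return 10000 <= n <= 99999

class BoundaryTests(unittest.TestCase):
    def test_below_lower_boundary(self):
        self.assertFalse(is_valid_input(9999))    # partition: less than 10000

    def test_lower_boundary(self):
        self.assertTrue(is_valid_input(10000))

    def test_mid_partition(self):
        self.assertTrue(is_valid_input(50000))    # one value inside the partition

    def test_upper_boundary(self):
        self.assertTrue(is_valid_input(99999))

    def test_above_upper_boundary(self):
        self.assertFalse(is_valid_input(100000))  # partition: more than 99999

if __name__ == "__main__":
    unittest.main()
```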
  • 29. Search routine specification: procedure Search (Key : ELEM; T : ELEM_ARRAY; Found : in out BOOLEAN; L : in out ELEM_INDEX); Pre-condition: the array has at least one element (T'FIRST <= T'LAST)
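The slide gives the routine's signature and pre-condition in Ada. Below is a hedged Python sketch of the same contract (the `search` function, its linear strategy, and the example arrays are assumptions for illustration): it checks the pre-condition with an assertion and returns whether the key was found together with an index referencing it.

```python
def search(key, t):
    """Sketch of the Search routine's contract in Python.

    Pre-condition: the array has at least one element.
    Post-condition: either found is True and t[index] == key,
    or found is False and key occurs nowhere in t.
    """
    assert len(t) >= 1, "pre-condition violated: array must be non-empty"
    for i, elem in enumerate(t):
        if elem == key:
            return True, i          # found, and t[i] == key
    return False, None              # key occurs nowhere in t

if __name__ == "__main__":
    print(search(17, [17, 29, 21, 23]))   # (True, 0)
    print(search(42, [17, 29, 21, 23]))   # (False, None)
```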