
Ch 7 Testing

7.0 TESTING

7.1 TESTING PLAN

In general, in a project, testing commences with a test plan and terminates with the successful execution of acceptance testing. A test plan is a general document for the entire project that defines the scope, the approach to be taken, and the schedule of testing, and identifies the test items and the personnel responsible for the different testing activities. Test planning can be done well before the actual testing commences and can proceed in parallel with the coding and design activities. The inputs for forming the test plan are: (1) the project plan, (2) the requirements document, and (3) the architecture or design document. The project plan is needed to make sure that the test plan is consistent with the overall quality plan for the project and that the testing schedule matches the project schedule. The requirements document and the design document are the basic documents used for selecting the test units and deciding the approaches to be used during testing.

A test plan should contain the following:

- Test unit specification
- Features to be tested
- Approach for testing
- Test deliverables
- Schedule and task allocation

As seen earlier, different levels of testing have to be performed in a project. The levels are specified in the test plan by identifying the test units for the project. A test unit is a set of one or more modules that form a software under test (SUT). The identification of test units establishes the different levels of testing that will be performed in the project. Generally, a number of test units are formed during the testing, starting from the lower-level modules, which have to be unit-tested. That is, first the modules that have to be tested individually are specified as test units. Then the higher-level units are specified, which may be a combination of already tested units or may combine some already tested units with some untested modules. The basic idea behind forming test units is to make sure that testing is performed incrementally, with each increment including only a few aspects that need to be tested.

An important factor when forming a unit is its testability: a unit should be such that it can be tested easily. In other words, it should be possible to form meaningful test cases and execute the unit with them without much effort. For example, a module that manipulates a complex data structure formed from a file input by an input module might not be a suitable unit from the point of view of testability, as forming meaningful test cases for it will be hard, and driver routines will have to be written to convert the tester's inputs from files or terminals into data structures suitable for the module. In this case, it might be better to form the unit by including the input module as well; then the file input expected by the input module can contain the test cases.

Features to be tested include all software features and combinations of features that should be tested. A software feature is a software characteristic specified or implied by the requirements or design documents. These may include functionality, performance, design constraints, and attributes.

The approach for testing specifies the overall approach to be followed in the current project. The techniques that will be used to judge the testing effort should also be specified. This is sometimes called the testing criterion, or the criterion for evaluating the set of test cases used in testing.
In the previous sections we discussed many criteria for evaluating and selecting test cases. Testing deliverables should be specified in the test plan before the actual testing begins. Deliverables could include the list of test cases that were used, detailed results of testing including the list of defects found, a test summary report, and data about the code coverage. The test plan typically also specifies the schedule and effort to be spent on the different activities of testing, and the tools to be used. This schedule should be consistent with the overall project schedule. The detailed plan may list all the testing tasks and allocate them to the test resources responsible for performing them. Many large products have separate testing teams and therefore a separate test plan. A smaller project may include the test plan as part of its quality plan in the project management plan.

7.2 TESTING STRATEGY

Developers are under great pressure to deliver more complex software on increasingly aggressive schedules and with limited resources. Testers are expected to verify the quality of such software in less time and with even fewer resources. In such an environment, solid, repeatable, and practical testing methods and automation are a must. In a software development life cycle, bugs can be injected at any stage, and the earlier a bug is identified, the greater the cost saving. There are different techniques for detecting and eliminating the bugs that originate in each phase.

A software testing strategy integrates software test case design techniques into a well-planned series of steps that result in the successful construction of software. Any test strategy incorporates test planning, test case design, test execution, and the resultant data collection and evaluation. Testing is a set of activities, planned and conducted so systematically that it leaves no scope for rework or bugs. Various software testing strategies have been proposed, and all provide a template for testing. The things that are common and important in these strategies are:

- Testing begins at the module level and works outward: tests are carried out at the module level, where the major functionality is tested, and then work toward the integration of the entire system.

- Different testing techniques are appropriate at different points in time: under different circumstances, different testing methodologies are to be used, and this choice is a decisive factor for software robustness and scalability. Circumstance essentially means the level at which the testing is being done (unit testing, system testing, integration testing, etc.) and the purpose of the testing.

- The developer of the software conducts testing, and if the project is big there is a testing team: all programmers should test and verify that their results are according to the specification given to them while coding. Where programs are big or a collective effort is involved in coding, responsibility for testing lies with the team as a whole.

- Debugging and testing are altogether different processes: testing aims to find errors, whereas debugging is the process of fixing those errors. Nevertheless, debugging should be incorporated into any testing strategy.

A software testing strategy must have low-level tests that test the source code and high-level tests that validate system functions against customer requirements. Once it is decided who will do the testing, the main issue is how to go about it, that is, in which order the tests should be performed. As shown in Fig. 7.2.1,
first unit testing is performed. Unit testing focuses on the individual modules of the product. After that, integration testing is performed: when modules are integrated into a bigger program structure, new errors often arise, and integration testing uncovers those errors. After integration testing, other higher-order tests such as system tests are performed. These tests focus on the overall system, which is treated as one entity and tested as a whole. We now take up these different types of tests and try to understand their basic concepts.

Fig. 7.2.1: Sequence of Tests

7.2.1 Unit Testing

We know that the smallest unit of software design is a module. Unit testing is performed to check the functionality of these units, and it is done before the modules are integrated together to build the overall system. Since the modules are small, individual programmers can do unit testing on their respective modules, so unit testing is basically white-box oriented. Procedural design descriptions are used, and control paths are tested to uncover errors within individual modules. Unit testing can be done for more than one module at a time. The following tests are performed during unit testing:

- Module interface test: it is checked whether information properly flows into the program unit and properly comes out of it.
- Local data structures: these are tested to see whether the local data within the unit (module) is stored properly.
- Boundary conditions: software often fails at boundary conditions, so these are tested to ensure that the program works properly at its boundaries.
- Independent paths: all independent paths are tested to see that they properly execute their task and terminate at the end of the program.
- Error-handling paths: these are tested to check whether errors are handled properly.

A short code sketch of these checks follows; the figures after it give an overview of unit testing and of the unit test procedure.
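To make the checklist concrete, here is a minimal unit-test sketch, written in Python purely for illustration. The module function apply_discount and its valid range are invented for the example; the test case exercises the module interface, the boundary conditions, and the error-handling paths.

import unittest


def apply_discount(price, percent):
    """Hypothetical module under test: discount a price by a percentage.
    Valid inputs (assumed for the example): price >= 0, 0 <= percent <= 100."""
    if price < 0:
        raise ValueError("price must be non-negative")
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100


class ApplyDiscountUnitTest(unittest.TestCase):
    # Module interface test: data flows in and comes out as expected.
    def test_interface(self):
        self.assertEqual(apply_discount(200, 25), 150.0)

    # Boundary conditions: exercise the exact edges of the valid range.
    def test_boundaries(self):
        self.assertEqual(apply_discount(100, 0), 100.0)   # lower boundary
        self.assertEqual(apply_discount(100, 100), 0.0)   # upper boundary

    # Error-handling paths: invalid inputs must be rejected cleanly.
    def test_error_handling(self):
        with self.assertRaises(ValueError):
            apply_discount(100, -1)    # just below the lower boundary
        with self.assertRaises(ValueError):
            apply_discount(100, 101)   # just above the upper boundary
        with self.assertRaises(ValueError):
            apply_discount(-5, 10)     # negative price


if __name__ == "__main__":
    unittest.main()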


Figure: Unit Testing

Fig. 7.2.1.1: Unit Test Procedure

Unit testing begins after the source code has been developed, reviewed, and verified for correct syntax; the design documents help in making the test cases. Though each module performs a specific task, it is not a standalone program: it may need data from some other module, or it may need to send data or control information to another module. Since in unit testing each module is tested individually, the need to obtain data from another module, or to pass data to another module, is met by the use of stubs and drivers, which simulate those modules. A driver is basically a program that accepts test-case data, passes that data to the module being tested, and prints the relevant results. Similarly, stubs are programs used to replace modules that are subordinate to the module being tested; a stub does minimal data manipulation, prints verification of entry, and returns. Fig. 7.2.1.1 illustrates this unit test procedure. Drivers and stubs are overhead, because they are developed but are not part of the product; this overhead can be reduced by keeping them very simple (a code sketch of a simple driver and stub is given after the bottom-up integration steps below).

Once the individual modules are tested, they are integrated to form bigger program structures. The next stage of testing therefore deals with the errors that occur while integrating modules, and is called integration testing, discussed next.

7.2.2 Integration Testing

Unit testing ensures that all modules have been tested and that each of them works properly individually. It does not guarantee that these modules will work fine when they are integrated together into a whole system. Many errors are observed to crop up when the modules are joined together; integration testing uncovers the errors that arise when modules are integrated to build the overall system. The following types of errors may arise:

- Data can be lost across an interface; that is, data coming out of one module does not reach the module it was intended for.
- Sub-functions, when combined, may not produce the desired major function.
- Individually acceptable imprecision may be magnified to unacceptable levels. For example, suppose one module works to an error precision of ±10 units, and another module uses the same precision. When the modules are combined and their error precisions have to be multiplied, the resulting precision of ±100 may not be acceptable to the system.
- Global data structures can present problems. For example, suppose a system has a global memory and all the combined modules access it; because so many functions are accessing that memory, low-memory problems can arise.

Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules, integrate them, find and remove errors, and build the overall program structure as specified by the design. There are two approaches to integration testing: top-down integration and bottom-up integration.

Top-down integration is an incremental approach to the construction of the program structure. First the control hierarchy is identified, that is, which module drives or controls which. The main control module, and the modules subordinate (and ultimately subordinate) to it, are integrated into some bigger structure.
Either a depth-first or a breadth-first approach is used for the integration.

Fig. 7.2.2.1: Top-down Integration

In the depth-first approach, all modules on a control path are integrated first. For the structure of Fig. 7.2.2.1, the sequence of integration would be (M1, M2, M3), M4, M5, M6, M7, M8. In the breadth-first approach, all modules directly subordinate at each level are integrated together; for the same figure, the sequence of integration would be (M1, M2, M8), (M3, M6), M4, M7, M5.

Bottom-up integration testing starts at the atomic-module level; atomic modules are the lowest levels in the program structure. Since modules are integrated from the bottom up, the processing required for the modules subordinate to a given level is always available, so stubs are not required in this approach. Bottom-up integration is implemented with the following steps:

- Low-level modules are combined into clusters that perform a specific software sub-function. These clusters are sometimes called builds.
- A driver (a control program for testing) is written to coordinate test-case input and output.
- The build is tested.
- Drivers are removed and clusters are combined, moving upward in the program structure.
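As noted earlier, drivers and stubs should be kept very simple. The sketch below, in the same illustrative Python style, shows one of each; generate_bill and StockModuleStub are invented names, standing in for a module under test and the subordinate module it normally calls.

import unittest


def generate_bill(item, quantity, price_module):
    """Hypothetical module under test: computes a bill line using a
    subordinate price-lookup module (normally the real stock module)."""
    unit_price = price_module.get_price(item)
    return unit_price * quantity


class StockModuleStub:
    """Stub: replaces the subordinate stock module. It does minimal data
    manipulation, prints verification of entry, and returns a canned value."""
    def get_price(self, item):
        print("stub: get_price called for", item)  # verification of entry
        return 50  # fixed test value, no real lookup


class BillDriver(unittest.TestCase):
    """Driver: accepts test-case data, passes it to the module under test,
    and checks the result."""
    def test_bill_with_stubbed_stock_module(self):
        bill = generate_bill("dress", quantity=3, price_module=StockModuleStub())
        self.assertEqual(bill, 150)


if __name__ == "__main__":
    unittest.main()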

Fig. 7.2.2.2: (a) Program Modules; (b) Bottom-up Integration Applied to the Program Modules in (a)

The figure shows how bottom-up integration is done. Whenever a new module is added as part of integration testing, the program structure changes: there may be new data-flow paths, new I/O, or new control logic. These changes may cause problems with functions in the already-tested modules that were previously working fine. To detect these errors, regression testing is done. Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects in the programs. It is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce undesirable behavior or additional errors. As integration testing proceeds, the number of regression tests can grow quite large; the regression test suite should therefore be designed to include only those tests that address one or more classes of errors in each of the major program functions. It is impractical and inefficient to re-execute every test for every program function once a change has occurred.

7.2.3 Validation Testing

After integration testing we have an assembled package that is free from module and interfacing errors. At this stage, a final series of software tests, validation testing, begins. Validation succeeds when the software functions in a manner that can be expected by the customer. The major question here is: what are the expectations of the customers? Expectations are defined in the software requirements specification identified during the analysis of the system. The specification contains a section titled Validation Criteria, and the information contained in that section forms the basis for validation testing. Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements. A test plan describes the classes of tests to be conducted, and a test procedure defines the specific test cases that will be used in an attempt to uncover errors in the conformity with requirements. After each validation test case has been conducted, one of two possible conditions exists: either the function or performance characteristics conform to specification and are accepted, or a deviation from specification is uncovered and a deficiency list is created. A deviation or error discovered at this stage in a project can rarely be corrected prior to scheduled completion, and it is often necessary to negotiate with the customer to establish a method for resolving the deficiencies.

7.2.3.1 Alpha-Beta Testing

For a software developer, it is difficult to foresee how the customer will really use a program. Instructions for use may be misinterpreted; strange combinations of data may be regularly used; and output that seemed clear to the tester may be unintelligible to a user in the field. When custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements. An acceptance test is conducted by the customer rather than by the developer, and it can range from an informal test drive to a planned and systematically executed series of tests. In fact, acceptance testing can be conducted over a period of weeks or months, thereby uncovering cumulative errors that might degrade the system over time. If software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one.
Most software product builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find. The customer conducts the alpha testing at the developer's site. The software is used in a natural setting with the developer present, recording errors and usage problems; alpha tests are thus conducted in a controlled environment. The beta test is conducted at one or more customer sites by the end user(s) of the software, with the developer not present. The beta test is therefore a live application of the software in an environment that cannot be controlled by the developer. The customer records all problems encountered during beta testing and reports them to the developer at regular intervals. In response to the problems reported during the beta test, the software developer makes modifications and then prepares to release the software product to the entire customer base.

7.2.4 System Testing

Software is only one element of a larger computer-based system. Ultimately, software is incorporated with other system elements, and a series of system integration and validation tests is conducted. These tests fall outside the scope of the software engineering process and are not conducted solely by the software developer. System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that all system elements have been properly integrated and perform their allocated functions. Different system tests are discussed in the following sections.

7.2.4.1 Recovery Testing

Many computer-based systems must recover from faults and resume operation within a pre-specified time. In some cases, a system must be fault-tolerant; that is, processing faults must not cause overall system function to cease. In other cases, a system failure must be corrected within a specified period or severe economic damage will occur. Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If the recovery is automated (performed by the system itself), the re-initialization mechanisms, data recovery, and restart are each evaluated for correctness. If the recovery requires human intervention, the mean time to repair is evaluated to determine whether it is within acceptable limits.

7.2.4.2 Stress Testing

Stress tests are designed to confront program functions with abnormal situations. Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example: (1) special tests may be designed that generate ten interrupts per second, when one or two per second is the average rate; (2) input data rates may be increased by an order of magnitude to determine how input functions will respond; (3) test cases that require maximum memory or other resources may be executed; (4) test cases that may cause excessive hunting for disk-resident data may be created; or (5) test cases that may cause thrashing in a virtual operating system may be designed. In essence, the testers attempt to break the program.

7.2.4.3 Security Testing

Any computer-based system that manages sensitive information, or causes actions that can harm or benefit individuals, is a target for improper or illegal penetration. Security testing attempts to verify that the protection mechanisms built into a system will protect it from unauthorized penetration.
During security testing, the tester plays the role of the individual who desires to penetrate the system. The tester may attack the system with custom software designed to break down any defenses that have been constructed; may overwhelm the system, thereby denying service to others; may purposely cause system errors, hoping to find the key to system entry; and so on. Given enough time and resources, good security testing will ultimately penetrate a system. The role of the system designer is to make the cost of penetration greater than the value of the information that would be obtained, in order to deter potential attackers.

7.2.5 User Acceptance Test

Once the payroll system is ready for implementation, the Payroll department will perform User Acceptance Testing. The purpose of these tests is to confirm that the system has been developed according to the specified user requirements and is ready for operational use.

7.3 TESTING METHODS

There are mainly two types of testing methods, as follows.

7.3.1 Black-box Testing

The black-box approach is a testing method in which test data are derived from the specified functional requirements without regard to the final program structure. It is also termed data-driven, input/output-driven, or requirements-based testing. Because only the functionality of the software module is of concern, black-box testing also mainly refers to functional testing: a testing method that emphasizes executing the functions and examining their input and output data. The tester treats the software under test as a black box; only the inputs, outputs, and specification are visible, and the functionality is determined by observing the outputs for the corresponding inputs. In testing, various inputs are exercised and the outputs are compared against the specification to validate correctness. All test cases are derived from the specification, and no implementation details of the code are considered.

It is obvious that the more of the input space we have covered, the more problems we will find, and the more confident we will therefore be about the quality of the software. Ideally, we would be tempted to test the input space exhaustively. But exhaustively testing the combinations of valid inputs is impossible for most programs, let alone considering invalid inputs, timing, sequence, and resource variables; combinatorial explosion is the major roadblock in functional testing. To make things worse, we can never be sure that the specification is either correct or complete. Due to limitations of the language used in specifications (usually natural language), ambiguity is often inevitable, and even with some type of formal or restricted language we may still fail to write down all the possible cases in the specification. Sometimes the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words. And people can seldom specify clearly what they want; they usually can tell whether a prototype is, or is not, what they want only after it has been built. Specification problems contribute approximately 30 percent of all bugs in software.

The research in black-box testing mainly focuses on how to maximize the effectiveness of testing with minimum cost, usually measured by the number of test cases. It is not possible to exhaust the input space, but it is possible to exhaustively test a subset of it. Partitioning is one of the common techniques: if we have partitioned the input space and assume that all the input values in a partition are equivalent, then we only need to test one representative value in each partition to sufficiently cover the whole input space. Domain testing partitions the input domain into regions and considers the input values in each domain an equivalence class; domains can then be exhaustively covered by selecting one or more representative values in each. Boundary values are of special interest: experience shows that test cases exploring boundary conditions have a higher payoff than test cases that do not, and boundary value analysis requires one or more boundary values to be selected as representative test cases. The difficulty with domain testing is that incorrect domain definitions in the specification cannot be efficiently discovered. Good partitioning requires knowledge of the software structure, and a good testing plan will contain not only black-box testing but also white-box approaches, and combinations of the two.
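A brief sketch of partitioning and boundary value analysis, assuming a hypothetical input check accept_quantity with a valid range of 1 to 100 (both the function and the range are invented for the example): the input space falls into three equivalence classes, and the test cases pick one representative per class plus the boundary values.

import unittest


def accept_quantity(qty):
    """Hypothetical input check: an order quantity is valid from 1 to 100."""
    return 1 <= qty <= 100


class QuantityDomainTest(unittest.TestCase):
    def test_partitions_and_boundaries(self):
        # Equivalence classes under the partitioning assumption:
        #   qty < 1    -> invalid
        #   1..100     -> valid
        #   qty > 100  -> invalid
        # One representative per class, plus the boundary values.
        cases = [
            (0, False),    # boundary just below the valid region
            (1, True),     # lower boundary of the valid region
            (50, True),    # interior representative of the valid class
            (100, True),   # upper boundary of the valid region
            (101, False),  # boundary just above the valid region
        ]
        for qty, expected in cases:
            with self.subTest(qty=qty):
                self.assertEqual(accept_quantity(qty), expected)


if __name__ == "__main__":
    unittest.main()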
7.3.2 White-box Testing

Contrary to black-box testing, in white-box testing the software is viewed as a white box (or glass box), as the structure and flow of the software under test are visible to the tester. Testing plans are made according to the details of the software implementation, such as the programming language, logic, and style, and test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing, or design-based testing.

Many techniques are available in white-box testing, because the problem of intractability is eased by specific knowledge of, and attention to, the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch statement (branch coverage), or covering all possible combinations of true and false condition predicates (multiple-condition coverage). Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software onto a directed graph, and test cases are carefully selected so that all the nodes or paths are covered or traversed at least once. By doing so, we may discover unnecessary "dead" code, code that is of no use or never gets executed at all, which cannot be discovered by functional testing.

In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault; each faulty version of the program is called a mutant. Test data are selected based on their effectiveness in failing the mutants: the more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use.
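The difference between statement coverage and branch coverage can be seen on a small invented function: an if with no else can reach full statement coverage with a single test, while branch coverage also demands a test for the false outcome of the condition.

def absolute(x):
    # A single if with no else: one test with x < 0 executes every
    # statement (100% statement coverage) but never exercises the
    # branch where the condition is false.
    if x < 0:
        x = -x
    return x


# Statement coverage: this one case executes all three lines.
assert absolute(-3) == 3

# Branch coverage additionally requires the false outcome of the
# condition, which catches faults hiding on the fall-through path.
assert absolute(4) == 4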
The boundary between the black-box and white-box approaches is not clear-cut. Many of the testing strategies mentioned above may not be safely classified as purely black-box or purely white-box; the same is true of transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text. One reason is that all the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of a specification is itself broad: it may contain any requirement, including the structure, programming language, and programming style, as part of the specification content.

We may be reluctant to consider random testing a testing technique, since the test case selection is simple and straightforward: test cases are chosen at random. Yet studies indicate that random testing is more cost-effective for many programs: some very subtle errors can be discovered at low cost, and it is also not inferior in coverage to other, carefully designed testing techniques. One can also obtain reliability estimates from random testing results based on operational profiles. Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.
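A minimal sketch of random testing: inputs are generated at random, and every output is checked against properties that must always hold. Here the built-in sorted stands in for a routine under test; in a real project, the properties and the input generator would come from the specification.

import random
from collections import Counter


def is_sorted(seq):
    return all(a <= b for a, b in zip(seq, seq[1:]))


random.seed(1)  # fixed seed so any failure is reproducible
for _ in range(1000):
    # Randomly chosen test case: a list of random length and contents.
    data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    result = sorted(data)  # stands in for the function under test
    assert is_sorted(result), data            # property 1: output is ordered
    assert Counter(result) == Counter(data)   # property 2: same elements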

7.4 TEST CASES

Table 7.4 Test Cases

Each test case below is recorded with its ID, name, description, and test steps (the step performed and the expected result). The full table layout also provides columns for the actual result, test case status, test status (P/F), test priority, and defect severity.

Login 1 (Validate login)
Description: Verify that the login ID must be more than 4 characters and must not contain any special symbol.
- Step: Enter a user_id such as "abc". Expected: error message "user_id must be more than 4 characters".
- Step: Enter a user_id such as "abc$". Expected: error message "user_id should not contain any symbol".
- Step: Enter a valid user_id such as "abcde" (more than 4 characters, no symbol). Expected: successful login.

Login 2 (Validate login)
Description and steps are the same as for Login 1.

Login 3 (Validate login)
Description and steps are the same as for Login 1.

Password 1 (Validate password)
Description: Verify that the password must not be the same as the user_id; symbols are allowed.
- Step: Enter a password such as "abcde", the same as the user_id. Expected: error message "password and user_id must not be the same".

Password 2 (Validate password)
Description and step are the same as for Password 1.

Password 3 (Validate password)
Description and step are the same as for Password 1.

Stock entry (Validate item name)
Description: Verify that the item name accords with its ID.
- Step: Clerk 1 enters the item name "cosmetic" and clicks Submit. Expected: "Item name is incorrect".
- Step: Clerk 1 enters the item name "jewellery" and clicks Submit. Expected: "Item name is incorrect".
- Step: Clerk 1 enters the item name "dress" and clicks Submit. Expected: the product is correct and the stock entry page is opened.

Stock entry (Validate fields)
Description: Verify that the fields (item_name, price, date) are fully filled.
- Step: Clerk enters item_name and price and clicks Submit. Expected: "Date must be entered".
- Step: Clerk enters item_name and date and clicks Submit. Expected: "Price must be entered".
- Step: Clerk enters price and date and clicks Submit. Expected: "Item_name must be entered".

Stock entry (Verify logout)
Description: Verify that the user wants to log out.
- Step: Clerk clicks the Logout button. Expected: a window asks whether you want to log out (Yes or No).
- Step: Clerk clicks Yes. Expected: the user is logged out.
- Step: Clerk clicks No. Expected: the user is not logged out.

Order (Validate item name)
Description: Verify that the ordered item is correct.
- Step: Manager enters an item_name such as "furniture", or any item other than dress, cosmetic, or jewellery. Expected: "Invalid item".
- Step: Manager enters a correct item (dress, cosmetic, or jewellery) and clicks Order. Expected: the item_name, quantity, and date are entered.

Order (Verify fields)
Description: Verify that the fields (item_name, quantity, date) are fully filled.
- Step: Manager enters item_name and quantity and clicks Order. Expected: "Date must be entered".
- Step: Manager enters item_name and date and clicks Order. Expected: "Quantity must be entered".
- Step: Manager enters quantity and date and clicks Order. Expected: "Item_name must be entered".
- Step: Manager enters item_name, quantity, and date and clicks Order. Expected: the item has been ordered.

Order (Verify logout)
Description: Verify that the manager can log out.
- Step: Manager clicks the Logout button. Expected: a window asks whether you want to log out (Yes or No).
- Step: Manager clicks Yes. Expected: the user is logged out.
- Step: Manager clicks No. Expected: the user is not logged out.

Billing (Validate item name)
Description: The field must not be empty.
- Step: Manager enters an item_name other than the three products (dress, cosmetic, jewellery) and presses Tab. Expected: "Product name is invalid".
- Step: Manager does not enter the item name. Expected: "Field must not be empty".
- Step: Manager enters a correct item (dress, cosmetic, or jewellery) and presses Tab. Expected: the item_name, quantity, and date are entered.

Billing (Validate no. of items)
Description: The field must not be empty.
- Step: Manager does not enter the no. of items. Expected: "Field must not be empty".
- Step: Manager enters a correct item and presses Tab. Expected: the item_name, quantity, and date are entered.

Billing (Validate price)
Description: The field must not be empty.
- Step: Manager does not enter the price. Expected: "Field must not be empty".
- Step: Manager enters a correct item and presses Tab. Expected: the item_name, quantity, and date are entered.

Billing (Validate total price)
Description: The field must not be empty.
- Step: Manager does not enter the total price. Expected: "Field must not be empty".
- Step: Manager enters a correct item and presses Tab. Expected: the item_name, quantity, and date are entered.

Stock update (Validate item name)
Description: Verify that the item name accords with its ID.
- Step: Admin enters the item name "cosmetic" and clicks Submit. Expected: "Item name is incorrect".
- Step: Admin enters the item name "jewellery" and clicks Submit. Expected: "Item name is incorrect".
- Step: Admin enters the item name "dress" and clicks Submit. Expected: the product is correct and the stock entry page is opened.

Stock update (Validate price)
Description: The field must not be empty.
- Step: Admin does not enter the price. Expected: "Field must not be empty".
- Step: Admin enters a correct item and presses Tab. Expected: the item_name, quantity, and date are entered.

Stock update (Validate total price)
Description: The field must not be empty.
- Step: Admin does not enter the total price. Expected: "Field must not be empty".
- Step: Admin enters a correct item and presses Tab. Expected: the item_name, quantity, and date are entered.

Account update (Validate item name)
Description and steps are the same as for Stock update (Validate item name).

Account update (Verify old price)
Description: Validate the old price field.
- Step: Admin does not give the old price and clicks Submit. Expected: "Field must not be empty".

Account update (Verify new price)
Description: Validate the new price field.
- Step: Admin does not give the new price and clicks Submit. Expected: "Field must not be empty".

Product ratio (Verify product ratio)
Description: Verify that the item name accords with its ID.
- Step: Admin enters the item name "cosmetic" and clicks Submit. Expected: "Item name is incorrect".
- Step: Admin enters the item name "jewellery" and clicks Submit. Expected: "Item name is incorrect".
- Step: Admin enters the item name "dress" and clicks Submit. Expected: the product is correct and the stock entry page is opened.

Product ratio (Verify no. of products sold)
Description: The field must not be empty.
- Step: Admin does not enter the no. of products sold. Expected: "Field must not be empty".
- Step: Admin enters the no. of products sold. Expected: successfully logged in.

Month (Validate month)
Description: The field must not be empty.
- Step: Admin does not enter the month. Expected: "Field must not be empty".
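The login cases in Table 7.4 lend themselves to automation in the unit-testing style shown earlier. The following is a minimal sketch; validate_login, its messages, and its checking order are stand-ins invented to mirror the table, not the system's actual code.

import unittest


def validate_login(user_id):
    """Hypothetical stand-in for the system's login check (per Table 7.4):
    the user ID must be more than 4 characters and contain no symbol."""
    if not user_id.isalnum():
        return "user_id should not contain any symbol"
    if len(user_id) <= 4:
        return "user_id must be more than 4 characters"
    return "login successful"


class LoginTableTests(unittest.TestCase):
    # Table 7.4, Login 1, step 1: a too-short ID is rejected.
    def test_short_user_id(self):
        self.assertEqual(validate_login("abc"),
                         "user_id must be more than 4 characters")

    # Table 7.4, Login 1, step 2: a special symbol is rejected.
    def test_symbol_in_user_id(self):
        self.assertEqual(validate_login("abc$"),
                         "user_id should not contain any symbol")

    # Table 7.4, Login 1, step 3: a valid ID logs in successfully.
    def test_valid_user_id(self):
        self.assertEqual(validate_login("abcde"), "login successful")


if __name__ == "__main__":
    unittest.main()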