
Supriya Testing Doc



Page 1: Supriya Testing Doc

8/9/2019 Supriya Testing Doc

http://slidepdf.com/reader/full/supriya-testing-doc 1/50

1. What is bidirectional traceability?

Bidirectional traceability needs to be implemented both forward and backward (i.e., from

requirements to end products and from end product back to requirements).

When the requirements are managed well, traceability can be established from the source

requirement to its lower level requirements and from the lower level requirements back to their source. Such bidirectional traceability helps determine that all source requirements have

been completely addressed and that all lower level requirements can be traced to a valid

source.
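The two checks described above can be sketched in code. A minimal illustration (the requirement IDs and the mapping are made up for the example):

```python
# Hypothetical example: checking bidirectional traceability between
# source requirements and the lower-level requirements derived from them.
forward = {  # source requirement -> derived lower-level requirements
    "REQ-1": ["LLR-1", "LLR-2"],
    "REQ-2": ["LLR-3"],
}
all_lower = {"LLR-1", "LLR-2", "LLR-3", "LLR-4"}

# Forward check: every source requirement is addressed by at least one LLR.
unaddressed = [req for req, llrs in forward.items() if not llrs]

# Backward check: every lower-level requirement traces to a valid source.
traced = {llr for llrs in forward.values() for llr in llrs}
orphans = sorted(all_lower - traced)

print(unaddressed)  # []
print(orphans)      # ['LLR-4'] -- LLR-4 has no valid source
```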

2. What is a stub? Explain from a testing point of view.

A stub is a dummy program or component that stands in for code that is not yet ready for testing. For example, if a project has 4 modules and the last one is unfinished and there is no time, we write a dummy program to stand in for that fourth module so that all 4 modules can be run together. This dummy program is known as a stub.

3. For Web Applications, what types of tests are you going to do?

Web-based applications present new challenges. These challenges include:

- Short release cycles;

- Constantly Changing Technology;

- Possible huge number of users during initial website launch;

- Inability to control the user's running environment;

- 24-hour availability of the web site.

The quality of a website must be evident from the onset. Any difficulty, whether in response time, accuracy of information, or ease of use, will compel the user to click over to a competitor's site. Such problems translate into lost users, lost sales, and a poor company image.

To overcome these types of problems, use the following techniques:

1. Functionality Testing

Functionality testing involves making sure the features that most affect user interactions work

properly. These include:

· forms

· searches

· pop-up windows

· shopping carts

· online payments

2. Usability Testing

Many users have low tolerance for anything that is difficult to use or that does not work. A user's first impression of the site is important, and many websites have become cluttered with an increasing number of features. Frustrated users of a general-use website can easily click over to a competitor's site.

Usability testing involves the following main steps:

· identify the website's purpose;

· identify the intended users;

· define tests and conduct the usability testing;

· analyze the acquired information.

3. Navigation Testing

Good navigation is an essential part of a website, especially one that is complex and provides a lot of information. Assessing navigation is a major part of usability testing.

4. Forms Testing

Websites that use forms need tests to ensure that each field works properly and that the form posts all data as intended by the designer.

5. Page Content Testing

Each web page must be tested for correct content from the user perspective. These tests fall into two categories: ensuring that each component functions correctly and ensuring that the content of each is correct.

6. Configuration and Compatibility testing

A key challenge for web applications is ensuring that the user sees a web page as the designer intended. The user can select different browser software and browser options, use different network software and online services, and run other concurrent applications. We execute the application under every browser/platform combination to ensure the website works properly under various environments.

7. Reliability and Availability Testing

A key requirement of a website is that it be available whenever the user requests it, 24 hours a day, every day. The number of users accessing the website simultaneously may also affect the site's availability.

8. Performance Testing

Performance testing, which evaluates system performance under normal and heavy usage, is crucial to the success of any web application. A system that takes too long to respond may frustrate the user, who can then quickly move to a competitor's site. Given enough time, every page request will eventually be delivered. Performance testing seeks to ensure that the website server responds to browser requests within defined parameters.

9. Load Testing

The purpose of load testing is to model real-world experiences, typically by generating many simultaneous users accessing the website. We use automation tools to increase the validity of a load test, because they can emulate thousands of users sending simultaneous requests to the application or the server.
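As a toy illustration of the idea, many simultaneous requests can be simulated with a thread pool; the `handle_request` function here is a made-up stand-in for the web application under test:

```python
# Toy load-test sketch: fire 1000 "requests" from 50 concurrent workers.
# handle_request is hypothetical -- a real load test would issue HTTP calls.
import concurrent.futures

def handle_request(i: int) -> int:
    return 200  # stand-in for an HTTP status code from the server

with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    statuses = list(pool.map(handle_request, range(1000)))

print(all(s == 200 for s in statuses))  # True -- every request succeeded
```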

10. Stress Testing

Stress testing consists of subjecting the system to varying and maximum loads to evaluate the resulting performance. We use automated test tools to simulate loads on the website and execute the tests continuously for several hours or days.

11. Security Testing

Security is a primary concern when communicating and conducting business, especially sensitive and business-critical transactions, over the internet. The user wants assurance that personal and financial information is secure. Finding the vulnerabilities in an application that would grant an unauthorized user access to the system is important.

4. Define Brainstorming and Cause-Effect Graphing?

BS:

A learning technique involving open group discussion intended to expand the range of available

ideas

OR

A meeting to generate creative ideas. At PEPSI Advertising, daily, weekly and bi-monthly brainstorming sessions are held by various work groups within the firm. Our monthly I-Power brainstorming meeting is attended by the entire agency staff.

OR

Brainstorming is a highly structured process to help generate ideas. It is based on the principle

that you cannot generate and evaluate ideas at the same time. To use brainstorming, you must

first gain agreement from the group to try brainstorming for a fixed interval (e.g., six minutes).

CEG:

A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases

that logically relates causes to effects to produce test cases. It has a beneficial side effect in

pointing out incompleteness and ambiguities in specifications.

5. What is the maximum length of the test case we can write?

There is no fixed maximum length for a test case; it depends on the functionality.

6. If a password is 6-character alphanumeric, what are the possible input conditions?


Including special characters, the possible input conditions are:

1) Input password as 6abcde (i.e., number first)

2) Input password as abcde8 (i.e., character first)

3) Input password as 123456 (all numbers)

4) Input password as abcdef (all characters)

5) Input password less than 6 characters

6) Input password greater than 6 characters

7) Input password with special characters

8) Input password in CAPITALS, i.e., uppercase

9) Input password including a space

10) Input password starting with a (SPACE), followed by alphabetic/numeric/alphanumeric characters
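These conditions can be turned into an automated check. A minimal sketch, assuming a hypothetical validation rule of exactly 6 alphanumeric characters (the rule and function name are illustrative, not from the original question):

```python
import re

def is_valid_password(pw: str) -> bool:
    # Hypothetical rule: exactly 6 alphanumeric characters, nothing else.
    return re.fullmatch(r"[A-Za-z0-9]{6}", pw) is not None

# Each entry mirrors one of the input conditions listed above.
cases = {
    "6abcde": True,    # number first
    "abcde8": True,    # character first
    "123456": True,    # all numbers
    "abcdef": True,    # all characters
    "abc12": False,    # fewer than 6 characters
    "abc1234": False,  # more than 6 characters
    "ab@12!": False,   # special characters
    "ABCDEF": True,    # all uppercase
    "abc 12": False,   # embedded space
    " abc12": False,   # leading space
}

for pw, expected in cases.items():
    assert is_valid_password(pw) is expected, pw
```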

7. What is internationalization Testing?

Software internationalization is the process of developing software products independent of the cultural norms, language, or other specific attributes of a market.

8. If you are given a few thousand test cases to execute in 2 days, what do you do?

If possible, we will automate; otherwise, we execute only the test cases that are mandatory.

9. What does black-box testing mean at the unit, integration, and system levels?

· At the unit level: tests for each software requirement using Equivalence Class Partitioning, Boundary Value Testing, and more.

· At the system level: test cases for system software requirements using the Trace Matrix, Cross-functional Testing, Decision Tables, and more.

· At the integration level: test cases for system integration covering configurations, manual operations, etc.
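For example, Equivalence Class Partitioning and Boundary Value Testing at the unit level might look like the sketch below; the `is_valid_age` requirement ("age must be between 18 and 60 inclusive") is hypothetical:

```python
# Hypothetical requirement under test: "age must be between 18 and 60 inclusive".
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# Equivalence Class Partitioning: one representative per class
# (below range, in range, above range).
equivalence_cases = [(10, False), (35, True), (70, False)]

# Boundary Value Testing: just outside, on, and just inside each boundary.
boundary_cases = [(17, False), (18, True), (19, True),
                  (59, True), (60, True), (61, False)]

for age, expected in equivalence_cases + boundary_cases:
    assert is_valid_age(age) is expected, age
```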

10. What is agile testing?

Agile testing is used whenever customer requirements are changing dynamically.

If we have no SRS or BRS but we have test cases, do you execute the test cases blindly or do you follow any other process?

The test cases would have detailed steps describing what the application is supposed to do:

1) Functionality of application.

2) In addition, you can refer to the backend, i.e., look into the database, to gain more knowledge of the application.

11. What is Bug life cycle?

New: when the tester reports a defect.

Open: when the developer accepts that it is a bug; if the developer rejects the defect, the status is changed to "Rejected".

Fixed: when the developer makes changes to the code to rectify the bug.


Closed/Reopen: when the tester tests it again. If the expected result shows up, the status is changed to "Closed"; if the problem persists, it becomes "Reopen".
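The life cycle above can be sketched as a small state machine; the state names follow the answer, and the code itself is illustrative:

```python
# Bug life cycle as a state machine: which status changes are legal.
ALLOWED = {
    "New": {"Open", "Rejected"},   # developer accepts or rejects the defect
    "Open": {"Fixed"},             # developer rectifies the code
    "Fixed": {"Closed", "Reopen"}, # tester retests
    "Reopen": {"Fixed"},
    "Rejected": set(),             # terminal states
    "Closed": set(),
}

def transition(state: str, new_state: str) -> str:
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A typical path: reported, accepted, fixed, failed retest, fixed, closed.
state = "New"
for step in ("Open", "Fixed", "Reopen", "Fixed", "Closed"):
    state = transition(state, step)
print(state)  # Closed
```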

12. What is deferred status in defect life cycle?

Deferred status means the developer accepted the bug, but it is scheduled to be rectified in the next build.

13. Smoke test? Do you use any automation tool for smoke testing?

Testing whether the application performs its basic functionality properly, so that the test team can go ahead with the application. Automation tools can definitely be used.

14. Verification and validation?

Verification is static. No code is executed. Say, analysis of requirements etc.

Validation is dynamic. Code is executed with scenarios present in test cases.

15. When a bug is found, what is the first action?

Report it in bug tracking tool.

16. What is test plan and explain its contents?

A test plan is a document which contains the scope for testing the application: what is to be tested, when it is to be tested, and who is to test it.

17. Advantages of automation over manual testing?

Savings in time, resources, and money.

18. What is mean by release notes?

It's a document released along with the product which explains about the product. It also lists the bugs that are in deferred status.

19. What is Testing environment in your company, means how testing process start?

The testing process goes as follows:

Quality assurance unit

Quality assurance manager

Test lead

Test engineer

20. Give an example of high priority and low severity, low priority and high severity?

Severity level: 

The degree of impact the issue or problem has on the project. Severity 1 usually means the highest level, requiring immediate attention. Severity 5 usually represents a documentation defect of minimal impact.

Severity levels:

• Critical: the software will not run

• High: unexpected fatal errors (includes crashes and data corruption)


• Medium: a feature is malfunctioning

• Low: a cosmetic issue

Another severity scale:

• Bug causes system crash or data loss.

• Bug causes major functionality or other severe problems; product crashes in

obscure cases.

• Bug causes minor functionality problems, may affect "fit and finish".

• Bug contains typos, unclear wording or error messages in low visibility fields.

Another severity scale:

• High: A major issue where a large piece of functionality or major system component is

completely broken. There is no workaround and testing cannot continue.

• Medium: A major issue where a large piece of functionality or major system component

is not working properly. There is a workaround, however, and testing can continue.

• Low: A minor issue that imposes some loss of functionality, but for which there is an

acceptable and easily reproducible workaround. Testing can proceed without

interruption.

Severity and Priority 

Priority is Relative: the priority might change over time. Perhaps a bug initially deemed P1

becomes rated as P2 or even a P3 as the schedule draws closer to the release and as the test

team finds even more heinous errors. Priority is a subjective evaluation of how important an

issue is, given other tasks in the queue and the current schedule. It’s relative. It shifts over

time. And it’s a business decision.

Severity is an absolute: it’s an assessment of the impact of the bug without regard to other

work in the queue or the current schedule. The only reason severity should change is if we

have new information that causes us to re-evaluate our assessment. If it was a high severity

issue when I entered it, it’s still a high severity issue when it’s deferred to the next release.

The severity hasn’t changed just because we’ve run out of time. The priority changed.

Severity levels can be defined as follows:


S1 - Urgent/Showstopper. For example, a system crash or an error message forcing the user to close the window. The tester's ability to operate the system is either totally (system down) or almost totally affected. A major area of the user's system is affected by the incident, and it is significant to business processes.

S2 - Medium/Workaround. Exists when a problem deviates from what is required in the specs, but the tester can go on with testing. The incident affects an area of functionality, but there is a workaround which negates the impact to the business process. This is a problem that:

a) Affects a more isolated piece of functionality.

b) Occurs only at certain boundary conditions.

c) Has a workaround (where "don't do that" might be an acceptable answer to the user).

d) Occurs only at one or two customers, or is intermittent.

S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. Problems do not impact use of the product in any substantive way. These are incidents that are cosmetic in nature and of no or very low impact to business processes.

21. What is a use case?

A use case is a simple flow between the end user and the system. It contains preconditions, postconditions, normal flows, and exceptions. It is prepared by the Team Lead/Test Lead/Tester.

22. Difference between STLC and SDLC?

STLC is the software test life cycle. It starts with:

• Preparing the test strategy.

• Preparing the test plan.

• Creating the test environment.

• Writing the test cases.

• Creating test scripts.

• Executing the test scripts.

• Analyzing the results and reporting the bugs.

• Doing regression testing.

• Test exiting.

SDLC is software or system development life cycle, phases are...

• Project initiation.


• Requirement gathering and documenting.

• Designing.

• Coding and unit testing.

• Integration testing.

• System testing.

• Installation and acceptance testing.

• Support or maintenance.

23. How do you break down the project among team members?

It can depend on the following factors:

1) Number of modules

2) Number of team members

3) Complexity of the Project

4) Time Duration of the project

5) Team members' experience, etc.

24. What is Test Data Collection?

Test data is the collection of input data taken for testing the application. Input data of various types and sizes will be taken for testing the application. Sometimes, for critical applications, the test data collection will be given by the client.

25. What is Test Server?

The place where the developers put their development modules, which are accessed by the

testers to test the functionality.

26. What are non-functional requirements?

The non-functional requirements of a software product are: reliability, usability, efficiency,

delivery time, software development environment, security requirements, standards to be

followed etc.

27. What are the differences between the three words Error, Defect, and Bug?

Error: A deviation from the required logic, syntax, or standards/ethics is called an error. There are three types of error:

Syntax error (due to deviation from the syntax of the language one is supposed to follow).

Logical error (due to deviation from the logic the program is supposed to follow).

Execution error (this generally happens when you are executing the program; that is when you get it).

Defect: When an error is found by the test engineer (testing department), it is called a defect.


Bug: If the defect is agreed by the developer, then it becomes a bug, which has to be fixed by the developer or postponed to the next version.

28. Why do we perform stress testing, resolution testing, and cross-browser testing?

Stress testing: We need to check the performance of the application.

Def: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.

Resolution testing: Sometimes a developer creates a page only for 1024 resolution, and the same page displays a horizontal scroll bar in 800 x 600 resolution. Nobody likes a horizontal scroll bar appearing on the screen. That is the reason to do resolution testing.

Cross-browser testing: This testing is sometimes called compatibility testing. When we develop pages to be IE compatible, the same page may not work properly in Firefox or Netscape, because most of the scripts do not support browsers other than IE. That is why we need to do cross-browser testing.

29. There are two sand clocks (timers): one completes in 7 minutes and the other in 9 minutes. Using these timers, how do we ring a bell after exactly 11 minutes?

1. Start both clocks.

2. When the 7-min clock completes, turn it over so that it restarts.

3. When the 9-min clock finishes, turn the 7-min clock over again (it has exactly 2 minutes' worth of sand in its lower bulb).

4. When the 7-min clock finishes, 11 minutes are complete.
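The steps above can be verified by tracking elapsed time:

```python
# Verifying the 7/9-minute sand-timer solution by tracking elapsed time.
elapsed = 0
elapsed += 7   # step 2: the 7-min timer runs out; flip it
elapsed += 2   # step 3: at 9 min the 9-min timer runs out; the flipped 7-min
               # timer has drained 2 of its 7 minutes, so flipping it again
               # leaves exactly 2 minutes of sand on top
elapsed += 2   # step 4: when that 2 minutes of sand runs out...
print(elapsed) # 11 -- ring the bell
```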

30. What are the minimum criteria for white box testing?

We should know the logic, code, and structure of the program or function: internal knowledge of how the system works, the logic behind it, and how it should react to a particular action.

31. What are the technical reviews?

Each document should be reviewed. By technical review we mean: for each screen, the developer writes a Technical Specification, which should be reviewed by a developer and a tester. There are functional specification reviews, unit test case reviews, code reviews, etc.

32. On what basis will you write test cases?

I would write the test cases based on Functional Specifications and BRDs, plus some more test cases using domain knowledge.

33. Explain ETVX concept?

E- Entry Criteria

T- Task

V- Validation

X- Exit Criteria


ENTRY CRITERIA: Input with a 'condition' attached.

e.g., An approved SRS document is the entry criterion for the design phase.

TASK: Procedures.

e.g., Preparation of HLD, LLD, etc.

VALIDATION: Quality-building and verification activities.

e.g., Technical reviews.

EXIT CRITERIA: Output with a 'condition' attached.

e.g., An approved design document.

It is important to follow ETVX concept for all phases in SDLC.

34. What are the main key components in web applications and in client-server applications? (differences)

For web applications: a web application can be implemented using any kind of technology, such as Java, .NET, VB, ASP, CGI & PERL. Based on the technology, we can derive the components.

Take a Java web application: it can be implemented in a 3-tier architecture: a Presentation tier (JSP, HTML, DHTML, servlets, Struts), a Business tier (Java Beans, EJB, JMS), and a Data tier (databases like Oracle, SQL Server, etc.).

For a .NET application: a Presentation tier (ASP, HTML, DHTML), a Business tier (DLLs), and a Data tier (a database like Oracle, SQL Server, etc.).

Client-server applications: these have only 2 tiers: a Presentation tier (Java, Swing) and a Data tier (Oracle, SQL Server). In a client-server architecture, the entire application has to be installed on the client machine; whenever you change the code, it has to be installed again on all the client machines. In web applications, the core application resides on the server and the client can be a thin client (browser). Whatever changes you make, you install the application only on the server; there is no need to worry about the clients, because nothing is installed on the client machines.

35. If the client identified some bugs, to whom does he report them?

He will report to the Project Manager. The Project Manager will arrange a meeting with all the leads (Dev Manager, Test Lead, and Requirement Manager), raise a Change Request, and then identify which screens are going to be impacted by the bug. They will take the code, correct it, and send it to the testing team.

36. What is the formal technical review?


A technical review should be done by a team of members. The author of the document under review and the reviewers sit together and review the document; this is called a peer review. If it is a technical document, it can be called a formal technical review. It varies depending on company policy.

37. At what phase does the tester's role start?

In the SDLC, after completion of the FRS document, the test lead prepares the use case document and test plan document; then the tester's role starts.

38. Explain 'Software metrics'?

Measurement is fundamental to any engineering discipline

Why metrics?

- We cannot control what we cannot measure!

- Metrics help to measure quality

- Metrics serve as a dashboard

The main metrics are size, schedule, and defects; each has sub-metrics.

Test Coverage = Number of units (KLOC/FP) tested / total size of the system

Test cost (in %) = Cost of testing / total cost *100

Cost to locate defect = Cost of testing / the number of defects located

Defects detected in testing (in %) = Defects detected in testing / total system defects*100

Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria
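These formulas can be computed directly; the numbers below are made up for illustration:

```python
# The metric formulas above, evaluated for illustrative (made-up) figures.
units_tested, total_units = 40, 50            # units in KLOC or function points
cost_of_testing, total_cost = 20_000, 100_000
defects_in_testing, total_defects = 90, 100
criteria_tested, total_criteria = 45, 50

test_coverage = units_tested / total_units                      # 0.8
test_cost_pct = cost_of_testing / total_cost * 100              # 20.0 %
cost_to_locate_defect = cost_of_testing / defects_in_testing    # ~222.22 per defect
defect_detection_pct = defects_in_testing / total_defects * 100 # 90.0 %
acceptance_tested = criteria_tested / total_criteria            # 0.9

print(test_coverage, test_cost_pct, round(cost_to_locate_defect, 2),
      defect_detection_pct, acceptance_tested)
```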

39. Actually, how many positive and negative test cases will you write for a module?

That depends on the module and the complexity of its logic. For every test case, we can identify positive and negative points. Based on the criteria we write the test cases; if it is a crucial process or screen, we should check the screen under all boundary conditions.

40. What is Software reliability?

It is the probability that software will work without failure for a specified period of time in a specified environment. Reliability of software is measured in terms of Mean Time Between Failures (MTBF). For example, if MTBF = 10,000 hours for an average piece of software, then it should not fail for 10,000 hours of continuous operation.
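A sketch of the MTBF estimate from observed failure data (the operating hours and failure count are illustrative):

```python
# MTBF = total operating time / number of failures observed in that time.
total_operating_hours = 50_000
number_of_failures = 5

mtbf = total_operating_hours / number_of_failures
print(mtbf)  # 10000.0 hours between failures, on average
```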

41. What are the main bugs you identified, and how many of them were considered real bugs?

If you take one screen, say it has 50 test conditions, out of which I identified 5 defects that failed. I should give the defect description, severity, and defect classification. All the defects will be considered.

Defect classifications are:

GRP : Graphical Representation


LOG : Logical Error

DSN : Design Error

STD : Standard Error

TST : Wrong Test case

TYP : Typographical Error (Cosmetic Error)

42. What is the main use of preparing a traceability matrix?

A traceability matrix is prepared in order to cross-check the test cases designed against each requirement, giving an opportunity to verify that all the requirements are covered in testing the application.

(Or)

To cross-verify the prepared test cases and test scripts against user requirements, and to monitor the changes and enhancements that occur during the development of the project.

43. What is Six sigma? Explain.

Six Sigma: A quality discipline that focuses on product and service excellence to create a culture that demands perfection on target, every time.

Six Sigma quality levels

Produces 99.9997% accuracy, with only 3.4 defects per million opportunities.
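The quoted figures can be checked with a quick DPMO (defects per million opportunities) calculation:

```python
# Six Sigma quality level: 3.4 defects per million opportunities.
defects = 3.4
opportunities = 1_000_000

dpmo = defects * 1_000_000 / opportunities        # defects per million
yield_pct = (1 - defects / opportunities) * 100   # accuracy, in percent

print(dpmo, round(yield_pct, 4))  # 3.4 99.9997
```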

Six Sigma is designed to dramatically upgrade a company's performance, improving quality and productivity using existing products, processes, and service standards. Practitioners follow the Six Sigma MAIC methodology to upgrade performance.

MAIC is defined as follows:

Measure: Gather the right data to accurately assess a problem.

Analyze: Use statistical tools to correctly identify the root causes of a problem

Improve: Correct the problem (not the symptom).

Control: Put a plan in place to make sure problems stay fixed and sustain the gains.

Key Roles and Responsibilities:

The key roles in all Six Sigma efforts are as follows:

Sponsor: Business executive leading the organization.

Champion: Responsible for Six Sigma strategy, deployment, and vision.

Process Owner: Owner of the process, product, or service being improved responsible for long-

term sustainable gains.

Master Black Belts: Coach black belts expert in all statistical tools.


Black Belts: Work on 3 to 5 $250,000-per-year projects; create $1 million per year in value.

Green Belts: Work with black belt on projects.

44. What is TRM?

TRM means Test Responsibility Matrix.

TRM indicates the mapping between test factors and development stages.

Test factors include: ease of use, reliability, portability, authorization, access control, audit trail, ease of operation, maintainability, and so on.

Development stages: requirement gathering, analysis, design, coding, testing, and maintenance.

45. What are cookies? What are the advantages and disadvantages of cookies?

Cookies are messages that web servers pass to your web browser when you visit Internet sites. Your browser stores each message in a small file. When you request another page from the

server, your browser sends the cookie back to the server. These files typically contain

information about your visit to the web page, as well as any information you've volunteered,

such as your name and interests. Cookies are most commonly used to track web site activity.

When you visit some sites, the server gives you a cookie that acts as your identification card.

Upon each return visit to that site, your browser passes that cookie back to the server. In this

way, a web server can gather information about which web pages are used the most, and which

pages are gathering the most repeat hits. Only the web site that creates the cookie can read it.

Additionally, web servers can only use information that you provide or choices that you make

while visiting the web site as content in cookies. Accepting a cookie does not give a server

access to your computer or any of your personal information. Servers can only read cookies

that they have set, so other servers do not have access to your information. Also, it is not

possible to execute code from a cookie, and not possible to use a cookie to deliver a virus.
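A small sketch of the exchange, using Python's standard library (`http.cookies`); the cookie name and value here are made up:

```python
# Server side: build the Set-Cookie header that identifies the visitor.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"
header = cookie.output()          # what the server sends to the browser
print(header)                     # Set-Cookie: visitor_id=abc123

# Browser side (conceptually): the stored cookie is parsed and sent back
# to the same server on each subsequent request.
returned = SimpleCookie("visitor_id=abc123")
print(returned["visitor_id"].value)  # abc123
```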

46. What is the difference between a product-based company and a project-based company?

A product-based company develops applications for global clients, i.e., there is no specific client. Requirements are gathered from the market and analyzed by experts.

A project-based company develops applications for a specific client. The requirements are gathered from the client and analyzed with the client.

What makes a good test engineer?

A good test engineer has a 'test to break' attitude, an ability to take the point of view of the

customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful

in maintaining a cooperative relationship with developers, and an ability to communicate with

both technical (developers) and non-technical (customers, management) people is useful.


Previous software development experience can be helpful as it provides a deeper

understanding of the software development process, gives the tester an appreciation for the

developers' point of view, and reduces the learning curve in automated test tool programming.

Judgement skills are needed to assess high-risk areas of an application on which to focus

testing efforts when time is limited.

What makes a good Software QA engineer? 

The same qualities a good tester has are useful for a QA engineer. Additionally, they must be

able to understand the entire software development process and how it can fit into the

business approach and goals of the organization. Communication skills and the ability to

understand various sides of issues are important. In organizations in the early stages of

implementing QA processes, patience and diplomacy are especially needed. An ability to find

problems as well as to see 'what's missing' is important for inspections and reviews.

What makes a good QA or Test manager? 

A good QA, test, or QA/Test(combined) manager should:

• be familiar with the software development process

• be able to maintain enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems)

• be able to promote teamwork to increase productivity

• be able to promote cooperation between software, test, and QA engineers

• have the diplomatic skills needed to promote improvements in QA processes

• have the ability to withstand pressures and say 'no' to other managers when quality is

insufficient or QA processes are not being adhered to

• have people judgement skills for hiring and keeping skilled personnel

• be able to communicate with technical and non-technical people, engineers, managers, and

customers.

• be able to run meetings and keep them focused

What's the role of documentation in QA? 


Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices

should be documented such that they are repeatable. Specifications, designs, business rules,

inspection reports, configurations, code changes, test plans, test cases, bug reports, user

manuals, etc. should all be documented. There should ideally be a system for easily finding and

obtaining documents and determining which documents contain a particular piece of

information. Change management for documentation should be used if possible.

What's the big deal about 'requirements'? 

One of the most reliable methods of ensuring problems, or failure, in a complex software

project is to have poorly documented requirements specifications. Requirements are the

details describing an application's externally-perceived functionality and properties.

Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and

testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A

testable requirement would be something like 'the user must enter their previously-assigned

password to access the application'. Determining and organizing requirements details in a

useful and efficient way can be a difficult effort; different methods are available depending on

the particular project. Many books are available that describe various approaches to this task.

(See the Bookstore section's 'Software Requirements Engineering' category for books on

Software Requirements.)
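The password requirement above can be turned directly into an automated check, which is what makes it testable. The sketch below is illustrative only; the `authenticate` function and the user/password store are invented for the example:

```python
# Hypothetical check for the testable requirement: 'the user must enter
# their previously-assigned password to access the application'.
ASSIGNED_PASSWORDS = {"alice": "s3cret"}  # assumed user/password store

def authenticate(user, password):
    """Return True only when the previously-assigned password is supplied."""
    return ASSIGNED_PASSWORDS.get(user) == password

# The requirement yields concrete pass/fail test cases:
assert authenticate("alice", "s3cret") is True     # correct password -> access
assert authenticate("alice", "wrong") is False     # wrong password -> no access
assert authenticate("mallory", "s3cret") is False  # unknown user -> no access
```

The subjective requirement 'user-friendly' admits no such check, which is exactly why it is non-testable.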

Care should be taken to involve ALL of a project's significant 'customers' in the requirements

process. 'Customers' could be in-house personnel or out, and could include end-users, customer

acceptance testers, customer contract officers, customer management, future software

maintenance engineers, salespeople, etc. Anyone who could later derail the project if their

expectations aren't met should be included if possible.

Organizations vary considerably in their handling of requirements specifications. Ideally, the

requirements are spelled out in a document with statements such as 'The product shall.....'.

'Design' specifications should not be confused with 'requirements'; design specifications should

be traceable back to the requirements.

In some organizations requirements may end up in high-level project plans, functional

specification documents, design documents, or other documents at various levels of


detail. No matter what they are called, some type of documentation with detailed

requirements will be needed by testers in order to properly plan and execute tests. Without

such documentation, there will be no clear-cut way to determine if a software application is

performing correctly.

'Agile' methods such as XP use methods requiring close interaction and cooperation between

programmers and customers/end-users to iteratively develop requirements. The programmer

uses 'Test first' development to first create automated unit testing code, which essentially

embodies the requirements.
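'Test first' development can be sketched in miniature: the unit test is written before the code it exercises and doubles as the requirement. The discount rule and function name below are invented for illustration:

```python
# 'Test first': the test is written before the code, and doubles as the
# requirement. The function name and the discount rule are illustrative.

def test_discount():
    # Requirement: 10% discount for orders of 100 or more, otherwise none.
    assert discount(99) == 0
    assert discount(100) == 10

# The implementation is then written to make the test pass.
def discount(amount):
    return amount * 0.10 if amount >= 100 else 0

test_discount()  # the unit test embodies the requirement
```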

What steps are needed to develop and run software tests? 

The following are some of the steps to consider:

• Obtain requirements, functional design, and internal design specifications and other

necessary documents

• Obtain budget and schedule requirements

• Determine project-related personnel and their responsibilities, reporting requirements,

required standards and processes (such as release processes, change processes, etc.)

• Identify application's higher-risk aspects, set priorities, and determine scope and limitations

of tests

• Determine test approaches and methods - unit, integration, functional, system, load,

usability tests, etc.

• Determine test environment requirements (hardware, software, communications, etc.)

• Determine testware requirements (record/playback tools, coverage analyzers, test tracking,

problem/bug tracking, etc.)

• Determine test input data requirements

• Identify tasks, those responsible for tasks, and labor requirements

• Set schedule estimates, timelines, milestones

• Determine input equivalence classes, boundary value analyses, error classes

• Prepare test plan document and have needed reviews/approvals

• Write test cases

• Have needed reviews/inspections/approvals of test cases


• Prepare test environment and testware, obtain needed user manuals/reference

documents/configuration guides/installation guides, set up test tracking processes, set up

logging and archiving processes, set up or obtain test input data

• Obtain and install software releases

• Perform tests

• Evaluate and report results

• Track problems/bugs and fixes

• Retest as needed

• Maintain and update test plans, test cases, test environment, and testware through life cycle
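The equivalence-class and boundary-value step above can be illustrated with a small sketch. The age field and its valid range are assumptions made purely for the example:

```python
# Boundary value analysis for a hypothetical input field accepting ages 18-65.
VALID_MIN, VALID_MAX = 18, 65

def is_valid_age(age):
    return VALID_MIN <= age <= VALID_MAX

# Equivalence classes: below range, in range, above range.
# Boundary values: just outside, on, and just inside each edge.
boundary_cases = {
    17: False,  # just below lower bound (invalid class)
    18: True,   # on lower bound
    19: True,   # just inside lower bound
    64: True,   # just inside upper bound
    65: True,   # on upper bound
    66: False,  # just above upper bound (invalid class)
}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected
```

Six boundary tests cover the three equivalence classes far more cheaply than exhaustively testing every possible age.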

What's a 'test plan'? 

A software project test plan is a document that describes the objectives, scope, approach, and

focus of a software testing effort. The process of preparing a test plan is a useful way to think

through the efforts needed to validate the acceptability of a software product. The completed

document will help people outside the test group understand the 'why' and 'how' of product

validation. It should be thorough enough to be useful but not so thorough that no one outside

the test group will read it. The following are some of the items that might be included in a test

plan, depending on the particular project:

• Title

• Identification of software including version/release numbers

• Revision history of document including authors, dates, approvals

• Table of Contents

• Purpose of document, intended audience

• Objective of testing effort

• Software product overview

• Relevant related document list, such as requirements, design documents, other test plans,

etc.

• Relevant standards or legal requirements

• Traceability requirements

• Relevant naming conventions and identifier conventions


• Overall software project organization and personnel/contact-info/responsibilities

• Test organization and personnel/contact-info/responsibilities

• Assumptions and dependencies

• Project risk analysis

• Testing priorities and focus

• Scope and limitations of testing

• Test outline - a decomposition of the test approach by test type, feature, functionality,

process, system, module, etc. as applicable

• Outline of data input equivalence classes, boundary value analysis, error classes

• Test environment - hardware, operating systems, other required software, data

configurations, interfaces to other systems

• Test environment validity analysis - differences between the test and production systems and

their impact on test validity.

• Test environment setup and configuration issues

• Software migration processes

• Software CM processes

• Test data setup requirements

• Database setup requirements

• Outline of system-logging/error-logging/other capabilities, and tools such as screen capture

software, that will be used to help describe and report bugs

• Discussion of any specialized software or hardware tools that will be used by testers to help

track the cause or source of bugs

• Test automation - justification and overview

• Test tools to be used, including versions, patches, etc.

• Test script/test code maintenance processes and version control

• Problem tracking and resolution - tools and processes

• Project test metrics to be used

• Reporting requirements and testing deliverables

• Software entrance and exit criteria

• Initial sanity testing period and criteria


• Test suspension and restart criteria

• Personnel allocation

• Personnel pre-training needs

• Test site/location

• Outside test organizations to be utilized and their purpose, responsibilities, deliverables,

contact persons, and coordination issues

• Relevant proprietary, classified, security, and licensing issues.

• Open issues

• Appendix - glossary, acronyms, etc.

(See the Bookstore section's 'Software Testing' and 'Software QA' categories for useful books

with more information.)

What's a 'test case'? 

• A test case is a document that describes an input, action, or event and an expected response,

to determine if a feature of an application is working correctly. A test case should contain

particulars such as test case identifier, test case name, objective, test conditions/setup, input

data requirements, steps, and expected results.

• Note that the process of developing test cases can help find problems in the requirements or

design of an application, since it requires completely thinking through the operation of the

application. For this reason, it's useful to prepare test cases early in the development cycle if

possible.
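The particulars of a test case can be captured in a simple record; the fields mirror the list above, and the sample login scenario and its values are invented for illustration:

```python
# A test case record holding the particulars listed above.
# The login example and its values are purely illustrative.
test_case = {
    "id": "TC-001",
    "name": "Login with valid credentials",
    "objective": "Verify that a registered user can access the application",
    "setup": "User 'alice' exists with an assigned password",
    "input_data": {"username": "alice", "password": "s3cret"},
    "steps": [
        "Open the login page",
        "Enter username and password",
        "Click 'Log in'",
    ],
    "expected_result": "User is taken to the home page",
}

# Every particular named in the text is present in the record.
required = {"id", "name", "objective", "setup",
            "input_data", "steps", "expected_result"}
assert required <= set(test_case)
```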

What should be done after a bug is found? 

The bug needs to be communicated and assigned to developers that can fix it. After the

problem is resolved, fixes should be re-tested, and determinations made regarding

requirements for regression testing to check that fixes didn't create problems elsewhere. If a

problem-tracking system is in place, it should encapsulate these processes. A variety of

commercial problem-tracking/management software tools are available (see the 'Tools' section

for web resources with listings of such tools). The following are items to consider in the

tracking process:


• Complete information such that developers can understand the bug, get an idea of its

severity, and reproduce it if necessary.

• Bug identifier (number, ID, etc.)

• Current bug status (e.g., 'Released for Retest', 'New', etc.)

• The application name or identifier and version

• The function, module, feature, object, screen, etc. where the bug occurred

• Environment specifics, system, platform, relevant hardware specifics

• Test case name/number/identifier

• One-line bug description

• Full bug description

• Description of steps needed to reproduce the bug if not covered by a test case or if the

developer doesn't have easy access to the test case/test script/test tool

• Names and/or descriptions of file/data/messages/etc. used in test

• File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be

helpful in finding the cause of the problem

• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

• Was the bug reproducible?

• Tester name

• Test date

• Bug reporting date

• Name of developer/group/organization the problem is assigned to

• Description of problem cause

• Description of fix

• Code section/file/module/class/method that was fixed

• Date of fix

• Application version that contains the fix

• Tester responsible for retest

• Retest date

• Retest results

• Regression testing requirements


• Tester responsible for regression tests

• Regression testing results

A reporting or tracking process should enable notification of appropriate personnel at various

stages. For instance, testers need to know when retesting is needed, developers need to know

when bugs are found and how to get the needed information, and reporting/summary

capabilities are needed for managers.
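The tracking items above amount to a bug record plus a small status lifecycle that drives those notifications. A minimal sketch, with status names and transitions assumed for the example rather than taken from any particular tool:

```python
# Minimal bug-tracking sketch: a record plus an assumed status lifecycle.
# Status names and transitions are illustrative, not from a specific tool.
ALLOWED = {
    "New": {"Assigned"},
    "Assigned": {"Fixed"},
    "Fixed": {"Released for Retest"},
    "Released for Retest": {"Closed", "Reopened"},
    "Reopened": {"Assigned"},
}

class Bug:
    def __init__(self, bug_id, summary, severity):
        self.bug_id, self.summary, self.severity = bug_id, summary, severity
        self.status = "New"
        self.history = ["New"]

    def move(self, new_status):
        # Reject transitions the lifecycle does not allow.
        if new_status not in ALLOWED.get(self.status, set()):
            raise ValueError(f"cannot go from {self.status} to {new_status}")
        self.status = new_status
        self.history.append(new_status)

bug = Bug("BUG-42", "Search returns no results for quoted terms", severity=2)
for status in ("Assigned", "Fixed", "Released for Retest", "Closed"):
    bug.move(status)
assert bug.status == "Closed"
```

The `history` list is what lets the tracker notify the right people at each stage: testers on 'Released for Retest', developers on 'Assigned', managers via summaries.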

What is 'configuration management'? 

Configuration management covers the processes used to control, coordinate, and track: code,

requirements, documentation, problems, change requests, designs,

tools/compilers/libraries/patches, changes made to them, and who makes the changes. (See

the 'Tools' section for web resources with listings of configuration management tools. Also see

the Bookstore section's 'Configuration Management' category for useful books with more

information.)

What if the software is so buggy it can't really be tested at all? 

The best bet in this situation is for the testers to go through the process of reporting whatever

bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since

this type of problem can severely affect schedules, and indicates deeper problems in the

software development process (such as insufficient unit testing or insufficient integration

testing, poor design, improper build or release procedures, etc.) managers should be notified,

and provided with some documentation as evidence of the problem.

How can it be known when to stop testing? 

This can be difficult to determine. Many modern software applications are so complex, and run

in such an interdependent environment, that complete testing can never be done. Common

factors in deciding when to stop are:

• Deadlines (release deadlines, testing deadlines, etc.)

• Test cases completed with certain percentage passed

• Test budget depleted


• Coverage of code/functionality/requirements reaches a specified point

• Bug rate falls below a certain level

• Beta or alpha testing period ends
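Several of the stop factors above can be combined into one explicit exit check; the threshold numbers below are arbitrary examples, not recommendations:

```python
# Combining a few 'when to stop testing' factors into one check.
# The threshold numbers are arbitrary examples, not recommendations.
def can_stop_testing(pass_rate, coverage, open_critical_bugs,
                     min_pass_rate=0.95, min_coverage=0.80):
    """True when all of the exit criteria are met."""
    return (pass_rate >= min_pass_rate
            and coverage >= min_coverage
            and open_critical_bugs == 0)

assert can_stop_testing(pass_rate=0.97, coverage=0.85, open_critical_bugs=0)
assert not can_stop_testing(pass_rate=0.97, coverage=0.85, open_critical_bugs=2)
```

Writing the criteria down this explicitly, whatever the numbers, keeps the stop decision from being made by the deadline alone.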

What if there isn't enough time for thorough testing? 

Use risk analysis to determine where testing should be focused.

Since it's rarely possible to test every possible aspect of an application, every possible

combination of events, every dependency, or everything that could go wrong, risk analysis is

appropriate to most software development projects. This requires judgement skills, common

sense, and experience. (If warranted, formal methods are also available.) Considerations can

include:

• Which functionality is most important to the project's intended purpose?

• Which functionality is most visible to the user?

• Which functionality has the largest safety impact?

• Which functionality has the largest financial impact on users?

• Which aspects of the application are most important to the customer?

• Which aspects of the application can be tested early in the development cycle?

• Which parts of the code are most complex, and thus most subject to errors?

• Which parts of the application were developed in rush or panic mode?

• Which aspects of similar/related previous projects caused problems?

• Which aspects of similar/related previous projects had large maintenance expenses?

• Which parts of the requirements and design are unclear or poorly thought out?

• What do the developers think are the highest-risk aspects of the application?

• What kinds of problems would cause the worst publicity?

• What kinds of problems would cause the most customer service complaints?

• What kinds of tests could easily cover multiple functionalities?

• Which tests will have the best high-risk-coverage to time-required ratio?
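The questions above feed a risk-based prioritization. One common informal sketch scores each area by likelihood and impact and tests the highest scores first; the feature areas, the 1-5 scale, and the scores below are all invented for illustration:

```python
# Risk-based test prioritization: score = likelihood x impact (1-5 scales).
# The feature areas and their scores are invented for illustration.
areas = [
    ("online payments",  {"likelihood": 4, "impact": 5}),
    ("search",           {"likelihood": 3, "impact": 3}),
    ("profile settings", {"likelihood": 2, "impact": 2}),
]

def risk_score(factors):
    return factors["likelihood"] * factors["impact"]

# When time is limited, test the highest-risk areas first.
prioritized = sorted(areas, key=lambda a: risk_score(a[1]), reverse=True)
assert prioritized[0][0] == "online payments"
```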

What if the project isn't big enough to justify extensive testing? 

Consider the impact of project errors, not the size of the project. However, if extensive testing


is still not justified, risk analysis is again needed and the same considerations as described

previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then

do ad hoc testing, or write up a limited test plan based on the risk analysis.

What can be done if requirements are changing continuously? 

A common problem and a major headache.

• Work with the project's stakeholders early on to understand how requirements might change

so that alternate test plans and strategies can be worked out in advance, if possible.

• It's helpful if the application's initial design allows for some adaptability so that later changes

do not require redoing the application from scratch.

• If the code is well-commented and well-documented this makes changes easier for the

developers.

• Use rapid prototyping whenever possible to help customers feel sure of their requirements

and minimize changes.

• The project's initial schedule should allow for some extra time commensurate with the

possibility of changes.

• Try to move new requirements to a 'Phase 2' version of an application, while using the

original requirements for the 'Phase 1' version.

• Negotiate to allow only easily-implemented new requirements into the project, while moving

more difficult new requirements into future versions of the application.

• Be sure that customers and management understand the scheduling impacts, inherent risks,

and costs of significant requirements changes. Then let management or the customers (not the

developers or testers) decide if the changes are warranted - after all, that's their job.

• Balance the effort put into setting up automated testing with the expected effort required to

re-do them to deal with changes.

• Try to design some flexibility into automated test scripts.

• Focus initial automated testing on application aspects that are most likely to remain

unchanged.

• Devote appropriate effort to risk analysis of changes to minimize regression testing needs.

• Design some flexibility into test cases (this is not easily done; the best bet might be to


minimize the detail in the test cases, or set up only higher-level generic-type test plans)

• Focus less on detailed test plans and test cases and more on ad hoc testing (with an

understanding of the added risk that this entails).

What if the application has functionality that wasn't in the requirements? 

It may take serious effort to determine if an application has significant unexpected or hidden

functionality, and it would indicate deeper problems in the software development process. If

the functionality isn't necessary to the purpose of the application, it should be removed, as it

may have unknown impacts or dependencies that were not taken into account by the designer

or the customer. If not removed, design information will be needed to determine added testing

needs or regression testing needs. Management should be made aware of any significant added

risks as a result of the unexpected functionality. If the functionality only affects areas such as

minor improvements in the user interface, for example, it may not be a significant risk.

How can Software QA processes be implemented without stifling productivity? 

By implementing QA processes slowly over time, using consensus to reach agreement on

processes, and adjusting and experimenting as an organization grows and matures, productivity

will be improved instead of stifled. Problem prevention will lessen the need for problem

detection, panics and burn-out will decrease, and there will be improved focus and less wasted

effort. At the same time, attempts should be made to keep processes simple and efficient,

minimize paperwork, promote computer-based processes and automated tracking and

reporting, minimize time required in meetings, and promote training as part of the QA process.

However, no one - especially talented technical types - likes rules or bureaucracy, and in the

short run things may slow down a bit. A typical scenario would be that more days of planning

and development will be needed, but less time will be required for late-night bug-fixing and

calming of irate customers.

What if an organization is growing so fast that fixed QA processes are impossible? 

This is a common problem in the software industry, especially in new technology areas. There

is no easy solution in this situation, other than:


• Hire good people

• Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer

• Everyone in the organization should be clear on what 'quality' means to the customer

How does a client/server environment affect testing? 

Client/server applications can be quite complex due to the multiple dependencies among

clients, data communications, hardware, and servers. Thus testing requirements can be

extensive. When time is limited (as it usually is) the focus should be on integration and system

testing. Additionally, load/stress/performance testing may be useful in determining

client/server application limitations and capabilities. There are commercial tools to assist with

such testing. (See the 'Tools' section for web resources with listings that include these kinds of

test tools.)

How can World Wide Web sites be tested? 

Web sites are essentially client/server applications - with web servers and 'browser' clients.

Consideration should be given to the interactions between html pages, TCP/IP communications,

Internet connections, firewalls, applications that run in web pages (such as applets, javascript,

plug-in applications), and applications that run on the server side (such as cgi scripts, database

interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a

wide variety of servers and browsers, various versions of each, small but sometimes significant

differences between them, variations in connection speeds, rapidly changing technologies, and

multiple standards and protocols. The end result is that testing for web sites can become a

major ongoing effort. Other considerations might include:

• What are the expected loads on the server (e.g., number of hits per unit time), and what

kind of performance is required under such loads (such as web server response time, database

query response times). What kinds of tools will be needed for performance testing (such as web

load testing tools, other tools already in house that can be adapted, web robot downloading

tools, etc.)?

• Who is the target audience? What kind of browsers will they be using? What kind of

connection speeds will they be using? Are they intra-organization (thus with likely high


connection speeds and similar browsers) or Internet-wide (thus with a wide variety of

connection speeds and browser types)?

• What kind of performance is expected on the client side (e.g., how fast should pages appear,

how fast should animations, applets, etc. load and run)?

• Will down time for server and content maintenance/upgrades be allowed? How much?

• What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it

expected to do? How can it be tested?

• How reliable are the site's Internet connections required to be? And how does that affect

backup system or redundant connection requirements and testing?

• What processes will be required to manage updates to the web site's content, and what are

the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?

• Which HTML specification will be adhered to? How strictly? What variations will be allowed

for targeted browsers?

• Will there be any standards or requirements for page appearance and/or graphics throughout

a site or parts of a site?

• How will internal and external links be validated and updated? How often?

• Can testing be done on the production system, or will a separate test system be required?

• How are browser caching, variations in browser option settings, dial-up connection

variabilities, and real-world internet 'traffic congestion' problems to be accounted for in

testing?

• How extensive or customized are the server logging and reporting requirements; are they

considered an integral part of the system and do they require testing?

• How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained,

tracked, controlled, and tested?
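The internal-link validation mentioned in the list above can be sketched with the standard library's HTML parser. The page content and the set of known pages below are stand-ins; a real checker would crawl the site and actually fetch each URL:

```python
from html.parser import HTMLParser

# Extract href targets so internal links can be checked against known pages.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Stand-in page content; a real checker would crawl and fetch each URL.
page = '<a href="/about.html">About</a> <a href="/missing.html">Gone</a>'
known_pages = {"/index.html", "/about.html"}

parser = LinkExtractor()
parser.feed(page)
dead = [link for link in parser.links if link not in known_pages]
assert dead == ["/missing.html"]
```

Run periodically, a checker like this answers the 'how often?' question with a schedule rather than ad hoc spot checks.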

Some sources of site security information include the Usenet newsgroup

'comp.security.announce' and links concerning web site security in the 'Other Resources'

section.

Some usability guidelines to consider - these are subjective and may or may not apply to a

given situation (Note: more information on usability testing issues can be found in articles

about web site usability in the 'Other Resources' section):


• Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger,

provide internal links within the page.

• The page layouts and design elements should be consistent throughout a site, so that it's

clear to the user that they're still within a site.

• Pages should be as browser-independent as possible, or pages should be provided or

generated based on the browser-type.

• All pages should have links external to the page; there should be no dead-end pages.

• The page owner, revision date, and a link to a contact person or organization should be

included on each page.

Many new web site test tools have appeared in recent years and more than 280 of them are

listed in the 'Web Test Tools' section.

Software Life Cycle

 The software life cycle typically includes the following: requirements analysis,

design, coding, testing, installation and maintenance. In between, there can

be a requirement to provide operations and support activities for the

product.

FOREWORD

Beginners Guide To Software Testing introduces a practical approach to

testing software. It bridges the gap between theoretical knowledge and real

world implementation. This article helps you gain an insight to Software

 Testing - understand technical aspects and the processes followed in a real

working environment.


Who will benefit? 

Beginners. For those of you who wish to mould your theoretical software

engineering knowledge into a practical approach to working in the real world.

 Those who wish to take up Software Testing as a profession.

Developers! This is an era where you need to be an “all-rounder”. It is

advantageous for developers to possess testing capabilities to test the

application beforehand. This will help reduce overhead on the testing team.

Already a Tester! You can refresh all your testing basics and techniques

and gear up for certifications in Software Testing.

An earnest suggestion: No matter which profession you choose, it is

advisable that you possess the following skills:

- Good communication skills – oratory and writing

- Fluency in English

- Good typing skills

By the time you finish reading this article, you will be aware of all the

techniques and processes that improve your efficiency, skills and confidence

to jump-start into the field of Software Testing.


1. Overview

The Big Picture 

All software problems can be termed bugs. A software bug usually occurs


when the software does not do what it is intended to do or does something

that it is not intended to do. Flaws in specifications, design, code or other

reasons can cause these bugs. Identifying and fixing bugs in the early stages

of the software is very important as the cost of fixing bugs grows over time.

So, the goal of a software tester is to find bugs and find them as early as

possible and make sure they are fixed.

 Testing is context-based and risk-driven. It requires a methodical and

disciplined approach to finding bugs. A good software tester needs to build

credibility and possess the attitude to be explorative, troubleshooting,

relentless, creative, diplomatic and persuasive.

Contrary to the perception that testing starts only after the completion of

the coding phase, it actually begins even before the first line of code can be

written. In the life cycle of the conventional software product, testing begins

at the stage when the specifications are written, i.e. from testing the product

specifications or product spec. Finding bugs at this stage can save huge

amounts of time and money.

Once the specifications are well understood, you are required to design and

execute the test cases. Selecting the appropriate technique that reduces the

number of tests that cover a feature is one of the most important things that

you need to take into consideration while designing these test cases. Test

cases need to be designed to cover all aspects of the software, i.e. security,

database, functionality (critical and general) and the user interface. Bugs

surface when the test cases are executed.

As a tester you might have to perform testing under different circumstances,

i.e. the application could be in the initial stages or undergoing rapid changes,

you have less than enough time to test, the product might be developed


using a life cycle model that does not support much formal testing or

retesting. Further, testing on different operating systems, browsers and

configurations has to be taken care of.

Reporting a bug may be the most important and sometimes the most difficult

task that you as a software tester will perform. By using various tools and

clearly communicating to the developer, you can ensure that the bugs you

find are fixed.

Using automated tools to execute tests, run scripts and track bugs

improves efficiency and effectiveness of your tests. Also, keeping pace with

the latest developments in the field will augment your career as a software

test engineer.

What is software? Why should it be tested? 

Software is a series of instructions for the computer that perform a particular

task, called a program; the two major categories of software are system

software and application software. System software is made up of control

programs. Application software is any program that processes data for the

user (spreadsheet, word processor, payroll, etc.).

A software product should only be released after it has gone through a proper

process of development, testing and bug fixing. Testing looks at areas such

as performance, stability and error handling by setting up test scenarios

under controlled conditions and assessing the results. This is exactly why any

software has to be tested. It is important to note that software is mainly

tested to see that it meets the customers’ needs and that it conforms to the

standards. It is a usual norm that software is considered to be of good quality if it

meets the user requirements.


What is Quality? How important is it? 

Quality can briefly be defined as “a degree of excellence”. High quality

software usually conforms to the user requirements. A customer’s idea of 

quality may cover a breadth of features - conformance to specifications, good

performance on platform(s)/configurations, completely meets operational

requirements (even if not specified!), compatibility to all the end-user

equipment, no negative impact on existing end-user base at introduction

time.

Quality software saves a good amount of time and money. Because the software

will have fewer defects, this saves time during testing and maintenance

phases. Greater reliability contributes to an immeasurable increase in

customer satisfaction as well as lower maintenance costs. Because

maintenance represents a large portion of all software costs, the overall cost

of the project will most likely be lower than similar projects.

Following are two cases that demonstrate the importance of software quality:

Ariane 5 crash, June 4, 1996 - Maiden flight of the European Ariane 5 launcher crashed about 40 seconds after takeoff
- Loss was about half a billion dollars
- Explosion was the result of a software error
- Uncaught exception due to floating-point error: conversion from a 64-bit integer to a 16-bit signed integer applied to a larger than expected number
- Module was re-used without proper testing from Ariane 4
- Error was not supposed to happen with Ariane 4
- No exception handler
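The narrowing conversion can be sketched in a few lines of Python (an illustrative analogy only; the actual flight software was written in Ada, and the values below are made up):

```python
import ctypes

def to_int16(value: int) -> int:
    """Truncate a 64-bit integer to a signed 16-bit integer,
    as an unchecked narrowing conversion would."""
    return ctypes.c_int16(value).value

# A value that fits in 16 bits converts cleanly...
assert to_int16(30_000) == 30_000
# ...but a larger-than-expected value silently wraps around:
# 70_000 becomes 4_464 with no exception raised.
assert to_int16(70_000) == 4_464
```

Without an exception handler, a wrapped value like this flows on into later computations as if it were valid.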


Mars Climate Orbiter - September 23, 1999 - Mars Climate Orbiter,

disappeared as it began to orbit Mars.

- Cost about US$125 million

- Failure due to error in a transfer of information between a team in Colorado

and a team in California

- One team used English units (e.g., inches, feet and pounds) while the other

used metric units for a key spacecraft operation.
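The unit mix-up is easy to reproduce in a short Python sketch; the reading below is hypothetical, chosen only to show the scale of the error:

```python
# Thruster impulse data: one team produced pound-force seconds (lbf*s),
# the other consumed newton-seconds (N*s).
LBF_S_TO_N_S = 4.448222  # conversion factor, lbf*s -> N*s

def to_newton_seconds(impulse_lbf_s: float) -> float:
    """Convert an impulse from pound-force seconds to newton-seconds."""
    return impulse_lbf_s * LBF_S_TO_N_S

reported = 10.0  # hypothetical reading, in lbf*s
correct = to_newton_seconds(reported)  # converted properly
wrong = reported                       # raw number treated as N*s

# Skipping the conversion understates the impulse by a factor of ~4.45.
assert correct / wrong > 4.4
```

A single missing conversion like this is exactly the kind of interface defect that integration testing between the two teams' components should catch.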

What exactly does a software tester do?

Apart from exposing faults (“bugs”) in a software product and confirming that the

program meets the program specification, as a test engineer you need to

create test cases, procedures, scripts and generate data. You execute test

procedures and scripts, analyze standards and evaluate results of 

system/integration/regression testing. You also...

· Speed up development process by identifying bugs at an early stage (e.g., specification stage)

· Reduce the organization's risk of legal liability

· Maximize the value of the software

· Assure successful launch of the product, save money, time and reputation of 

the company by discovering bugs and design flaws at an early stage before

failures occur in production, or in the field

· Promote continual improvement

What makes a good tester? 

As software engineering is now considered a technical engineering profession, it is important that software test engineers possess certain


traits with a relentless attitude to make them stand out.

Here are a few.

· Know the technology. Knowledge of the technology in which the

application is developed is an added advantage to any tester. It helps design

better and more powerful test cases based on the weaknesses or flaws of the

technology. Good testers know what it supports and what it doesn’t, so

concentrating on these lines will help them break the application quickly.

· Perfectionist and a realist. Being a perfectionist will help testers spot the

problem and being a realist helps know at the end of the day which problems

are really important problems. You will know which ones require a fix and

which ones don’t.

· Tactful, diplomatic and persuasive. Good software testers are tactful

and know how to break the news to the developers. They are diplomatic while

convincing the developers of the bugs and persuade them when necessary

and have their bug(s) fixed. It is important to be critical of the issue and not

let the person who developed the application be taken aback by the findings.

· An explorer. A bit of creativity and an attitude to take risks helps the testers venture into unknown situations and find bugs that otherwise will be

looked over.

· Troubleshoot. Troubleshooting and figuring out why something doesn’t

work helps testers be confident and clear in communicating the defects to

the developers.

· Possess people skills and tenacity. Testers can face a lot of resistance from programmers. Being socially smart and diplomatic doesn't mean being indecisive. The best testers are both socially adept and tenacious where it

matters.

· Organized. The best testers realize very well that they too can make mistakes

and don’t take chances. They are very well organized and have checklists,

use files, facts and figures to support their findings that can be used as


evidence and double-check their findings.

· Objective and accurate. They are very objective and know what they

report and so convey impartial and meaningful information that keeps politics

and emotions out of the message. Reporting inaccurate information means losing a

little credibility. Good testers make sure their findings are accurate and

reproducible.

· Defects are valuable. Good testers learn from them. Each defect is an

opportunity to learn and improve. A defect found early substantially costs

less when compared to the one found at a later stage. Defects can cause

serious problems if not managed properly. Learning from defects helps prevent future problems, track improvements, and improve prediction and estimation.

Guidelines for new testers

· Testing can’t show that bugs don’t exist. An important reason for

testing is to prevent defects. You can perform your tests, find and report

bugs, but at no point can you guarantee that there are no bugs.

· It is impossible to test a program completely. Unfortunately this is not

possible even with the simplest program because – the number of inputs is

very large, number of outputs is very large, number of paths through the

software is very large, and the specification is subject to frequent changes.

· You can’t guarantee quality. As a software tester, you cannot test

everything and are not responsible for the quality of the product. The main way that a tester can fail is to fail to report accurately a defect you have observed. It is important to remember that testers have little control over quality.

· Target environment and intended end user. Anticipating and testing

the application in the environment the user is expected to use is one of the major


factors that should be considered. Also, considering if the application is a

single user system or multi user system is important for demonstrating the

ability for immediate readiness when necessary. The error case of Disney’s

Lion King illustrates this. The Disney Company released its first multimedia CD-

ROM game for children, The Lion King Animated Storybook. It was highly

promoted and the sales were huge. Soon there were reports that buyers were

unable to get the software to work. It worked on a few systems – likely the

ones that the Disney programmers used to create the game – but not on the

most common systems that the general public used.

· No application is 100% bug free. It is more reasonable to recognize

there are priorities, which may leave some less critical problems unsolved or

unidentified. Simple case is the Intel Pentium bug. Enter the following

equation into your PC’s calculator: (4195835 / 3145727) * 3145727 –

4195835. If the answer is zero, your computer is just fine. If you get anything

else, you have an old Intel Pentium CPU with a floating-point division bug.
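The check can also be verified programmatically. With exact rational arithmetic the expression is identically zero, and on a correct CPU the floating-point version differs from zero only by rounding noise (the flawed FDIV unit reportedly produced an error on the order of hundreds):

```python
from fractions import Fraction

x, y = 4195835, 3145727

# Exact arithmetic: (x / y) * y - x is identically zero.
assert Fraction(x, y) * y - x == 0

# Double-precision floating point: any deviation from zero here is
# tiny rounding error, nothing like the flawed Pentium's result.
fp_result = (x / y) * y - x
assert abs(fp_result) < 1e-6
```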

· Be the customer. Try to use the system as a lay user. To get a glimpse of 

this, get a person who has no idea of the application to use it for a while and

you will be amazed to see the number of problems the person seems to come across. As you can see, there is no procedure involved. Doing this could

actually cause the system to encounter an array of unexpected tests –

repetition, stress, load, race etc.

· Build your credibility. Credibility is like quality that includes reliability,

knowledge, consistency, reputation, trust, attitude and attention to detail. It

is not instant but should be built over time and gives voice to the testers in

the organization. Your keys to build credibility – identify your strengths andweaknesses, build good relations, demonstrate competency, be willing to

admit mistakes, re-assess and adjust.

· Test what you observe. It is very important that you test what you can

observe and have access to. Writing creative test cases can help only when

you have the opportunity to observe the results. So, assume nothing.


· Not all bugs you find will be fixed. Deciding which bugs will be fixed and

which won’t is a risk-based decision. Several reasons why your bug might not be fixed are that there is not enough time, the bug is dismissed in favour of a new feature, fixing it might be very risky, or it may not be worth it because it occurs infrequently or has a workaround where the user can prevent or avoid

the bug. Making a wrong decision can be disastrous.

· Review competitive products. Gaining a good insight into various

products of the same kind and getting to know their functionality and general

behavior will help you design different test cases and to understand the

strengths and weaknesses of your application. This will also enable you to

add value and suggest new features and enhancements to your product.

· Follow standards and processes. As a tester, you need to conform to

the standards and guidelines set by the organization. These standards

pertain to reporting hierarchy, coding, documentation, testing, reporting

bugs, using automated tools etc.

2. Introduction

Software Life Cycle 

 The software life cycle typically includes the following: requirements analysis,

design, coding, testing, installation and maintenance. In between, there can

be a requirement to provide Operations and support activities for the

product.

Requirements Analysis. Software organizations provide solutions to

customer requirements by developing appropriate software that best suits

their specifications. Thus, the life of software starts with origin of 

requirements. Very often, these requirements are vague, emergent and

always subject to change.


Analysis is performed to: conduct an in-depth analysis of the proposed project, evaluate technical feasibility, discover how to partition the system, identify which areas of the requirements need further elaboration from the customer, identify the impact of changes to the requirements, and identify which requirements should be allocated to which components.

Design and Specifications. The outcome of requirements analysis is the

requirements specification. Using this, the overall design for the intended

software is developed.

 Activities in this phase - Perform Architectural Design for the software, Design

Database (If applicable), Design User Interfaces, Select or Develop Algorithms

(If Applicable), Perform Detailed Design.

Coding. The development process tends to run iteratively through these

phases rather than linearly; several models (spiral, waterfall etc.) have been

proposed to describe this process.

 Activities in this phase - Create Test Data, Create Source, Generate Object

Code, Create Operating Documentation, Plan Integration, Perform Integration.

Testing. The process of using the developed system with the intent to find

errors. Defects/flaws/bugs found at this stage will be sent back to the developer for a fix and have to be re-tested. This phase is iterative as long as

the bugs are fixed to meet the requirements.

 Activities in this phase - Plan Verification and Validation, Execute Verification

and validation Tasks, Collect and Analyze Metric Data, Plan Testing, Develop


 Test Requirements, Execute Tests.

Installation. The developed and tested software will finally need to be

installed at the client place. Careful planning has to be done to avoid

problems to the user after installation is done.

 Activities in this phase - Plan Installation, Distribution of Software, Installation

of Software, Accept Software in Operational Environment.

Operation and Support. Support activities are usually performed by the

organization that developed the software. Both the parties usually decide on

these activities before the system is developed.

 Activities in this phase - Operate the System, Provide Technical Assistance

and Consulting, Maintain Support Request Log.

Maintenance. The process does not stop once it is completely implemented

and installed at the user's site; this phase undertakes development of new features, enhancements etc.

 Activities in this phase - Reapplying Software Life Cycle.

Various Life Cycle Models 

The way you approach a particular application for testing greatly depends on the life cycle model it follows. This is because each life cycle model places

emphasis on different aspects of the software i.e. certain models provide

good scope and time for testing whereas some others don’t. So, the number

of test cases developed, features covered, time spent on each issue depends

on the life cycle model the application follows.


No matter what the life cycle model is, every application undergoes the same

phases described above as its life cycle.

Following are a few software life cycle models, their advantages and

disadvantages.

Waterfall Model 

Strengths:

•Emphasizes completion of one phase before moving on

•Emphasises early planning, customer input, and design

•Emphasises testing as an integral part of the life cycle
•Provides quality gates at each life cycle phase

Weaknesses:

•Depends on capturing and freezing requirements early in the life cycle

•Depends on separating requirements from design
•Feedback is only from testing phase to any previous stage

•Not feasible in some organizations

•Emphasises products rather than processes

Prototyping Model 

Strengths:
•Requirements can be set earlier and more reliably
•Requirements can be communicated more clearly and completely between developers and clients

•Requirements and design options can be investigated quickly and with low

cost


•More requirements and design faults are caught early

Weaknesses:

•Requires a prototyping tool and expertise in using it – a cost for the

development organisation

•The prototype may become the production system

Spiral Model 

Strengths:

•It promotes reuse of existing software in early stages of development

•Allows quality objectives to be formulated during development

•Provides preparation for eventual evolution of the software product

•Eliminates errors and unattractive alternatives early.

•It balances resource expenditure.

•Doesn’t involve separate approaches for software development and software maintenance.
•Provides a viable framework for integrated hardware-software system development.

Weaknesses:

•This process needs, or is usually associated with, Rapid Application Development, which is very difficult practically.
•The process is more difficult to manage and needs a very different approach as opposed to the waterfall model (the waterfall model has management techniques like Gantt charts to assess progress).

Software Testing Life Cycle 

The Software Testing Life Cycle consists of seven (generic) phases: 1) Planning, 2)


Analysis, 3) Design, 4) Construction, 5) Testing Cycles, 6) Final Testing and

Implementation and 7) Post Implementation. Each phase in the life cycle is

described with the respective activities.

Planning. Planning High Level Test plan, QA plan (quality goals), identify –

reporting procedures, problem classification, acceptance criteria, databases

for testing, measurement criteria (defect quantities/severity level and defect

origin), project metrics and finally begin the schedule for project testing. Also,

plan to maintain all test cases (manual or automated) in a database.

Analysis. Involves activities that - develop functional validation based on

Business Requirements (writing test cases basing on these details), develop

test case format (time estimates and priority assignments), develop test

cycles (matrices and timelines), identify test cases to be automated (if 

applicable), define area of stress and performance testing, plan the test

cycles required for the project and regression testing, define procedures for

data maintenance (backup, restore, validation), review documentation.

Design. Activities in the design phase - Revise test plan based on changes,

revise test cycle matrices and timelines, verify that test plan and cases are in

a database or repository, continue to write test cases and add new ones based

on changes, develop Risk Assessment Criteria, formalize details for Stress

and Performance testing, finalize test cycles (number of test case per cycle

based on time estimates per test case and priority), finalize the Test Plan,

(estimate resources to support development in unit testing).

Construction (Unit Testing Phase). Complete all plans, complete Test Cycle

matrices and timelines, complete all test cases (manual), begin Stress and

Performance testing, test the automated testing system and fix bugs,

(support development in unit testing), run QA acceptance test suite to certify


software is ready to turn over to QA.

Test Cycle(s) / Bug Fixes (Re-Testing/System Testing Phase). Run the test

cases (front and back end), bug reporting, verification, revise/add test cases

as required.

Final Testing and Implementation (Code Freeze Phase). Execution of all

front end test cases - manual and automated, execution of all back end test

cases - manual and automated, execute all Stress and Performance tests,

provide on-going defect tracking metrics, provide on-going complexity and

design metrics, update estimates for test cases and test plans, document test

cycles, regression testing, and update accordingly.

Post Implementation. Post implementation evaluation meeting can be

conducted to review entire project. Activities in this phase - Prepare final

Defect Report and associated metrics, identify strategies to prevent similar

problems in future project, automation team - 1) Review test cases to

evaluate other cases to be automated for regression testing, 2) Clean up automated test cases and variables, and 3) Review the process of integrating results from automated testing with results from manual testing.

Testing tools

Black box testing - not based on any knowledge of internal design or code. Tests are

based on requirements and functionality.

White box testing - based on knowledge of the internal logic of an application's code.

Tests are based on coverage of code statements, branches, paths, conditions.

Unit testing - the most 'micro' scale of testing; to test particular functions or code

modules. Typically done by the programmer and not by testers, as it requires detailed

knowledge of the internal program design and code. Not always easily done unless the

application has a well-designed architecture with tight code; may require developing test

driver modules or test harnesses.
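A minimal sketch of such a test with a small driver, assuming a hypothetical `word_count` function as the unit under test:

```python
import unittest

def word_count(text: str) -> int:
    """Hypothetical unit under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Each test exercises one behaviour of the unit in isolation.
    def test_simple_sentence(self):
        self.assertEqual(word_count("software must be tested"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

# A tiny test driver: load and run the suite programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

In practice such drivers are usually wired into the build so the unit tests run automatically on every change.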


Incremental integration testing - continuous testing of an application as new

functionality is added; requires that various aspects of an application's functionality be

independent enough to work separately before all parts of the program are completed, or 

that test drivers be developed as needed; done by programmers or by testers.

Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications,

client and server applications on a network, etc. This type of testing is especially relevant

to client/server and distributed systems.

Functional testing - black-box type testing geared to functional requirements of an

application; this type of testing should be done by testers. This doesn't mean that the

programmers shouldn't check that their code works before releasing it (which of course

applies to any stage of testing.)

System testing - black-box type testing that is based on overall requirements

specifications; covers all combined parts of a system.

End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves

testing of a complete application environment in a situation that mimics real-world use,

such as interacting with a database, using network communications, or interacting with

other hardware, applications, or systems if appropriate.

Sanity testing - typically an initial testing effort to determine if a new software version is

performing well enough to accept it for a major testing effort. For example, if the new

software is crashing systems every 5 minutes, bogging down systems to a crawl, or 

destroying databases, the software may not be in a 'sane' enough condition to warrant

further testing in its current state.

Regression testing - re-testing after fixes or modifications of the software or its

environment. It can be difficult to determine how much re-testing is needed, especially

near the end of the development cycle. Automated testing tools can be especially useful

for this type of testing.

Acceptance testing - final testing based on specifications of the end-user or customer,

or based on use by end-users/customers over some limited period of time.

Load testing - testing an application under heavy loads, such as testing of a web

site under a range of loads to determine at what point the system's response time degrades or fails.

Stress testing - term often used interchangeably with 'load' and 'performance'

testing. Also used to describe such tests as system functional testing while under 

unusually heavy loads, heavy repetition of certain actions or inputs, input of large

numerical values, large complex queries to a database system, etc.


Performance testing - term often used interchangeably with 'stress' and 'load' testing.

Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements

documentation or QA or Test Plans.

Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend

on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not

appropriate as usability testers.

Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

Recovery testing - testing how well a system recovers from crashes, hardware failures,

or other catastrophic problems.

Security testing - testing how well the system protects against unauthorized internal or 

external access, willful damage, etc; may require sophisticated testing techniques.

Compatibility testing - testing how well software performs in a particular 

hardware/software/operating system/network/etc. environment.

Exploratory testing - often taken to mean a creative, informal software test that is not

based on formal test plans or test cases; testers may be learning the software as they

test it.

Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers

have significant understanding of the software before testing it.

User acceptance testing - determining if software is satisfactory to an end-user or 

customer.

Comparison testing - comparing software weaknesses and strengths to competing

products.

Alpha testing - testing of an application when development is nearing completion; minor 

design changes may still be made as a result of such testing. Typically done by end-

users or others, not by programmers or testers.

Beta testing - testing when development and testing are essentially completed and final

bugs and problems need to be found before final release. Typically done by end-users or 

others, not by programmers or testers.

Mutation testing - a method for determining if a set of test data or test cases is useful,

by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires

large computational resources.
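A toy Python sketch of the idea, with a hypothetical function and a single hand-seeded mutant (real mutation tools generate mutants automatically):

```python
def is_adult(age: int) -> bool:
    """Original unit under test."""
    return age >= 18

def mutant_is_adult(age: int) -> bool:
    """Deliberately seeded 'bug': >= mutated to >."""
    return age > 18

def kills_mutant(cases):
    """A test suite 'kills' the mutant if any case distinguishes
    the mutant's output from the original's."""
    return any(is_adult(a) != mutant_is_adult(a) for a in cases)

weak_suite = [17, 21]        # misses the boundary: mutant survives
strong_suite = [17, 18, 21]  # boundary case exposes the mutant

assert not kills_mutant(weak_suite)
assert kills_mutant(strong_suite)
```

A surviving mutant signals a gap in the test data, here the missing boundary value 18.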

Some more questions


Q - What are test case formats widely used in web-based testing? 

A - Web-based applications deal with live web portals. Hence the test cases can be broadly classified as front-end, back-end, security, navigation-based, field-validation, and database-related cases. The test cases are written based on the

functional specifications and wire-frames.

Q - How to prepare test case and test description for job application?  

A - The question seems vague. Take Naukri as an example: it is one of the biggest job sites globally and has its own complex functionality. Normally a test case is derived from the SRS (or FRS), and the test description is always derived from the test case. The test description is nothing but the steps that have to be followed for the test case you wrote, and the test case is what compares the expected and the actual (outcome) result.
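A minimal sketch of such a record in Python (the field names and values are illustrative, not a standard format):

```python
# A test-case record: the description holds the steps, and
# execution compares the expected against the actual outcome.
test_case = {
    "id": "TC-001",
    "description": [
        "Open the login page",
        "Enter valid credentials",
        "Click the Login button",
    ],
    "expected": "User lands on the dashboard",
}

def execute(case, actual_outcome: str) -> str:
    """Compare expected vs. actual and return a verdict."""
    return "PASS" if actual_outcome == case["expected"] else "FAIL"

assert execute(test_case, "User lands on the dashboard") == "PASS"
assert execute(test_case, "Error 500") == "FAIL"
```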

Q - What is the difference between Functional and Technical bugs? Give an

example for each.

Functional bugs: bugs found when testing the functionality of the AUT.
Technical bugs: related to the communication the AUT makes with components such as hardware or a database, e.g., where these could not be connected properly.

Q - Give the proper sequence for the following testing types: Regression, Retesting, Functional, Sanity and Performance Testing. 

A - The proper sequence in which these types of testing are performed is - Sanity, Functional, Regression, Retesting, Performance.

Q - How would you test MS Vista without any requirements document? 

A - Know what changes were made from the older version of Windows to the newer version with the help of the user release notes released with Windows Vista. Based on that, formulate the test cases and execute them.

Q - What is verification? validation?


Verification typically involves reviews and meetings to evaluate documents, plans, code,

requirements, and specifications. This can be done with checklists, issues lists,

walkthroughs, and inspection meetings. Validation typically involves actual testing and

takes place after verifications are completed. The term 'IV & V' refers to Independent

Verification and Validation.

Q - How can new Software QA processes be introduced in an existing

organization?

A lot depends on the size of the organization and the risks involved. For large

organizations with high-risk (in terms of lives or property) projects, serious management

buy-in is required and a formalized QA process is necessary.

Where the risk is lower, management and organizational buy-in and QA implementation

may be a slower, step-at-a-time process. QA processes should be balanced with

productivity so as to keep bureaucracy from getting out of hand.

For small groups or projects, a more ad-hoc process may be appropriate, depending on

the type of customers and projects. A lot will depend on team leads or managers,

feedback to developers, and ensuring adequate communications among customers,

managers, developers, and testers.

The most value for effort will often be in (a) requirements management processes, with a

goal of clear, complete, testable requirement specifications embodied in requirements or 

design documentation, or in 'agile'-type environments extensive continuous coordination

with end-users, (b) design inspections and code inspections, and (c) post-

mortems/retrospectives.

Q - Why is it often hard for management to get serious about quality assurance?

Solving problems is a high-visibility process; preventing problems is low-visibility. This is

illustrated by an old parable: In ancient China there was a family of healers, one of 

whom was known throughout the land and employed as a physician to a great lord.


Q - What's an 'inspection'?

An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes.
The subject of the inspection is typically a document such as a requirements spec or a

test plan, and the purpose is to find problems and see what's missing, not to fix anything.

Attendees should prepare for this type of meeting by reading through the document; most

problems will be found during this preparation. The result of the inspection meeting

should be a written report.

Q - What is a 'walkthrough'?

A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or 

no preparation is usually required.

Q - What makes a good test engineer?

A good test engineer has a 'test to break' attitude, an ability to take the point of view of 

the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy

are useful in maintaining a cooperative relationship with developers, and an ability to

communicate with both technical (developers) and non-technical (customers,

management) people is useful. Previous software development experience can be

helpful as it provides a deeper understanding of the software development process,

gives the tester an appreciation for the developers' point of view, and reduces the learning

curve in automated test tool programming. Judgment skills are needed to assess high-

risk areas of an application on which to focus testing efforts when time is limited.

Q - What makes a good Software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally, they

must be able to understand the entire software development process and how it can fit

into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early

stages of implementing QA processes, patience and diplomacy are especially needed.

An ability to find problems as well as to see 'what's missing' is important for inspections

and reviews.

Q - What is agile testing?


Agile testing is used whenever customer requirements are changing dynamically.

Q - If we have no SRS or BRS but we have test cases, do you execute the test cases blindly or do you follow any other process?

A - The test cases would have detailed steps of what the application is supposed to do:
1) Functionality of the application.
2) In addition, you can refer to the back end, i.e., look into the database, to gain more knowledge of the application.

2. How you will know when to stop testing?

A: Testing will be stopped when we come to know that there are only some minor bugs left which may not affect the functionality of the application, and when all the test cases have been executed successfully.

3. What are the metrics generally you use in testing?

A: These software metrics will be taken care of by the SQA team.

Ex: defect removal efficiency.

5. What is ECP and how you will prepare test cases?

A: ECP (equivalence class partitioning) is a software testing technique which is used for writing test cases. It breaks the input range into equal partitions. The main purposes of this technique are:

1) To reduce the number of test cases to a necessary minimum.

2) To select the right test cases to cover all the scenarios.
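A minimal sketch of the idea (the age field and its 18-60 boundaries are hypothetical, not taken from this document): the input space is split into valid and invalid classes, and one representative value per class is tested instead of every possible input.

```python
# Equivalence class partitioning for a hypothetical age field
# that accepts values 18..60 inclusive.

def is_valid_age(age):
    return 18 <= age <= 60

# One representative test value per equivalence class:
#   below range | in range | above range
partitions = {
    "invalid_low":  10,   # stands for every value < 18
    "valid":        35,   # stands for every value in 18..60
    "invalid_high": 70,   # stands for every value > 60
}

for name, value in partitions.items():
    print(name, value, is_valid_age(value))
```

Three test cases cover the whole input range, which is the "necessary minimum" the answer above refers to.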

6. Test Plan contents? Who will use this doc?

A: A test plan is a document which contains the scope and risk analysis. For every success there should be some plan; likewise, to get a quality product a proper test plan should be there.

The test plan contents are:

1) Introduction

a) Overview

b) Acronyms

2) Risk analysis

3) Test items


4) Features and functions to be tested

5) Features and functions not to be tested

6) Test strategy

7) Test environment

8) System test schedule

9) Test deliverables

10) Resources

11) Suspension and resumption criteria

12) Staff and training

8. What are Test case preparation guidelines?

A: Requirement specifications and user interface documents (screenshots of the application).

10. How you will do usability testing? Explain with an example.

A: Mainly to check the look and feel, ease of use, GUI (colours, fonts, alignment), help manuals, and complete end-to-end navigation.

11. What is Functionality testing?

A: In this testing we mainly check the functionality of the application, i.e., whether it meets the customer requirements or not.

Ex: 1 + 1 = 2.
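The 1 + 1 = 2 example can be sketched as an automated functional check; the `add` function and the test harness below are illustrative, not part of any real application under test:

```python
import unittest

# Hypothetical function under test: the feature whose behaviour
# the requirement specifies (here: 1 + 1 must equal 2).
def add(a, b):
    return a + b

class TestAddFunctionality(unittest.TestCase):
    def test_add_meets_requirement(self):
        # The expected result comes straight from the requirement.
        self.assertEqual(add(1, 1), 2)

if __name__ == "__main__":
    unittest.main()
```

The test passes only if the actual behaviour matches the stated requirement, which is exactly what functionality testing verifies.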

12. Which SDLC are you using?

A: V model.

13. Explain the V & V model?

A: Verification and Validation model.

14. What are the acceptance criteria for your project?

A: The acceptance criteria will be specified by the customer. Ex: "this functionality works well enough for me."


15. Who will provide the LOC to you?

A: LOC (lines of code) will depend on the standards the company is following.

16. How you will report the bugs?

A: By using a bug tracking tool like Bugzilla or Test Director; again it may depend on the company, as some companies may use their own tool.

17. Explain your organization's testing process?

A: 1) SRS

2) Planning

3) Test scenario design

4) Test case design

5) Execution

6) Bug reporting

7) Maintenance