Saturday, December 29, 2007

Software Testing Process






Table of Contents
1. WHY DO WE NEED TO TEST?
2. WHAT IS TESTING?
2.1 Testing Techniques
2.2 Testing Phases
3. WHY DO WE NEED TO PLAN TESTING?
4. SOFTWARE TEST PROCESS
4.1 Test Artifacts
4.1.1 Test Plan
4.1.2 Test Environment
4.1.3 Test Case
4.1.4 Test Data
4.1.5 Test Tools
4.1.6 Test Scripts
4.1.7 Test Log
4.1.8 Bug Report
5. TRAITS OF A GOOD TESTER
6. TESTING ROLES
7. WHEN TO STOP TESTING?







1. Why do we need to test?


It is the primary duty of a software vendor to ensure that the software delivered is free of defects and that the customer’s day-to-day operations are not affected. This can be achieved by rigorously testing the software. Even so, rigorously tested software may still contain some defects.
Defects exist in software because it is designed and built by humans, who make mistakes. To a large extent, the circumstances under which the software is developed are the major reasons defects are introduced.
The most common causes of bugs in software are:
1. Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.
2. Unrealistic schedule - if too much work is crammed into too little time, problems are inevitable.
3. Inadequate testing - no one will know whether the program is any good until the customer complains or the system crashes.
4. Miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.
5. Changing requirements - a customer who does not understand the impact of a change on the software may request frequent changes. Frequent changes directly affect the timelines of the project, and ultimately it is testing that gets the lowest priority.
6. Assumptions and complacency - projects often make assumptions on key design or architectural issues that prove costly in the end.

2. What is Testing?
“Testing is an activity in which a system or component is executed under specified conditions; the results are observed and recorded, and an evaluation is made of some aspect of the system or component.” - IEEE
*Executing a system or component is known as dynamic testing.
*Review, inspection and verification of documents (requirements, design documents, test plans, etc.), code and other work products is known as static testing.
*Static testing is found to be the most effective and efficient way of testing.
*Successful testing of software demands both dynamic and static testing.
*Measurements show that a defect discovered during design that costs $1 to rectify at that stage will cost $1,000 to repair in production. This clearly points out the advantage of early testing.
*Testing should start with small measurable units of code, gradually progress towards testing integrated components of the applications and finally be completed with testing at the application level.
*Testing verifies the system against its stated and implied requirements, i.e., is it doing what it is supposed to do? It should also check that the system does not do what it is not supposed to do, that it takes care of boundary conditions, how it performs in a production-like environment, and how fast and consistently it responds when data volumes are high.

2.1 Testing Techniques:


Software can be tested either by running the programs and verifying each step of their execution against expected results, or by statically examining the code or documents against their stated requirements or objectives. These two distinct methods have led to the popularization of two techniques of software testing:
Static Testing:
This is a non-execution-based testing technique. The Design, code or any document may be inspected and reviewed against a stated requirement and for some standards against some guideline/checklist. Static Testing includes:
*Walk-throughs - code reading and inspections with a team.
*Reviews - formal techniques for specification and design, code reading and inspections.
*Proving logical correctness - formally demonstrating that the code satisfies its specification.
Many studies show that the single most cost-effective defect reduction process is the classic structural test - the code inspection or walk-through. Code inspection is like proof-reading - it can find the mistakes the author missed - the "typos" and logic errors that even the best programmers can produce.

Dynamic Testing:


This is an execution-based testing technique. Here, the program, module or the entire system is executed (run) and the output is verified against the expected result. Dynamic execution of tests is based on one of the following:
*Specifications (black box testing) - test cases based on specifications
*Code (glass box testing) - test cases based on the code
*Methodology - test cases based on the testing methodology adopted
White box testing:
This testing technique takes into account the internal structure of a system or component. Complete access to the object's source code is needed for white-box testing. It is called ‘white box’ testing because you get to see the internal workings of the code. White box testing helps to:


1. Traverse complicated loop structures
2. Cover common data areas
3. Traverse 100,000 lines of spaghetti code and nests of ifs
4. Cover control structures and sub-routines
5. Evaluate different execution paths

Traditionally, this kind of testing has been done using interactive debuggers, or by actually changing the source code.


Now, there are a number of testing tools that let you perform white-box testing on executables, without modifying the source and without incurring the high overhead of an interactive debugger. These tools speed up testing and debugging because there is no need to wait for test support code to be inserted into the program. Unit testing, and some parts of integration testing, fall under the white-box testing category.
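As a minimal illustration of how white-box tests are chosen, consider the Python sketch below. The classify_triangle function and all names in it are hypothetical, invented for this example: each test is written by reading the source and targeting one visible branch.

```python
import unittest

def classify_triangle(a, b, c):
    """Hypothetical unit under test, with several internal branches."""
    if a <= 0 or b <= 0 or c <= 0:
        raise ValueError("sides must be positive")
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class WhiteBoxTests(unittest.TestCase):
    """Each test targets one branch visible in the source above."""
    def test_invalid_side_branch(self):
        with self.assertRaises(ValueError):
            classify_triangle(0, 1, 1)
    def test_inequality_branch(self):
        self.assertEqual(classify_triangle(1, 2, 10), "not a triangle")
    def test_equilateral_branch(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")
    def test_isosceles_branch(self):
        self.assertEqual(classify_triangle(3, 3, 5), "isosceles")
    def test_scalene_branch(self):
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

if __name__ == "__main__":
    unittest.main()
```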

Black Box Testing:


Functional tests examine the observable behavior of software as evidenced by its outputs, without any reference to internal functions. This is the essence of ‘black box’ testing.

1. Black box tests attack the quality target directly, so it is an advantage to define the quality criteria from this point of view from the beginning.
2. In black box testing, software is exercised over a full range of inputs and the outputs are observed for correctness. How those outputs are achieved, or what is inside the box, is immaterial.
3. The black box testing technique can be applied once unit and integration testing are complete, i.e., once each line of code has been covered through white-box testing.
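By contrast, a black-box sketch derives its checks purely from observable behavior. In the hypothetical Python example below (sort_under_test is an illustrative stand-in, not from the original text), the tests verify only that the output is ordered and contains the same elements, with no reference to the internals:

```python
import unittest
from collections import Counter

def sort_under_test(items):
    return sorted(items)  # stand-in for the real, opaque implementation

class BlackBoxSortTests(unittest.TestCase):
    """Checks are derived from the specification only, never from the code."""
    def test_output_is_ordered(self):
        self.assertEqual(sort_under_test([3, 1, 2]), [1, 2, 3])
    def test_output_preserves_elements(self):
        data = [5, 5, 1]
        self.assertEqual(Counter(sort_under_test(data)), Counter(data))
    def test_empty_input(self):
        self.assertEqual(sort_under_test([]), [])

if __name__ == "__main__":
    unittest.main()
```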

2.2 Test Phases:


Unit Test:


Testing of individual units or groups of related code is known as unit testing. A developer generally carries out unit testing himself/herself (self-test), or developers test each other’s modules (independent test). A unit test plan is a prerequisite to start testing. The plan should take care of:
1. Specified standards (e.g., GUI-related)
2. Field validations and boundary values
3. Basic functionality of the component (add/delete/modify/query activities)
4. Code coverage and exception handling
5. Negative or destructive testing

Integration Test:
Testing in which software components, hardware components, or both are combined and tested to evaluate the interfaces and interactions between them is known as integration testing. This may again be done by the developers themselves or independently amongst them. The integration test plan should take care that:
1. The components being integrated provide the necessary communications, links and data sharing.
2. The integrated components work cohesively to perform a specified task.
Larger systems often require several integration steps. There are four basic integration test methods (a stub-based sketch follows the list):
· Incremental (random)
· Big-bang (all-at-once)
· Bottom-up
· Top-down
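As a rough illustration of top-down integration, the Python sketch below (OrderService and the payment gateway are hypothetical names, not from the original text) integrates a high-level component first and replaces a not-yet-integrated component with a stub:

```python
import unittest
from unittest.mock import Mock

class OrderService:
    """High-level component integrated first in a top-down approach."""
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway
    def place_order(self, amount):
        if self.payment_gateway.charge(amount):
            return "confirmed"
        return "payment failed"

class TopDownIntegrationTest(unittest.TestCase):
    def test_order_confirmed_when_payment_succeeds(self):
        # The real payment component is not integrated yet; a stub
        # stands in so the interface between the two can be exercised.
        stub_gateway = Mock()
        stub_gateway.charge.return_value = True
        service = OrderService(stub_gateway)
        self.assertEqual(service.place_order(100), "confirmed")
        stub_gateway.charge.assert_called_once_with(100)

if __name__ == "__main__":
    unittest.main()
```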

System Test:
It can be defined as testing conducted on a complete, integrated system to evaluate the system’s compliance with its specified requirements. During system testing one focuses solely on the outputs generated in response to the inputs provided and the execution conditions, rather than on the internal structure of the program, i.e., the system is validated against its functional requirements. System testing is carried out once unit and integration testing have been completed successfully.

A system tester should have a good understanding of the architecture and application domain of the system. The system test plan needs to take care of the following details:
1. Compliance of the system or component with its specified functional requirements
2. Validation of business rules
3. Validation of system resources (hardware) and of the application’s response when multiple components are executed simultaneously
4. Destructive tests to determine whether the software does things it shouldn’t do
5. Inter- and intra-component dependencies
6. The controlled environment under which the testing will be carried out
7. The data that will go as inputs to the various tests and the corresponding expected outputs
8. Coverage of all critical decision-making components

User Acceptance test:
This is formal testing conducted to enable the end user of the system (the customer or any of his authorized personnel) to determine whether to accept a system or component. The user will typically carry out this test himself, or will certainly participate in it, and may evaluate the system against functional, operational and performance requirements.
Regression testing:
This involves re-testing the system or component to verify that modifications/enhancements have not introduced unintended defects in existing features/modules. It also involves re-testing prior bug fixes to ensure they still work with the current fix/enhancement. The emphasis of regression testing is to ensure that new modifications have not adversely affected existing functionality.
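A minimal regression sketch in Python (the average function and the bug number are made up for illustration): the test written for an earlier fix stays in the suite permanently, so every later change re-verifies that the old defect has not come back.

```python
import unittest

def average(values):
    """Hypothetical function that once crashed on an empty list (BUG-017)."""
    if not values:        # fix for BUG-017
        return 0.0
    return sum(values) / len(values)

class RegressionTests(unittest.TestCase):
    def test_bug_017_empty_input_does_not_crash(self):
        # Re-run of the prior bug fix on every test cycle.
        self.assertEqual(average([]), 0.0)
    def test_existing_behaviour_still_works(self):
        # Guards existing functionality against new modifications.
        self.assertEqual(average([2, 4]), 3.0)

if __name__ == "__main__":
    unittest.main()
```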
Performance testing:
This is testing conducted to evaluate the system’s ability to meet the required performance levels. Performance testing requires special skills; many performance testing tools are available, and it is always preferable to use tools for performance testing. These tools simulate load in terms of the number of concurrent users accessing the system. Performance tests are referred to as non-functional tests. The performance test plan should take care of (a minimal load-simulation sketch follows the list):
1. The number of concurrent users accessing the system at any given point in time
2. The system’s performance under high volumes of data
3. Stress testing for systems which are being scaled up to larger environments or implemented for the first time
4. Operationally intensive transactions (the most frequently used transactions)
5. Volume-intensive transactions (for both volume and stress testing)
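Real performance testing is best done with a dedicated tool, but the following Python sketch (all names hypothetical; time.sleep stands in for a real transaction) shows the basic idea of simulating concurrent users and recording response times:

```python
import threading
import time

def fake_request():
    time.sleep(0.05)  # stands in for a real transaction against the system

def simulate_user(results, index):
    start = time.time()
    fake_request()
    results[index] = time.time() - start  # response time for this user

def run_load_test(concurrent_users=50):
    results = [0.0] * concurrent_users
    threads = [threading.Thread(target=simulate_user, args=(results, i))
               for i in range(concurrent_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("max response: %.3fs, avg: %.3fs"
          % (max(results), sum(results) / len(results)))

if __name__ == "__main__":
    run_load_test()
```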

3. Why do we need to plan/design testing?

Complete testing of software is impossible for several reasons:
a. Can’t test all inputs
The number of inputs you can feed to a program is typically infinite. We use rules of thumb (heuristics) to select a small number of test cases to represent the full space of possible tests. That small sample might or might not reveal all of the input-related bugs. (A small sampling sketch follows these examples.)
· Valid Inputs: Consider a function that will accept any integer value between 1 and 1000. It is possible to run all 1000 values, but it would take a long time. To save time, we normally test at the boundaries, perhaps at 0, 1, 1000, and 1001. Boundary analysis provides good rules of thumb about where we can shortcut testing, but the rules aren’t perfect.
· Invalid Inputs: Suppose you only tested –1, 0, and 1001 as your invalid numerical inputs. Some programs will fail if you enter 999999999999999—a number that has too many characters, rather than being too numerically large.
· Edited Inputs: If the program accepts numbers from 1 to 1000, what about a sequence of keystrokes in which you type digits, backspace over them, and finally enter 1000? If you entered single digits and backspaced over them hundreds of times, could you overflow an input buffer in the program? There have been programs with such a bug. How many variations on editing would you have to test before you could be absolutely certain that you’ve caught all the editing-related bugs?
· Timing Considerations: The program accepts any number between 1 and 9999999. You type 123, then pause for a minute, then type 4567. What result? If you were typing into a telephone, the system would have timed you out during your pause and would have discarded 4567. Timing-related issues are at the heart of the difference between traditional systems and client/server systems. How much testing do you have to do to check for timeouts and their effects?
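The Python sketch below applies the sampling heuristics above to the 1-to-1000 field from these examples. The accepts validator is hypothetical; the point is that boundaries, just-outside values, an overlong number, and malformed input stand in for the infinite input space.

```python
# Heuristic probes for a field specified to accept integers 1..1000.
CANDIDATES = [
    ("0",               False),  # just below the lower boundary
    ("1",               True),   # lower boundary
    ("1000",            True),   # upper boundary
    ("1001",            False),  # just above the upper boundary
    ("999999999999999", False),  # too many characters, not merely too large
    ("12a",             False),  # non-numeric characters
    ("",                False),  # empty input
]

def accepts(text):
    """Hypothetical validator for the 1..1000 field."""
    try:
        return 1 <= int(text) <= 1000
    except ValueError:
        return False

for value, expected in CANDIDATES:
    assert accepts(value) == expected, value
print("all heuristic probes behaved as specified")
```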
b. Can’t test all combinations of inputs
Suppose a program lets you add two numbers. The program’s design allows the first number to be between 1 and 100 and the second number between 1 and 25. The total number of pairs you can test (not counting all of the pairs that use invalid inputs) is 100 x 25 (2500).

In general, if you have V variables, and N1 is the number of possible values of variable 1, N2 is the number of possible values of variable 2, and NV is the number of values of variable V, then the number of possible combinations of inputs is N1 x N2 x N3 x . . . x NV. (The number is smaller and the formulas are more complicated if the values available for one variable depend on the value chosen for a different variable.)
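A quick check of this arithmetic in Python: for independent variables the combination count is just the product of the individual value counts, which is why exhaustive combination testing blows up so fast.

```python
from math import prod
from itertools import product

# The example from the text: the first number may be 1..100,
# the second 1..25.
ranges = [range(1, 101), range(1, 26)]
print(prod(len(r) for r in ranges))      # 2500, i.e. 100 x 25

# Brute-force enumeration gives the same count, but with a few more
# variables this becomes infeasible to execute as actual tests.
print(sum(1 for _ in product(*ranges)))  # 2500
```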

We can use heuristics (domain testing) to select a few "interesting" combinations of variables that will probably reveal most of the combination bugs, but we can’t be sure that we’ll catch all of them.
c. Can’t test all the paths
A path through a program starts at the point that you enter the program, and includes all the steps you run through until you exit the program. There is a virtually infinite series of paths through any program that is even slightly complex.

Consider an example of a software program in flowchart form (refer to Diagram 1 at the top).

The program starts at point A and ends at Exit. You get to Exit from X. When you get to X, you can either exit or loop back to A. You can’t loop back to A more than 19 times. The twentieth time that you reach X, you must exit. There are five paths from A to X. You can go A to B to X (ABX). Or you can go ACDFX or ACDGX or ACEHX or ACEIX. There are thus 5 ways to get from A to the exit, if you only pass through X once.

If you hit X the first time, and loop back from X to A, then there are five ways (the same five) to get from A back to X. There are thus 5 x 5 ways to get from A to X and then A to X again. There are 25 paths through the program that will take you through X twice before you get to the exit.

There are 5^3 ways to get to the exit after passing through X three times, and 5^20 ways to get to the exit after passing through X twenty times. In total, there are 5 + 5^2 + 5^3 + . . . + 5^20 ≈ 10^14 (100 trillion) paths through this program. If you could write, execute, and verify a test case every five minutes, you’d be done testing in about a billion years.
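A one-off Python check of these figures (five sub-paths per pass through the A-to-X loop, at most twenty passes):

```python
# Sum of 5^k for k = 1..20: every path makes between 1 and 20 passes,
# and each pass can be taken in five ways.
paths = sum(5**k for k in range(1, 21))
print(paths)                        # 119209289550780, roughly 10**14

minutes = paths * 5                 # one five-minute test per path
years = minutes / (60 * 24 * 365)
print(round(years / 1e9, 2))        # about 1.13 billion years
```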

One alternative to complete path testing is sub-path testing. A sub-path is just a series of steps that you take to get from one part of the program to another. In the above example, A to C is a sub-path. So are ACD, ACDF, and ACDFX. Under this viewpoint, once you’ve tested ACDFX, you wouldn’t test this sub-path again. You’d probably test ABX, ACDFX, ACDGX, ACEHX and ACEIX once each, plus one test that would run you through X the full 20 times.

If you go through a sub-path with one set of data one time, and a different set of data the next, you can easily get different results. And the values of the data often depend on what you’ve done recently in some other part of the program.

d. Can’t test for all of the other potential failures
A system can fail in the field because the software is defective, the hardware is malfunctioning, or the humans who work with the system make a mistake. If the system is designed in a way that makes it likely that humans will make important mistakes, the system is defective. Therefore, to find all defects in a system, you have to test for usability issues, hardware/software compatibility issues, requirements conformance issues, timing issues, etc. There are lots and lots of additional tests.
4. Software Test Process:
For every product/application testing, the following steps constitute the typical Testing process:
1. Test Planning:
a. Prepare Test Plan
· Define Test objectives
· Identify environmental needs
· Identify the test tools to be used, if any
· Identify the Test team
· Assess risks
· Identify critical success factors
· Write Test cases
· Determine sequence/integration procedure
· Determine Test stop and resume criteria
· Identify the number of test cycles to be carried out
· Review Test Plan
2. Test Execution:
a. Setup Test Environment
· Identify the Testing server
· Setup restorable Test data
· Set up the applicable test tools
b. Set up the configuration management (CM) environment for test assets such as
· Application/Product code
· Test data
· Test scripts
· System configuration files
c. Ensure that the Test environment closely simulates the user’s platform
· Browsers
· OS
· Hardware
d. Execute Tests
· Unit Testing
· Integration Testing
· System Testing
· User Acceptance Testing
3. Test Evaluation
a. Review Test Results
· Define how defects are to be classified
· Set up criteria to prioritize/evaluate defects

Test Artifacts:
Test Plan:
This is a document that covers the overall plan for the entire testing activity of the project. It includes the scope and objective of the plan, references to other documents, the test items and their prioritization, the tests to be conducted, the tools to be used, the environment setup, resources and infrastructure, test entry and exit criteria, and test measurements.
Test Environment/Bed:
It is a combination of hardware and software platforms together with other third-party products, application configuration and data on which a Testing activity is performed.
Test Case:
This is a series of steps for executing a program or a set of programs, along with a set of data that should be provided as inputs. In addition, a test case also specifies the expected output for each step to be executed.
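As an illustration only (the field names are hypothetical, not a prescribed format), a test case can be represented as a structured record of steps, inputs and expected outputs:

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    action: str      # what the tester or script does
    input_data: str  # data supplied at this step
    expected: str    # expected output for this step

@dataclass
class TestCase:
    identifier: str
    objective: str
    steps: list = field(default_factory=list)

# Hypothetical example instance.
login_case = TestCase(
    identifier="TC-001",
    objective="Valid user can log in",
    steps=[
        TestStep("Open login page", "-", "Login form is displayed"),
        TestStep("Submit credentials", "user=alice, pwd=secret",
                 "Dashboard page is displayed"),
    ],
)
```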
Test Data:
It is the set of values provided as input while executing a test, along with the expected output associated with each step.
Test Tools:
These are third-party or in-house products that aid in automating the testing activity. These tools typically work on a record-and-playback mechanism and help simulate production-like loads for load and performance testing. Most of these tools also provide output/error logging and reporting features for a consolidated view of results and to help compare/contrast them.
Test Scripts:
A set of scripts which, when executed in a particular sequence, run the test and log/display the output as desired. Test scripts are typically used in regression testing, where the test sequence is recorded and played back later through the scripts. The scripting language varies with the tool being used.
Test Log:
A file that logs the output/errors during manual or automated testing activities. These files provide a consolidated view of results and help compare/contrast them. They provide data on test coverage and the proportion of tests that passed/failed, and they log the errors reported against each test executed, along with the input data.
Bug Report:
The bug report is a consolidated document which gives details of all the bugs found during the test execution process. Once a bug is found, the tester assigns its severity, registers it using a bug-tracking tool, and reports it to the team lead.
The priority is decided by the client, team lead or project leader.
A bug report consists of:
1. Bug identifier
2. Bug description
3. Status (New, Open, Fixed, Closed)
4. Severity
5. Detected by
6. Detected on (date)
7. Product version
8. Platform
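The fields above map naturally onto a structured record. Here is a sketch in Python; the severity levels follow the fatal/major/minor/cosmetic classification mentioned in section 7, and all the sample values are made up:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    NEW = "New"
    OPEN = "Open"
    FIXED = "Fixed"
    CLOSED = "Closed"

class Severity(Enum):
    FATAL = 1
    MAJOR = 2
    MINOR = 3
    COSMETIC = 4

@dataclass
class BugReport:
    bug_id: str
    description: str
    status: Status
    severity: Severity
    detected_by: str
    detected_on: str   # date of detection
    product_version: str
    platform: str

# Hypothetical example entry.
report = BugReport("BUG-042", "Crash when quantity field is left empty",
                   Status.NEW, Severity.MAJOR, "tester1",
                   "2007-12-29", "2.3.1", "Windows XP / IE6")
```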
5. Traits of a good tester:
Software test engineers require technical skills, but they also require other characteristics that help them appreciate the customer’s or user’s perspective while testing. The following are some traits that can help a tester test a system/application thoroughly:
· Destructive creativity: A software test engineer can't be afraid of causing a product to crash and burn. In software testing, boundaries are meant to be crossed, not obeyed.
· Detective skills: In an ideal world, the software to be tested would be well documented and the documentation updated regularly. Due to time pressures, it may become impossible to update the requirements document, or requirements may simply not be available. Under such circumstances it is difficult to track changes and prepare a test plan.
Because we don't create software in an ideal world, the test engineer must be able to figure out how things work on his own. Typically, some design or functional specifications will be available, so the test engineer can use these documents to begin his research. Skilled testers find ways and means of getting the right information required for testing.

· Understanding the product as the sum of its parts: A developer concentrates on his piece of code rather than the entire system; this induces bugs when components are assembled and tested. A tester cannot afford the same mentality. His strength lies in understanding the individual components of a system and testing them from the system perspective.

· Appreciating the user’s perspective: The test engineer must also act as an advocate for the customer. The program under test may perform reliably and may meet all requirements, yet still not be operable in production. So it is essential for the tester to simulate a production-like environment.
· Patience with changing requirements: Testers should have the patience to test and retest the same system or component repeatedly. It is the nature of software to evolve every day, and a customer with ever-changing market needs is prone to request frequent changes. The objective of a tester is to identify defects early in the software life cycle, thus minimizing the defects that get released with the software.

· A skeptical, but not hostile attitude: The test engineer must not accept things at face value, but must be tenacious in questioning everything until it's all proven. The engineer must, however, balance this skepticism and stubbornness with a spirit of cooperation with the rest of the project team. The test engineer must remember to assault the integrity of the programs, not the programmers.

· An eagerness to embrace new technologies: The Internet is increasing the rate at which technology changes. A testing team needs to be as well equipped technologically as the development team itself. Knowing the technology helps the tester appreciate what is and is not achievable within the given architectural framework, and helps build a stronger rapport with the development team. The development and testing teams need to coexist to achieve a common goal.




6. Testing Roles:
Test Management
· The role - Define test strategies, allocate resources, manage project risks, produce project plans, develop processes.
· Knowledge - Industry knowledge, risk management and project planning using available tools
· Skills - Excellent communicator, flexible, leadership, diplomatic
· Experience - Industry specific, a variety of platform and application types, detailed testing methods.
Test Planning
· The role - Define test requirements, plan tests, ensure test coverage, identify test conditions, identify re-tests
· Knowledge - How systems and technology support business processes, the role of data, software development life cycle, testing techniques and terminology
· Skills - Requirements analysis, interviewing, prioritization, data analysis, working with limited time and information
· Experience - Using applications, acceptance-testing applications, documenting activities, using software in the work environment, business processes.
Test Design
· The role - Design the tests and prepare test data, produce scripts, set up automated tests, prepare test cases, prepare data tables for automation
· Knowledge - The application under test, the data, writing test scripts, what tests to automate/run manually
· Skills - Re-use of Test Assets, data manipulation, communicator, methodical
· Experience - Clear documentation, reporting, creating instruction sets, defining data requirements.
Test Execution
· The role - Execute tests, record test scripts, maintain statistics and metrics, check test data setup, test environment setup, execute re-tests
· Knowledge - Understanding of the importance of testing, awareness of tools, how to progress against a plan
· Skills - Observation, accuracy, methodical, co-ordination, problem solver
· Experience - Following instructions, problem reporting and solving, and relevant testing tools.
Problem Identification
· The role - Identify and record problems, maintain metrics, re-create problems
· Knowledge - Application under test, testing tools, problem management, change control, configuration management, metrics
· Skills - analysis, pro-active, intuitive, identify solutions, diplomatic
· Experience - Incident reporting, causal analysis, problem solving
Test Asset Administration
· The role - Maintaining a directory of all test assets, apply change control to assets
· Knowledge - Change and version control, configuration management
· Skills - Ability to organize; version and configuration management
· Experience - Change and version control, release management, asset management.


Environment Management
· The role - To set up and maintain environments for testing, provide quality data and test tools
· Knowledge - Environment maintenance utilities, how to run environments from a system perspective
· Skills - Operational analysis skills, DBA skills, system programming
· Experience - System support, experience in one of the above disciplines

7. When to Stop Testing?
It is well understood that testing, like any other activity, needs time, effort and money. Testing should be stopped when any of these parameters exceeds its budget. It is always advisable to discuss the budget required for testing with the customer, ensuring that optimization in terms of functionality, reliability, effort, time and the risks associated with each of these parameters is covered during the discussion. Document this as part of the test strategy and get it signed off by the customer. During test execution these parameters provide guidelines for further action.
The following points describe how and when testing can be stopped.
· The test stop criteria can depend on the methodology adopted for software development. For example, if the project follows an incremental model, testing can be restricted to the functionality developed during that cycle, taking time-to-market into account.
· If changes are frequent and are implemented in parallel with development, testing everything will not help time-to-market. Under these circumstances it is advisable to list the critical transactions, prioritize them, and test only those.
· The timelines of the project may be very narrow, and you may not be able to test 100% of the functionality. Under these circumstances, arrive at the percentage of functional testing that can be completed within the specified timeline without compromising quality. Once again, prioritize the critical transactions.
· In the test strategy, the severity of defects needs to be defined, specifying what constitutes a fatal, major, minor or cosmetic error. Project teams can specify in the test strategy that the software will be released with minor/cosmetic bugs.
· There are times when testing cannot continue simply because the defect found is a showstopper and there is no workaround that allows testing of the remaining part of the software. In such a situation the tester has no choice but to get the defect fixed and closed quickly, and then proceed.
