Testing is a key, iterative part of the software development process. This document describes the levels and methods of testing, the documents to create before testing begins, and how to report the errors that testing finds.
There are five levels of testing:
Test Level | Focuses on… |
Build Verification Testing (BVT) | Testing the stability of the software build. |
Functionality | Testing program functions—a program can have several levels of functionality, depending on its complexity. |
Ad hoc (“guerrilla”) | Unstructured, random testing by casual users. |
Integration | Testing the interaction between modules or program components—testing ways in which the program works with other products or platforms (both hardware and software). |
Stress (also known as Load or Performance tests) | Testing the number of transactions or level of use the program can consistently sustain. |
The types of testing to be done can vary from program to program, but should always include BVT, functionality, and ad hoc testing. There are two test methods that can be applied to each type of test:
Method | Focuses on… |
Black box | Testing the standard user interface (such as black-box functionality testing). Test cases use the same standard interfaces that customers are expected to use. |
White box | Testing application code (such as white-box stress testing). Test cases use batch files, SQL queries, and similar methods to interact directly with program code. |
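The distinction between the two methods can be sketched against a hypothetical module. In this sketch (all class, method, and variable names are illustrative, not from the document), the black-box test drives only the public interface a customer would use, while the white-box test inspects internal state directly:

```python
# Hypothetical module under test: a minimal inventory store.
class Inventory:
    def __init__(self):
        self._items = {}          # internal state: name -> quantity

    def add(self, name, quantity):
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self._items[name] = self._items.get(name, 0) + quantity

    def count(self, name):
        return self._items.get(name, 0)


def black_box_test():
    """Exercises only the public interface, as a customer would."""
    inv = Inventory()
    inv.add("widget", 3)
    assert inv.count("widget") == 3


def white_box_test():
    """Reaches into internal structures to verify implementation details."""
    inv = Inventory()
    inv.add("widget", 3)
    assert inv._items == {"widget": 3}   # direct inspection of internals


black_box_test()
white_box_test()
```

Note that the white-box test would break if the internal representation changed, even though the observable behavior stayed the same; that trade-off is why the two methods complement each other.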
There are three types of documents to create before testing begins: the test plan, test specifications, and test cases. The test plan must be in place before test specifications can be written; test specifications must be written before test cases can be created; and test cases must be created before testing can begin.
Test plans should describe in detail everything that the test team, the program management team, and the development team need to know about the testing to be done. A test plan should contain the following information:
Content | Describes… |
What is to be tested | The scope of the testing—the functionality that is to be tested, and the test criteria that apply. |
When it is going to be tested (schedule) | Beginning date, end date, and dates for all significant milestones. |
Where it is to be tested | What computers, networks, and other hardware are required to perform the tests; where they will be located. |
How it is to be tested | What types of testing will be done (for example, BVT, functionality, ad hoc, integration, or stress testing). |
Who is going to do the testing | The names of the test team and each tester’s responsibilities. |
Assumptions and risks | List of conditions that must be met for testing to be successful; the probable outcome if the necessary conditions aren’t met. |
Test planning can be done for the entire project, or on a function-by-function basis for each of the program sub-functions.
The test specification conveys the entire scope of testing required for a set of functionality and defines individual test cases sufficiently that testers can use it as a basis for creating the test cases. Format of the test specification can vary, depending on project context. Some common formats include the following:
Format | Defines… |
Matrix | Program functions and function variables |
Written description or scenario | Process |
Screen print or format | Fields |
Database schema | Fields, tables, stored procedures |
Graphic | GUI form or window |
Outline | The functions to be tested |
A test specification can combine any of these formats, as needed.
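The matrix format in particular maps naturally onto enumeration: crossing each program function with its variables yields one candidate test case per pair. A minimal sketch (the function and variable names are illustrative, not from the document):

```python
# A test-specification matrix: program functions crossed with their
# variables. Expanding it enumerates one test case per pair.
matrix = {
    "open_file": ["existing path", "missing path", "empty path"],
    "save_file": ["new name", "existing name", "read-only target"],
}

cases = [(func, variant)
         for func, variants in matrix.items()
         for variant in variants]

for func, variant in cases:
    print(f"test case: {func} with {variant}")
```

Keeping the specification in this tabular form makes it easy to spot uncovered combinations before test cases are written.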
There are four basic types of test cases:
Type | Focuses on… |
Functionality | Testing what the software is supposed to do |
Boundary | Testing the defined values and defaults |
Positive (or Valid) | Proving that the software does what it should do |
Negative (or Invalid) | Proving that the software doesn’t do what it shouldn’t do |
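The four types can be illustrated against a single hypothetical function. In this sketch, `set_percentage` is an invented example (not from the document) that accepts integers from 0 through 100 and defaults to 0:

```python
def set_percentage(value):
    """Accepts integers 0-100; defaults to 0 when value is None."""
    if value is None:
        return 0                      # documented default
    if not isinstance(value, int) or not 0 <= value <= 100:
        raise ValueError("percentage must be an integer from 0 to 100")
    return value


# Functionality: testing what the software is supposed to do.
assert set_percentage(50) == 50

# Boundary: testing the defined values and defaults.
assert set_percentage(0) == 0
assert set_percentage(100) == 100
assert set_percentage(None) == 0      # the default

# Positive (valid): proving the software does what it should do.
assert set_percentage(1) == 1

# Negative (invalid): proving the software doesn't do what it shouldn't do.
try:
    set_percentage(101)
except ValueError:
    pass
else:
    raise AssertionError("out-of-range value was accepted")
```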
Test cases can be organized in a variety of ways, depending on the size, type, and complexity of the software being tested. For example, tests can be organized by type of test, by product feature, or by a combination of the two.
Test cases organized by test type group all tests of a given type together (for example, all boundary tests), regardless of which feature they exercise. Test cases organized by product feature group all tests for a given feature together, regardless of type. Depending on the complexity of the software, you might organize test cases using a combination of these methods, such as grouping by feature first and then by test type within each feature.
An error, or “bug,” occurs when the software does something it shouldn’t, or won’t do something it should. A Test Error Report should contain the following information:
Section | Describes… |
Description | What went wrong. This section should be descriptive enough that the problem and its location are obvious. |
Reproduction steps | The detailed procedure for reproducing the bug. This might include platform or installation details. |
Tracking data | Who found the error, who’s fixing it, and other pertinent data. |
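If error reports are kept in a bug database, the three sections above translate directly into a structured record. A minimal sketch (the field names and the sample report are illustrative, not from the document):

```python
from dataclasses import dataclass


@dataclass
class TestErrorReport:
    description: str            # what went wrong, and where
    reproduction_steps: list    # detailed procedure, incl. platform details
    found_by: str               # tracking data: who found the error
    assigned_to: str = "unassigned"   # tracking data: who's fixing it


report = TestErrorReport(
    description="Save dialog crashes when the file name field is empty",
    reproduction_steps=[
        "Start the program on a clean install",
        "Choose File > Save As",
        "Leave the name field empty and click OK",
    ],
    found_by="tester1",
)
print(report.description)
```

Keeping the reproduction steps as a discrete list, rather than a prose paragraph, makes it easier for the developer to follow them exactly.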
Test plans should contain the following information:
Section | Contains… |
Introduction/Purpose | Description of the program and its features. |
Risks and assumptions | List of conditions that must be met for testing to be successful; probable outcome if the necessary conditions aren’t met. |
Scope | Test methodology, approach; detailed description of what is in or out of scope. |
Feature/Program-specific function/Area | Testing type, deliverables. |
Test environment | Platform (operating system); platform dependencies. |
Test criteria | Acceptance, pass, suspension of test criteria. |
Bug tracking | Location of bug database, key bug data format. |
Test case management | Location of test cases, test case management structure. |
Test team | Members of test team, escalation path. |
Test platform (hardware) | Office-based test hardware, lab-based test hardware. |
Test schedule | Beginning and end dates, key milestones. |
Appendix | Any supporting appendices the test plan requires. |
The format and content of test specifications vary, depending on the project. However, test specifications should always define the scope of the entire testing effort and provide sufficient detail to enable testers to construct test cases.
Test cases should contain the following information:
Section | Contains… |
Title | Name of the test—a one-line description of what is being tested (ideally implying the expected result). |
Steps | How to do the test—a list of the discrete tasks required to complete the test case. Form names, menu items, field names, and input data should all be specifically identified in each step. |
Expected results | Description of what should happen (system behavior and/or output) after each step of the test. |
Assumptions | Notes about specialized platform requirements, key functionality, steps needed to set up the test case. |
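These four sections map naturally onto an automated test written with Python's `unittest` module: the title becomes the test name and docstring, each step pairs an action with its expected result, and assumptions live in comments or `setUp`. A sketch, using an invented `apply_discount` function (not from the document):

```python
import unittest


# Hypothetical code under test; the name and behavior are illustrative.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)


class DiscountTests(unittest.TestCase):
    # Assumptions: this pure function needs no specialized platform or
    # setup; setUp() would hold fixture creation steps otherwise.

    def test_ten_percent_discount_reduces_price(self):
        """Title: a 10% discount on 20.00 yields 18.00."""
        # Step 1: compute the discounted price (input data named explicitly).
        result = apply_discount(20.00, 10)
        # Expected result after step 1:
        self.assertEqual(result, 18.00)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```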
Information in this document, including URL and other Internet web site references, is subject to change without notice. The entire risk of the use or the results of the use of this resource kit remains with the user. This resource kit is not supported and is provided as is without warranty of any kind, either express or implied. The example companies, organizations, products, people and events depicted herein are fictitious. No association with any real company, organization, product, person or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.
© 1999-2000 Microsoft Corporation. All rights reserved.
Microsoft, Windows and Windows NT are either registered trademarks or trademarks of Microsoft Corporation in the U.S.A. and/or other countries/regions.
The names of actual companies and products mentioned herein may be the trademarks of their respective owners.