Are you new to Software Quality Assurance? Let this serve as a reference to get you started learning about SQA.
Testing Methodology
The following is an overview of the quality practices of a Software Quality Assurance team:
The iterative approach to software development presents a significant challenge for SQA. The iterative, rapid-deployment process is characterized by a lack of strict adherence to a traditional waterfall methodology (marketing first specifies the feature set, engineering refines the marketing requests into more detailed specifications and a schedule, engineering builds to specification while SQA builds tests, a formal testing cycle follows, and finally the product is released). Here is a variant of that development process:
As progress is made toward a release, the first-priority features are brought to a significant level of completion before much progress is made on the second-priority features. A similar approach is taken between the second- and third-priority features. The first-priority feature list is all that has to be completed before a product is feature complete, although time has been built into the schedule to complete the second-priority features as well.
Other than the initial approval from the executive team that they want a particular product built, there is no rigorous set of phases that each feature must pass through.
Developers (designers, coders, testers, writers, managers) are expected to interact aggressively and exchange ideas and status.
By not investing heavily in complete specifications up front, a better idea along the way need not invalidate a great deal of work.
One prototype is worth a pound of specification. However, this does not mean that large-scale changes should not be specified in writing. Oftentimes, the effort to do paper-based design is significantly cheaper than investing in a working prototype. The right balance is sought here.
Complementing the strategy of iterative software development, the SQA testing assessment is accomplished through personal interaction between SQA engineers and Development engineers. Lead SQA engineers meet with the development team to assess the scope of the project, whether new features for an existing product or the development of a new product. Feature, function, GUI, and cross-tool interaction are defined to the level of known attributes. When development documentation is provided, the understanding of the SQA engineer is greatly enhanced. The lead SQA engineer then meets with the test team to scope the level and complexity of the testing required. An estimate of test cases and testing time, based on these discussions, is arrived at and published.
Working with the development team, the SQA team takes the builds from the first functioning integration and works with the features as they mature, to determine their interaction and the level of testing required to validate the functionality throughout the product.
The SQA engineers, working from existing test plans, development notations on new functionality, and their own notes on how new features function, develop guidelines for the actual test cases and strategies to be employed in the testing. The SQA engineers actively seek the input of the development engineers in the definition and review of these tests.
Testing is composed of intertwined layers of manual ad hoc and structured testing, supplemented by automated regression testing which is enhanced as the product matures.
Test Plan Components
Test requirements based on new features or functions.
Specific testing based on features defined as Development Priority 1. There must be a plan in place for these features and they must be scheduled for testing. A product release date will be slipped in order to complete adequate testing of the Priority 1 features.
Specific testing based on new features or functions defined as Development Priority 2. There must be a plan in place for these features and they must be scheduled for testing. If testing of the Priority 1 features impacts adequate testing of these, they may be dropped from the product.
Specific testing based on new features or functions defined as Development Priority 3. Software Quality Assurance will not schedule or plan for these features. However, Priority 3 features completed prior to Functional Freeze will be added to the SQA Priority 2 list for testing, and appropriate risk assessment will be taken with respect to their inclusion in the released product.
SQA has its own Priority 1, Priority 2, and Priority 3 lists, which include not only the Development activities but also the testing required as due diligence for product verification prior to shipment.
Priority 1 features include not only the testing of new features and functions, but also a defined set of base installations, program and data integrity checks, regression testing, documentation review (printed, HTML, and on-line Help), and final "confidence" checks (high-level manual or automated tests exercising the most frequently used features of the product; a sketch follows the priority descriptions below) on all media to be released to the public. Products being distributed over the Web also have their Web download and installation verified.
Priority 2 features include a greater spectrum of installation combinations, boundary checking, advanced test creation, and more in-depth "creative" ad hoc testing.
Priority 3 items usually reflect attempts to bring greater organization to the SQA effort through documentation of test scripts, creation of Flashboards for metric tracking, or expanded load testing.
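As a concrete illustration of the Priority 1 "confidence" checks, here is a minimal sketch using Python's unittest framework. The product under test, its launch_app helper, and the checked fields are hypothetical stand-ins, not taken from any actual SQA suite; a real confidence run would drive the shipped binary or its public API on each release medium.

```python
import unittest

def launch_app():
    # Hypothetical stand-in for starting the shipped product.
    return {"status": "ok", "version": "1.0"}

class ConfidenceChecks(unittest.TestCase):
    """High-level smoke tests run against every candidate release medium."""

    def test_application_launches(self):
        # The most basic confidence check: the product starts at all.
        app = launch_app()
        self.assertEqual(app["status"], "ok")

    def test_version_is_stamped(self):
        # Release media must carry a well-formed version string.
        app = launch_app()
        self.assertRegex(app["version"], r"^\d+\.\d+$")

if __name__ == "__main__":
    unittest.main()
```

Checks at this level deliberately stay shallow and fast, so they can be run against every build and every medium without consuming the time budgeted for deeper Priority 1 testing.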
Testing
One of the test methods the SQA team practices is "Black Box" testing. The SQA engineers, like the customers they attempt to emulate, are isolated from the source code and must rely upon their understanding of the application and its features and functions. SQA engineers work with Development engineers toward code which lends itself to the exercise of automated test tools, thus providing for stable, repeatable, reliable testing of base features and functions. The deductive reasoning and creative skills of the SQA engineers are thus freed from the more repetitive tasks to focus on the development of more user-centric testing, which expands the scope of coverage.
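A short sketch of the black-box stance described above: the test knows only the documented inputs and outputs, never the implementation. The validate_username function and its 3-to-16-alphanumeric-characters contract are hypothetical, standing in for any product feature with a published interface.

```python
import unittest

def validate_username(name: str) -> bool:
    """Hypothetical product function: accepts 3-16 alphanumeric characters."""
    return 3 <= len(name) <= 16 and name.isalnum()

class BlackBoxUsernameTests(unittest.TestCase):
    """Tests written purely from the documented contract, not the source."""

    def test_accepts_documented_valid_input(self):
        self.assertTrue(validate_username("alice99"))

    def test_rejects_input_outside_the_contract(self):
        self.assertFalse(validate_username("ab"))        # too short
        self.assertFalse(validate_username("x" * 17))    # too long
        self.assertFalse(validate_username("bad name"))  # non-alphanumeric

if __name__ == "__main__":
    unittest.main()
```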
Manual Testing
GUI - SQA team members, upon receipt of the Development builds, walk through the GUI and either update the existing hard copy of the product Roadmaps or create new hard copy. This is then passed on to the Tools engineer to automate for new builds and regression testing. Defects are entered into the bug tracking database for investigation and resolution. Questions about GUI content are communicated to the Development team for clarification and resolution. The team works to arrive at a GUI appearance and function which is "customer oriented" and appropriate for the platform: Web, UNIX, Windows, or Macintosh. Automated GUI regression tests are run against the product at the Alpha and Beta "Hand off to QA" (HQA) milestones to validate that the GUI remains consistent throughout the development process. During the Alpha and Beta periods, selected customers validate the customer orientation of the GUI.
Features & Functions - SQA test engineers, relying on the team definition, exercise the product features and functions accordingly. Defects in feature/function capability are entered into the defect tracking system and are communicated to the team. Features are expected to perform as expected and their functionality should be oriented toward ease of use and clarity of objective. Tests are planned around new features and regression tests are exercised to validate existing features and functions are enabled and performing in a manner consistent with prior releases. SQA using the exploratory testing method manually tests and then plans more exhaustive testing and automation. Regression tests are exercised which consist of using developed test cases against the product to validate field input, boundary conditions and so on... Automated tests developed for prior releases are also used for regression testing.
Installation - The product is installed on each of the supported operating systems, in either the default flat-file configuration or with one of the supported databases. Every operating system and database supported by the product is tested, though not in all possible combinations. SQA is committed to executing, during the development life cycle, the combinations most frequently used by the customers. Clean and upgrade installations are the minimum requirements.
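To make the combination problem concrete: the full matrix of operating systems and databases grows multiplicatively, which is why SQA covers every element but not every pairing. A sketch, with hypothetical platform and database names, that enumerates the full matrix and then commits to a "most frequently used" subset (chosen here arbitrarily, standing in for customer-usage data):

```python
from itertools import product

# Hypothetical support matrix; real lists would come from the
# product's supported-platform documentation.
operating_systems = ["Solaris", "Windows NT", "Linux"]
data_stores = ["flat file", "Oracle", "Sybase"]

# Full matrix: every OS paired with every data store (9 combinations here).
full_matrix = list(product(operating_systems, data_stores))

# The prioritized subset SQA commits to during the development cycle.
most_used = [("Windows NT", "Oracle"), ("Solaris", "flat file")]

print(f"{len(full_matrix)} total combinations; "
      f"{len(most_used)} tested every cycle:")
for os_name, store in most_used:
    print(f"  clean + upgrade install on {os_name} with {store}")
```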
Documentation - All documentation, which is reviewed by Development prior to Alpha, is reviewed by the SQA team prior to Beta. On-line help and context-sensitive Help are considered documentation, as are manuals, HTML documentation, and Release Notes. SQA not only verifies technical accuracy, clarity, and completeness, but also provides editorial input on consistency, style, and typographical errors.
Automated Testing
GUI - Automated GUI tests are run against the product at the Alpha and Beta "Hand off to QA" (HQA) milestones to validate that the GUI has remained consistent within the product throughout the development process. The automated Roadmaps walk through the client tool windows and functions, validating that each is present and that it functions.
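For a Web-based product, an automated Roadmap walkthrough of this kind could be scripted with a GUI driver such as Selenium. The sketch below is illustrative only: the URL, element IDs, and expected window titles are hypothetical, and in practice the Tools engineer would generate the script from the hard-copy Roadmaps.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical roadmap: (menu element id, expected window title) pairs
# transcribed from the hard-copy product Roadmap.
ROADMAP = [
    ("menu-file-new", "New Project"),
    ("menu-tools-options", "Options"),
]

driver = webdriver.Chrome()
try:
    driver.get("https://product.example.com")  # hypothetical product URL
    for element_id, expected_title in ROADMAP:
        # Walk the GUI: the element must exist and open the right window.
        driver.find_element(By.ID, element_id).click()
        assert expected_title in driver.title, (
            f"{element_id}: expected '{expected_title}', got '{driver.title}'"
        )
        driver.back()  # return to the main window before the next step
finally:
    driver.quit()
```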
Data Driven - Data-driven scripts, developed using the automation tools and auto-driver scripts, are exercised on both UNIX and Windows platforms to provide repeatable, verifiable actions and results for the core functions of the product. Currently these cover a subset of all functionality. They are used to validate new builds prior to extensive manual testing, thus assuring both Development and SQA of the robustness of the code.
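A minimal illustration of the data-driven pattern: the test logic is written once, and the cases live in an external table, so new builds are validated by re-running the same driver over the same data on any platform. The CSV columns and the normalize_name function under test are hypothetical; an in-memory CSV keeps the sketch self-contained, where a real suite would read a versioned file.

```python
import csv
import io

def normalize_name(raw: str) -> str:
    """Hypothetical core function under test."""
    return " ".join(raw.split()).title()

# Test cases live as data, separate from the driver logic below.
CASES = io.StringIO("""input,expected
"  ada   lovelace ","Ada Lovelace"
"GRACE HOPPER","Grace Hopper"
""")

failures = 0
for row in csv.DictReader(CASES):
    actual = normalize_name(row["input"])
    if actual != row["expected"]:
        failures += 1
        print(f"FAIL: {row['input']!r} -> {actual!r}, "
              f"expected {row['expected']!r}")
print("all cases passed" if failures == 0 else f"{failures} case(s) failed")
```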
Future - Utilization of automated tools will increase as our QA product groups become more proficient at the creation of automated tests. Complete functionality testing is a goal that will be implemented feature by feature.
Reporting
The SQA team lead produces a formal test plan which is submitted to the project team for validation and information. Input is solicited for the understanding of requirements, so that the plan can accurately reflect the testing which will be required. Because of the variances between Development teams' styles of disseminating feature and function scope and definition, the amount of information available is not always consistent. The SQA lead attempts to establish a good working relationship with the Development team, so that questions can be asked and a comprehensive, shared understanding of the product will exist. The lead also pairs a QA engineer with a development counterpart to facilitate day-to-day interaction.
SQA engineers produce status sheets for distribution to the members of the product teams. The data on the status sheets has included features, functions, time estimated to test, time consumed, and the amount of testing yet to be done; the last has proved too subjective. Some test engineers include the staff assigned to specific testing and the percentage complete. The latter is difficult to estimate due to the iterative process, as feature and function specifics change often and rapidly during the development cycle as they are refined. The evolving report which appears to give the most information is one which lists the features, the build in which they were tested or re-tested, and the pass or fail status of the test.
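A sketch of the shape of that feature/build/status report, with hypothetical feature names and build numbers; the point is simply feature by feature against the build in which it was last exercised:

```python
# Hypothetical status rows: (feature, build last tested, pass/fail).
status = [
    ("Import wizard",    "build 214", "pass"),
    ("Report printing",  "build 212", "fail"),
    ("User preferences", "build 214", "pass"),
]

print(f"{'Feature':<18}{'Last tested':<14}{'Status'}")
for feature, build, result in status:
    print(f"{feature:<18}{build:<14}{result}")
```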
SQA provides monitoring of the defects which appear in the product through the use of QA-designed Flashboards (graphical representations of the aggregate numbers of defects) and reports. The defects found in the product are recorded in a bug tracking database, where the information is made available to the development group. Information stored in the database then provides statistical and trend analysis for defect find rates and product areas. This information is compared to that presented by the Development team and the Product team. Customer support is kept abreast of these defects and influences the priority assigned to each defect by the team.
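The Flashboard idea reduces to an aggregation over the defect database. A minimal sketch, using hypothetical defect records, that tallies open defects by product area in the form a find-rate or trend chart would consume:

```python
from collections import Counter

# Hypothetical rows pulled from the bug tracking database.
defects = [
    {"area": "install", "status": "open"},
    {"area": "GUI",     "status": "open"},
    {"area": "GUI",     "status": "closed"},
    {"area": "reports", "status": "open"},
]

# Aggregate counts like these feed the Flashboard's charts.
open_by_area = Counter(d["area"] for d in defects if d["status"] == "open")

for area, count in open_by_area.most_common():
    print(f"{area}: {count} open defect(s)")
```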
Team leads have established directories for their products in which test plans and weekly status reports are posted. These are updated weekly by the team lead, reviewed by the manager, and linked or posted to the QA home page on the intranet.
QA managers work with their teams to assess better forms and methods of information dissemination. These are reviewed with the larger engineering and project teams so that those teams understand the scope of work to be done by SQA and the status of a project currently being tested.
André Vondran