[Project Name] - Northwestern University



Project Name

AUTHOR NAME: Tina Cooper
Date Written: 7/16/2011

Revision Number | Revision Date | Author(s) | Revision Description

Contents

1. INTRODUCTION
  1.1. Description of Testing
  1.2. Definitions, Acronyms, and Abbreviations
2. RESOURCE REQUIREMENTS
  2.1. Testing Environment
  2.2. Testing Tools
  2.3. Project Staffing
3. TESTING SCOPE
  3.1. Levels of Testing
  3.2. Performance
    3.2.1. High Level Planning
    3.2.2. Mid Level Planning
    3.2.3. Detailed Level Planning
    3.2.4. Preparation
    3.2.5. Execution
    3.2.6. Analysis
    3.2.7. Areas Not Being Tested
4. STANDARDS AND METHODS
  4.1. Defect Reporting
  4.2. Procedure Controls
5. OPEN ISSUES
6. APPROVALS
7. APPENDIX A - Definitions, Acronyms, and Abbreviations
8. Document Tracking

1. INTRODUCTION

1.1. Description of Testing
This section defines what is being tested and the primary purpose (the "why") of the testing being conducted. Consideration may be given to special circumstances, special focus/emphasis, or other issues that are unique to this project.

1.2. Definitions, Acronyms, and Abbreviations
See Appendix A.

2. RESOURCE REQUIREMENTS

2.1. Testing Environment
This section describes the hardware and software necessary for the test environment in order to begin testing for this project.

2.2. Testing Tools
This section describes the tools necessary to conduct the test (excluding manual tests).

2.3. Project Staffing
This section identifies the key individuals involved with the test and their designated responsibility and availability.

Area of Responsibility | Name(s) | Availability / Scheduling Constraints

3. TESTING SCOPE

3.1. Levels of Testing
This section lists the levels of testing that will be performed for this project.

3.2. Performance

Planning    | Define scope, plan out timeline and resource allocation
Preparation | User data, scripts, and monitoring setup
Execution   | Running simulated user scenarios against the test system
Analysis    | Review of data gathered, success determination, and summary of results

Planning is critical to a successful and smooth performance testing cycle. There are three levels of planning and preparation.

3.2.1. High Level Planning
3.2.1.1. Goals/Scenarios: This section describes each test scenario and its associated goals.
3.2.1.2. Systems to be tested
3.2.1.3. Back end processes

3.2.2. Mid Level Planning
3.2.2.1. Scope and functionality coverage
3.2.2.2. Types of users to be simulated
3.2.2.3. Types of transactions to be simulated
3.2.2.4. Test system configurations

3.2.3. Detailed Level Planning
3.2.3.1. User data and transaction details
3.2.3.2. Define test scenarios
3.2.3.3. Define test run timelines. A sample chart is provided below. Separate charts can be created for each scenario, as appropriate.

Test Run Timeline – Scenario XXXXX
Time | Action

3.2.4. Preparation
3.2.4.1. Set up the data needed for the test.
3.2.4.2. Test system in place with data available, such that scripts with data requirements are tested against the system before the execution phase (as close to the end system as possible regarding functionality and data).
3.2.4.3. Record / test out scripts.
3.2.4.4. Record script walkthroughs.
3.2.4.5. Run a sample multi-user test to ensure data integrity (for example, if unique logins are necessary, make sure the system is not duplicating user logins; see the sketch after this list).
3.2.4.6. Define key measures (transaction rates, hits/second, etc.).
3.2.4.7. Determine how many of each type of user for a given test.
3.2.4.8. Determine an appropriate rate of user think time.
3.2.4.9. Plan monitoring of the test.
3.2.4.10. Determine what systems need to be monitored.
3.2.4.11. Determine what aspects/stats need to be monitored.
3.2.4.12. Set up monitoring in the Load Controller as well as Site Scope.
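Where scripted tooling is not yet in place, a small amount of scaffolding can cover the data-integrity and pacing items above (3.2.4.5 and 3.2.4.8). The following is a minimal sketch, not part of this template, assuming a hypothetical users.csv of prepared test accounts with a "login" column; it checks that the generated logins are unique and shows one way a randomized think-time delay might be applied between transactions.

import csv
import random
import time
from collections import Counter

def load_test_users(path="users.csv"):
    """Read the prepared test accounts (one row per virtual user)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def check_unique_logins(users):
    """Fail fast if the prepared data would reuse a login across virtual users."""
    counts = Counter(u["login"] for u in users)
    duplicates = sorted(login for login, n in counts.items() if n > 1)
    if duplicates:
        raise ValueError(f"Duplicate logins in test data: {duplicates}")

def think_time(low=2.0, high=8.0):
    """Pause between transactions to approximate real user pacing (3.2.4.8)."""
    time.sleep(random.uniform(low, high))

if __name__ == "__main__":
    users = load_test_users()
    check_unique_logins(users)
    print(f"{len(users)} unique test users ready for the run")

The file name and column layout are illustrative; the same check can be run against whatever parameter data the load tool consumes.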
3.2.5. Execution
3.2.5.1. Execute test run scenarios as outlined (a minimal sketch of a scripted run follows this list).
3.2.5.2. Intermediate review and brief analysis of results:
3.2.5.3. Simulated users data
3.2.5.4. Transaction stats
3.2.5.5. Response time stats
3.2.5.6. System data
3.2.5.7. CPU, memory, I/O, disk space
3.2.5.8. Web statistics [hits/second, HTTP responses, time to first buffer breakdown, connections/sec, SSLs/sec, etc.]
3.2.5.9. Determine whether goals are being met. If not, and there is a problem or concern, determine whether the testing focus needs to change.
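As an illustration of 3.2.5.1 and the per-transaction data reviewed in 3.2.5.3 through 3.2.5.5, the sketch below is an assumption for discussion, not the LoadRunner scenario itself. It runs a handful of simulated users as Python threads against a hypothetical target URL, applies think time, and writes one timing record per transaction for later analysis.

import csv
import random
import threading
import time
import urllib.request

TARGET_URL = "http://test-system.example.edu/login"  # hypothetical test endpoint
VIRTUAL_USERS = 10
ITERATIONS_PER_USER = 5
results = []                 # (user_id, transaction, status, elapsed_seconds)
results_lock = threading.Lock()

def run_transaction(user_id, name, url):
    """Time a single HTTP transaction and record its outcome."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            status = resp.status
    except Exception:
        status = "error"
    elapsed = time.perf_counter() - start
    with results_lock:
        results.append((user_id, name, status, elapsed))

def virtual_user(user_id):
    """One simulated user: repeat the transaction with think time in between."""
    for _ in range(ITERATIONS_PER_USER):
        run_transaction(user_id, "login_page", TARGET_URL)
        time.sleep(random.uniform(2.0, 8.0))   # user think time

if __name__ == "__main__":
    threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(VIRTUAL_USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    with open("run_results.csv", "w", newline="") as f:
        csv.writer(f).writerows([("user", "transaction", "status", "seconds"), *results])

The user counts, think-time range, and run_results.csv layout are placeholders; in practice the equivalent data would come from the load tool's own scenario and results files.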
3.2.6. Analysis
3.2.6.1. Create/generate reports.
3.2.6.2. Review results and summarize by scenario (a sketch of the summary calculation follows this list):
3.2.6.3. Simulated users data
3.2.6.4. Transaction stats
3.2.6.5. Response time stats
3.2.6.6. System data
3.2.6.7. CPU, memory, I/O, disk space, Linux resources, web statistics
3.2.6.8. Determine whether goals and objectives were met.
3.2.6.9. Write a report summarizing results and recommendations, highlighting issues or concerns uncovered during testing.
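One way to produce the transaction and response-time statistics called for in 3.2.6.3 through 3.2.6.5 is to summarize the per-transaction records captured during execution. The sketch below is illustrative only; it assumes the run_results.csv layout used in the execution sketch and reports count, error count, average, and 90th-percentile response time per transaction.

import csv
import statistics
from collections import defaultdict

def summarize(path="run_results.csv"):
    """Group timing records by transaction and compute basic response-time stats."""
    times = defaultdict(list)
    errors = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            name = row["transaction"]
            if row["status"] == "error":
                errors[name] += 1
            else:
                times[name].append(float(row["seconds"]))
    for name, samples in sorted(times.items()):
        p90 = statistics.quantiles(samples, n=10)[-1] if len(samples) > 1 else samples[0]
        print(f"{name}: count={len(samples)} errors={errors[name]} "
              f"avg={statistics.mean(samples):.3f}s p90={p90:.3f}s")

if __name__ == "__main__":
    summarize()

The same figures can of course be pulled from the load tool's analysis module; the point is simply which measures feed the scenario summary and final report.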
3.2.7. Areas Not Being Tested
This section describes specific areas that will not be tested during QA testing.

Area | Description of What Will Not Be Tested

4. STANDARDS AND METHODS

4.1. Defect Reporting
This section outlines how defects and issues will be tracked, and how team members will manage the log and resolve defects.

4.2. Procedure Controls
This section describes the procedure controls (initiation, critical failure, resumption, and completion) for this type of testing. The following chart serves as an example and should be updated to reflect this project.

Control | Description

Initiation | Guidelines that must be met in order to start testing. The initiation controls are:
- Requirements/Scope document created and signed off by management.
- Unit testing has been completed.
- Product available and running in the test environment.
- Test Strategy created and signed off by management.

Critical Failure | Guidelines that determine the point at which a failure is deemed to be critical and testing will stop. A defect is not necessarily a critical failure; a critical failure is a defect or issue so severe that there is no point in continuing. Example: The Critical Failure controls are:
- System cannot be installed (critical).
- System cannot be accessed (critical).

Resumption | Guidelines that determine the point at which testing can resume after resolution of a critical failure. Resumption controls are:
- Failure resolved and new release moved to the test environment.

Completion | Guidelines that must be met for testing to be considered complete. Completion controls are:
- All high priority defects/issues have been resolved.
- All defects/issues have been reported and addressed in some manner.

Once all testing has been completed, QA will issue a QA Results Memo to all involved parties. The memo will briefly describe the overall testing that was done, any open defects/issues with their severity, and the final status of the testing (accepted, conditionally accepted, or not accepted for production).

5. OPEN ISSUES
This section provides the location of the team's issue log and instructions on how issues are managed and resolved.

6. APPROVALS
This section defines the individuals who have approval authority during the performance testing process for this project.

Name | Title | Signature | Date

7. APPENDIX A - Definitions, Acronyms, and Abbreviations
The chart below defines the various terms that will be used in this document as well as in communication related to the performance test. This list can be modified in any way to ensure that it reflects the terms of the specific project.

Term (Acronym): Definition

Test Case (TC): A documented description of the inputs, execution instructions, and expected results, created for the purpose of determining whether a specific software feature works correctly or a specific requirement has been satisfied.

Defect: For purposes of testing, a defect is defined as an anomaly caused by the system not functioning exactly as outlined in the requirements, or by intended system functionality that cannot be explicitly understood from the requirements and design documentation.

Revision Control: Sequential capturing of changes to an artifact that allows retracing (if necessary). Usually accomplished through the use of a tool.

Unit Testing: Unit testing is performed against a specific program by the developer who wrote it, to test their own code and ensure that the program will operate according to the design specification. Usually executed independently of other programs, in a standalone manner.

Integration / System Testing: Integration testing is performed to demonstrate that the unit-tested programs work properly with each other when they are progressively assembled to eventually operate as a cohesive, integrated system. System testing is performed against the complete application system to demonstrate that it satisfies the User and Technical Requirements, within the constraints of the available technology. System testing is usually performed in conjunction with Integration Testing.

Functional Testing: Functional testing is performed in a specific testing environment, similar to production, and verifies the functionality of the entire system as it would behave in a live environment. Testing efforts and objectives center on test cases specifically derived from the requirements, in addition to specified error processing. Tests will be documented using formal test cases.

Regression Testing: Regression testing is performed to verify that new code did not break any of the existing code.

Performance Testing: Performance testing is performed to verify how well the application measures up under varying loads of data, while still within the limits of normal, acceptable operating conditions.

Load Testing: Load testing is performed to demonstrate how the product functions under certain high-volume conditions (it helps determine the product's breaking point). Load testing is usually performed in conjunction with Performance Testing.

User Acceptance Testing: User Acceptance testing is performed to help validate the functionality of the entire system, including the manual procedures, and is usually performed by the system end-users. This testing helps ensure that the system meets all the business scenarios that were identified by the users.

Automated Testing: Automated testing is performed to help validate the functionality of the entire system in a more efficient manner than manual testing. Regression testing will utilize the automated testing efforts. Currently, the tool used in automated testing is Segue SilkTest.

HP: Manufacturer of the performance and functional automated testing tools.

HP Load Runner: Performance/Load testing tool.

HP Site Scope: Monitoring tool.

8. Document Tracking

Date | Action Taken | By Whom