Issues To Consider for V&V Strategy



Issues To Consider for Test Strategy

Objectives

1. Find safety and efficacy bugs
   1. Identify usability issues early
   2. Test functional requirements
   3. Estimate overall undiscovered bugs or evaluate code robustness
   4. Get the product robust enough to inspire user confidence
2. Identify obscure bugs that may or may not affect use and may or may not need to be "fixed"

Test Types

| Test Type | Pros | Cons |
|---|---|---|
| Smoke test—plug it in and see if anything burns up, crashes, etc. | Satisfaction of getting beyond coding and on to debugging | False sense of accomplishment; a bad idea to send a build to test without thorough module testing |
| Scenario, task-based or use-case testing—tests designed from typical user activity | More likely than step-by-step tests to uncover new efficacy or safety bugs and usability issues | Harder to duplicate bugs |
| Functional testing—tests designed from the spec | Uncovers more bugs | Typically boundary-condition bugs, etc., which are not as related to usability |
| Configuration testing—testing the types of machines, environments and parameters the product should run under, and the other programs it should co-exist with | Uncovers configuration-related issues that sometimes pop up in the transition from engineering hardware versions to released versions | Expensive; often easiest to hand off to beta testers outside the company, either by distributing free software over the Internet or by going to a test facility that specializes in configuration testing |
| Load or stress testing (rapid transactions, large data sets, etc.) | Tests buffers and queues when requests come faster than they can be executed, etc. | More applicable to some applications than others |
| Documentation and installation procedure testing | Prevents engineers, tech support and tech pubs from looking foolish | Takes time at the end… |
| Freelance testing of recently changed areas | Always a good idea, since fixing bugs often creates new ones | Not applicable |
| Automated test insertion using products such as Purify® or Sentinel® | Makes critical software more robust | Costs money and requires user training |
| Automated testing | Good if the failure condition is difficult to reproduce, or if the test will be run at least 10 times without changes | Takes 10 times as long to write and program as to write and execute manually; if the product changes, the test may no longer work |
| Coverage testing (a.k.a. branch coverage or path testing)—best used to find where to test, not necessarily how to test | Great for determining what has not been exercised, and for finding bugs that always occur in the code | Can be an exhaustive process that gives a sense of complete testing, yet misses many bugs in boundary or special-case handling |
| Thread testing—follow data through the process | A good way to test interfaces between modules | Inadequate for testing modules |
| Back-to-back testing—compares a previous system to the new one for differences | Great for testing systems whose interface hasn't changed (systems in the sustaining phase of life) | If the interface is undergoing changes, this is an exercise in futility |
| Black-box testing | A good sanity check for the robustness of the code | Should not take the place of white-box testing |
| White-box testing—tests based on source code (most efficiently done by the programmer) | The most efficient method of finding bugs, with lots of off-the-shelf tools at your disposal | Requires the tester to read and understand the code |
| Code inspections (preparation; overview meeting at 500 lines/hr; individual analysis at 125 lines/hr; group meeting at 90–125 lines/hr; rework; follow-up) | Finds bugs not easily uncovered through the UI. Detects 60–90% of bugs, depending on how formally it is done. Can replace module test. Improves comments. | Takes an average of 4 min/line to do right |
| Code walkthrough and step-through execution with a debugger | Like code inspection, only quicker | May not improve comments |
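As a concrete illustration of the back-to-back row above, the sketch below feeds identical inputs to a previous and a new version of the same routine and reports any output differences. Both routines (legacy_scale and new_scale) and the 12-bit input range are hypothetical stand-ins chosen for illustration.

```python
# Minimal back-to-back testing sketch: run the previous (reference)
# implementation and the new one on the same inputs and diff the outputs.
# Both routines are hypothetical stand-ins for illustration.

def legacy_scale(raw: int) -> float:
    """Previous version of a conversion routine."""
    return raw * 0.5

def new_scale(raw: int) -> float:
    """New version whose external interface is unchanged."""
    return raw / 2.0

def back_to_back(inputs):
    """Return a list of (input, old_output, new_output) mismatches."""
    mismatches = []
    for raw in inputs:
        old, new = legacy_scale(raw), new_scale(raw)
        if old != new:
            mismatches.append((raw, old, new))
    return mismatches

if __name__ == "__main__":
    # Sweep the full 12-bit range the routine is assumed to accept.
    diffs = back_to_back(range(4096))
    print(f"{len(diffs)} mismatches found")
```

If the routine returns floating-point results, an exact equality check is usually too strict; comparing within a stated tolerance is the usual refinement.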

Places To Look

1. Data faults: variables initialized, constants named, bounds checking, string delimiters and terminators

2. Control faults: conditional checks correct, loops terminate, brackets match, catch-all handlers

3. I/O faults: parameters all used, outputs all assigned

4. Interface faults: parameters match type, count or order; shared memory shares data model or timing conventions

5. Storage faults: queueing and dequeueing of elements cleaned up properly, memory allocation and deallocation as required

6. Error handling faults: all error conditions covered properly (several of these categories are illustrated in the sketch after this list)
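To make these categories concrete, the sketch below annotates a small queue routine with the checks a reviewer would look for. The routine, its names and the MAX_QUEUE bound are invented for illustration; this is a minimal sketch, not a prescribed implementation.

```python
from collections import deque

MAX_QUEUE = 64  # Data faults: constants named, bounds defined explicitly.

def enqueue_sample(queue: deque, sample: float) -> bool:
    """Add a sample to the queue; refuse it if the queue is full."""
    if len(queue) >= MAX_QUEUE:   # Data faults: bounds checked before insert.
        return False              # Error handling faults: the full-queue
                                  # condition is covered, not silently dropped.
    queue.append(sample)
    return True                   # I/O faults: an output is assigned on
                                  # every path through the function.

def drain_queue(queue: deque) -> list:
    """Remove and return all pending samples in arrival order."""
    results = []
    while queue:                         # Control faults: the loop terminates
        results.append(queue.popleft())  # because each pass shortens the queue.
    return results                # Storage faults: dequeued elements are
                                  # handed back, leaving nothing stranded.

if __name__ == "__main__":
    q = deque()
    for i in range(70):  # Attempt more inserts than the bound allows.
        enqueue_sample(q, float(i))
    print(len(q), "queued;", len(drain_queue(q)), "drained")
```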

Policies, Techniques & Places To Look For Bugs

| Test/Debug Technique | Pros | Cons |
|---|---|---|
| For testing to be systematic, tests must be planned (and plans should be reviewed) | Much easier to defend in the case of an audit | Takes time |
| Early test planning with programmer and tester | Prevents bugs and uncovers usability issues | Perception of not getting anything done |
| Include test-setup instructions to ensure configuration assumptions are true | Prevents "red herrings" caused by "special" hardware or configuration | Not applicable |
| Identify risky areas in the code (changed, high-use, sensitive or critical components) using historical problem report data and broad input from programmers, testers, users and experts | Problems from the past tend to return, and bugs tend to cluster, as do usability issues | Not applicable |
| Explore "irrelevant" oddities | These often lead to bona fide bugs | Not applicable |
| Tight vs. loose test instructions | Tight is good to reproduce or regression-test bugs, ensure tests pass, discourage simple values such as zero or one, catch more spelling errors, etc. | Loose is good to uncover new bugs, is less expensive to maintain, recovers more easily from bugs or user error encountered during the test, and allows multiple input options |
| Spell out the goals of test sections | Makes the test easier to perform and maintain | Not applicable |
| Design in a testing interface (such as a serial port) | Allows simulation of errors and makes test automation easier | Takes extra time in the specification and coding phases; not applicable for code that is mostly user interface |
| Top-down testing—structure, stub, code and test in a cycle | When coding is done, it is usually pretty robust | Forces a style of development that may not be appropriate for the desired product |
| Bottom-up testing | Allows programmers to work independently on modules without relying on a test harness or framework being implemented first | Requires everything to be coded before architectural or usability problems are detected |
| Interface testing—test modules based on defined interfaces | Forces programmers to think about interfaces before coding | Requires very detailed software design specifications |
| Cleanroom coding—three separate teams specify, develop (and verify without executing), and certify (black-box test and validate) the code | A great way to generate very clean code for critical applications | Requires a high degree of discipline from both engineers and managers |
| Equivalence partitioning (testing one mid-range value plus the upper and lower boundaries is usually sufficient, because almost all values will behave like one of those) | Saves time | Does not hold for non-linear functions |
| Have an expert use code analysis tools like lint and CDOC | Good at finding obscure syntax mistakes, questionable practices, etc. | Not applicable |
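The equivalence partitioning row lends itself to a short sketch: one mid-range representative plus the two boundaries stand in for the entire valid partition, with one representative from each invalid partition. The validator below and its 1–100 range are assumptions made for illustration, written with Python's standard unittest module.

```python
import unittest

VALID_MIN, VALID_MAX = 1, 100  # Assumed valid input range for illustration.

def in_valid_range(n: int) -> bool:
    """Hypothetical validator: accepts values in [VALID_MIN, VALID_MAX]."""
    return VALID_MIN <= n <= VALID_MAX

class EquivalencePartitionTest(unittest.TestCase):
    def test_partition_representatives(self):
        # One mid-partition value stands in for the whole valid partition...
        self.assertTrue(in_valid_range(50))
        # ...plus both boundaries, where off-by-one bugs cluster.
        self.assertTrue(in_valid_range(VALID_MIN))
        self.assertTrue(in_valid_range(VALID_MAX))
        # And one representative from each invalid partition.
        self.assertFalse(in_valid_range(VALID_MIN - 1))
        self.assertFalse(in_valid_range(VALID_MAX + 1))

if __name__ == "__main__":
    unittest.main()
```

As the Cons column warns, this shortcut assumes the function behaves uniformly within each partition; a non-linear function can fail at interior values the representatives never touch.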
