


Additional Tools

ALTA 6 and ALTA 6 PRO contain some additional analysis tools that allow you to perform supplementary analyses. These include tests of comparison between two data sets, likelihood ratio tests, and degradation analysis. The principles and theory behind each of these analysis tools are presented next.

 

Click a subchapter to go directly to that page.

 

•      Tests of Comparison

•      Common Shape Parameter Likelihood Ratio Test

•      Degradation Analysis

 

See Also:

Contents

Introduction

 

Tests of Comparison

It is often desirable to compare two sets of accelerated life data in order to determine which data set has the more favorable life distribution. The units from which the data are obtained could come from two alternate designs, manufacturers, lots or assembly lines. Many methods for doing this are available in the statistical literature when the units come from a complete sample, i.e. a sample with no censoring. The process becomes more difficult when dealing with data sets that have censoring, or when trying to compare two data sets that follow different distributions. In general, the problem boils down to determining whether there is any statistically significant difference between two samples of potentially censored data from two possibly different populations. This section discusses some of the methods that are applicable to censored data and are available in ALTA.

 

Simple Plotting

One popular graphical method for making this determination involves plotting the data at a given stress level with confidence bounds and seeing whether the bounds overlap or separate at the point of interest. One could also perform the same comparison, at the point of interest, utilizing the Quick Calculation Pad and compare the exact results generated by that utility.

 

This can be effective for comparisons at a given point in time or a given reliability level, but it is difficult to assess the overall behavior of the two distributions, since the confidence bounds may overlap at some points and be far apart at others. Such a side-by-side plot can be easily created using the multiple plot feature in ALTA.

 

Estimating P[t2 ≥ t1] Using the Comparison Wizard

Another methodology, suggested by Gerald G. Brown and Herbert C. Rutemiller, is to estimate the probability that the times-to-failure of one population are better or worse than those of the second. The equation used to estimate this probability is given by:

 

(1)

 

\[ P[t_2 \geq t_1] = \int_0^{\infty} f_1(t)\,R_2(t)\,dt \]

 

where f1(t) is the pdf of the first distribution and R2(t) is the reliability function of the second distribution. The evaluation of the superior data set is based on whether this probability is smaller or greater than 0.5. If the probability is equal to 0.5, that is equivalent to saying that the two distributions are identical.

 

Given two alternate designs with life test data, where X and Y represent the life test data from the two different populations: if we simply wanted the component with the higher reliability at a specific time t, we could compare the two reliability values at that time and select the better one. However, if we wanted to design the product to be as long-lived as possible, we would want to calculate the probability that the entire distribution of one product is better than that of the other, and choose X or Y depending on whether this probability is above or below 0.50, respectively.

 

The statement "the probability that X is greater than or equal to Y" can be interpreted as follows:

 

•      If P = 0.50, then the statement is equivalent to saying that both X and Y are equal.

•      If P < 0.50, for example P = 0.10, then the statement is equivalent to saying that P[Y ≥ X] = 1 − 0.10 = 0.90, i.e. Y is better than X with a 90% probability.

 

ALTA's Comparison Wizard allows you to perform such calculations. The comparison is performed at the given use stress level of each data set. Eqn. (1) can then be expressed as:

 

\[ P[t_2 \geq t_1] = \int_0^{\infty} f_1(t;\,V_{use,1})\,R_2(t;\,V_{use,2})\,dt \]

 

The disadvantage of this method is that the sample sizes are not taken into account, thus one should avoid using this method of comparison when the sample sizes are different.
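 

As an illustration of Eqn. (1), the probability can be evaluated by numerical integration for any two fitted distributions. The following is a minimal sketch; the Weibull parameters are hypothetical stand-ins for two fitted data sets, not values from this reference.

```python
# A minimal sketch of Eqn. (1) by numerical integration; the Weibull
# parameters below are hypothetical stand-ins for two fitted data sets.
import numpy as np
from scipy import integrate, stats

beta1, eta1 = 1.5, 1000.0   # first population  (pdf f1)
beta2, eta2 = 1.5, 800.0    # second population (reliability R2)

f1 = lambda t: stats.weibull_min.pdf(t, beta1, scale=eta1)
R2 = lambda t: stats.weibull_min.sf(t, beta2, scale=eta2)

# P[t2 >= t1] = integral from 0 to infinity of f1(t) * R2(t) dt
p, _ = integrate.quad(lambda t: f1(t) * R2(t), 0, np.inf)
print(f"P[t2 >= t1] = {p:.3f}")   # ~0.417 < 0.5: the first population is superior
```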

 

See Also:

Additional Tools

 

Common Shape Parameter Likelihood Ratio Test

In order to assess the assumption of a common shape parameter among the data obtained at various stress levels, the likelihood ratio (LR) test can be utilized [28]. This test applies to any distribution with a shape parameter. In the case of ALTA 6, it applies to the Weibull and lognormal distributions. When the Weibull is used as the underlying life distribution, the shape parameter, β, is assumed to be constant across the different stress levels (i.e. stress independent). Similarly, the parameter σ_T' of the lognormal distribution is assumed to be constant across the different stress levels.

 

The likelihood ratio test is performed by first obtaining the LR test statistic, T. If the true shape parameters are equal, then the distribution of T is approximately chi-square with n - 1 degrees of freedom, where n is the number of test stress levels with two or more exact failure points. The LR test statistic, T, is calculated as follows,

 

\[ T = 2\left( \hat{\Lambda}_1 + \hat{\Lambda}_2 + \cdots + \hat{\Lambda}_n - \hat{\Lambda}_0 \right) \]

 

\(\hat{\Lambda}_1, \hat{\Lambda}_2, \ldots, \hat{\Lambda}_n\) are the log-likelihood values obtained by fitting a separate distribution to the data from each of the n test stress levels (with two or more exact failure times). The log-likelihood value \(\hat{\Lambda}_0\) is obtained by fitting a model with a common shape parameter and a separate scale parameter for each of the n stress levels, using indicator variables.

Once the LR statistic has been calculated, then:

 

•      If T ≤ \(\chi^2_{1-\alpha;\,n-1}\), the n shape parameter estimates do not differ statistically significantly at the 100α% level.

•      If T > \(\chi^2_{1-\alpha;\,n-1}\), the n shape parameter estimates differ statistically significantly at the 100α% level.

 

\(\chi^2_{1-\alpha;\,n-1}\) is the 100(1 − α) percentile of the chi-square distribution with n − 1 degrees of freedom.
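 

To make the procedure concrete, the following minimal sketch computes T from a set of log-likelihood values (hypothetical here, since the actual values come from the fitted models) and compares it against the chi-square percentile.

```python
# A minimal sketch of the LR test decision rule, using hypothetical
# log-likelihood values from n = 3 individual fits (lambdas) and from
# the common-shape-parameter fit (lambda0).
from scipy.stats import chi2

lambdas = [-20.5, -18.2, -22.1]   # hypothetical individual log-likelihoods
lambda0 = -61.0                   # hypothetical common-shape log-likelihood
n = len(lambdas)

T = 2 * (sum(lambdas) - lambda0)      # LR test statistic
alpha = 0.10                          # 10% significance level
critical = chi2.ppf(1 - alpha, n - 1) # chi-square percentile, here chi2(0.90; 2)

print(f"T = {T:.3f}, critical value = {critical:.3f}")
if T <= critical:
    print("Common shape parameter assumption is not rejected.")
else:
    print("Shape parameters differ significantly.")
```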

 

Example

Consider the following times-to-failure data at three different stress levels.

 

[pic]

 

The data set was analyzed using an Arrhenius-Weibull model. The analysis yields,

 

[pic]

 

The assumption of a common β across the different stress levels can be assessed visually using a probability plot.

 

[pic]

Fig. 1: Probability plot of the three test stress levels.

 

In Figure 1, it can be seen that the plotted data from the different stress levels seem to be fairly parallel.

 

A better assessment can be made with the LR test, which can be performed using the Likelihood Ratio Test tool in ALTA 6. For example, the β estimates are compared for equality at the 10% significance level.

 

[pic]

 

The individual likelihood values for each of the test stresses can be found in the Results tab of the Likelihood Ratio Test window.

 

[pic]

 

The LR test statistic, T, is calculated to be 0.481. Since T = 0.481 ≤ 4.605 = \(\chi^2_{0.90;\,2}\), the β estimates do not differ significantly at the 10% level.

 

See Also:

Additional Tools

 

Degradation Analysis

Given that products are frequently being designed with higher reliabilities and developed in shorter amounts of time, even accelerated life testing is often not sufficient to yield reliability results in the desired timeframe. In some cases, it is possible to infer the reliability behavior of unfailed test samples with only the accumulated test time information and assumptions about the distribution. However, this generally leads to a great deal of uncertainty in the results. Another option in this situation is the use of degradation analysis. Degradation analysis involves the measurement and extrapolation of degradation or performance data that can be directly related to the presumed failure of the product in question. Many failure mechanisms can be directly linked to the degradation of part of the product, and degradation analysis allows the user to extrapolate to an assumed failure time based on the measurements of degradation or performance over time. To reduce testing time even further, tests can be performed at elevated stresses and the degradation at these elevated stresses can be measured resulting in a type of analysis known as accelerated degradation.

 

In some cases, it is possible to directly measure the degradation over time, as with the wear of brake pads or with the propagation of crack size. In other cases, direct measurement of degradation might not be possible without invasive or destructive measurement techniques that would directly affect the subsequent performance of the product. In such cases, the degradation of the product can be estimated through the measurement of certain performance characteristics, such as using resistance to gauge the degradation of a dielectric material. In either case, however, it is necessary to be able to define a level of degradation or performance at which a failure is said to have occurred. With this failure level of performance defined, it is a relatively simple matter to use basic mathematical models to extrapolate the performance measurements over time to the point where the failure is said to occur. This is done at different stress levels, and therefore each time-to-failure is also associated with a corresponding stress level. Once the times-to-failure at the corresponding stress levels have been determined, it is merely a matter of analyzing the extrapolated failure times like conventional accelerated time-to-failure data.

 

Once the level of failure (or the degradation level that would constitute a failure) is defined, the degradation of multiple units over time needs to be measured (with different groups of units at different stress levels). As with conventional accelerated data, the amount of certainty in the results is directly related to the number of units being tested, the number of units at each stress level, and the amount of overstressing with respect to the normal operating conditions. The performance or degradation of these units needs to be measured over time, either continuously or at predetermined intervals. Once this information has been recorded, the next task is to extrapolate the performance measurements to the defined failure level in order to estimate the failure time. ALTA allows the user to perform this extrapolation using a linear, exponential, power or logarithmic model. These models have the following forms:

 

Linear:        y = a·x + b

Exponential:   y = b·e^(a·x)

Power:         y = b·x^a

Logarithmic:   y = a·ln(x) + b

 

where y represents the performance, x represents time, and a and b are model parameters to be solved for.

 

Once the model parameters a_i and b_i are estimated for each sample i, a time, x_i, can be extrapolated that corresponds to the defined level of failure y. The computed x_i values can now be used as the times-to-failure for subsequent accelerated life data analysis. As with any sort of extrapolation, one must be careful not to extrapolate too far beyond the actual range of data in order to avoid large inaccuracies (modeling errors).
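 

The following is a minimal sketch of this extrapolation for one sample, assuming the exponential model above (y = b·e^(a·x)); the monthly readings are hypothetical.

```python
# A minimal sketch of degradation extrapolation with the exponential model
# y = b*exp(a*x); the QM readings below are hypothetical.
import numpy as np

months = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)
qm = np.array([98.0, 94.5, 91.2, 88.1, 85.0, 82.1, 79.2])  # hypothetical readings

# Fitting y = b*exp(a*x) is linear in log space: ln(y) = ln(b) + a*x.
a, ln_b = np.polyfit(months, np.log(qm), 1)
b = np.exp(ln_b)

# Extrapolate the time at which this sample reaches the failure level y = 50.
y_fail = 50.0
t_fail = np.log(y_fail / b) / a
print(f"a = {a:.4f}, b = {b:.2f}, extrapolated failure time = {t_fail:.1f} months")
```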

 

Example

Consider a chemical solution (e.g. ink formulation, medicine, etc.) that degrades with time. A quantitative measure of the quality of the product can be obtained. This measure (QM) is said to be around 100 when the product is first manufactured and decreases with product age. The minimum acceptable value for QM is 50. Products with QM equal to or lower than 50 are considered to be "out of compliance" or failed.

 

Engineering analysis has indicated that at higher temperatures the QM has a higher rate of decrease. Assuming that the product's normal use temperature is 20 degrees Celsius (or 293K), the goal is to determine the shelf life of the product via an accelerated degradation test.

For the purpose of this analysis "shelf life" is defined as the time by which 10% of the products will have a QM that is out of compliance.

 

For this experiment, 15 samples of the product were tested, with 5 samples in each of three accelerated stress environments: 323K, 373K and 383K. Once a month, and for a period of seven months, the QM for each sample was measured and recorded. The data obtained is given in Table 1.

 

[pic]

Table 1: Accelerated Degradation

 

Since all of the readings are above the critical QM threshold of 50, none of the samples tested in this experiment had gone out of compliance (or "failed") by the end of the test. However, there was sufficient degradation data for each sample to extrapolate a time-to-failure (i.e. the month at which each sample is expected to reach QM = 50).

 

[pic]

Fig. 2: Data entered in ALTA's degradation analysis

 

Using ALTA's degradation analysis utility (shown in Figure 2) the data for all samples were entered and individually fitted to multiple exponential curves (Figure 3 shows sample graphs). From each respective curve, a time-to-failure (i.e. the time the product is expected to go out of compliance) was automatically extrapolated and transferred to an ALTA Data Folio (Figure 4).

 

[pic]

Fig. 3: Sample degradation lines.

 

[pic]

Fig. 4: Extrapolated time-to-failure data in the ALTA 6 PRO Data Folio.

 

Several plots can be obtained from the analysis. Specifically, Figure 5 shows a Weibull probability plot at the use stress level. Figure 6 shows a Life vs. Stress plot where the line represents the time by which 10% of the units are expected to be out of compliance (at a given temperature).

 

[pic]

Fig. 5: Use level Weibull probability plot.

 

[pic]

Fig. 6: Life vs. Stress plot.

 

Based on this analysis, the projected shelf life of this product is 15.6 months. The desired result could also have been obtained from the QCP, as shown next.

 

[pic]

 

See Also:

Additional Tools

 

General Examples using ALTA

In this chapter we present some application examples utilizing ReliaSoft's ALTA 6 Standard and ALTA 6 PRO. We assume that you have previously consulted the ALTA User's Guide and familiarized yourself with the software.

 

This chapter includes the following examples:

 

•      Paper Clip Example

•      Electronic Devices Example

•      Mechanical Components Example

•      Tensile Components Example

•      ACME Example

•      Circuit Boards Example

•      Electronic Components Example

•      Voltage Stress Example

•      Automotive Step-Stress Example

 

See Also:

Contents

Introduction

 

Paper Clip Example

To illustrate the principles behind accelerated testing, consider the following simple example that involves a paper clip and can be easily and independently performed by the reader. The objective was to determine the mean number of cycles-to-failure of a given paper clip. The use cycles were assumed to be at a 45° bend. The acceleration stress was determined to be the angle to which we bend the clips, thus two accelerated bend stresses of 90° and 180° were used. The paper clips were tested using the following procedure for the 90° bend. A similar procedure was also used for the 180° and 45° tests.

 

•      Open the Paper Clip.

 

[pic]

 

1.    With one hand, hold the clip by the longer, outer loop.

2.    With the thumb and forefinger of the other hand, grasp the smaller, inner loop.

3.    Pull the smaller, inner loop out and down 90 degrees so that a right angle is formed as shown.

 

•      Close the Paper Clip.

 

[pic]

 

1.    With one hand, continue to hold the clip by the longer, outer loop.

2.    With the thumb and forefinger of the other hand, grasp the smaller, inner loop.

3.    Push the smaller inner loop up and in 90 degrees so that the smaller loop is returned to the original upright position in line with the larger, outer loop as shown.

4.    This completes one cycle.

 

•      Repeat until the paper clip breaks. Count and record the cycles-to-failure for each clip.

 

At this point the reader must note that the paper clips used in this example were "Jumbo" paper clips capable of repeated bending. Different paper clips will yield different results. Additionally, so that no other stresses are imposed, caution must be taken to ensure that the rate at which the paper clips are cycled remains the same across the experiment.

 

For the experiment a sample of six paper clips was tested to failure at both 90° and 180° bends. A base test sample of six paper clips was tested at a 45° bend (the assumed use stress level) to confirm the analysis. The cycles-to-failure are given next.

 

Cycles-to-failure at the 90° bend:

16, 17, 18, 21, 22, 23 cycles.

 

Cycles-to-failure at the 180° bend:

4, 4, 5, 6, 6, 8 cycles.

 

Cycles-to-failure at the 45° bend:

58, 63, 65, 72, 78, 86 cycles.

 

The accelerated test data were then analyzed in ALTA, assuming a lognormal life distribution (fatigue) and an inverse power law life-stress relationship (non-thermal). The analysis and some of the results are shown in Figures 1 through 4, next. Figure 5 shows the analysis of the base data in Weibull++ and the base MTTF estimate. In this case our accelerated test correctly predicted the MTTF, as verified by the base test.

 

[pic]

Fig. 1: The accelerated test data analyzed in ALTA.

 

[pic]

Fig. 2: Lognormal probability plot of both stress levels from ALTA.

 

[pic]

Fig. 3: The resulting acceleration factor versus stress plot from ALTA.

 

[pic]

Fig. 4: The resulting life versus stress plot from ALTA. Note that from the plot the estimated MTTF at a 45° bend is 71.3 cycles. This was estimated utilizing the 90° and 180° bend data.

 

[pic]

Fig. 5: The base 45° data analyzed in ReliaSoft's Weibull++ 6, utilizing a lognormal distribution, shown on lognormal probability paper along with the MTTF estimate of 70.32 cycles from the QCP.
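 

For readers who want to reproduce the spirit of this analysis outside of ALTA, the following is a minimal sketch of the graphical-method logic: lognormal fits at each bend angle, then an inverse power law through the log-means. It is not the MLE computation ALTA performs, so its estimate differs slightly from the 71.3 cycles shown in Figure 4.

```python
# A rough graphical-method-style cross-check of the paper clip example;
# not ALTA's MLE computation, so the result differs slightly.
import numpy as np

ln_90  = np.log([16, 17, 18, 21, 22, 23])   # ln cycles-to-failure, 90 degree bend
ln_180 = np.log([4, 4, 5, 6, 6, 8])         # ln cycles-to-failure, 180 degree bend

mu90, mu180 = ln_90.mean(), ln_180.mean()

# IPL on the log scale: ln(L) = -ln(K) - n*ln(V); slope between the two levels.
n = (mu90 - mu180) / (np.log(180) - np.log(90))

# Extrapolate the log-median down to the 45 degree use level.
mu45 = mu90 + n * (np.log(90) - np.log(45))

# Pooled log-standard deviation; the lognormal mean is exp(mu + s^2/2).
s = np.sqrt((ln_90.var(ddof=1) + ln_180.var(ddof=1)) / 2)
print(f"n = {n:.2f}, MTTF at 45 degrees ~ {np.exp(mu45 + s**2 / 2):.1f} cycles")
```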

 

See Also:

General Examples Using ALTA

 

Electronic Devices Example

Twelve electronic devices were put into a continuous accelerated test. The accelerated stresses were temperature and voltage, with use level conditions of 328K and 2V respectively. The data obtained is shown in the table below:

 

[pic]

 

Do the following:

1.    Using the T-NT Weibull model, analyze the data in ALTA and determine the MTTF and B(10) life for these devices at the use level, along with the 90% 2-sided confidence intervals on the results.

2.    Examine the effects of each stress on life.

 

Solution to Example 2: Electronic Devices Example

1.    The data was analyzed in ALTA and the following MTTF and B(10) life were obtained: (The results are shown in ALTA's QCP in the following figures.)

 

[pic]

 

2.    Figures 6 and 7 below examine the effects of each stress on life, while Figure 8 examines the effects of the combined stresses on the reliability. Specifically, Figure 6 below shows the life vs. voltage plot with temperature held constant at 328K.

 

[pic]

Fig. 6: The effects of voltage on life, with temperature held constant.

 

Figure 7 below shows the life vs temperature plot with voltage held constant at 2V.

 

[pic]

Fig. 7: The effects of temperature on life, with voltage held constant.

 

Figure 8 below shows a 3D plot of reliability (at a constant time) versus both stresses. One can observe that reliability declines slightly faster with the change in temperature than with the change in voltage. Note that in the following plot Stress 1 refers to temperature and Stress 2 refers to voltage.

 

[pic]

Fig. 8: The combined effects of voltage and temperature on the reliability, as plotted in ALTA.

 

See Also:

General Examples Using ALTA

 

Mechanical Components Example

A mechanical component was put into an accelerated test with temperature as the accelerated stress. The following times-to-failure were observed.

 

[pic]

 

1.    Determine the parameters of the Arrhenius-Weibull model.

2.    What is your observation?

 

Solution to Example 3: Mechanical Components Example

1.    The parameters of the Arrhenius-Weibull model were estimated using ALTA, with the following results:

 

β = 1.771456, B = 86.273322, C = 1170.142567.

 

2.    A small value for B was estimated. The following observations can then be made:

 

•      Life is not accelerated with temperature, or

•      the stress increments were not sufficient, or

•      the test stresses were well within the "specification limits."

 

A small value for B is not the only indication of this behavior. One can also observe from the data that, at all three stress levels, the times-to-failure are within the same range. Another way to observe this is by looking at the Arrhenius plot. The scale parameter, η, and the mean life are plotted next.

 

[pic]

 

It can be seen that life (η, and the mean life) is almost invariant with stress.
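 

To see why such a small B implies essentially no acceleration, one can compute the Arrhenius acceleration factor between two temperatures. The following is a minimal sketch; the two temperatures are hypothetical stand-ins, since the actual test levels are in the data table.

```python
# A quick illustration of why the small Arrhenius parameter B implies
# little acceleration; the two temperatures below are hypothetical.
import math

B = 86.273322                   # estimated Arrhenius parameter from this example
T_low, T_high = 300.0, 400.0    # hypothetical absolute temperatures (K)

# Arrhenius life L(T) = C*exp(B/T), so the acceleration factor is
# AF = L(T_low)/L(T_high) = exp(B*(1/T_low - 1/T_high)).
AF = math.exp(B * (1 / T_low - 1 / T_high))
print(f"Acceleration factor = {AF:.3f}")   # ~1.07: life barely changes
```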

 

See Also:

General Examples Using ALTA

 

Tensile Components Example

A tensile component of a landing gear was put through an accelerated reliability test to determine whether the life goal would be achieved under the designed-in load. Fifteen units (N = 15) were tested at three different shock loads. The component was designed for a peak shock load of 50 kips, with an estimated return of 10% of the population by 10,000 landings. Using the inverse power law lognormal model, determine whether the design life was met.

 

[pic]

 

Solution to Example 4: Tensile Components Example

The data were entered into ALTA, and the following estimates were obtained for the parameters of the IPL-lognormal model:

 

Std = 0.179080,

K = 8.953055E-13,

n = 4.638882.

 

The probability plot at each test stress level (73 kips, 95 kips and 123 kips) was then obtained, as shown next.

 

[pic]

 

In this plot, it can be seen that there is a good agreement between the data and the fitted model. The probability plot at a use stress level of 50 kips is shown next.

 

[pic]

 

Using the use level probability plot, the cycles-to-failure for a 10% probability of failure can be estimated. From the plot, this time is estimated to be T(Q = 0.10) ≈ 12,000 landings.

 

Another way to obtain life information is by using a life vs. stress plot. Once a life vs. stress plot is selected, the 10% unreliability (probability of failure) line must be plotted. To do this, click the Plot Options menu and select Specify Life Lines from the Show Life Char. Lines submenu. Next, enter 10 in any of the Unreliability Value boxes in the Specify Life Lines window, as shown next.

 

[pic]

 

In the following plot, the 10% unreliability line is plotted (the first line from the left). For a stress of 50 kips (X-axis) and for the 10% unreliability line, the cycles-to-failure can be obtained by reading the value on the Y-axis. Again, T(0.1) ≈ 12,000 landings.

 

[pic]

 

However, a more accurate way to obtain such information is by using the Quick Calculation Pad (QCP) in ALTA. Using the QCP, the life for a 10% probability of failure at 50 kips is estimated to be 11,668.73 cycles (or landings), as shown next.

[pic]

 

The design criterion of 10,000 landings is met.
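 

As a cross-check, the QCP result can be approximated directly from the reported parameters, assuming the usual IPL-lognormal parameterization in which the median life at stress V is 1/(K·V^n) and the natural logarithm of life is normally distributed with standard deviation Std. This is a sketch under that assumption, not ALTA's internal computation.

```python
# Reproducing the QCP estimate from the reported IPL-lognormal parameters,
# assuming median life = 1/(K*V^n) and ln(T) ~ Normal(mu, Std).
import math
from scipy.stats import norm

Std = 0.179080
K = 8.953055e-13
n = 4.638882
V = 50.0                                   # use stress, kips

mu = -math.log(K) - n * math.log(V)        # mean of ln(T) at 50 kips
t10 = math.exp(mu + norm.ppf(0.10) * Std)  # 10th percentile life
print(f"T(10%) at 50 kips ~ {t10:,.0f} landings")   # ~11,672, near the QCP's 11,668.73
```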

 

Example 5: Tensile Components Example Continued

In the tensile components example, it was found that the 10,000 landings life criterion for a 90% reliability was met. However, the estimate of 11,668.73 cycles (or landings) is very close to the requirement. Additionally, this estimate was obtained at the 50% confidence level. In other words, 50% of the time life will be greater than 11,668.73 landings, and 50% of the time it will be less. A lower confidence bound is thus crucial before any decisions are made. Repeat the calculation of the previous example, this time including a 90% lower 1-sided confidence bound on the estimate.

 

Solution to Example 5: Tensile Components Example Continued

Using ALTA's QCP, the 90% lower 1-sided confidence bound on the 11,668.73 cycles-to-failure can be obtained.

 

[pic]

 

The 90% lower 1-sided confidence bound is estimated to be 9,680.51 landings. Thus the 10,000 landings criterion is not quite met.

 

See Also:

General Examples Using ALTA

 

ACME Example

ACME Manufacturing has implemented an accelerated testing program for their new design. A total of 40 units were tested at four different pressure levels. The operating stress level is 170 psi.

 

[pic]

 

Do the following:

 

1.    Determine the parameters of the inverse power law Weibull model.

2.    Obtain the use level probability plot with 90% 2-sided confidence bounds on time.

3.    Obtain the life vs. stress plot with 90% 2-sided confidence bounds.

 

Solution to Example 6: ACME Example

1.    The parameters of the IPL-Weibull model are estimated to be, 

 

β = 3.009236,

K = 3.267923E-27,

n = 10.218104.

 

2.    The use level probability plot is shown next.

 

[pic]

 

The confidence bounds on time are the Type 1 (percentile) bounds in ALTA.

 

[pic]

 

3.    Similarly, the life vs. stress plot with the confidence bounds can be obtained.

 

See Also:

General Examples Using ALTA

 

Circuit Boards Example

Twenty-seven circuit boards were tested with temperature as the accelerated stress (or stimulus). The goal was to estimate the reliability after 10 years at 100°C. The circuit boards were tested and the following data obtained:

 

[pic]

 

The data were entered in ALTA and analyzed utilizing an Arrhenius-Weibull model. Note that units that did not fail during the test, or that failed for unrelated reasons, were entered as suspensions. The plot that follows shows the predicted reliability after 10 years at 100°C.

 

[pic]

 

See Also:

General Examples Using ALTA

 

Voltage Stress Example

An electronic component was subjected to a voltage stress, starting at 2V (the use stress level) and increasing stepwise to 7V. The following steps, in hours, were used to apply stress to the products under test: 0 to 250 hr, 2V; 250 to 350 hr, 3V; 350 to 370 hr, 4V; 370 to 380 hr, 5V; 380 to 390 hr, 6V; and 390 to 400 hr, 7V. This profile is represented graphically next.

 

[pic]

 

The objective of this test was to determine the B10 life and the mean life (often called "mean time to failure," MTTF or MTBF) of these components at the normal use stress level of 2V.

 

In this experiment, the overall time on test was 385 hrs. Had the test been performed at use conditions and run until all units failed, the test duration would have been expected to be approximately 1700 hrs.

 

Eleven units were available for the test. All eleven units were tested using the same stress profile. Units that failed were removed from the test and their total time on test recorded. The following times-to-failure were observed in the test, in hours: 280, 310, 330, 352, 360, 366, 371, 374, 378, 381 and 385.

 

Solution to Example 9: Voltage Stress Example

Using ALTA 6 PRO, the analyst began by creating a new Data Entry Spreadsheet to hold non-grouped time-to-failure data with a single stress column for voltage, as shown next.

 

[pic]

 

After creating the Data Entry Spreadsheet, the analyst used the Stress Profile Explorer to create the stress profile for this experiment.

 

In ALTA's Stress Profile Explorer, you can define any stress profile in segments. The stress applied during these segments can be a constant value (as is the case in this step example) or defined as a function of a time variable (t). The stress profile for this analysis is displayed next.

 

[pic]
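 

As a rough illustration of the segment idea (outside of ALTA), the profile above can be written as a piecewise function of time; the function name is hypothetical and the breakpoints are the ones listed earlier.

```python
# A minimal sketch of the segment idea behind the Stress Profile Explorer:
# the step-stress profile is a piecewise function of test time.
def voltage_profile(t):
    """Return the applied voltage (V) at test time t (hours)."""
    segments = [(250, 2), (350, 3), (370, 4), (380, 5), (390, 6), (400, 7)]
    for end, volts in segments:
        if t < end:
            return volts
    raise ValueError("profile is defined only up to 400 hours")

print(voltage_profile(100), voltage_profile(365))   # -> 2 4
```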

 

The times-to-failure data are entered in the ALTA Data Folio with the following analysis options selected: cumulative damage power life-stress relationship and Weibull for the underlying life distribution. The ALTA Data Folio with the data entered and the estimated parameters is shown next. Notice that the voltage step-stress profile has been assigned to each data point.

 

[pic]

 

The B10 life and mean life at the 2V use stress level, can be calculated with ALTA's Quick Calculation Pad (QCP), as shown next.

 

[pic]

 

See Also:

General Examples Using ALTA

 

Automotive Step-Stress Example

Consider a test in which multiple stresses were applied simultaneously to a particular automotive part in order to precipitate failures more quickly than they would occur under normal use conditions. The engineers responsible for the test were able to quantify the combination of applied stresses in terms of a "percentage stress" as compared to typical stress levels (or assumed field conditions). In this scenario, the typical stress (field or use stress) was defined as 100% and any combination of the test stresses was quantified as a percentage over the typical stress. For example, if the combination of stresses on test was determined to be two times higher than typical conditions, then the stress on test was said to be at 200%.

 

The test was set up and run as a step-stress test (i.e. the stresses were increased in a stepwise fashion) and the time on test was measured in hours. The step-stress profile used was as follows: until 200 hours, the equivalent applied stress was 125%; from 200 to 300 hrs, it was 175%; from 300 to 350 hrs, it was 200% and from 350 to 375 hrs, it was 250%. The test was terminated after 375 hours and any units that were still running after that point were right-censored (suspended).

 

Additionally, and based on prior analysis/knowledge, the engineers also stated that each hour on test under normal use conditions (i.e. at 100% stress measure) was equivalent to approximately 100 miles of normal driving. The test was conducted and the following times-to-failure and times-to-suspension under the stated step-stress profile were observed (note that XXX+ indicates a non-failed unit, i.e. suspension): 252, 280, 320, 328, 335, 354, 361, 362, 368, 375+, 375+, 375+.

 

After performing failure analysis on the failed parts, it was determined that the failure that occurred at 328 hrs was due to mechanisms other than the ones considered. That data point was therefore identified as a suspension in the current analysis. The modified data set for this analysis was: 252, 280, 320, 328+, 335, 354, 361, 362, 368, 375+, 375+, 375+.

 

The test objective was to estimate the B1 life for the part (i.e. time at which reliability is equal to 99%) at the typical operating conditions (i.e. Stress=100%), in miles.

 

Solution to Example 10: Automotive Step-Stress Example

Utilizing ALTA 6 PRO, the analyst first created a new Data Entry Spreadsheet for non-grouped time-to-failure and time-to-suspension data, and used the Stress Profile Explorer to define the stress profile, as shown next.

 

[pic]

 

Once the profile was defined, the analyst entered the observed times, their state (i.e. failed F or non-failed S) and a reference to the profile used in the ALTA Data Folio. The analyst selected the cumulative damage life-stress relationship (to use a time-varying stress) based on a power model (since the effect of the stress was deemed to be mechanical and more appropriately modeled by a power function). The Weibull distribution was selected as the underlying life distribution. The data folio and the selected model are shown next.

 

[pic]

 

The results of the analysis are shown next. Additionally, note that the use stress was set to 100 and all results were then extrapolated to the typical stress level.

 

[pic]

 

The last part remaining was to determine the B1 life at the part's use stress level. Using the QCP, the B1 life was found to be 657 hours, as shown next.

 

[pic]

 

Based on the given multiplier, the B1 life in miles would then be 657 test-hr × 100 miles/test-hr = 65,700 miles.

 

The B1 life can also be obtained from the use level probability plot, as shown next.

 

[pic]

 

 

See Also:

General Examples Using ALTA

 

Electronic Components Example

An electronic component was redesigned and tested to failure at three different temperatures. Six units were tested at each stress level. At the 406K stress level, however, a unit was removed from the test due to a test equipment failure which led to a failure of the component. A warranty time of one year is to be given, with an expected return of 10% of the population. The times-to-failure and test temperatures are given next:

 

[pic]

 

The operating temperature is 356K. Using the Arrhenius-Weibull model, determine the following:

 

1.    Should the first failure at 406K be included in the analysis?

2.    Determine the warranty time for 90% reliability.

3.    Determine the 90% lower confidence limit on the warranty time.

4.    Is the warranty requirement met? If not, what steps should be taken?

5.    Repeat the analysis with the unrelated failure included. Is there any difference?

6.    If the unrelated failure occurred at 500 hr, should it be included in the analysis?

 

Solution to Example 8: Electronic Components Example

1.    Since the failure occurred at the very beginning of the test and for an unrelated reason, it can be omitted from the analysis. If it is included it should be treated as a suspension and not as a failure.

2.    The first failure at 406K was neglected and the data were analyzed using ALTA. The following parameters were obtained:

 

β = 2.9658,

B = 10679.57,

C = 2.39662E-9.

 

The use level probability plot (at 356K) can then be obtained. The warranty time for a reliability of 90% (or an unreliability of 10%) can be estimated from this plot as shown next.

 

[pic]

 

This estimate can also be obtained from the Arrhenius plot (a life vs. stress plot). The 10th percentile (time for a reliability of 90%) is plotted versus stress. This type of plot is useful because a time for a given reliability can be determined for different stress levels.

 

[pic]

 

A more accurate way to determine the warranty time would be to use ALTA's Quick Calculation Pad (QCP). By selecting the Warranty (Time) Information option from the Basic Calculations tab in the QCP and entering 356 for the temperature and 90 for the required reliability, a warranty time of 11,977.793 hr can be determined, as shown next:

 

[pic]

 

3.    The warranty time for a 90% reliability was estimated to be approximately 12,000 hr. This is above the 1 year (8,760 hr) requirement. However, this is an estimate at the 50% confidence level. In other words, 50% of the time life will be greater than 12,000 hr and 50% of the time life will be less. A known confidence level is therefore crucial before any decisions are made. Using ALTA, confidence bounds can be plotted on both Probability and Arrhenius plots. In the following use level probability plot, the 90% Lower Confidence Level (LCL) is plotted. Note that percentile bounds are type 1 confidence bounds in ALTA.

 

[pic]

 

An estimated 4,300 hr warranty time at a 90% lower confidence level was obtained from the use level probability plot. This means that 90% of the time, life will be greater than this value. In other words, a life of 4,300 hr is a bounding value for the warranty.

 

The Arrhenius plot with the 90% lower confidence level is shown next.

 

[pic]

 

Using the QCP and specifying a 90% lower confidence level, a warranty time of 4436.5 hr is estimated, as shown next.

 

[pic]

 

4.    The warranty time for this component is estimated to be 4,436.5 hr at a 90% lower confidence bound, i.e. about six months. This is much less than the required 1 year warranty time. Thus the desired warranty is not met. In this case, the following four options are available:

 

•      redesign

•      reduce the confidence level

•      change the warranty policy

•      test additional units at stress levels closer to the use level

 

5.    Including the unrelated failure of 0.3 hr at 406 K (by treating it as a suspension at that time), the following results are obtained:

 

β = 2.9658,

B = 10679.57,

C = 2.39662E-9.

 

These results are identical to the ones with the unrelated failure excluded. A small difference can be seen only if more significant digits are considered. The warranty time with the 90% lower 1-sided confidence bound was estimated to be:

 

T = 11,977.729 hr,

with a 90% lower 1-sided bound of 4,436.46 hr.

 

Again, the difference is negligible. This is due to the very early time at which this unrelated failure occurred.

 

6.    The analysis is repeated treating the unrelated failure at 500 hr as a suspension, with the following results:

 

β = 3.0227,

B = 10959.52,

C = 1.23808E-9.

 

In this case, the results are very different. The warranty time with the 90% lower 1-sided confidence bound is estimated to be:

 

T = 13,780.208 hr,

with a 90% lower 1-sided bound of 5,303.67 hr.

 

It can be seen that in this case it would be a mistake to neglect the unrelated failure: doing so would actually underestimate the warranty time. The important observation in this example is that every piece of life information is crucial. In other words, unrelated failures also provide information about the life of the product. An unrelated failure occurring at 500 hr indicates that the product survived for that period of time under the particular stress level, so neglecting it would be a mistake. On the other hand, it would also be a mistake to treat this data point as a failure, since the failure was caused by the test equipment.
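 

As a cross-check of these results, the warranty times can be reproduced from the reported parameters, assuming the usual Arrhenius-Weibull parameterization in which the Weibull scale parameter is η(T) = C·e^(B/T). This is a sketch under that assumption, not ALTA's internal computation.

```python
# Reproducing the warranty times from the reported Arrhenius-Weibull
# parameters, assuming eta(T) = C*exp(B/T) and R(t) = exp(-(t/eta)^beta).
import math

def warranty_time(beta, B, C, T_use, reliability):
    eta = C * math.exp(B / T_use)                    # Weibull scale at use temperature
    return eta * (-math.log(reliability)) ** (1 / beta)

# Unrelated failure excluded (or included as a 0.3 hr suspension):
print(f"{warranty_time(2.9658, 10679.57, 2.39662e-9, 356, 0.90):,.0f} hr")  # ~11,978 hr
# Unrelated failure treated as a 500 hr suspension:
print(f"{warranty_time(3.0227, 10959.52, 1.23808e-9, 356, 0.90):,.0f} hr")  # ~13,780 hr
```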

 

See Also:

General Examples Using ALTA

 

Appendix A: Brief Statistical Background Introduction

In this appendix we attempt to provide a brief elementary introduction to the most common and fundamental statistical equations and definitions used in reliability engineering and life data analysis. The equations and concepts presented in this appendix are used extensively throughout this reference.

 

This appendix includes the following topics:

 

•      Basic Statistical Definitions

•      Distributions

•      Confidence Intervals (or Bounds)

•      Confidence Limits Determination

 

See Also:

Contents

Introduction

 

Basic Statistical Definitions

Random Variables

In general, most problems in reliability engineering deal with quantitative measures, such as the time-to-failure of a product, or with qualitative outcomes, such as whether the product fails or does not fail. In judging a product to be defective or non-defective, only two outcomes are possible. We can then use a random variable X to denote these possible outcomes (i.e. defective or non-defective). In this case, X is a random variable that can take on only these values.

 

In the case of times-to-failure, our random variable X takes on the time-to-failure of the product, which can range from 0 to infinity (since we do not know the exact time a priori).

 

In the first case, in which the random variable can take on discrete values (let's say defective = 0 and non-defective = 1), the variable is said to be a discrete random variable. In the second case, our product can be found failed at any time after time 0 (i.e. at 12 hr or at 100 hr and so forth), thus X can take on any value in this range. In this case, our random variable X is said to be a continuous random variable. In this reference, we will deal almost exclusively with continuous random variables.

 

The Probability Density and Cumulative Density Functions

Designations

From probability and statistics, given a continuous random variable X, we denote:

 

•      The probability density function, pdf, as f(x).

•      The cumulative density function, cdf, as F(x).

 

The pdf and cdf give a complete description of the probability distribution of a random variable.

 

Definitions

[pic]

 

If X is a continuous random variable, then the probability density function, pdf, of X is a function f(x) such that for two numbers, a and b with a ≤ b,

 

(1)     

\[ P(a \leq X \leq b) = \int_a^b f(x)\,dx \]

 

That is, the probability that X takes on a value in the interval [ a, b] is the area under the density function from a to b.

 

The cumulative distribution function, cdf, is a function F(x), of a random variable X, and is defined for a number x by:

 

(2)     

\[ F(x) = P(X \leq x) \]

 

That is, for a number x, F(x) is the probability that the observed value of X will be at most x.

 

Note that depending on the function denoted by f(x), or more specifically the distribution denoted by f(x), the limits will vary depending on the region over which the distribution is defined. For example, for all the life distributions considered in this reference, this range would be [0, +∞).

 

Graphical Representation of the pdf and cdf

[pic]

 

Mathematical Relationship Between the pdf and cdf

The mathematical relationship between the pdf and cdf is given by

 

(3)     

\[ F(x) = \int_0^x f(s)\,ds \]

 

where s is a dummy integration variable.

 

Conversely,

 

\[ f(x) = \frac{dF(x)}{dx} \]

 

In plain English, the cdf is the area under the probability density function up to a chosen value of x. The total area under the pdf is always equal to 1, or mathematically,

 

\[ \int_{-\infty}^{+\infty} f(x)\,dx = 1 \]

 

[pic]

 

An example of a probability density function is the well-known normal distribution, for which the pdf is given by:

 

\[ f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} \]

 

where μ is the mean and σ is the standard deviation. The normal distribution is a two-parameter distribution, i.e. with the two parameters μ and σ.

 

Another is the lognormal distribution, whose pdf is given by:

 

\[ f(T) = \frac{1}{T\,\sigma_{T'}\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{\ln T - \mu'}{\sigma_{T'}}\right)^2} \]

 

where μ' is the mean of the natural logarithms of the times-to-failure, and σ_T' is the standard deviation of the natural logarithms of the times-to-failure. Again, this is a two-parameter distribution.

 

•      The Reliability Function

•      The Failure Rate Function

•      The Mean Life Function

•      Median Life

•      Mode

 

See Also:

Appendix A: Brief Statistical Background Introduction

 

The Reliability Function

The reliability function can be derived using the previous definition of the cumulative density function, Eqn. (3). The probability of an event occurring by time t, based on a continuous distribution given by f(x) (or f(t), since the random variable of interest in life data analysis is time, t), is given by:

 

(4)     

\[ P(T \leq t) = F(t) = \int_0^t f(s)\,ds \]

 

or, one could equate this event to the probability of a unit failing by time t.

 

[pic]

 

From this fact, the most commonly used function in reliability engineering, the reliability function, can then be obtained. The reliability function enables the determination of the probability of success of a unit, in undertaking a mission of a prescribed duration.

 

To show this mathematically, we first define the unreliability function, Q(t), which is the probability of failure, or the probability that the time-to-failure lies in the interval from 0 to t. So, from Eqn. (4):

 

\[ Q(t) = F(t) = \int_0^t f(s)\,ds \]

 

Reliability and unreliability are success and failure probabilities; they are the only two events being considered and are mutually exclusive, hence the sum of these probabilities is equal to unity. So then:

 

\[ R(t) = 1 - Q(t) = 1 - \int_0^t f(s)\,ds \]

 

Conversely,

 

\[ f(t) = -\frac{d\,R(t)}{dt} \]

 

See Also:

Basic Statistical Definitions

 

The Failure Rate Function

The failure rate function enables the determination of the number of failures occurring per unit time. Omitting the derivation, see [18], the failure rate is mathematically given as,

 

\[ \lambda(t) = \frac{f(t)}{R(t)} \]

 

Failure rate is expressed in failures per unit time.

 

See Also:

Basic Statistical Definitions

 

The Mean Life Function

The mean life function, which provides a measure of the average time of operation to failure is given by:

 

\[ \overline{T} = \int_0^{\infty} t\,f(t)\,dt \]

 

This is the expected or average time-to-failure; it is denoted as the MTTF (mean time-to-failure) and is synonymously called the MTBF (mean time before failure) by many authors.

 

See Also:

Basic Statistical Definitions

 

Median Life

Median life, \(\tilde{T}\), is the value of the random variable that has exactly one-half of the area under the pdf to its left and one-half to its right. The median is obtained from:

 

(5)     

\[ \int_0^{\tilde{T}} f(t)\,dt = 0.5 \]

 

(For individual data, e.g. 12, 20, 21, the median is the midpoint value, or 20 in this case.)

 

See Also:

Basic Statistical Definitions

 

Mode

The modal (or mode) life, \(\hat{T}\), is the value of T that maximizes the pdf; it satisfies:

 

(6)     

\[ \left.\frac{d\,f(t)}{dt}\right|_{t=\hat{T}} = 0 \]

 

For a continuous distribution, the mode is that value of the variate that corresponds to the maximum probability density (the value at which the pdf has its maximum value).

 

See Also:

Basic Statistical Definitions

 

Distributions

A statistical distribution is fully described by its pdf (or probability density function). In the previous sections, we used the definition of the pdf to show how all other functions most commonly used in reliability engineering and life data analysis can be derived. The reliability function, failure rate function, mean time function, and median life function can be determined directly from the pdf definition, or f(t). Different distributions exist, such as the normal, exponential etc., and each has a predefined f(t) which can be found in most references.

 

These distributions were formulated by statisticians, mathematicians and/or engineers to mathematically model or represent certain behavior. For example, the Weibull distribution was formulated by Waloddi Weibull and thus it bears his name. Some distributions tend to better represent life data and are most commonly called lifetime distributions.

 

One of the simplest and most commonly used distributions is the exponential distribution. The pdf of the exponential distribution is mathematically defined as,

 

\[ f(t) = \lambda e^{-\lambda t} \]

 

In this definition, note that t is our random variable that represents time and the Greek letter λ (lambda) represents what is commonly referred to as the parameter of the distribution. For any distribution, the parameter or parameters of the distribution are estimated (obtained) from the data. For example, in the case of the most well known distribution, namely the normal distribution,

 

\[ f(t) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{t-\mu}{\sigma}\right)^2} \]

 

where the mean, μ, and the standard deviation, σ, are its parameters. Both of these parameters are estimated from the data, i.e. the mean and standard deviation of the data. Once these parameters are estimated, our function f(t) is fully defined and we can obtain any value for f(t) given any value of t.

 

Given the mathematical representation of a distribution, we can also derive all of the functions needed for life data analysis, which again will only depend on the value of t after the value of the distribution parameter or parameters are estimated from data.

 

For example we know that the exponential distribution pdf is given by:

 

\[ f(t) = \lambda e^{-\lambda t} \]

 

Thus the reliability function can be derived to be,

 

\[ R(t) = 1 - \int_0^t \lambda e^{-\lambda s}\,ds = e^{-\lambda t} \]

 

The failure rate function is given by:

 

\[ \lambda(t) = \frac{f(t)}{R(t)} = \lambda \]

 

The mean time to/before failure (MTTF/MTBF) is given by:

 

\[ \text{MTTF} = \int_0^{\infty} t\,\lambda e^{-\lambda t}\,dt = \frac{1}{\lambda} \]

 

Exactly the same methodology can be applied to any distribution given its pdf with various degrees of difficulty depending on the complexity of f(t).
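 

As an illustration, the exponential derivations above can be verified symbolically. The following is a minimal sketch using the sympy library.

```python
# A minimal symbolic check of the exponential results above.
import sympy as sp

t, s = sp.symbols("t s", positive=True)
lam = sp.symbols("lambda", positive=True)

f = lam * sp.exp(-lam * t)                                   # pdf
R = sp.simplify(1 - sp.integrate(f.subs(t, s), (s, 0, t)))   # reliability function
failure_rate = sp.simplify(f / R)                            # failure rate function
mttf = sp.integrate(t * f, (t, 0, sp.oo))                    # mean life

print(R)             # exp(-lambda*t)
print(failure_rate)  # lambda
print(mttf)          # 1/lambda
```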

 

Most Commonly Used Distributions

There are many different lifetime distributions that can be used. ReliaSoft [31] presents a thorough overview of lifetime distributions. Leemis [22] and others also present a good overview of many of these distributions. The three distributions used in ALTA, the 1-parameter exponential, 2-parameter Weibull, and the lognormal, are presented in greater detail in the Life Distributions chapter.

 

See Also:

Appendix A: Brief Statistical Background

 

Confidence Intervals (or Bounds)

One of the most confusing concepts to an engineer new to the field is the concept of putting a probability on a probability. In life data analysis, this concept is referred to as confidence intervals or confidence bounds. In this section we will try to briefly present the concept, in less than statistical terms, but based on solid common sense.

 

The Black & White Marbles

To illustrate, imagine a situation where there are millions of black and white marbles in a large swimming pool and your job is to estimate the percentage of black marbles. One way to do this (other than counting all the marbles!) is to estimate the percentage of black marbles by taking a sample and counting the number of black marbles in it.

 

Taking a Small Sample of Marbles

First, pick out a small sample of marbles and count the black ones. Let’s say you picked out 10 marbles and counted 4 black marbles. Based on this, your estimate would be that 40% of the marbles are black.

 

If you put the 10 marbles back into the pool and repeated this example, you might get 5 black marbles, changing your estimate to 50% black marbles.

 

Which of the two estimates is correct? Both estimates are correct! As you repeat this experiment over and over, you might find that the estimate usually falls within a certain range; for example, 90% of the time the estimate might fall between some lower percentage and some upper percentage.

 

Taking a Larger Sample of Marbles

If you now repeat the experiment and pick out 1,000 marbles, you may get results such as 545, 570, 530, etc. for the number of black marbles in each trial. Note that the range of the estimates in this case will be much narrower than before. For example, 90% of the time the estimated percentage of black marbles will fall within a range whose lower limit is higher, and whose upper limit is lower, than the limits obtained with the small sample, thus giving you a narrower interval. For confidence intervals, the larger the sample size, the narrower the confidence intervals.
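 

A small simulation makes the narrowing of the interval concrete; the true fraction of black marbles used here (0.4) is an arbitrary assumption for illustration.

```python
# Simulating the marble example: larger samples give narrower estimate ranges.
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.4   # assumed true fraction of black marbles

for sample_size in (10, 1000):
    # 10,000 repetitions of drawing a sample and computing the estimate.
    estimates = rng.binomial(sample_size, p_true, size=10_000) / sample_size
    lo, hi = np.percentile(estimates, [5, 95])
    print(f"n={sample_size}: 90% of estimates fall in [{lo:.2%}, {hi:.2%}]")
```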

 

Back to Reliability

Returning to the subject at hand, your task is to determine the probability of failure or the reliability of your units. However, until all units fail, you will never know the exact value. Your task is to estimate the reliability based on a sample, much like estimating the number of black marbles in the pool. If you perform 10 different reliability tests on your units and estimate the parameters using ALTA, you will obtain slightly different parameters for the distribution each time, and thus slightly different reliability results. However, when employing confidence bounds, you obtain a range within which these values are likely to occur a specified percentage of the time. Remember that each parameter is an estimate of the true parameter, a true parameter that is unknown to us.

 

One-sided and two-sided confidence bounds are presented next.

 

See Also:

Appendix A: Brief Statistical Background

 

One-Sided and Two-Sided Confidence Bounds

Confidence bounds (or intervals) are generally described as one-sided or two-sided.

 

Two-Sided Bounds

[pic]

When we use two-sided confidence bounds (or intervals) we are looking at where most of the population is likely to lie. For example, when using 90% two-sided confidence bounds, we are saying that 90% lies between X and Y, with 5% less than X and 5% greater than Y.

 

One-Sided Bounds

When using one-sided intervals, we are looking at the percentage of units that are greater or less (upper and lower) than a certain point X.

 

[pic]

 

For example, 95% one-sided confidence bounds would indicate that 95% of the population is greater than X (if 95% is a lower confidence bound), or that 95% is less than X (if 95% is an upper confidence bound).

 

In ALTA, we use upper to mean the higher limit and lower to mean the lower limit, regardless of their position, but based on the value of the results. So, for example, when returning the confidence bounds on reliability, we term the lower value of reliability the lower limit and the higher value of reliability the upper limit. When returning the confidence bounds on the probability of failure, we again term the lower numeric value the lower limit and the higher value the upper limit.

 

See Also:

Confidence Intervals (or Bounds)

 

Confidence Limits Determination

This section presents an overview of the theory on obtaining approximate confidence bounds on suspended (multiply censored) data. The methodology used is the so-called Fisher Matrix Bounds, described in Nelson [27] and Lloyd & Lipow [24].

 

Approximate Estimates of the Mean and Variance of a Function

Single Parameter Case

For simplicity, consider a one-parameter distribution represented by a general function G, which is a function of one parameter estimator, say \(G(\hat{\theta})\). Then, in general, the expected value of \(G(\hat{\theta})\) can be found by:

 

(7)     

\[ E\big(G(\hat{\theta})\big) = G(\theta) + O\!\left(\frac{1}{n}\right) \]

 

where \(G(\theta)\) is some function of \(\theta\), such as the reliability function, and \(\theta\) is the population moment, or parameter, such that \(E(\hat{\theta}) = \theta\) as \(n \to \infty\). The term \(O(1/n)\) is a function of n, the sample size, and tends to zero as fast as 1/n as \(n \to \infty\). For example, in the case of \(\hat{\theta} = \bar{x}\) and \(G(x) = x^2\), then \(E(G(\bar{x})) = \mu^2 + O(1/n)\), where \(O(1/n) = \sigma^2/n\); thus as \(n \to \infty\), \(E(G(\bar{x})) = \mu^2\) (\(\mu\) and \(\sigma\) are the mean and standard deviation, respectively). Using the same one-parameter distribution, the variance of the function \(G(\hat{\theta})\) can then be estimated by:

 

(8)     

\[ \mathrm{Var}\big(G(\hat{\theta})\big) \cong \left(\frac{\partial G}{\partial \theta}\right)^{\!2}\mathrm{Var}(\hat{\theta}) \]

 

Two Parameter Case

Repeating the previous method for the case of a two-parameter distribution, it is generally true for a function G of two parameter estimators, say \(G(\hat{\theta}_1, \hat{\theta}_2)\), that:

 

(9)     

\[ E\big(G(\hat{\theta}_1, \hat{\theta}_2)\big) = G(\theta_1, \theta_2) + O\!\left(\frac{1}{n}\right) \]

 

and,

 

(10)     

\[ \mathrm{Var}\big(G(\hat{\theta}_1, \hat{\theta}_2)\big) \cong \left(\frac{\partial G}{\partial \theta_1}\right)^{\!2}\mathrm{Var}(\hat{\theta}_1) + \left(\frac{\partial G}{\partial \theta_2}\right)^{\!2}\mathrm{Var}(\hat{\theta}_2) + 2\,\frac{\partial G}{\partial \theta_1}\,\frac{\partial G}{\partial \theta_2}\,\mathrm{Cov}(\hat{\theta}_1, \hat{\theta}_2) \]

 

Note that the derivatives in Eqn. (10) are evaluated at \(\theta_1 = \hat{\theta}_1\) and \(\theta_2 = \hat{\theta}_2\), where \(E(\hat{\theta}_1) \cong \theta_1\) and \(E(\hat{\theta}_2) \cong \theta_2\).

 

Variance and Covariance Determination of the Parameters

The determination of the variance and covariance of the parameters is accomplished via the use of the Fisher information matrix. For a two-parameter distribution, and using maximum likelihood estimates, the log likelihood function for censored data (without the constant coefficient) is given by:

 

\[ \Lambda = \sum_{i=1}^{R} \ln f(T_i;\,\theta_1,\theta_2) + \sum_{j=1}^{M} \ln R(S_j;\,\theta_1,\theta_2) + \sum_{k=1}^{P} \ln\!\left[ F(I_{k_U};\,\theta_1,\theta_2) - F(I_{k_L};\,\theta_1,\theta_2) \right] \]

where \(T_i\) are the R exact failure times, \(S_j\) are the M suspension times, and \((I_{k_L},\,I_{k_U})\) are the P intervals within which failures occurred.

 

Then the Fisher information matrix is given by:

 

\[ F = \begin{bmatrix} -E\!\left[\dfrac{\partial^2 \Lambda}{\partial \theta_1^2}\right] & -E\!\left[\dfrac{\partial^2 \Lambda}{\partial \theta_1 \partial \theta_2}\right] \\[2ex] -E\!\left[\dfrac{\partial^2 \Lambda}{\partial \theta_1 \partial \theta_2}\right] & -E\!\left[\dfrac{\partial^2 \Lambda}{\partial \theta_2^2}\right] \end{bmatrix} \]

 

where \(\theta_1\) and \(\theta_2\) are the distribution parameters (e.g. \(\theta_1 = \beta\) and \(\theta_2 = \eta\) for the Weibull distribution).

 

So for a sample of N units, where R units have failed, M have been suspended and P have failed within a time interval, with N = R + M + P, one could obtain the sample local information matrix by:

 

(11)     

\[ \hat{F} = \begin{bmatrix} -\dfrac{\partial^2 \Lambda}{\partial \theta_1^2} & -\dfrac{\partial^2 \Lambda}{\partial \theta_1 \partial \theta_2} \\[2ex] -\dfrac{\partial^2 \Lambda}{\partial \theta_1 \partial \theta_2} & -\dfrac{\partial^2 \Lambda}{\partial \theta_2^2} \end{bmatrix} \]

 

Substituting in the values of the estimated parameters, in this case \(\hat{\theta}_1\) and \(\hat{\theta}_2\), and inverting the matrix, one can then obtain the local estimate of the covariance matrix, or:

 

(12)     

\[ \begin{bmatrix} \widehat{\mathrm{Var}}(\hat{\theta}_1) & \widehat{\mathrm{Cov}}(\hat{\theta}_1, \hat{\theta}_2) \\[1ex] \widehat{\mathrm{Cov}}(\hat{\theta}_1, \hat{\theta}_2) & \widehat{\mathrm{Var}}(\hat{\theta}_2) \end{bmatrix} = \hat{F}^{-1} \]

 

Then the variance of a function (Var(G)) can be estimated using Eqn. (10). Values for the variance and covariance of the parameters are obtained from Eqn. (12).

 

Once they are obtained, the approximate confidence bounds on the function are given as:

 

(13)     

\[ CB = \hat{G} \pm K_{\alpha}\sqrt{\widehat{\mathrm{Var}}\big(\hat{G}\big)} \]

where \(K_{\alpha}\) is the standard normal percentile defined below.

 

Approximate Confidence Intervals on the Parameters

In general, MLE estimates of the parameters are asymptotically normal. Thus, if \(\hat{\theta}\) is the MLE estimator for \(\theta\), in the case of a single-parameter distribution estimated from a sample of n units, and if:

 

\[ z = \frac{\hat{\theta} - \theta}{\sqrt{\mathrm{Var}(\hat{\theta})}} \]

 

then, 

 

(14)     

\[ z \sim N(0,\,1) \]

 

for large n. If one now wishes to place confidence bounds on \(\theta\), at some confidence level \(\delta\), bounded by the two end points \(\theta_L\) and \(\theta_U\), where:

 

\[ P(\theta_L < \theta < \theta_U) = \delta \]

 

then from Eqn. (14),

 

(15)     

\[ P\!\left( -K_{\alpha} < \frac{\hat{\theta} - \theta}{\sqrt{\mathrm{Var}(\hat{\theta})}} < K_{\alpha} \right) \cong \delta \]

 

where [pic]is defined by:

 

\[ \alpha = \frac{1}{\sqrt{2\pi}} \int_{K_{\alpha}}^{\infty} e^{-\frac{t^2}{2}}\,dt, \qquad \alpha = \frac{1-\delta}{2} \;\text{ for two-sided bounds} \]

 

Now, by simplifying Eqn. (15), one can obtain the approximate two-sided confidence bounds on the parameter \(\theta\), at a confidence level \(\delta\), or:

 

\[ \theta_{U,L} = \hat{\theta} \pm K_{\alpha}\sqrt{\mathrm{Var}(\hat{\theta})} \]

 

If \(\theta\) must be positive, then \(\ln\hat{\theta}\) is treated as being normally distributed. The two-sided approximate confidence bounds on the parameter \(\theta\), at confidence level \(\delta\), then become:

 

(16)     

\[ \theta_U = \hat{\theta}\cdot e^{\,K_{\alpha}\sqrt{\mathrm{Var}(\hat{\theta})}/\hat{\theta}} \] (two-sided upper)

 

(17)     

\[ \theta_L = \hat{\theta}\cdot e^{-K_{\alpha}\sqrt{\mathrm{Var}(\hat{\theta})}/\hat{\theta}} \] (two-sided lower)

 

The one-sided approximate confidence bounds on the parameter \(\theta\), at confidence level \(\delta\) (where now \(\alpha = 1 - \delta\)), can be found from:

 

\[ \theta_U = \hat{\theta}\cdot e^{\,K_{\alpha}\sqrt{\mathrm{Var}(\hat{\theta})}/\hat{\theta}} \] (one-sided upper)

 

\[ \theta_L = \hat{\theta}\cdot e^{-K_{\alpha}\sqrt{\mathrm{Var}(\hat{\theta})}/\hat{\theta}} \] (one-sided lower)

 

The same procedure can be repeated for the case of a two or more parameter distribution. Lloyd and Lipow [24] elaborate on this procedure.

 

Percentile Confidence Bounds (Type 1 in ALTA)

Percentile confidence bounds are confidence bounds around time. For example, when using the 1-parameter exponential distribution, the corresponding time for a given exponential percentile (i.e. y-ordinate or unreliability, Q = 1 - R) is determined by solving the unreliability function for the time, T, or

 

(18)     

\[ T = -\frac{\ln(1 - Q)}{\lambda} \]

 

Percentile bounds (Type 1) return the confidence bounds by determining the confidence intervals around λ and substituting these into Eqn. (18). The bounds on λ are determined using Eqns. (16) and (17), with the variance of \(\hat{\lambda}\) obtained from Eqn. (12).
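 

To make the mechanics concrete, the following sketch walks Eqns. (16), (17) and (18) for the 1-parameter exponential distribution; the MLE and its variance are hypothetical stand-ins for values that would come from the Fisher matrix of Eqn. (12).

```python
# A minimal sketch of the Type 1 (percentile) bound mechanics for the
# 1-parameter exponential distribution; lambda_hat and var_lam are hypothetical.
import math
from scipy.stats import norm

lam_hat = 0.001    # hypothetical MLE of lambda (failures/hr)
var_lam = 1.0e-8   # hypothetical Var(lambda_hat) from the inverted Fisher matrix
K = norm.ppf(0.95) # K_alpha for two-sided 90% bounds (alpha = 0.05)

# Eqns. (16) and (17): log-transformed bounds keep lambda positive.
lam_upper = lam_hat * math.exp(K * math.sqrt(var_lam) / lam_hat)
lam_lower = lam_hat * math.exp(-K * math.sqrt(var_lam) / lam_hat)

# Eqn. (18): substitute the lambda bounds into T = -ln(1 - Q)/lambda.
Q = 0.10
T_lower = -math.log(1 - Q) / lam_upper   # higher lambda -> shorter life
T_upper = -math.log(1 - Q) / lam_lower
print(f"T(10%) in [{T_lower:.0f}, {T_upper:.0f}] hr")
```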

 

Reliability Confidence Bounds (Type 2 in ALTA)

Type 2 bounds in ALTA are confidence bounds around reliability. For example, when using the 1-parameter exponential distribution, the reliability function is

 

(19)     

\[ R(T) = e^{-\lambda T} \]

 

Reliability bounds (Type 2) return the confidence bounds by determining the confidence intervals around λ and substituting these into Eqn. (19). The bounds on λ are determined using Eqns. (16) and (17), with the variance of \(\hat{\lambda}\) obtained from Eqn. (12).

 

See Also:

Appendix A: Brief Statistical Background

 

Appendix B: Parameter Estimation

Once a life distribution and a life-stress relationship have been selected, the parameters (i.e. the variables that govern the characteristics of the pdf) need to be determined. Several parameter estimation methods, including probability plotting, least squares and maximum likelihood, are available. This appendix presents an overview of these methods. Because the least squares method for analyzing accelerated life data is very limiting, it is covered only briefly; interested readers can refer to Nelson [28] for a more detailed discussion of the least squares parameter estimation method.

 

This chapter includes the following subchapters:

 

•      Graphical Method

•      MLE (Maximum Likelihood) Parameter Estimation

•      MLE of Accelerated Life Data

•      Analysis of Censored Data

•      Appendix B Conclusions

 

See Also:

Contents

Introduction  

 

Graphical Method

Graphical analysis is the simplest method for obtaining results in both life data and accelerated life testing analyses. Although graphical methods have limitations (presented in the Comments on the Graphical Method section), they are in general easy to implement and easy to interpret.

 

The graphical method for estimating the parameters of accelerated life data involves generating two types of plots. First, the life data at each individual stress level are plotted on a probability paper appropriate to the assumed life distribution (e.g. Weibull, exponential, or lognormal). The parameters of the distribution at each stress level are then estimated from the plot. Once these parameters have been estimated at each stress level, the second plot is created on a paper that linearizes the assumed life-stress relationship (e.g. Arrhenius, inverse power law, etc.). The parameters of the life-stress relationship are then estimated from the second plot. The life distribution and life-stress relationship are then combined to provide a single model that describes the accelerated life data. Figure 1 illustrates these two plots.

 

[pic]

Fig. 1: Illustration of the graphical method of parameter estimation.

 

With this general understanding of the graphical parameter estimation method, we will continue with a more specific discussion of each step.

 

Life Distribution Parameters at Each Stress Level

The first step in the graphical analysis of accelerated data is to calculate the parameters of the assumed life distribution at each stress level. Because life data are collected at each test stress level in accelerated life tests, the assumed life distribution is fitted to data at each individual stress level. The parameters of the distribution at each stress level are then estimated using the probability plotting method described next.

 

Life Distribution Probability Plotting

The easiest parameter estimation method (to use by hand) for complex distributions, such as the Weibull distribution, is the method of probability plotting. Probability plotting involves a physical plot of the data on specially constructed probability plotting paper. This method is easily implemented by hand as long as one can obtain the appropriate probability plotting paper.

 

Probability plotting looks at the cdf (cumulative distribution function) of the distribution and attempts to linearize it by employing a specially constructed paper. For example, in the case of the 2-parameter Weibull distribution, the cdf, or unreliability Q(T), can be shown to be,

$$Q(T) = F(T) = 1 - e^{-\left(\frac{T}{\eta}\right)^{\beta}}$$

 

(The Life Distributions chapter of this reference presents derivations of this equation.) This function can then be linearized (i.e. put into the common form of y = a + bx) as follows, 

 

(20)     

$$\ln\left[-\ln\left(1 - Q(T)\right)\right] = -\beta\ln\eta + \beta\ln T$$

 

Then setting,

 

$$y = \ln\left[-\ln\left(1 - Q(T)\right)\right]$$

 

and,

 

$$x = \ln T$$

 

the equation can be rewritten as,

 

$$y = -\beta\ln\eta + \beta x$$

 

which is now a linear equation, with a slope of $b = \beta$ and an intercept of $a = -\beta\ln(\eta)$.

 

The next task is to construct a paper with the appropriate x- and y- axes. The x-axis is easy since it is simply logarithmic. The y-axis, however, must represent,

 

$$\ln\left[-\ln\left(1 - Q(T)\right)\right]$$

 

where Q(T) is the unreliability. Such papers have been created by different vendors and are called Weibull probability plotting papers.

 

To illustrate, consider the following probability plot on a Weibull Probability Paper (created using Weibull++).

 

Probability plotting papers can be downloaded free of charge from ReliaSoft.

 

[pic]

This paper is constructed based on the x- and y-transformations mentioned previously, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each point (i.e. each time-to-failure) that we want to plot.

 

Then, given the y and x value for each point, the points can easily be placed on the plot. Once the points are placed, the best possible straight line is drawn through them. Once the line is drawn, the slope of the line can be obtained (most probability papers include a slope indicator to facilitate this), and thus the shape parameter $\beta$, which is the value of the slope, is obtained.

 

To determine the scale parameter, $\eta$ (also called the characteristic life by some authors), a little more work is required. Note that from before,

$$Q(T) = 1 - e^{-\left(\frac{T}{\eta}\right)^{\beta}}$$

so at $T = \eta$,

$$Q(\eta) = 1 - e^{-1} \approx 0.632 = 63.2\%$$

 

Thus, if we enter the y-axis at Q(T) = 63.2%, the corresponding value of T will be equal to $\eta$. Using this simple, but rather time-consuming, methodology the parameters of the Weibull distribution can be determined. For data obtained from accelerated tests, this procedure is repeated for each stress level.

 

Determining the X and Y Position of the Plot Points

The points plotted on the probability plot represent our data, or more specifically in life data analysis, times-to-failure data. So if we tested four units that failed at 10, 20, 30 and 40 hours at a given stress level, we would use these times as our x values or time values. Determining the appropriate y plotting position, or the unreliability, is a little more complex. To determine the y plotting positions, we must first determine a value called the median rank for each failure.

 

Median Ranks

Median ranks are used to obtain an estimate of the unreliability, $Q(T_j)$, for each failure. The median rank is the value that the true probability of failure, $Q(T_j)$, should have at the $j^{th}$ failure out of a sample of N units, at a 50% confidence level. This estimate is based on the binomial distribution. The rank can be found for any percentage point, P, greater than zero and less than one, by solving the cumulative binomial distribution for Z (the rank of the $j^{th}$ failure) [31],

$$P = \sum_{k=j}^{N} \binom{N}{k} Z^{k}\left(1 - Z\right)^{N-k}$$

where N is the sample size and j the order number.

 

The median rank is obtained by solving the following equation for Z,

(21)     

$$0.50 = \sum_{k=j}^{N} \binom{N}{k} Z^{k}\left(1 - Z\right)^{N-k}$$

 

For example, if N = 4 and we have four failures at that particular stress level, we would solve the median rank equation, Eqn. (21), four times, once for each failure with j = 1, 2, 3, and 4, for the value of Z. Each result can then be used as the unreliability estimate for the corresponding failure, i.e. the y plotting position. Solving Eqn. (21) for Z requires numerical methods.

 

A more straightforward and easier method of estimating median ranks is to apply two transformations to Eqn. (21), first to the beta distribution and then to the F distribution, resulting in [31],

$$MR = \frac{1}{1 + \frac{N - j + 1}{j}\, F_{0.50;\, m;\, n}}, \qquad m = 2(N - j + 1), \quad n = 2j$$

where $F_{0.50;\, m;\, n}$ denotes the F distribution at the 0.50 point, with m and n degrees of freedom, for the $j^{th}$ failure out of N units.

 

A quick and less accurate approximation of the median ranks is also given by [31],

 

(22)     

$$MR \approx \frac{j - 0.3}{N + 0.4}$$

 

This approximation of the median ranks is also known as Benard's approximation.
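In practice, the exact median ranks are easiest to compute as the 0.50 quantile of a beta distribution, which is equivalent to the F-distribution form above. A minimal sketch, assuming SciPy and NumPy, that also carries the four-failure example at 10, 20, 30 and 40 hours through to the Weibull parameters via the linearization of Eqn. (20):

```python
import numpy as np
from scipy.stats import beta as beta_dist

N = 4
j = np.arange(1, N + 1)

# Exact median ranks: the 0.50 quantile of a Beta(j, N - j + 1) distribution,
# which is equivalent to the F-distribution expression above.
mr_exact = beta_dist.ppf(0.50, j, N - j + 1)

# Benard's approximation, Eqn. (22)
mr_benard = (j - 0.3) / (N + 0.4)

print(mr_exact)    # approximately [0.159 0.386 0.614 0.841]
print(mr_benard)   # approximately [0.159 0.386 0.614 0.841]

# Using the ranks as y plotting positions for the failures at 10, 20, 30 and
# 40 hours, the Weibull parameters follow from the linearization of Eqn. (20):
t = np.array([10.0, 20.0, 30.0, 40.0])
x, y = np.log(t), np.log(-np.log(1.0 - mr_exact))
b, a = np.polyfit(x, y, 1)                # slope and intercept of the line
beta_hat, eta_hat = b, np.exp(-a / b)     # slope = beta, intercept = -beta*ln(eta)
print(beta_hat, eta_hat)
```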

 

Some Shortfalls of Manual Probability Plotting

Besides its most obvious shortfall, the amount of effort required, manual probability plotting does not always yield consistent results. Two people plotting a straight line through a set of points will not always draw the line the same way, and will therefore arrive at slightly different results. In addition, when dealing with accelerated test data, a probability plot must be constructed for each stress level. This requires that a sufficient number of failures be observed at each stress level, which is not always possible.

 

Probability Plotting with Censored Data

Probability plotting can also be performed with censored data, although the methodology involved is rather laborious. ReliaSoft [31] presents it in detail.

 

This section includes the following topics:

 

•      Least Squares Method

•      Life-Stress Relationship Plotting

•      Comments on the Graphical Method

 

See Also:

Appendix B: Parameter Estimation

 

Least Squares Method

The least squares parameter estimation method is a variation of the probability plotting methodology, in which a straight line is mathematically fitted to a set of points in order to estimate the parameters. If the regression is on Y, the line is fitted such that the sum of the squares of the vertical deviations from the points to the line is minimized; if the regression is on X, the line is fitted such that the sum of the squares of the horizontal deviations is minimized.

 

[pic]

Fig 2: Linear regression on Y and on X

 

The regression on Y is not necessarily the same as the regression on X. The only time the two regressions yield the same line is when the data points lie perfectly on a straight line. ReliaSoft [31] presents this methodology in detail; the sketch below illustrates the difference.
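A small sketch of why the two regressions differ. The points are made up, and NumPy's polyfit does the line fitting; the regression on X is re-expressed in y = a + bx form so the two lines can be compared directly:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.7])           # hypothetical data points

b_y, a_y = np.polyfit(x, y, 1)               # regression on Y (vertical residuals)
b_xi, a_xi = np.polyfit(y, x, 1)             # regression on X (horizontal residuals)

# Re-express the regression on X as y = a + b*x for comparison:
b_x, a_x = 1.0 / b_xi, -a_xi / b_xi
print((b_y, a_y), (b_x, a_x))                # the two lines generally differ
```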

 

See Also:

Graphical Method

 

Life-Stress Relationship Plotting

Once the parameters of the life distribution have been obtained using probability plotting methods, a second plot is created in which life is plotted versus stress. To do this, a life characteristic must be chosen to be plotted. The life characteristic can be any percentile, such as B(x) life, the scale parameter, mean life, etc. The plotting paper used is a special type of paper that linearizes the life-stress relationship. For example, a log-log paper linearizes the inverse power law relationship, and a log-reciprocal paper linearizes the Arrhenius relationship. The parameters of the model are then estimated by solving for the slope and the intercept of the line. This methodology is illustrated in Example 1.

 

Example 1

Consider the following times-to-failure data at three different stress levels.

 

[pic]

 

Estimate the parameters for a Weibull assumed life distribution and for the inverse power law life-stress relationship.

 

Solution

First the parameters of the Weibull distribution need to be determined. The data is individually analyzed (for each stress level) using the probability plotting method, or software such as ReliaSoft's Weibull++, with the following results:

 

[pic]

 

where:

 

•      $\beta_1$, $\eta_1$ are the parameters of the 393 psi data.

•      $\beta_2$, $\eta_2$ are the parameters of the 408 psi data.

•      $\beta_3$, $\eta_3$ are the parameters of the 423 psi data.

 

[pic]

 

Since the shape parameter, $\beta$, is not common for the three stress levels, the average value is estimated,

 

$$\bar{\beta} = \frac{\beta_1 + \beta_2 + \beta_3}{3} \approx 4$$

 

Averaging the betas is one of many simple approaches available. One can also use a weighted average, since the uncertainty on beta is greater for smaller sample sizes. In most practical applications the value of $\beta$ will vary (even though it is assumed constant) due to sampling error, etc. This variability in the value of $\beta$ is a source of error when performing the analysis by averaging the betas. MLE analysis, which estimates a single common $\beta$, is not susceptible to this error. MLE is the parameter estimation method used in ALTA and is explained in the next section.

 

Redraw each line with a common $\beta = 4$, and estimate the new etas, as follows.

 

[pic]

 

$\eta_1 = 6650$

$\eta_2 = 5745$

$\eta_3 = 4774$

 

The IPL relationship is given by:

 

$$L(V) = \frac{1}{K V^{n}}$$

 

where,

 

L represents a quantifiable life measure ($\eta$ in the Weibull case), V represents the stress level, and K and n are model parameters. The relationship is linearized by taking the logarithm of both sides, which yields,

 

$$\ln L = -\ln K - n\ln V$$

 

where $L = \eta$, $(-\ln K)$ is the intercept, and $(-n)$ is the slope of the line.

The values of $\eta$ obtained previously are now plotted against stress on a log-log scale, yielding the following plot,

 

[pic]

 

The slope of the line is the $(-n)$ parameter, which is obtained from the plot:

$$b = \frac{\ln(4774) - \ln(6650)}{\ln(423) - \ln(393)} \approx -4.5$$

 

Thus,

$$n = 4.5$$

 

Solving the inverse power law equation with respect to K yields,

 

$$K = \frac{1}{V^{n} \cdot L}$$

 

Substituting V = 403, the corresponding life read from the plot, L = 6,000, and the previously estimated n,

$$K = \frac{1}{403^{4.5} \cdot 6000} \approx 3.1\times10^{-16}$$
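The same graphical estimates can be reproduced numerically from the three (stress, eta) pairs above. A minimal sketch, assuming NumPy; the polyfit call stands in for reading the slope and intercept off the plot:

```python
import numpy as np

V = np.array([393.0, 408.0, 423.0])        # stress levels (psi)
eta = np.array([6650.0, 5745.0, 4774.0])   # etas estimated at each level

# ln(eta) = -ln(K) - n*ln(V): fit a line to the log-transformed values
slope, intercept = np.polyfit(np.log(V), np.log(eta), 1)
n = -slope                   # the slope of the line is (-n)
K = np.exp(-intercept)       # the intercept of the line is (-ln K)

print(n)                     # approximately 4.5
print(K)                     # on the order of 1e-16
print(1.0 / (K * 403.0**n))  # life at 403 psi, close to the 6,000 read off the plot
```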

 

See Also:

Graphical Method

 

Comments on the Graphical Method

Although the graphical method is simple, it is quite laborious. Furthermore, many issues surrounding its use require careful consideration. Some of these issues are presented next:

 

•      What happens when no failures are observed at one or more stress levels? In this case, plotting methods cannot be employed. Discarding the data would be a mistake, since every piece of life information is important. (In other words, no failures at one stress level, combined with observed failures at other stress levels, are an indication of the dependency of life on stress. This information cannot be discarded.)

•      In the step at which the life-stress relationship is linearized and plotted to obtain its parameters, you must be able to linearize the function, which is not always possible.

•      In real accelerated tests the data sets are small. Separating them and individually plotting them, and then subsequently replotting the results, increases the underlying error.

•      During initial parameter estimation, the parameter that is assumed constant will more than likely vary. What value do you use?

•      Confidence intervals on all of the results cannot be ascertained using graphical methods.

 

The maximum likelihood parameter estimation method described next overcomes these shortfalls and is the method utilized in ALTA.

 

See Also:

Graphical Method

 

MLE (Maximum Likelihood) Parameter Estimation

The idea behind maximum likelihood parameter estimation is to determine the parameters that maximize the probability (likelihood) of the sample data. From a statistical point of view, the method of maximum likelihood is considered to be more robust (with some exceptions) and yields estimators with good statistical properties. In other words, MLE methods are versatile and apply to most models and to different types of data. In addition, they provide efficient methods for quantifying uncertainty through confidence bounds. Although the methodology for maximum likelihood estimation is simple, the implementation is mathematically intense. Using today's computer power, however, mathematical complexity is not a big obstacle. The MLE methodology is presented next.

 

Background Theory

This section presents the theory that underlies maximum likelihood estimation for complete data. If x is a continuous random variable with pdf,

 

$$f(x;\, \theta_1, \theta_2, \ldots, \theta_k)$$

 

where $\theta_1, \theta_2, \ldots, \theta_k$ are k unknown constant parameters that need to be estimated, conduct an experiment and obtain N independent observations, $x_1, x_2, \ldots, x_N$. Then the likelihood function is given by the following product,

 

(23)     

$$L(x_1, x_2, \ldots, x_N \mid \theta_1, \theta_2, \ldots, \theta_k) = \prod_{i=1}^{N} f(x_i;\, \theta_1, \theta_2, \ldots, \theta_k)$$

 

The logarithmic likelihood function is given by:

 

$$\Lambda = \ln L = \sum_{i=1}^{N} \ln f(x_i;\, \theta_1, \theta_2, \ldots, \theta_k)$$

 

The maximum likelihood estimators (MLE) of $\theta_1, \theta_2, \ldots, \theta_k$ are obtained by maximizing L or $\Lambda$.

 

By maximizing $\Lambda$, which is much easier to work with than L, the maximum likelihood estimators of $\theta_1, \theta_2, \ldots, \theta_k$ are the simultaneous solutions of the k equations,

$$\frac{\partial \Lambda}{\partial \theta_j} = 0, \qquad j = 1, 2, \ldots, k$$

 

Even though it is common practice to plot MLE solutions together with median ranks (the points are plotted according to median ranks and the line according to the MLE solution), this practice is not completely accurate. As can be seen from the equations above, the MLE method is independent of any kind of ranks or plotting positions. For this reason, the MLE line often appears not to track the data points on the probability plot. This is perfectly acceptable, since the two methods are independent of each other, and it in no way suggests that the solution is wrong.

 

Illustrating the MLE Method Using the Exponential Distribution

•      To estimate $\hat{\lambda}$, for a sample of n units (all tested to failure), first obtain the likelihood function,

 

$$L(\lambda \mid t_1, t_2, \ldots, t_n) = \prod_{i=1}^{n} \lambda e^{-\lambda t_i} = \lambda^{n} e^{-\lambda \sum_{i=1}^{n} t_i}$$

 

•      Take the natural log of both sides,

 

$$\Lambda = \ln L = n\ln\lambda - \lambda\sum_{i=1}^{n} t_i$$

 

•      Obtain $\frac{\partial \Lambda}{\partial \lambda}$ and set it equal to zero,

 

$$\frac{\partial \Lambda}{\partial \lambda} = \frac{n}{\lambda} - \sum_{i=1}^{n} t_i = 0$$

 

•      Solve for $\hat{\lambda}$, or,

 

$$\hat{\lambda} = \frac{n}{\sum_{i=1}^{n} t_i}$$
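A quick sketch of the closed-form estimator, with hypothetical failure times, plus a brute-force check that the same value maximizes the log-likelihood:

```python
import numpy as np

t = np.array([12.0, 47.0, 88.0, 135.0, 210.0])  # hypothetical complete data (hours)

lam_hat = len(t) / t.sum()   # closed-form MLE: n / sum(t_i)
print(lam_hat)

# Numerical check: the same value maximizes the log-likelihood
# Lambda(lam) = n*ln(lam) - lam*sum(t_i)
lam_grid = np.linspace(1e-4, 0.05, 100000)
loglik = len(t) * np.log(lam_grid) - lam_grid * t.sum()
print(lam_grid[np.argmax(loglik)])  # agrees with lam_hat to grid precision
```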

 

Notes on lambda

•      Note that the value of $\hat{\lambda}$ is only an estimate: if we obtain another sample from the same population and re-estimate $\lambda$, the new value will differ from the one previously calculated.

•      In plain language, $\hat{\lambda}$ is an estimate of the true value of $\lambda$.

•      How close is the value of our estimate to the true value? To answer this question, one must first determine the distribution of the parameter, in this case of $\hat{\lambda}$. This methodology introduces the concept of a confidence level, which allows us to specify a range for our estimate with a certain confidence.

•      The treatment of confidence intervals is integral to reliability engineering, and to all of statistics. (Confidence intervals are presented in Appendix A: Brief Statistical Background.)

 

Illustrating the MLE Method Using the Normal Distribution

To obtain the MLE estimates for the mean, $\mu$, and standard deviation, $\sigma$, of the normal distribution, start with the pdf of the normal distribution, which is given by:

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^{2}}$$

 

If $x_1, x_2, \ldots, x_N$ are known times-to-failure (and with no suspensions), then the likelihood function is given by:

$$L = \prod_{i=1}^{N} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x_i - \mu}{\sigma}\right)^{2}}$$

 

Then,

 

$$\Lambda = \ln L = -\frac{N}{2}\ln\left(2\pi\sigma^{2}\right) - \frac{1}{2\sigma^{2}}\sum_{i=1}^{N}\left(x_i - \mu\right)^{2}$$

 

Then taking the partial derivatives of $\Lambda$ with respect to each of the parameters and setting them equal to zero yields,

 

(24)     

$$\frac{\partial \Lambda}{\partial \mu} = \frac{1}{\sigma^{2}}\sum_{i=1}^{N}\left(x_i - \mu\right) = 0$$

 

and,

 

(25)     

$$\frac{\partial \Lambda}{\partial \sigma} = -\frac{N}{\sigma} + \frac{1}{\sigma^{3}}\sum_{i=1}^{N}\left(x_i - \mu\right)^{2} = 0$$

 

Solving Eqns. (24) and (25) simultaneously yields,

 

(26)     

$$\hat{\mu} = \frac{1}{N}\sum_{i=1}^{N} x_i$$

 

And,

 

(27)     

$$\hat{\sigma} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \hat{\mu}\right)^{2}}$$

 

These solutions are only valid for data with no suspensions, i.e. all units are tested to failure. In cases in which suspensions are present, the methodology changes and the problem becomes much more complicated.
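Eqns. (26) and (27) in code, with hypothetical data; note that Eqn. (27) is the biased (MLE) form, which matches NumPy's default ddof=0:

```python
import numpy as np

x = np.array([43.0, 51.0, 58.0, 62.0, 70.0, 76.0])  # hypothetical complete data

mu_hat = x.mean()                                 # Eqn. (26)
sigma_hat = np.sqrt(((x - mu_hat) ** 2).mean())   # Eqn. (27), the biased MLE
print(mu_hat, sigma_hat)

# np.std with ddof=0 reproduces the MLE; ddof=1 gives the N-1 corrected value
# discussed under Consistent Estimator below.
print(np.std(x, ddof=0), np.std(x, ddof=1))
```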

 

Estimator

As mentioned above, the parameters obtained from maximizing the likelihood function are estimators of the true value. It is clear that the sample size determines the accuracy of an estimator. If the sample size equals the whole population, then the estimator is the true value. Estimators have properties such as unbiasedness, sufficiency, consistency, and efficiency. Numerous books and papers deal with these properties and this coverage is beyond the scope of this reference. However, we would like to briefly address unbiasedness and consistency.

 

Unbiased Estimator

An estimator $\hat{\theta} = d(x_1, \ldots, x_N)$ is said to be unbiased if and only if it satisfies the condition $E[\hat{\theta}] = \theta$ for all possible values of $\theta$. Note that E[X] denotes the expected value of X and is defined (for continuous distributions) by:

$$E[X] = \int_{-\infty}^{\infty} x f(x)\, dx$$

 

This implies that the true value is not consistently underestimated nor overestimated by an unbiased estimator.

 

Consistent Estimator

An unbiased estimator that converges more closely to the true value as the sample size increases is called a consistent estimator. In the example above, the standard deviation of the normal distribution was obtained using MLE. This estimator of the true standard deviation is a biased one. It can be shown [4] that the consistent estimate of the variance for complete data (for the normal and lognormal distributions) is given by:

$$\hat{\sigma}^{2} = \left(\frac{N}{N-1}\right)\cdot\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \hat{\mu}\right)^{2} = \frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \hat{\mu}\right)^{2}$$

Note that for larger values of N, the factor $\frac{N}{N-1}$ tends to 1.

 

See Also:

Appendix B: Parameter Estimation

 

MLE of Accelerated Life Data

Due to its nature, maximum likelihood is a very powerful method for estimating the parameters of accelerated testing models, and it makes possible the analysis of very complex models. At the beginning of this appendix, a graphical method for obtaining the parameters of accelerated testing models was illustrated. It involved estimating the parameters of the life distribution separately at each individual stress level, and then plotting the life-stress relationship in a linear manner on a separate life vs. stress plot. In other words, the life distribution and the life-stress relationship were treated separately. Using the MLE method, however, the life distribution and the life-stress relationship can be treated as one complete model that describes both. This is accomplished by including the life-stress relationship in the pdf of the life distribution.

 

Background Theory

The maximum likelihood method for accelerated life testing analysis is formulated in the same way as shown previously; however, in this case the stress level of each individual observation is included in the likelihood function. Consider a continuous random variable x(v), where v is the stress. The pdf of the random variable now becomes a function of both x and v:

 

$$f(x, v;\, \theta_1, \theta_2, \ldots, \theta_k)$$

 

where $\theta_1, \theta_2, \ldots, \theta_k$ are k unknown constant parameters that need to be estimated. Conduct an experiment and obtain N independent observations, $x_1, x_2, \ldots, x_N$, each at a corresponding stress, $v_1, v_2, \ldots, v_N$. Then the likelihood function for complete data is given by:

 

(28)     

$$L(x_1, \ldots, x_N \mid v_1, \ldots, v_N;\, \theta_1, \ldots, \theta_k) = \prod_{i=1}^{N} f(x_i, v_i;\, \theta_1, \theta_2, \ldots, \theta_k)$$

 

The logarithmic likelihood function is given by:

 

(29)     

$$\Lambda = \ln L = \sum_{i=1}^{N} \ln f(x_i, v_i;\, \theta_1, \theta_2, \ldots, \theta_k)$$

 

The maximum likelihood estimators (MLE) of $\theta_1, \theta_2, \ldots, \theta_k$ are obtained by maximizing L or $\Lambda$.

 

In this case, $\theta_1, \theta_2, \ldots, \theta_k$ are the parameters of the combined model, which includes the parameters of the life distribution and the parameters of the life-stress relationship. Note that in Eqns. (28) and (29), N is the total number of observations, so the sample size is no longer broken down into the number of observations at each stress level. For example, if 4 units were tested at one stress level and 15 at another, the sample size used in Eqn. (28) or Eqn. (29) is 19.

 

Once the parameters are estimated, they can be substituted back into the life distribution and the life-stress relationship.

 

Example 2

The following example illustrates the use of the MLE method on accelerated life test data. Consider the inverse power law relationship, given by:

 

(30)     

$$L(V) = \frac{1}{K V^{n}}$$

 

where,

 

•        L represents a quantifiable life measure, V represents the stress level, and K and n are model parameters.

 

Assume that the life at each stress follows a Weibull distribution, with a pdf given by:

 

(31)     

$$f(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta - 1} e^{-\left(\frac{t}{\eta}\right)^{\beta}}$$

 

where the time-to-failure, t, is a function of stress, V.

 

A common life measure needs to be determined so that it can be easily included in Eqn. (31). In this case, setting $\eta = L(V)$ (the life at 63.2% unreliability) in Eqn. (30) and substituting into Eqn. (31) yields the following IPL-Weibull pdf,

$$f(t, V) = \beta K V^{n}\left(K V^{n} t\right)^{\beta - 1} e^{-\left(K V^{n} t\right)^{\beta}}$$

 

The log-likelihood function for complete data is then given by:

$$\Lambda = \ln L = \sum_{i=1}^{N} \ln\left[\beta K V_i^{\,n}\left(K V_i^{\,n} t_i\right)^{\beta - 1} e^{-\left(K V_i^{\,n} t_i\right)^{\beta}}\right]$$

 

Note that $\beta$ is now a common shape parameter to be solved for, along with K and n.
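To make the estimation concrete, the sketch below maximizes this log-likelihood numerically. The data arrays are hypothetical, the log-parametrization of $\beta$ and K is just a convenience to keep them positive, and SciPy's general-purpose Nelder-Mead optimizer stands in for ALTA's internal routines:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical complete data: time-to-failure and the stress of each unit
t = np.array([120.0, 180.0, 250.0, 310.0, 40.0, 65.0, 90.0, 130.0])
v = np.array([300.0, 300.0, 300.0, 300.0, 350.0, 350.0, 350.0, 350.0])

def neg_loglik(p):
    # Work with ln(beta) and ln(K) so the positive parameters stay positive
    beta, K, n = np.exp(p[0]), np.exp(p[1]), p[2]
    eta = 1.0 / (K * v**n)               # IPL life measure, Eqn. (30)
    z = t / eta
    # Weibull log-pdf with eta(V): ln(beta/eta) + (beta-1)*ln(z) - z**beta
    return -np.sum(np.log(beta / eta) + (beta - 1.0) * np.log(z) - z**beta)

res = minimize(neg_loglik, x0=[0.0, np.log(1e-6), 2.0], method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-10, "fatol": 1e-10})
beta_hat, K_hat, n_hat = np.exp(res.x[0]), np.exp(res.x[1]), res.x[2]
print(beta_hat, K_hat, n_hat)
```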

 

See Also:

Appendix B: Parameter Estimation

 

Analysis of Censored Data

So far we have discussed parameter estimation methods for complete data only. We will expand on that approach in this section by including the maximum likelihood estimation method for right censored data. The method is based on the same principles covered previously but modified to take into account the fact that some of the data is censored.

 

MLE Analysis of Right Censored Data

The maximum likelihood method is by far the most appropriate analysis method for censored data. It is versatile and applicable to most accelerated life testing models. When performing maximum likelihood analysis, the likelihood function is expanded to take into account the suspended items. A big advantage of using MLE when dealing with censored data is that each suspension term is included in the likelihood function; thus the estimates of the parameters are obtained from consideration of the entire data set. Using MLE properties, confidence bounds that also account for all the suspension terms can be obtained. In the case of suspensions, and where x is a continuous random variable with pdf and cdf,

 

$$f(x;\, \theta_1, \theta_2, \ldots, \theta_k), \qquad F(x;\, \theta_1, \theta_2, \ldots, \theta_k)$$

 

where $\theta_1, \theta_2, \ldots, \theta_k$ are the k unknown parameters that need to be estimated from R observed failures at $T_1, T_2, \ldots, T_R$ and M observed suspensions at $S_1, S_2, \ldots, S_M$, where $V_i$ is the stress level corresponding to the $i^{th}$ observed failure and $V_j$ is the stress level corresponding to the $j^{th}$ observed suspension. The likelihood function is then formulated as follows,

 

$$L = \prod_{i=1}^{R} f(T_i, V_i;\, \theta_1, \ldots, \theta_k) \cdot \prod_{j=1}^{M}\left[1 - F(S_j, V_j;\, \theta_1, \ldots, \theta_k)\right]$$

 

and the parameters are found by maximizing this function. In most cases, no closed form solution exists for this maximum or for the parameters, so they must be obtained numerically.
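Extending the earlier IPL-Weibull sketch to right censored data only changes the objective: failures contribute the log-pdf and suspensions contribute the log of the survivor function, $\ln R(S) = -(S/\eta)^{\beta}$ for the Weibull. The data below are again hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical accelerated data: failures (T, Vf) and right suspensions (S, Vs)
T  = np.array([120.0, 180.0, 250.0, 40.0, 65.0, 90.0])
Vf = np.array([300.0, 300.0, 300.0, 350.0, 350.0, 350.0])
S  = np.array([400.0, 150.0])
Vs = np.array([300.0, 350.0])

def neg_loglik(p):
    beta, K, n = np.exp(p[0]), np.exp(p[1]), p[2]
    eta_f = 1.0 / (K * Vf**n)            # IPL life measure at the failure stresses
    eta_s = 1.0 / (K * Vs**n)            # and at the suspension stresses
    zf, zs = T / eta_f, S / eta_s
    ll_fail = np.sum(np.log(beta / eta_f) + (beta - 1.0) * np.log(zf) - zf**beta)
    ll_susp = np.sum(-(zs**beta))        # each suspension adds ln R(S, V)
    return -(ll_fail + ll_susp)

res = minimize(neg_loglik, x0=[0.0, np.log(1e-6), 2.0], method="Nelder-Mead",
               options={"maxiter": 20000})
print(np.exp(res.x[0]), np.exp(res.x[1]), res.x[2])   # beta, K, n estimates
```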

 

Example 3

Example 1 was repeated using MLE with the following results:

 

[pic]

 

In the individual (probability plotting) analysis, the betas were averaged in order to obtain a common shape parameter, which introduced additional uncertainty into the analysis. In this case, using MLE, the parameter $\beta$ is estimated from the entire data set at once.

 

See Also:

Appendix B: Parameter Estimation

 

Appendix B Conclusions

In this appendix, two methods for estimating the parameters of accelerated life testing models were presented. First, the graphical method was illustrated, using a probability plotting method for obtaining the parameters of the life distribution. The parameters of the life-stress relationship were then estimated graphically by linearizing the model. However, not all life-stress relationships can be linearized. In addition, estimating the parameters of each individual distribution leads to an accumulation of uncertainties, depending on the number of failures and suspensions observed at each stress level. Furthermore, the slopes (shape parameters) of the individual distributions are rarely equal (common). Using the graphical method, one must estimate a common shape parameter (usually the average) and repeat the analysis; by doing so, further uncertainties, which cannot be quantified, are introduced into the estimates. On the other hand, when the life distribution and the life-stress relationship are treated as one model, the parameters of that model can be estimated using the complete likelihood function. In this way, a common shape parameter is estimated directly for the model, eliminating the uncertainties of averaging the individual shape parameters. All uncertainties are accounted for in the form of confidence bounds (presented in detail in Appendix A: Brief Statistical Background), which are quantifiable because they are obtained based on the overall model.

 

See Also:

Appendix B: Parameter Estimation

 

Appendix C: References

 

Aitchison, J., Jr. and Brown, J.A.C., The Lognormal Distribution, Cambridge University Press, New York, 176 pp., 1957.

 

Cramer, H., Mathematical Methods of Statistics, Princeton University Press, Princeton, NJ, 1946.

 

Davis, D.J., An Analysis of Some Failure Data, J. Am. Stat. Assoc., Vol. 47, p. 113, 1952.

 

Dietrich, D., SIE 530 Engineering Statistics Lecture Notes, The University of Arizona, Tucson, Arizona.

 

Dudewicz, E.J., An Analysis of Some Failure Data, J. Am. Stat. Assoc., Vol. 47, p. 113, 1952.

 

Dudewicz, E.J., and Mishra, Satya N., Modern Mathematical Statistics, John Wiley & Sons, Inc., New York, 1988.

 

Evans, Ralph A., The Lognormal Distribution is Not a Wearout Distribution, Reliability Group Newsletter, IEEE, Inc., 345 East 47th St., New York, N.Y. 10017, p. 9, Vol. XV, Issue 1, January 1970.

 

Gottfried, Paul, Wear-out, Reliability Group Newsletter, IEEE, Inc., 345 East 47th St., New York, N.Y. 10017, p. 7, Vol. XV, Issue 3, July 1970.

 

Glasstone, S., Laidler, K. J., and Eyring, H. E., The Theory of Rate Processes, McGraw Hill, NY, 1941.

 

Hahn, Gerald J., and Shapiro, Samuel S., Statistical Models in Engineering, John Wiley & Sons, Inc., New York, 355 pp., 1967.

 

Hald, A., Statistical Theory with Engineering Applications, John Wiley & Sons, Inc., New York, 783 pp., 1952.

 

Hald, A., Statistical Tables and Formulas, John Wiley & Sons, Inc., New York, 97 pp., 1952.

 

Hirose, Hideo, Maximum Likelihood Estimation in the 3-parameter Weibull Distribution - A Look through the Generalized Extreme-value Distribution, IEEE Transactions on Dielectrics and Electrical Insulation, Vol. 3, No. 1, pp. 43-55, February 1996.

 

Johnson, Leonard G., The Median Ranks of Sample Values in their Population With an Application to Certain Fatigue Studies, Industrial Mathematics, Vol. 2, 1951.

 

Johnson, Leonard G., The Statistical Treatment of Fatigue Experiment, Elsevier Publishing Company, New York, 144 pp., 1964.

 

Kao, J.H.K., A New Life Quality Measure for Electron Tubes, IRE Transaction on Reliability and Quality Control, PGRQC 13, pp. 15-22, July 1958.

 

Kapur, K.C., and Lamberson, L.R., Reliability in Engineering Design, John Wiley & Sons, Inc., New York, 586 pp., 1977.

 

Kececioglu, Dimitri, Reliability Engineering Handbook, Prentice Hall, Inc., New Jersey, Vol. 1, 1991.

 

Kececioglu, Dimitri, Reliability & Life Testing Handbook, Prentice Hall, Inc., New Jersey, Vol. 1 and 2, 1993 and 1994.

 

Kececioglu, Dimitri, and Sun, Feng-Bin, Environmental Stress Screening - Its Quantification, Optimization and Management, Prentice Hall PTR, New Jersey, 1995.

 

Kececioglu, Dimitri, and Sun, Feng-Bin, Burn-In Testing - Its Quantification and Optimization, Prentice Hall PTR, New Jersey, 1997.

 

Leemis, Lawrence M., Reliability - Probabilistic Models and Statistical Methods, Prentice Hall, Inc., Englewood Cliffs, New Jersey, 1995.

 

Lieblein, J., and Zelen, M., Statistical Investigation of the Fatigue Life of Deep-Groove Ball Bearings, Journal of Research, National Bureau of Standards, Vol. 57, p. 273, 1956.

 

Lloyd, David K., and Lipow, Myron, Reliability: Management, Methods and Mathematics, Prentice Hall, Englewood Cliffs, New Jersey, 1962.

 

Mann, Nancy R., Schafer, Ray. E., and Singpurwalla, Nozer D., Methods for Statistical Analysis of Reliability and Life Data, John Wiley & Sons, Inc., New York, 1974.

 

Meeker, William Q., and Escobar, Luis A., Statistical Methods for Reliability Data, John Wiley & Sons, Inc., New York, 1998.

 

Nelson, Wayne, Applied Life Data Analysis, John Wiley & Sons, Inc., New York, 1982.

 

Nelson, Wayne, Accelerated Testing: Statistical Models, Test Plans and Data Analyses, John Wiley & Sons, Inc., New York, 1990.

 

Perry, J. N., Semiconductor Burn-in and Weibull Statistics, Semiconductor Reliability, Vol. 2, Engineering Publishers, Elizabeth, N.J., pp. 8-90, 1962.

 

Procassini, A. A., and Romano, A., Transistor Reliability Estimates Improve with Weibull Distribution Function, Motorola Military Products Division, Engineering Bulletin, Vol. 9, No. 2, pp. 16-18, 1961.

 

ReliaSoft Corporation, Life Data Analysis Reference, ReliaSoft Publishing, Tucson, AZ, 1997.

 

Striny, Kurt M., and Schelling, Arthur W., Reliability Evaluation of Aluminum-Metalized MOS Dynamic RAMS in Plastic Packages in High Humidity and Temperature Environments, IEEE 31st Electronic Components Conference, pp. 238-244, 1981.

 

Weibull, Waloddi, A Statistical Representation of Fatigue Failure in Solids, Transactions on the Royal Institute of Technology, No. 27, Stockholm, 1949.

 

Weibull, Waloddi, A Statistical Distribution Function of Wide Applicability, Journal of Applied Mechanics, Vol. 18, pp. 293-297, 1951.

 

Wingo, Dallas R., Solution of the Three-Parameter Weibull Equations by Constrained Modified Quasilinearization (Progressively Censored Samples), IEEE Transactions on Reliability, Vol. R-22, No. 2, pp. 96-100, June 1973.

 

See Also:

Contents

Introduction

 
