Introduction

ReliaSoft’s research and development team compiled this on-line reference as an introduction to accelerated life testing for the general use of the practitioner.

 

This on-line reference presents the underlying theory and principles, relevant calculations and derivations, and numerous practical examples as they apply to accelerated life testing analysis. It is one of many planned references that will reside on ReliaSoft's website.

 

This on-line reference is a modified version of ReliaSoft's Accelerated Life Testing Reference, the printed theory manual that accompanies the ALTA 6 application. To find out more, visit ReliaSoft's ALTA 6 product home page.

 

Go to the next page to see how this on-line reference is arranged.

 

Modifications and Enhancements to ALTA 6

This reference has been updated and enhanced since the initial printing to include a discussion of the principles and theory behind the new functionality available in ALTA 6 and ALTA 6 PRO. This includes a detailed discussion of three additional life-stress models: general log-linear, proportional hazards and cumulative damage.

 

This reference also includes an introduction to three new plot types designed to help the analyst assess model assumptions: the standardized residuals, Cox-Snell residuals and standardized vs. fitted residuals plots. A presentation of the principles and theory that support three new supplementary analysis tools is also included: tests of comparison, likelihood ratio tests and accelerated degradation analysis.

 

•      The general log-linear model is a flexible life-stress relationship that can be used to analyze data with up to eight stress types and allows the user to specify the type of transformation (exponential or power) for each stress type.

 

•      The proportional hazards model has been widely used in the biomedical field and there has been increasing interest in its application to reliability engineering. This model supports the analysis of up to eight stress types and assumes an exponential relationship for each stress.

 

•      The cumulative damage model represents a great advance in the development of mathematical models to analyze data from accelerated life tests. This model, also known as "cumulative exposure," supports the analysis of data from accelerated life tests where the application of the stress is a function of time. This model can be used to analyze data from time-varying stress tests (e.g. step-stress, ramp-stress, etc.) and/or for products that have an expected use stress level that varies with time. This sophisticated mathematical model has been highly anticipated in the reliability engineering field and its development is the result of extensive research efforts.

 

See Also:

Introduction

How is this On-Line Reference Arranged?

 

How is this On-Line Reference Arranged?

You can browse through this on-line reference by using the following two buttons that appear on the top right corner of each page:

 

[pic] takes you to the previous page in the sequence.

[pic] takes you to the next page in the sequence.

This reference is divided into chapters, which are identified by the [pic] icon, and then sections and subsections. Some of the subsections also have topics associated with them.

 

At the end of some pages in this reference, a See Also: list will appear, providing links to other related topics in the reference.

 

Throughout this reference there are hotspots. These hotspots are always displayed like hypertext links and will take you to related items in this reference. In addition, a few hotspots display a pop-up citation wherever an outside source was used, and a few display a pop-up equation to better explain a subject.

 

The chapters, sections and subsections of this reference are included on the Contents page. This reference concludes with a References page.

 

See Also:

Introduction

Contents

 

Contents

[pic]Introduction

•      Modifications and Enhancements to ALTA 6

•      How is this On-Line Reference Arranged?

•      Contents

 

[pic]What is Accelerated Life Testing?

•      Types of Accelerated Tests

•      Quantitative Accelerated Life Tests

 

[pic]Understanding Accelerated Life Test Analysis

•      Looking at a Single Constant Stress Accelerated Life Test

•      Analysis Method

•      Stress Loading

•      Summary of Accelerated Life Testing Analysis

 

[pic]Accelerated Life Testing and ALTA

•      Data and Data Types

•      Plots

 

[pic]Life Distributions

•      Exponential Distribution

•      Weibull Distribution

•      Lognormal Distribution

 

[pic]Arrhenius Relationship

•      Arrhenius Relationship Introduction

•      Acceleration Factor

•      Arrhenius-Exponential

•      Arrhenius-Weibull

•      Arrhenius-Lognormal

 

[pic]Eyring Relationship

•      Eyring Relationship Introduction

•      Eyring Acceleration Factor

•      Eyring-Exponential

•      Eyring-Weibull

•      Eyring-Lognormal

 

[pic]Inverse Power Law Relationship

•      Inverse Power Law Introduction

•      IPL Acceleration Factor

•      IPL-Exponential

•      IPL-Weibull

•      IPL-Lognormal

•      IPL and Coffin-Manson Relationship

 

[pic]Temperature-Humidity Relationship

•      Temperature-Humidity Relationship Introduction

•      T-H Data

•      T-H Acceleration Factor

•      T-H Exponential

•      T-H Weibull

•      T-H Lognormal

 

[pic]Temperature-Non Thermal Relationship

•      Temperature-Non Thermal Relationship Introduction

•      T-NT Acceleration Factor

•      T-NT Exponential

•      T-NT Weibull

•      T-NT Lognormal

 

[pic]Multivariable Relationships: General Log-Linear and Proportional Hazards

•      General Log-Linear Relationship Introduction

•      Proportional Hazards Model

•      Indicator Variables

 

[pic]Time-Varying Stress Models

•      Time-Varying Stress Models Introduction

•      Time-Varying Stress Models Formulation

•      Time-Varying Stress Model Confidence Intervals

 

[pic]Additional Tools

•      Tests of Comparison

•      Common Shape Parameter Likelihood Ratio Test

•      Degradation Analysis

 

[pic]General Examples using ALTA

•      Paper Clip Example

•      Electronic Devices Example

•      Mechanical Components Example

•      Tensile Component Example

•      ACME Example

•      Circuit Boards Example

•      Electronic Components Example

•      Voltage Stress Example

•      Automotive Step-Stress Example

 

[pic]Appendix A: Brief Statistical Background

•      Basic Statistical Definitions

•      Distributions

•      Confidence Intervals (or Bounds)

•      Confidence Limits Determination

 

[pic]Appendix B: Parameter Estimations

•      Graphical Method

•      MLE (Maximum Likelihood) Parameter Estimation

•      MLE of Accelerated Life Data

•      Analysis of Censored Data

•      Appendix B Conclusions

 

[pic]Appendix C: References

 

See Also:

Introduction

How is this On-Line Reference Arranged?

 

[pic]What is Accelerated Life Testing?

Traditional life data analysis involves analyzing times-to-failure data (of a product, system or component) obtained under normal operating conditions in order to quantify the life characteristics of the product, system or component. In many situations, and for many reasons, such life data (or times-to-failure data) is very difficult, if not impossible, to obtain. The reasons for this difficulty can include the long lifetimes of today's products, the short time period between design and release, and the challenge of testing products that are used continuously under normal conditions. Given this difficulty, and the need to observe failures of products to better understand their failure modes and their life characteristics, reliability practitioners have attempted to devise methods to force these products to fail more quickly than they would under normal use conditions. In other words, they have attempted to accelerate their failures. Over the years, the term accelerated life testing has been used to describe all such practices.

 

A variety of methods that serve different purposes have been termed "accelerated life testing." As we use the term in this reference, accelerated life testing involves acceleration of failures with the single purpose of quantification of the life characteristics of the product at normal use conditions.

 

More specifically, accelerated life testing can be divided into two areas: qualitative accelerated testing and quantitative accelerated life testing. In qualitative accelerated testing, the engineer is mostly interested in identifying failures and failure modes without attempting to make any predictions as to the product’s life under normal use conditions. In quantitative accelerated life testing, the engineer is interested in predicting the life of the product (or, more specifically, life characteristics such as MTTF, B(10) life, etc.) at normal use conditions, from data obtained in an accelerated life test.

 

This chapter includes the following subchapters:

•      Types of Accelerated Tests

•      Quantitative Accelerated Life Tests

 

See Also:

Introduction

Contents

 

Types of Accelerated Tests

Each type of test that has been called an accelerated test provides different information about the product and its failure mechanisms. These tests can be divided into two types: qualitative tests (HALT, HAST, torture tests, shake and bake tests, etc.) and quantitative accelerated life tests. This reference addresses and quantifies the models and procedures associated with quantitative accelerated life tests.

 

Qualitative Accelerated Life Testing

Qualitative tests are tests that yield failure information (or failure modes) only. They have been referred to by many names including:

 

•      elephant tests

•      torture tests

•      HALT tests

•      shake and bake tests

 

Qualitative tests are performed on small samples with the specimens subjected to a single severe level of stress, to a number of stresses, or to a time-varying stress (e.g. stress cycling, cold to hot, etc.). If the specimen survives, it passes the test. Otherwise, appropriate actions will be taken to improve the product's design in order to eliminate the cause(s) of failure.

 

Qualitative tests are used primarily to reveal probable failure modes. However, if not designed properly, they may cause the product to fail due to modes that would never have been encountered in real life. A good qualitative test is one that quickly reveals those failure modes that will occur during the life of the product under normal use conditions. In general, qualitative tests are not designed to yield life data that can be used in subsequent quantitative accelerated life data analysis as described in this reference, nor do they quantify the life (or reliability) characteristics of the product under normal use conditions. However, they provide valuable information as to the types and levels of stress one may wish to employ during a subsequent quantitative test.

 

Benefits and Drawbacks of Qualitative Tests:

•      Benefit: Increase reliability by revealing probable failure modes.

•      Benefit: Provide valuable feedback in designing quantitative tests; in many cases they are a precursor to a quantitative test.

•      Unanswered Question: What is the reliability of the product at normal use conditions?

 

Quantitative Accelerated Life Testing

Quantitative accelerated life testing (QALT), unlike the qualitative testing methods described previously, consists of tests designed to quantify the life characteristics of the product, component or system under normal use conditions, and thereby provide reliability information. Reliability information can include the determination of the probability of failure of the product under use conditions, mean life under use conditions, and projected returns and warranty costs. It can also be used to assist in the performance of risk assessments, design comparisons, etc.

 

Quantitative accelerated life testing can take the form of usage rate acceleration or overstress acceleration. Both accelerated life test methods are described next. Because usage rate acceleration test data can be analyzed with typical life data analysis methods, the overstress acceleration method is the testing method relevant to both ALTA and the remainder of this reference.

 

See Also:

What is Accelerated Life Testing?

Quantitative Accelerated Life Tests

 

Quantitative Accelerated Life Tests

For all life tests, some time-to-failure information (or time-to-an-event) for the product is required since the failure of the product is the event we want to understand. In other words, if we wish to understand, measure, and predict any event, we must observe the event!

 

Most products, components or systems are expected to perform their functions successfully for long periods of time, such as years. Obviously, for a company to remain competitive the time required to obtain times-to-failure data must be considerably less than the expected life of the product.

 

Two methods of acceleration, usage rate acceleration and overstress acceleration, have been devised to obtain times-to-failure data at an accelerated pace. For products that do not operate continuously, one can accelerate the time it takes to induce failures by continuously testing these products. This is called usage rate acceleration. For products for which usage rate acceleration is impractical, one can apply stress(es) at levels which exceed the levels that a product will encounter under normal use conditions and use the times-to-failure data obtained in this manner to extrapolate to use conditions. This is called overstress acceleration.

 

Usage Rate Acceleration

For products that do not operate continuously under normal conditions, if the test units are operated continuously, failures are encountered earlier than if the units were tested at normal usage. For example, a microwave oven operates for small periods of time every day. One can accelerate a test on microwave ovens by operating them more frequently until failure. The same could be said of washers. If we assume an average washer use of 6 hours a week, one could conceivably reduce the testing time 28-fold by testing these washers continuously.
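The 28-fold figure follows directly from the ratio of the two usage rates. A minimal sketch of the arithmetic (the three weeks of test time below are a hypothetical illustration, not from the text):

```python
# Usage rate acceleration: a washer used 6 hours per week under normal
# conditions can instead be run continuously, 24 * 7 = 168 hours per week.
normal_use_hours_per_week = 6
continuous_hours_per_week = 24 * 7  # 168

# The test-time compression factor is the ratio of the two usage rates.
acceleration_factor = continuous_hours_per_week / normal_use_hours_per_week
print(acceleration_factor)  # 28.0

# A failure observed after 3 weeks of continuous testing then corresponds
# to roughly 3 * 28 = 84 weeks of normal field use.
weeks_on_test = 3
equivalent_field_weeks = weeks_on_test * acceleration_factor
print(equivalent_field_weeks)  # 84.0
```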

 

To simplify the example, other stresses or secondary effects that might be encountered, such as turning the unit on and off, or overheating due to continuous usage, are not mentioned. The practitioner should take these effects into account when the test is formulated. For example, to overcome this challenge, the test washers could run their cycles with 30 minute rest periods to take other stresses into account.

 

Data obtained through usage acceleration can be analyzed with the same methods used to analyze regular times-to-failure data. These typical life data analysis techniques are thoroughly described in ReliaSoft's Life Data Analysis Reference [31] and facilitated by ReliaSoft's Weibull++ software package.

 

The limitation of usage rate acceleration arises when products, such as computer servers and peripherals, maintain a very high or even continuous usage. In such cases, usage acceleration, even though desirable, is not a feasible alternative, and the practitioner must stimulate the product to fail, usually through the application of stress(es). This method of accelerated life testing is called overstress acceleration and is described next.

 

Overstress Acceleration

For products with very high or continuous usage, the accelerated life testing practitioner must stimulate the product to fail in a life test. This is accomplished by applying stress(es) that exceed the stress(es) that a product will encounter under normal use conditions. The times-to-failure data obtained under these conditions is then used to extrapolate to use conditions. Accelerated life tests can be performed at high or low temperature, humidity, voltage, pressure, vibration, etc. in order to accelerate or stimulate the failure mechanisms. They can also be performed at a combination of these stresses.

 

Stresses and Stress Levels

Accelerated life test stresses and stress levels should be chosen so that they accelerate the failure modes under consideration but do not introduce failure modes that would never occur under use conditions. Normally, these stress levels will fall outside the product specification limits but inside the design limits as illustrated next:

 

[pic]

Fig. 1: Typical stress range for a component, product or system.

 

This choice of stresses and stress levels and the process of setting up the experiment is of the utmost importance. Consult your design engineer(s) and material scientist(s) to determine what stimuli (stresses) are appropriate as well as to identify the appropriate limits (or stress levels). If these stresses or limits are unknown, multiple tests with small sample sizes can be performed in order to ascertain the appropriate stress(es) and stress levels. Proper use of Design of Experiments (DOE) methodology is also crucial at this step. In addition to proper stress selection, the application of the stresses must be accomplished in some logical, controlled and quantifiable fashion. Accurate data on the stresses applied, as well as the observed behavior of the test specimens, must be maintained.

 

It is clear that as the stress used in an accelerated test becomes higher, the required test duration decreases. However, as the stress level moves farther away from the use conditions, the uncertainty in the extrapolation increases. Confidence intervals provide a measure of the uncertainty in extrapolation. Confidence intervals are presented in Appendix A: Brief Statistical Background.

 

See Also:

What is Accelerated Life Testing?

 

[pic]Understanding Quantitative Accelerated Life Data Analysis

In typical life data analysis one determines, through the use of statistical distributions, a life distribution that describes the times-to-failure of a product. Statistically speaking, one wishes to determine the use level probability density function, or pdf, of the times-to-failure. Appendix A of this reference presents these statistical concepts and provides a basic statistical background as it applies to life data analysis.

 

Once this pdf is obtained, all other desired reliability results can be easily determined including but not limited to:

 

•      Percentage failing under warranty.

•      Risk assessment.

•      Design comparison.

•      Wear-out period (product performance degradation).

 

In typical life data analysis, this use level probability density function, or pdf, of the times-to-failure can be easily determined using regular times-to-failure/suspension data and an underlying distribution such as the Weibull, exponential, and lognormal distributions. These lifetime distributions are presented in greater detail in the Life Distributions chapter of this reference.

 

In accelerated life data analysis, however, we face the challenge of determining this use level pdf from accelerated life test data rather than from times-to-failure data obtained under use conditions. To accomplish this, we must develop a method that allows us to extrapolate from data collected at accelerated conditions to arrive at an estimation of use level characteristics.

 

This chapter includes the following subchapters:

 

•      Looking at a Single Constant Stress Accelerated Life Test

•      Analysis Method

•      Stress Loading

•      Summary of Accelerated Life Testing Analysis

 

See Also:

Introduction

Contents

 

Looking at a Single Constant Stress Accelerated Life Test 

To understand the process involved with extrapolating from overstress test data to use level conditions, let's look closely at a simple accelerated life test. For simplicity we will assume that the product was tested under a single stress at a single constant stress level. We will further assume that times-to-failure data have been obtained at this stress level. The times-to-failure at this stress level can then be easily analyzed using an underlying life distribution. A pdf of the times-to-failure of the product can be obtained at that single stress level using traditional approaches. This pdf, the overstress pdf, can likewise be used to make predictions and estimates of life measures of interest at that particular stress level. The objective in an accelerated life test, however, is not to obtain predictions and estimates at the particular elevated stress level at which the units were tested, but to obtain these measures at another stress level, the use stress level.

 

To accomplish this objective, we must devise a method to traverse the path from the overstress pdf to extrapolate a use level pdf.

 

Figure 1 illustrates a typical behavior of the pdf at the high stress (or overstress level) and the pdf at the use stress level. To further simplify the scenario, let's assume that the pdf for the product at any stress level can be described by a single point. Figure 2 illustrates such a simplification. In Figure 2, we need to determine a way to project (or map) this single point from the high stress to the use stress.

 

[pic]

Fig. 1: pdf at different stress levels.

 

[pic]

Fig. 2: Projecting a single point from the high stress to the use stress.

 

Obviously, there are an infinite number of ways to map a particular point from the high stress level to the use stress level. We will assume that there is some model (or a function) that maps our point from the high stress level to the use stress level. This model or function can be described mathematically and can be as simple as the equation for a line. Figure 3 demonstrates some simple models or relationships.

 

[pic]

Fig. 3: Example of two simple life-stress relationships

 

Even when a model is assumed (i.e. linear, exponential, etc.), the mapping possibilities are still infinite since they depend on the parameters of the chosen model or relationship. For example, a simple linear model would generate different mappings for each slope value because we can draw an infinite number of lines through a point. If we tested specimens of our product at two different stress levels, we could begin to fit the model to the data. Obviously, the more points we have, the better off we are in correctly mapping this particular point, or fitting the model to our data.

 

[pic]

Fig. 4: Testing at two (or more) higher stress levels allows us to begin to fit the model.

 

Figure 4 illustrates that you need a minimum of two higher stress levels to properly map the function to a use stress level.
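The mapping idea above can be sketched numerically. Assuming a simple log-linear life-stress model and two hypothetical overstress data points (all values below are illustrative, not from the reference), two stress levels give exactly enough information to solve for the model's two parameters and extrapolate to the use stress:

```python
import math

# Hypothetical data: characteristic life (hours) observed at two elevated
# temperature stress levels, expressed in kelvin.
stress_1, life_1 = 393.0, 800.0    # e.g. 120 deg C
stress_2, life_2 = 433.0, 250.0    # e.g. 160 deg C
use_stress = 353.0                 # e.g.  80 deg C

# Assume a simple log-linear life-stress model: ln(life) = a + b * stress.
# Two stress levels give two equations in the two unknowns a and b.
b = (math.log(life_2) - math.log(life_1)) / (stress_2 - stress_1)
a = math.log(life_1) - b * stress_1

# Extrapolate the characteristic life down to the use stress level.
use_life = math.exp(a + b * use_stress)
print(round(use_life))  # 2560
```

With only one overstress point, a and b would be underdetermined, which is exactly the "infinite mappings" problem illustrated in Figure 2; the second stress level pins the model down.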

 

This subchapter includes the following topic:

 

•      Life Distribution and Life-Stress Models

 

See Also:

Understanding Accelerated Life Test Analysis

 

Life Distribution and Life-Stress Models

Analysis of accelerated life test data, then, consists of an underlying life distribution that describes the product at different stress levels and a life-stress relationship (or model) that quantifies the manner in which the life distribution changes across different stress levels. These elements of analysis are shown graphically in Figure 5.

 

[pic]

Fig. 5: A life distribution and a life-stress relationship.

 

The combination of both an underlying life distribution and a life-stress model can be best seen in Figure 6 where a pdf is plotted against both time and stress.

 

[pic]

Fig. 6: pdf vs. time and stress

 

The assumed underlying life distribution can be any life distribution. The most commonly used life distributions include the Weibull, the exponential, and the lognormal. Along with the life distribution, a life-stress relationship is also used. These life-stress relationships have been empirically derived and fitted to data. An overview of some of these life-stress relationships is presented in the Analysis Method subchapter.

 

The following chapters combine these relationships with life data models (i.e. life distributions) and present more detailed explanations of their use and applicability:

 

•      Arrhenius Relationship

•      Eyring Relationship

•      Inverse Power Law (IPL) Relationship

•      Temperature-Humidity Relationship

•      Temperature-Non Thermal Relationship

•      Multivariable Relationships: General Log-Linear and Proportional Hazards

•      Time-Varying Stress Models

 

See Also:

Looking at a Single Constant Stress Accelerated Life Test

 

Analysis Method

With our current understanding of the principles behind accelerated life testing analysis, we will continue with a discussion of the steps involved in performing an analysis on life data that has been collected from accelerated life tests like those described in the Quantitative Accelerated Life Tests section.

 

Select a Life Distribution

The first step in performing an accelerated life data analysis is to choose an appropriate life distribution. Although it is rarely appropriate, the exponential distribution, because of its simplicity, has in the past been widely used as the underlying life distribution. The Weibull and lognormal distributions, which require more involved calculations, are more appropriate for most uses. The underlying life distributions available in ALTA are presented in detail in the Life Distributions chapter of this reference.

 

Select a Life-Stress Relationship

After you have selected an underlying life distribution appropriate to your data, the second step is to select (or create) a model that describes a characteristic point or a life characteristic of the distribution from one stress level to another.

 

[pic]

Fig. 7: Selecting a model.

 

The life characteristic can be any life measure such as the mean, median, R(x), F(x), etc. This life characteristic is expressed as a function of stress. Depending on the assumed underlying life distribution, different life characteristics are considered. Typical life characteristics for some distributions are shown in the next table.

 

[pic]

*Usually assumed constant

 

For example, when considering the Weibull distribution, the scale parameter, [pic], is chosen to be the life characteristic that is stress dependent, while [pic] is assumed to remain constant across different stress levels. A life-stress relationship is then assigned to [pic]. Eight common life-stress models are presented later in this reference. Click a topic to go directly to that page.

 

•      Arrhenius Relationship

•      Eyring Relationship

•      Inverse Power Law Relationship

•      Temperature-Humidity Relationship

•      Temperature Non-Thermal Relationship

•      Multivariable Relationships: General Log-Linear and Proportional Hazards

•      Time-Varying Stress Models
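As a concrete sketch of the idea described above, the following combines a Weibull life distribution with an Arrhenius-style life-stress relationship: the scale parameter becomes a function of temperature while the shape parameter is held constant. All parameter values here are hypothetical, chosen only to illustrate the structure:

```python
import math

# Hypothetical life-stress parameters (illustrative only).
C = 5.0e-4      # model constant (hours)
B = 6000.0      # activation-energy-related constant (kelvin)
beta = 2.0      # Weibull shape parameter, assumed stress-independent

def eta(temp_K):
    """Arrhenius-style life-stress relationship for the scale parameter."""
    return C * math.exp(B / temp_K)

def reliability(t, temp_K):
    """Weibull reliability at time t hours for a given temperature stress."""
    return math.exp(-((t / eta(temp_K)) ** beta))

# The scale parameter shrinks as the temperature stress increases,
# so failures arrive sooner at the elevated stress level.
print(eta(353.0) > eta(423.0))  # True
```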

 

Parameter Estimation

Once you have selected an underlying life distribution and life-stress relationship model to fit your accelerated test data, the next step is to select a method by which to perform parameter estimation. Simply put, parameter estimation involves fitting a model to the data and solving for the parameters that describe that model. In our case, the model is a combination of the life distribution and the life-stress relationship (model). The task of parameter estimation can vary from trivial (with ample data, a single constant stress, a simple distribution and simple model) to impossible. Available methods for estimating the parameters of a model include the graphical method, the least squares method and the maximum likelihood estimation method. Parameter estimation methods are presented in detail in Appendix B: Parameter Estimation of this reference. Greater emphasis will be given to the MLE method because it provides a more robust solution, and is the one employed in ALTA.
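To illustrate the principle in the simplest possible case, the exponential distribution with complete data has a closed-form MLE: the log-likelihood n·ln(λ) − λ·Σt is maximized at λ = n/Σt. The failure times below are hypothetical; a full accelerated test analysis maximizes a joint likelihood in which the distribution parameter is itself a function of stress through the life-stress relationship:

```python
# Hypothetical complete (uncensored) times-to-failure, in hours,
# from a single constant stress level.
times = [120.0, 340.0, 510.0, 780.0, 1150.0]

# Closed-form maximum likelihood estimate of the exponential failure rate.
n = len(times)
lam_hat = n / sum(times)

# The corresponding mean-time-to-failure estimate is simply 1 / lambda.
mttf_hat = 1.0 / lam_hat
print(round(mttf_hat, 1))  # 580.0
```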

 

Derive Reliability Information

Once the parameters of the underlying life distribution and life-stress relationship have been estimated, a variety of reliability information about the product can be derived such as:

 

•      Warranty time.

•      The instantaneous failure rate, which indicates the number of failures occurring per unit time.

•      The mean life, which provides a measure of the average time of operation to failure.

•      B(X) life, which is the time by which X% of the units will fail.
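For a Weibull use-level pdf, each of the measures listed above follows directly from the estimated parameters. A sketch with hypothetical parameter values (not from the reference):

```python
import math

# Hypothetical use-level Weibull parameters.
beta = 2.0        # shape parameter
eta = 1000.0      # scale parameter, hours

def failure_rate(t):
    """Instantaneous failure rate (hazard) of the Weibull distribution."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# Mean life: eta * Gamma(1 + 1/beta).
mean_life = eta * math.gamma(1.0 + 1.0 / beta)
print(round(mean_life, 1))  # 886.2

# B(X) life: the time by which X% of units fail, from inverting F(t).
def b_life(x_percent):
    return eta * (-math.log(1.0 - x_percent / 100.0)) ** (1.0 / beta)

print(round(b_life(10), 1))  # B(10) life in hours
```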

 

See Also:

Understanding Accelerated Life Test Analysis

 

Stress Loading

The discussion of accelerated life testing analysis thus far has included the assumption that the stress loads applied to units in an accelerated test are constant with respect to time. In real life, however, different types of loads can be considered when performing an accelerated test. Accelerated life tests can be classified as constant stress, step stress, cycling stress, random stress, etc. These types of loads are classified according to the dependency of the stress with respect to time. There are two possible stress loading schemes: loadings in which the stress is time-independent and loadings in which the stress is time-dependent. The mathematical treatment, models and assumptions vary depending on the relationship of stress to time. Both of these loading schemes are described next.

 

Stress is Time-Independent (Constant Stress)

When the stress is time-independent, the stress applied to a sample of units does not vary. In other words, if temperature is the thermal stress, each unit is tested under the same accelerated temperature, e.g. 100° C, and data is recorded. This is the type of stress load that has been discussed so far.

 

[pic]

Fig. 8: Graphical representation of time vs. stress in a time-independent stress loading.

 

This type of stress loading has many advantages over time-dependent stress loadings. Specifically:

 

•      Most products are assumed to operate at a constant stress under normal use.

•      It is far easier to run a constant stress test (e.g. one in which the chamber is maintained at a single temperature).

•      It is far easier to quantify a constant stress test.

•      Models for data analysis exist, are widely publicized and are empirically verified.

•      Extrapolation from a well-executed constant stress test is more accurate than extrapolation from a time-dependent stress test.

 

Stress is Time-Dependent

When the stress is time-dependent, the product is subjected to a stress level that varies with time. Products subjected to time-dependent stress loadings will yield failures more quickly and models that fit them are thought by many to be the "holy grail" of accelerated life testing. The cumulative damage model, available in ALTA 6 PRO, allows you to analyze data from accelerated life tests with time-dependent stress profiles.

 

The step-stress model [28] and the related ramp-stress model are typical cases of time-dependent stress tests. In these cases, the stress load remains constant for a period of time and then is stepped/ramped into a different stress level where it remains constant for another time interval until it is stepped/ramped again. There are numerous variations of this concept.
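A step-stress profile of this kind is just a piecewise-constant function of time. A minimal sketch, with hypothetical breakpoints and voltage levels chosen only for illustration:

```python
# Hypothetical step-stress profile: stress held constant over each
# interval and stepped up at the breakpoints.
steps = [
    (0.0,   100.0),   # from t = 0 h:   100 V
    (250.0, 120.0),   # from t = 250 h: 120 V
    (500.0, 140.0),   # from t = 500 h: 140 V
]

def stress_at(t):
    """Return the applied stress at time t for the step profile above."""
    level = steps[0][1]
    for start, s in steps:
        if t >= start:
            level = s
    return level

print(stress_at(100.0))  # 100.0
print(stress_at(300.0))  # 120.0
print(stress_at(600.0))  # 140.0
```

A ramp-stress profile would replace the constant level within each interval with a linear function of time; the cumulative damage analysis then integrates the effect of this profile over the unit's exposure history.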

 

[pic]

Fig. 9: Graphical representation of the step-stress model.

 

[pic]

Fig. 10: Graphical representation of the ramp-stress model.

 

The same idea can be extended to include a stress as a continuous function of time.

 

[pic]

Fig. 11: Graphical representation of a constantly increasing (or progressive) stress model.

 

[pic]

Fig. 12: Graphical representation of a completely time-dependent stress model.

 

A Summary of Accelerated Life Testing Analysis is presented next.

 

See Also:

Understanding Accelerated Life Test Analysis

 

Summary of Accelerated Life Testing Analysis

In summary, accelerated life testing analysis can be conducted on data collected from carefully designed quantitative accelerated life tests. Well-designed accelerated life tests will apply stress(es) at levels that exceed the stress level the product will encounter under normal use conditions in order to accelerate the failure modes that would occur under use conditions. An underlying life distribution (such as the exponential, Weibull or lognormal distribution) can be chosen to fit the life data collected at each stress level to derive overstress pdfs for each stress level. A life-stress relationship (Arrhenius, Eyring, etc.) can then be chosen to quantify the path from the overstress pdfs in order to extrapolate a use level pdf. From the extrapolated use level pdf, a variety of functions, including reliability, failure rate, mean life, warranty time, etc., can be derived.

 

See Also:

Understanding Accelerated Life Test Analysis

 

Accelerated Life Testing and ALTA

This chapter presents issues relevant to using the ALTA software package to analyze data collected in accelerated life tests. These issues include the types of data that can be analyzed and the types of plots that can be created to display analyses.

 

This chapter includes the following subchapters:

•      Data and Data Types

•      Plots

 

See Also:

Contents

Introduction

Data and Data Types

The analysis of accelerated tests relies extensively on data. Specifically, analysis relies on life and stress data or times-to-failure data at a specific stress level. The accuracy of any prediction is directly proportional to the quality and accuracy of the supplied data. Good data, along with the appropriate distribution and life-stress model, usually results in good predictions. Bad or insufficient data will always result in bad predictions.

 

For the purposes of this reference, we will separate data into two types based on the failure or success of the product. Failure data will be referred to as complete data and success data will be referred to as suspended (or right censored) data. In other words, we know that a product failed after a certain time (complete data), or we know that it operated successfully up to a certain time (suspended or right censored data). Each type is explained next.

 

Complete Data

Most nonlife data, as well as some life data, are what we refer to as complete data. Complete data means that the value of each sample unit is observed (or known) [31]. For example, if we had to compute the average test score for a sample of 10 students, complete data would consist of the known score for each student. For products, known times-to-failure (along with the stress level) comprise what is usually referred to as complete data. For example, if we tested five units and they all failed, we would then have complete information as to the time-to-failure for each unit in the sample.

 

[pic]

 

Censored Data

It is also possible that some of the units have not yet failed when the life data are analyzed. This type of data is commonly called right censored data, or suspended data. Assume that we tested five units and three failed. In this scenario, our data set is composed of the times-to-failure of the three units that failed (complete data) and the running time of the other two units that have not failed at the time the data are analyzed (suspended data). This is the most common censoring scheme and it is used extensively in the analysis of field data.

 

[pic]

 

Grouped Data Analysis

Data can be entered into ALTA individually or in groups. Grouped data analysis is used for tests in which groups of units possess the same time-to-failure or in which groups of units were suspended at the same time. We highly recommend entering redundant data in groups, as grouping speeds data entry and significantly speeds up the calculations.

 

A Note About Data Classification

Depending on the event that we want to measure, data type classification (i.e. complete or suspended) can be open to interpretation. For example, under certain circumstances, and depending on the question one wishes to answer, a specimen that has failed might be classified as suspended for analysis purposes. To illustrate this, consider the following times-to-failure data for a product that can fail due to modes A, B and C:

 

[pic]

 

If the objective of analysis were to determine the probability of failure of the product regardless of the mode responsible for the failure, we would analyze the data with all data entries classified as failures (complete data). However, if the objective of the analysis is to determine the probability of failure of the product due to Mode A only, we would then choose to treat failures due to Modes B or C as suspended (right censored) data. Those data points would be treated as suspended data with respect to Mode A because the product operated until the listed time without failure due to Mode A.
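This mode-specific reclassification is mechanical and easy to script. The sketch below uses hypothetical (made-up) times and failure modes to show how the same data set splits into failures and suspensions when only Mode A is of interest:

```python
# Hypothetical (made-up) times-to-failure with the responsible mode
records = [(105, "A"), (130, "B"), (162, "A"), (220, "C"), (290, "A")]

# Objective 1: probability of failure regardless of mode --
# every entry is a failure (complete data)
all_failures = [t for t, _ in records]

# Objective 2: probability of failure due to Mode A only --
# Mode B and C failures become suspensions, because each such unit
# operated to the listed time without a Mode A failure
mode_a_failures = [t for t, m in records if m == "A"]
mode_a_suspensions = [t for t, m in records if m != "A"]

print(mode_a_failures)      # -> [105, 162, 290]
print(mode_a_suspensions)   # -> [130, 220]
```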

 

See Also:

Accelerated Life Testing and ALTA

 

Plots

In addition to the probability plots required in life data analysis, accelerated life test analysis utilizes a variety of stress-related plots. Each plot provides information crucial to performing accelerated life test analyses. The addition of stress dependency into the life equations introduces another dimension into the plots. This generates a whole new family of three dimensional (3D) plots. The following table summarizes the types of plots available in ReliaSoft's ALTA 6 PRO and ALTA 6.

 

[pic]

 

Considerations relevant to the use of some of the plots available in ALTA are discussed in the sections that follow. Click a topic to go directly to that page.

 

•      Probability Plots

•      Reliability/Unreliability Plots

•      Failure Rate Plots

•      pdf Plots

•      Life-Stress Plots

•      Standard Deviation Plots

•      Acceleration Factor Plots

•      Residual Plots

 

See Also:

Accelerated Life Testing and ALTA

 

Probability Plots

The probability plots used in accelerated life testing analysis are similar to those used in life data analysis. The only difference is that each probability plot in accelerated testing is associated with the corresponding stress or stresses. Multiple lines will be plotted on a probability plot in ALTA, each corresponding to a different stress level. The information that can be obtained from probability plots includes: reliability with confidence bounds, warranty time with confidence bounds, shape and scale parameters, etc.

 

[pic]

Fig. 1: A typical probability plot for three different stress levels.

 

See Also:

Plots

 

Reliability/Unreliability Plots

There are two types of reliability plots. The first type is a 2-dimensional plot of reliability vs. time for a given stress level. The second type is a 3-dimensional plot of the reliability vs. time vs. stress. The 2-dimensional plot of reliability is just a section of the 3-dimensional plot at the desired stress level as illustrated in Figure 2.

 

[pic]

Fig. 2: Example of the relationship between 3D and 2D reliability plots.

 

A reliability vs. time plot provides reliability values at a given time and time at a given reliability. These can be plotted with or without confidence bounds. The same 2-dimensional and 3-dimensional plots are available for unreliability as well; they are simply the complement of the reliability plots.

 

See Also:

Plots

 

Failure Rate Plots

The instantaneous failure rate is a function of time and stress. For this reason, a 2-dimensional plot of failure rate vs. time at a given stress and a 3-dimensional plot of failure rate vs. time and stress can be obtained in ALTA.

 

Failure Rate vs. Stress Surface

[pic]

Fig. 3: 3D Failure Rate Plot

 

[pic]

Fig. 4: 2D Failure Rate Plot

 

A failure rate plot shows the expected number of failures per unit time at a particular stress level (e.g. failures per hour at 410K).

 

See Also:

Plots

 

pdf Plots

The pdf is a function of time and stress. For this reason, a 2-dimensional plot of the pdf vs. time at a given stress and a 3-dimensional plot of the pdf vs. time and stress can be obtained in ALTA.

 

[pic]

Figure 5: 2D pdf plot

 

[pic]

Figure 6: 3D pdf plot

 

A pdf plot represents the relative frequency of failures as a function of time and stress. Although the pdf plot is less important in most reliability applications than the other plots available in ALTA, it provides a good way of visualizing the distribution and its characteristics such as its shape, skewness, mode, etc.

 

See Also:

Plots

 

Life-Stress Plots

Life vs. stress plots and probability plots are the most important plot types in accelerated life testing analysis. Life vs. stress plots are widely used for estimating the parameters of life-stress relationships. Any life measure can be plotted versus stress in the life vs. stress plots available in ALTA. Confidence bounds information on each life measure can also be plotted. The most widely used life vs. stress plots are the Arrhenius and the inverse power law plots. Figure 7 illustrates a typical Arrhenius life vs. stress plot.

 

[pic]

Fig. 7: Typical Arrhenius Life vs. Stress plot

 

Each line in Figure 7 represents the path for extrapolating a life measure, such as a percentile, from one stress level to another. The slope and intercept of those lines are the parameters of the life-stress relationship (whenever the relationship can be linearized). The imposed pdfs represent the distribution of the data at each stress level.
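When the relationship can be linearized, estimating the line in a plot like Figure 7 reduces to ordinary least squares on transformed axes. A minimal sketch for the Arrhenius case, where ln(life) is linear in 1/T; the three life estimates below are hypothetical, invented only to illustrate the fit:

```python
import math

# Hypothetical life estimates (e.g. a percentile, in hours) at three
# temperature stress levels -- invented data for illustration only
temps_k = [358.0, 378.0, 398.0]
lives = [2200.0, 1050.0, 540.0]

# Arrhenius: L(T) = C * exp(B / T), so ln L is linear in 1/T with
# slope B and intercept ln C -- the two life-stress parameters
xs = [1.0 / t for t in temps_k]
ys = [math.log(life) for life in lives]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
c = math.exp(my - b * mx)

# Extrapolate the life measure down the fitted line to a 328K use level
print(round(c * math.exp(b / 328.0)))
```

Real analyses estimate the distribution and the life-stress relationship jointly by maximum likelihood, as ALTA does; the line fit above only illustrates the geometry of the plot.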

 

See Also:

Plots

 

Standard Deviation Plots

Standard deviation vs. stress is a useful plot in accelerated life testing analysis and provides information about the spread of the data at each stress level.

 

[pic]

Fig. 8: Standard Deviation vs. Stress Plot

 

See Also:

Plots

 

Acceleration Factor Plots

The acceleration factor is a unitless number that relates a product's life at an accelerated stress level to the life at the use stress level. It is defined by:

 

(1)

[pic]

 

where,

•      [pic] is the life at use stress level,

 

and 

 

•      [pic] is the life at the accelerated level.

 

As can be seen in Eqn. (1), the acceleration factor depends on the life-stress relationship (i.e. Arrhenius, Eyring, etc.) and is thus a function of stress.

 

The acceleration factor vs. stress plot is generated using Eqn. (1) at a constant use stress level and at a varying accelerated stress. In Figure 9, the acceleration factor vs. stress was plotted for a constant use level of 300K. Since [pic]= [pic], the value of the acceleration factor at 300K is equal to one. The acceleration factor for a temperature of 450K is approximately 8. This means that the life at the use level of 300K is eight times higher than the life at 450K.

 

[pic]

Fig. 9: Acceleration Factor vs. Stress Plot
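Under a specific life-stress relationship, Eqn. (1) becomes an explicit function of the two stress levels. A sketch for the Arrhenius case, where the model constant C cancels out of the ratio; the parameter value B = 1871.5 is hypothetical, chosen only so the curve roughly reproduces the acceleration factor of about 8 at 450K described above:

```python
import math

def arrhenius_af(t_use_k, t_acc_k, b):
    """Acceleration factor under the Arrhenius model L(T) = C*exp(B/T):
    AF = L(use)/L(accelerated) = exp(B*(1/T_use - 1/T_acc));
    the constant C cancels out of the ratio."""
    return math.exp(b * (1.0 / t_use_k - 1.0 / t_acc_k))

B = 1871.5  # hypothetical Arrhenius parameter, for illustration only

# AF equals one when the "accelerated" level is the use level itself
print(arrhenius_af(300.0, 300.0, B))          # -> 1.0
# and is about 8 at 450K for this choice of B
print(round(arrhenius_af(300.0, 450.0, B), 1))
```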

 

See Also:

Plots

 

Residual Plots

Residual analysis for reliability consists of analyzing the results of a regression analysis by assigning residual values to each data point in the data set. Plotting these residuals provides a very good tool for assessing model assumptions, revealing inadequacies in the model and identifying extreme observations. Three types of residual plots are available in ALTA 6. Click a topic to go directly to that page:

 

•      Standardized Residuals

•      Cox-Snell Residuals

•      Standardized vs. Fitted Values

 

See Also:

Plots

 

Standardized Residuals Plots

Standardized residuals plots for the Weibull and lognormal distributions can be obtained in ALTA. Each is discussed next.

 

SR for the Weibull Distribution

Once the parameters have been estimated, the standardized residuals for the Weibull distribution can be calculated by,

 

[pic]

 

Then, under the assumed model, these residuals should look like a sample from an extreme value distribution with a mean of zero. For the Weibull distribution the standardized residuals are plotted on a smallest extreme value probability paper. If the Weibull distribution adequately describes the data, then the standardized residuals should appear to follow a straight line on such a probability plot. Note that when an observation is censored (suspended), the corresponding residual is also censored.

 

[pic]

Fig. 10: Probability plot of standardized residuals for the Weibull distribution.

 

SR for the Lognormal Distribution

Once the parameters have been estimated using rank regression, the fitted (calculated) responses can be obtained by,

 

[pic]

 

Then, under the assumed model, the standardized residuals should be normally distributed with a mean of zero and a standard deviation of one (~N(0,1)). Consequently, the standardized residuals for the lognormal distribution are commonly displayed on a normal probability plot.

 

[pic]

Fig. 11: Probability plot of standardized residuals for the lognormal distribution.

 

See Also:

Residual Plots

 

Cox-Snell Residuals Plots

The Cox-Snell residuals are given by,

 

[pic]

 

where R([pic]) is the calculated reliability value at failure time [pic]. The Cox-Snell residuals are plotted on an exponential probability paper.

 

[pic]

Fig. 12: Probability plot of the Cox-Snell residuals.
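Computing the residuals themselves is straightforward once a model has been fitted. A sketch, using a hypothetical fitted Weibull model (the values β = 1.4, η = 76 are assumed here, not results from this reference):

```python
import math

def cox_snell_residuals(times, reliability):
    """Cox-Snell residuals r_i = -ln R(t_i); if the fitted model is
    adequate, these behave like a sample from a unit exponential
    distribution, hence the exponential probability paper."""
    return [-math.log(reliability(t)) for t in times]

# Hypothetical fitted Weibull model (assumed beta and eta)
beta, eta = 1.4, 76.0
R = lambda t: math.exp(-((t / eta) ** beta))

residuals = cox_snell_residuals([16, 34, 53, 75, 93, 120], R)
print([round(r, 3) for r in residuals])
```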

 

See Also:

Residual Plots

 

Standardized vs. Fitted Values Plot

A (standardized) Residual vs. Fitted Value plot helps to detect behavior not modeled in the underlying relationship. However, when heavy censoring is present, the plot is more difficult to interpret. In a Residual vs. Fitted Value plot, the standardized residuals are plotted versus the scale parameter of the underlying life distribution (which is a function of stress) on log-linear paper (linear on the Y-axis). Therefore, in the case of the Weibull distribution, the standardized residuals are plotted versus [pic](V), for the lognormal versus [pic](V), and for the exponential versus m(V).

 

[pic]

Fig. 13: Standardized Residuals vs. Fitted Value

 

See Also:

Residual Plots

 

Life Distributions

In this section we will briefly present three lifetime distributions commonly used in accelerated life test analysis, namely the 1-parameter exponential, the 2-parameter Weibull and the lognormal distributions. Readers who are interested in a more rigorous overview or in different forms of these and other life distributions can refer to ReliaSoft's Life Data Analysis Reference, Chapters 6-10 [31].

 

This chapter includes the following subchapters:

 

•      Exponential Distribution

•      Weibull Distribution

•      Lognormal Distribution

 

See Also:

Contents

Introduction

 

Exponential Distribution

The exponential distribution is a very commonly used distribution in reliability engineering. Due to its simplicity, it has been widely employed even in cases to which it does not apply. The exponential distribution is used to describe units that have a constant failure rate.

 

The single-parameter exponential pdf is given by:

 

[pic]

 

where:

 

•      [pic] = constant failure rate, in failures per unit of measurement, e.g. failures per hour, per cycle, etc.

•      [pic] = [pic].

•        m = mean time between failures, or mean time to a failure.

•        T = operating time, life, or age, in hours, cycles, miles, actuations, etc.

 

This distribution requires the estimation of only one parameter, [pic], for its application.

 

Statistical Properties Summary

The Mean or MTTF

The mean, [pic], or mean time to failure (MTTF) of the 1-parameter exponential distribution is given by:

 

(1)   

[pic]

 

The Median

The median, [pic], of the 1-parameter exponential distribution is given by:

 

(2)   

[pic]

 

The Mode

The mode, [pic], of the 1-parameter exponential distribution is given by:

 

(3)   

[pic]

 

The Standard Deviation

The standard deviation, [pic], of the 1-parameter exponential distribution is given by:

 

(4)   

[pic]

 

The Reliability Function

The 1-parameter exponential reliability function is given by:

 

[pic]

 

This function is the complement of the exponential cumulative distribution function or,

 

[pic]

 

and,

 

[pic]

 

Conditional Reliability

The conditional reliability function for the 1-parameter exponential distribution is given by:

 

[pic]

 

which says that the reliability for a mission of t duration undertaken after the component or equipment has already accumulated T hours of operation from age zero is only a function of the mission duration, and not a function of the age at the beginning of the mission. This is referred to as the memoryless property.
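The memoryless property is easy to verify numerically — a small sketch with a hypothetical failure rate:

```python
import math

lam = 0.0012  # hypothetical constant failure rate, failures per hour
R = lambda t: math.exp(-lam * t)

def conditional_r(t, age):
    """Conditional reliability R(t|T) = R(T + t) / R(T)."""
    return R(age + t) / R(age)

# The reliability of a 100 hr mission is the same whether the unit
# is brand new or has already survived 5000 hr
print(conditional_r(100.0, 0.0))
print(conditional_r(100.0, 5000.0))
```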

 

Reliable Life

The reliable life, or the mission duration for a desired reliability goal [pic]for the 1-parameter exponential distribution is given by:

 

[pic]

 

or,

 

[pic]

 

Failure Rate Function

The exponential failure rate function is given by:

 

[pic]

 

This subchapter includes the following topics:

 

•      Characteristics of the 1-Parameter Exponential Distribution

•      Calculating the Parameters of the Exponential Distribution

 

See Also:

Life Distributions

 

Characteristics of the 1-parameter Exponential Distribution 

The characteristics of the 1-parameter exponential distribution can be exemplified by examining its parameter, lambda, [pic], and the effect lambda has on the pdf, reliability and failure rate functions.

 

Effects of [pic]on the pdf

[pic]

 

•      The scale parameter is [pic].

•      As [pic]is decreased in value, the distribution is stretched out to the right, and as [pic]is increased, the distribution is pushed toward the origin.

•      This distribution has no shape parameter as it has only one shape, i.e. the exponential. The only parameter it has is the failure rate, [pic].

•      The distribution starts at T = 0 at the level of f(T = 0) = [pic]and decreases thereafter exponentially and monotonically as T increases, and is convex.

•      As T [pic][pic], f(T) [pic]0.

•      This pdf can be thought of as a special case of the Weibull pdf with [pic]= 1.

 

Effects of [pic]on the Reliability Function

[pic]

 

•      The 1-parameter exponential reliability function starts at the value of 1 at T = 0. It decreases thereafter monotonically and is convex.

•      As T [pic][pic], R(T [pic][pic]) [pic]0.

 

Effects of [pic]on the Failure Rate Function

The failure rate function for the exponential distribution is constant and it is equal to the parameter [pic].

 

[pic]

 

See Also:

Exponential Distribution

 

Calculating the Parameter of the Exponential Distribution

The parameter of the exponential distribution can be estimated graphically on probability plotting paper or analytically using either least squares or maximum likelihood. Parameter estimation methods are presented in detail in Appendix B: Parameter Estimation.

 

Probability Plotting

One method of calculating the parameter of the exponential distribution is by using probability plotting. To better illustrate this procedure, consider the following example.

 

Example 1

Let's assume six identical units are reliability tested at the same application and operation stress levels. All of these units fail during the test after operating for the following times (in hours), [pic]: 96, 257, 498, 763, 1051 and 1744.

 

The steps for determining the parameters of the exponential pdf representing the data, using probability plotting, are as follows:

 

•      Rank the times-to-failure in ascending order as shown next.

 

[pic]

 

•      Obtain their median rank plotting positions.

 

Median rank positions are used instead of other ranking methods because median ranks are at a specific confidence level (50%).

 

•      The times-to-failure, with their corresponding median ranks, are shown next:

 

[pic]

 

•      On an exponential probability paper, plot the times on the x-axis and their corresponding rank value on the y-axis. Figure 1 displays an example of an exponential probability paper. The paper is simply a log-linear paper. (The solution is given in Figure 2.)

 

[pic]

Fig. 1: Sample exponential probability paper.

 

•      Draw the best possible straight line that goes through the t = 0 and R(t) =100% point and through the plotted points (as shown in Figure 2).

 

•      At the Q(t) = 63.2% or R(t) = 36.8% ordinate point, draw a straight horizontal line until this line intersects the fitted straight line. Draw a vertical line through this intersection until it crosses the abscissa. The value at the intersection of the abscissa is the estimate of the mean. For this case, [pic]= 833 hr which means that [pic]= [pic]= 0.0012. (This is always at 63.2% since Q(T) = 1 - [pic]= 1 - [pic]= 0.632 = 63.2%.)

 

[pic]

Fig. 2: Probability Plot for Example 1

 

Now any reliability value for any mission time t can be obtained. For example, the reliability for a mission of 15 hr, or any other time, can now be obtained either from the plot or analytically (i.e. using the equations given in the Exponential Statistical Properties Summary).

 

To obtain the value from the plot, draw a vertical line from the abscissa, at t = 15 hr, to the fitted line. Draw a horizontal line from this intersection to the ordinate and read R(t). In this case, R(t = 15) = 98.15%. This can also be obtained analytically, from the exponential reliability function.
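The graphical procedure above can be mimicked numerically: compute the median ranks, transform so that the exponential cdf plots as a straight line, and fit a least-squares line constrained through the t = 0, R(t) = 100% point. A sketch using Benard's approximation for the median ranks (rank regression conventions vary, so the result differs slightly from the value read off the plot):

```python
import math

def fit_exponential_rank_regression(times):
    """Estimate lambda by least squares on exponential probability
    paper: ln R(t) = -lambda*t, with the line forced through the
    origin (t = 0, R = 100%). Median ranks via Benard's approximation."""
    n = len(times)
    num = den = 0.0
    for i, t in enumerate(sorted(times), start=1):
        mr = (i - 0.3) / (n + 0.4)    # median rank plotting position
        y = math.log(1.0 - mr)        # ln R(t) at the plotting position
        num += t * y
        den += t * t
    return -num / den                 # slope of the fitted line is -lambda

lam = fit_exponential_rank_regression([96, 257, 498, 763, 1051, 1744])
print(round(1.0 / lam))                       # mean life, ~805 hr here vs. 833 hr from the plot
print(round(math.exp(-lam * 15.0) * 100, 2))  # R(15 hr) in percent, ~98.15
```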

 

MLE Parameter Estimation

The parameter of the exponential distribution can also be estimated using the maximum likelihood estimation (MLE) method. The log-likelihood function is composed of two summation portions:

 

[pic]

 

where:

 

•      [pic] is the number of groups of times-to-failure data points.

•      [pic] is the number of times-to-failure in the [pic]time-to-failure data group.

•      [pic] is the failure rate parameter (unknown a priori, the only parameter to be found).

•      [pic] is the time of the [pic]group of time-to-failure data.

•        S is the number of groups of suspension data points.

•      [pic] is the number of suspensions in the [pic]group of suspension data points.

•      [pic] is the time of the [pic]suspension data group.

 

The solution will be found by solving for a parameter [pic]so that [pic]=0 where,

 

(5)    

[pic]

 

Example 2

Using the same data as in the probability plotting example (Example 1), and assuming an exponential distribution, estimate the parameter using the MLE method.

 

Solution

In this example we have non-grouped data without suspensions. Thus Eqn. (5) becomes,

 

[pic]

 

Substituting the values for T we get,

 

[pic]
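For the exponential distribution the MLE has a closed form: setting Eqn. (5) to zero gives the failure rate as the number of failures divided by the total accumulated time. A sketch, with the suspension term included for completeness:

```python
def exponential_mle(failures, suspensions=()):
    """Closed-form exponential MLE from Eqn. (5):
    lambda = (number of failures) / (total time accumulated by
    failed and suspended units combined)."""
    return len(failures) / (sum(failures) + sum(suspensions))

# Example 1 data: complete data, no suspensions
times = [96, 257, 498, 763, 1051, 1744]
print(round(exponential_mle(times), 5))   # -> 0.00136 failures/hr
```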

 

See Also:

Exponential Distribution

 

Weibull Distribution

The Weibull distribution is one of the most commonly used distributions in reliability engineering because of the many shapes it attains for various values of [pic](slope). It can therefore model a great variety of data and life characteristics [18].

 

The 2-parameter Weibull pdf is given by:

 

[pic]

 

where,

 

[pic]

 

and,

 

•      [pic] = scale parameter

•      [pic] = shape parameter (or slope).

 

Weibull Statistical Properties Summary

The Mean or MTTF

The mean, [pic]of the 2-parameter Weibull pdf is given by:

 

[pic]

 

where [pic]is the gamma function evaluated at the value of [pic].

 

The Median

The median, [pic], of the 2-parameter Weibull is given by:

 

(6)    

[pic]

 

The Mode

The mode, [pic], of the 2-parameter Weibull is given by:

 

(7)    

[pic]

 

The Standard Deviation

The standard deviation, [pic]of the 2-parameter Weibull is given by:

 

[pic]

 

The cdf and the Reliability Function

The cdf of the 2-parameter Weibull distribution is given by:

 

[pic]

 

The Weibull reliability function is given by:

 

[pic]

 

The Conditional Reliability Function

The Weibull conditional reliability function is given by:

 

(8)    

[pic]

 

or,

 

[pic]

 

Equation (8) gives the reliability for a new mission of t duration, given that the unit has already accumulated T hours of operation up to the start of this new mission and has been checked out to assure that it will start the next mission successfully. (It is called conditional because the reliability of the new mission is calculated on the condition that the unit has already accumulated T hours of operation successfully.)

 

The Reliable Life

For the 2-parameter Weibull distribution, the reliable life, [pic], of a unit for a specified reliability, starting the mission at age zero, is given by: 

 

(9)    

[pic]

 

This is the life for which the unit will function successfully with a reliability of [pic]. If [pic]= 0.50 then [pic]= [pic], the median life, or the life by which half of the units will survive.
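Equation (9) inverts the Weibull reliability function directly. A small sketch (the β and η values are assumed, for illustration only):

```python
import math

def weibull_reliable_life(eta, beta, r):
    """Eqn. (9): mission duration T_R with R(T_R) = r, i.e.
    T_R = eta * (-ln r)**(1/beta)."""
    return eta * (-math.log(r)) ** (1.0 / beta)

eta, beta = 76.0, 1.4  # assumed parameter values

# With r = 0.50 this reduces to the median life of Eqn. (6)
print(round(weibull_reliable_life(eta, beta, 0.50), 1))
# Evaluating R back at T_R recovers the reliability goal
t90 = weibull_reliable_life(eta, beta, 0.90)
print(round(math.exp(-((t90 / eta) ** beta)), 2))   # -> 0.9
```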

 

The Failure Rate Function

The 2-parameter Weibull failure rate function, [pic](T), is given by:

 

[pic]

 

This subchapter includes the following topics:

 

•      Characteristics of the 2-Parameter Weibull

•      Calculating the Parameters of the Weibull Distribution

 

See Also:

Life Distributions

 

Characteristics of the 2-Parameter Weibull

The characteristics of the 2-parameter Weibull distribution can be exemplified by examining the two parameters, beta, [pic], and eta, [pic], and the effect they have on the pdf, reliability and failure rate functions.

 

Looking at [pic]

Beta, [pic], is called the shape parameter or slope of the Weibull distribution. Changing the value of [pic]forces a change in the shape of the pdf as shown in Figure 3. In addition, when the cdf is plotted on Weibull probability paper, as shown in Figure 4, a change in beta is a change in the slope of the distribution on Weibull probability paper.

 

Effects of [pic]on the pdf

[pic]

Fig. 3: Weibull pdf with 0 < [pic]< 1, [pic]= 1, [pic]> 1 and a fixed [pic].

 

•      For 0 < [pic]< 1, the failure rate decreases with time and:

•      As T [pic]0, f(T) [pic][pic].

•      As T [pic][pic], f(T) [pic]0.

•        f(T) decreases monotonically and is convex as T increases.

•      The mode is non-existent.

 

•      For [pic]= 1, it becomes the exponential distribution, as a special case, or

 

[pic]

 

where [pic]= [pic]= the chance, or useful life, failure rate.

 

•      For [pic]>1, f(T), the Weibull assumes wear-out type shapes (i.e. the failure rate increases with time) and:

•        f(T) = 0 at T = 0.

•        f(T) increases as T [pic][pic](mode) and decreases thereafter.

•      For [pic]= 2 it becomes the Rayleigh distribution as a special case. For [pic]< 2.6 the Weibull pdf is positively skewed (has a right tail), for 2.6 < [pic]< 3.7 its coefficient of skewness approaches zero (no tail); consequently, it may approximate the normal pdf, and for [pic]> 3.7 it is negatively skewed (left tail).

 

•      The parameter [pic]is a pure number, i.e. it is dimensionless.

 

Effects of [pic]on the Reliability Function and the cdf

[pic]

Fig. 4: Weibull cdf, or unreliability vs. time, on Weibull probability plotting paper with 0 < [pic]< 1, [pic]= 1, [pic]> 1 and a fixed [pic].

 

[pic]

Fig. 5: Weibull 1-cdf, or reliability vs. time, on linear scales with 0 < [pic]< 1, [pic]= 1, [pic]> 1 and a fixed [pic].

 

•        R(T) decreases sharply and monotonically for 0 < [pic]< 1 and is convex.

•      For [pic]= 1 and the same [pic], R(T) decreases monotonically but less sharply than for 0 < [pic]< 1, and is convex.

•      For [pic]> 1, R(T) decreases as T increases but less sharply than before, and as wear-out sets in, it decreases sharply and goes through an inflection point.

 

Effects of [pic]on the Failure Rate Function

[pic]

Fig. 6: Weibull failure rate vs. time with 0 < [pic]< 1, [pic]= 1, [pic]> 1

 

The Weibull failure rate for 0 < [pic]< 1 is unbounded at T = 0. The failure rate, [pic](T), decreases thereafter monotonically and is convex, approaching the value of zero as T [pic][pic]or [pic]([pic]) = 0. This behavior makes it suitable for representing the failure rate of units exhibiting early-type failures, for which the failure rate decreases with age. When such behavior is encountered, the following conclusions can be drawn:

 

•      Burn-in testing and/or environmental stress screening are not well implemented.

•      There are problems in the production line.

•      Quality control is inadequate.

•      There are packaging and transit problems.

 

•      For [pic]=1, [pic](T) yields a constant value of [pic], or,

 

[pic]

 

This makes it suitable for representing the failure rate of chance-type failures and the useful life period failure rate of units.

 

•      For [pic]> 1, [pic](T) increases as T increases and becomes suitable for representing the failure rate of units exhibiting wear-out type failures. For 1 < [pic]< 2 the [pic](T) curve is concave; consequently, the failure rate increases at a decreasing rate as T increases.

 

•      For [pic]= 2, or for the Rayleigh distribution case, the failure rate function is given by:

 

[pic]

 

hence there emerges a straight line relationship between [pic](T) and T, starting at a value of [pic](T) = 0 at T = 0, and increasing thereafter with a slope of [pic]. Consequently, the failure rate increases at a constant rate as T increases. Furthermore, if [pic]=1 the slope becomes equal to 2, and [pic](T) becomes a straight line which passes through the origin with a slope of 2.

 

•      When [pic]> 2 the [pic](T) curve is convex, with its slope increasing as T increases. Consequently, the failure rate increases at an increasing rate as T increases indicating wear-out life.

 

Looking at [pic]

Eta, [pic], is called the scale parameter of the Weibull distribution. The parameter [pic]has the same units as T, such as hours, miles, cycles, actuations, etc.

 

[pic]

Fig. 7: Weibull pdf with [pic]= 50, [pic]= 100, [pic]= 200

 

•      A change in the scale parameter [pic]has the same effect on the distribution as a change of the abscissa scale.

 

•      If [pic]is increased, while [pic]is kept the same, the distribution gets stretched out to the right and its height decreases, while maintaining its shape and location.

•      If [pic]is decreased, while [pic]is kept the same, the distribution gets pushed in toward the left (i.e. toward its beginning, or 0) and its height increases.

 

See Also:

Weibull Distribution

Calculating the Parameters of the Weibull Distribution

Parameter Estimation

The estimates of the parameters of the Weibull distribution can be found graphically on probability plotting paper, or analytically using either least squares or maximum likelihood. Parameter estimation methods are presented in detail in Appendix B: Parameter Estimation.

 

Probability Plotting

One method of calculating the parameters of the Weibull distribution is by using probability plotting. To better illustrate this procedure, consider the following example [18].

 

Example 3

Let's assume six identical units are being reliability tested at the same application and operation stress levels. All of these units fail during the test after operating for the following times (in hours), [pic]: 93, 34, 16, 120, 53 and 75.

 

The steps for determining the parameters of the Weibull pdf representing the data, using probability plotting, are as follows:

 

•      Rank the times-to-failure in ascending order as shown next.

 

[pic]

 

•      Obtain their median rank plotting positions. The times-to-failure, with their corresponding median ranks, are shown next.

 

[pic]

 

•      On a Weibull probability paper, plot the times and their corresponding ranks. Figure 8 displays an example of a Weibull probability paper (the solution is given in Figure 9).

 

[pic]

Fig. 8: Sample Weibull probability plotting paper

 

•      Draw the best possible straight line through the plotted points (as shown in Figure 9).

 

•      Obtain the slope of this line by drawing a line, parallel to the one just obtained, through the slope indicator. This value is the estimate of the shape parameter [pic]. In this case [pic]= 1.4.

 

•      At the Q(t) = 63.2% ordinate point, draw a straight horizontal line until this line intersects the fitted straight line. Draw a vertical line through this intersection until it crosses the abscissa. The value at the intersection of the abscissa is the estimate of [pic]. For this case [pic]= 76 hr. (This is always at 63.2% since Q(t) = [pic]= 1-[pic] = 0.632 = 63.2%).

 

[pic]

Fig. 9: Probability Plot for Example 3

 

Now any reliability value for any mission time t can be obtained. For example, the reliability for a mission of 15 hr, or any other time, can now be obtained either from the plot or analytically (i.e. using the equations given in the Weibull Distribution section).

 

To obtain the value from the plot, draw a vertical line from the abscissa, at t = 15 hr, to the fitted line. Draw a horizontal line from this intersection to the ordinate and read Q(t), in this case Q(t = 15) = 9.8%. Thus, R(t = 15) = 1 - Q(t) = 90.2%. This can also be obtained analytically, from the Weibull reliability function, since both of the parameters are known or,

 

R(t = 15) = e^(-(15/76)^1.4) = 0.902 = 90.2%
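The analytical check can be sketched in a few lines of Python, using the Weibull reliability function R(t) = e^(-(t/η)^β) with the estimates read off the plot (β = 1.4, η = 76 hr):

```python
import math

# Weibull reliability with the plot estimates beta = 1.4, eta = 76 hr.
beta, eta = 1.4, 76.0

def weibull_reliability(t):
    return math.exp(-((t / eta) ** beta))

R = weibull_reliability(15.0)
print(f"R(15) = {R:.3f}")  # about 0.902, i.e. 90.2%
```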

 

MLE Parameter Estimation

The parameters of the 2-parameter Weibull distribution can also be estimated using maximum likelihood estimation (MLE). The log-likelihood function is composed of two summation portions:

 

Λ = Σ_{i=1..F_e} N_i ln[(β/η)(T_i/η)^(β-1) e^(-(T_i/η)^β)] - Σ_{j=1..S} N′_j (T′_j/η)^β

 

where:

 

•      F_e is the number of groups of times-to-failure data points.

•      N_i is the number of times-to-failure in the i-th time-to-failure data group.

•      β is the Weibull shape parameter (unknown a priori, the first of two parameters to be found).

•      η is the Weibull scale parameter (unknown a priori, the second of two parameters to be found).

•      T_i is the time of the i-th group of time-to-failure data.

•      S is the number of groups of suspension data points.

•      N′_j is the number of suspensions in the j-th group of suspension data points.

•      T′_j is the time of the j-th suspension data group.

 

The solution will be found by solving for a pair of parameters (β̂, η̂) so that ∂Λ/∂β = 0 and ∂Λ/∂η = 0. (Other methods can also be used, such as direct maximization of the likelihood function, without having to compute the derivatives.)

 

(10)    

∂Λ/∂β = Σ_{i=1..F_e} N_i [1/β + ln(T_i/η) - (T_i/η)^β ln(T_i/η)] - Σ_{j=1..S} N′_j (T′_j/η)^β ln(T′_j/η) = 0

 

(11)    

∂Λ/∂η = -(β/η) Σ_{i=1..F_e} N_i + (β/η) Σ_{i=1..F_e} N_i (T_i/η)^β + (β/η) Σ_{j=1..S} N′_j (T′_j/η)^β = 0

 

Example 4

Using the same data as in the probability plotting example (Example 3), and assuming a 2-parameter Weibull distribution, estimate the parameters using the MLE method.

 

Solution

In this case we have non-grouped data with no suspensions, thus Eqns. (10) and (11) become,

 

∂Λ/∂β = Σ_{i=1..6} [1/β + ln(T_i/η) - (T_i/η)^β ln(T_i/η)] = 0

 

and,

 

∂Λ/∂η = -6β/η + (β/η) Σ_{i=1..6} (T_i/η)^β = 0

 

Solving the above equations simultaneously we get,

 

β̂ = 1.933 and η̂ = 73.53 hr
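The same fit can be sketched with SciPy (assuming scipy is available): weibull_min.fit with the location fixed at zero performs the 2-parameter Weibull MLE numerically, without writing out the derivatives.

```python
from scipy.stats import weibull_min

times = [93, 34, 16, 120, 53, 75]  # hours, complete data (no suspensions)

# floc=0 pins the location at zero, reducing the 3-parameter Weibull
# to the 2-parameter form; shape (beta) and scale (eta) are then the MLEs.
beta, loc, eta = weibull_min.fit(times, floc=0)
print(f"beta = {beta:.3f}, eta = {eta:.2f} hr")
```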

 

See Also:

Weibull Distribution

 

Lognormal Distribution

The lognormal distribution is commonly used for general reliability analysis, cycles-to-failure in fatigue, material strengths and loading variables in probabilistic design. A random variable is lognormally distributed if the logarithm of the random variable is normally distributed. Since the logarithms of a lognormally distributed random variable are normally distributed, the lognormal distribution is given by:

 

f(T′) = (1/(σ′ √(2π))) e^(-(1/2)((T′ - μ′)/σ′)²)

 

where T′ = ln T, and where the Ts are the times-to-failure, 

and

 

•      μ′ = mean of the natural logarithms of the times-to-failure,

 

•      σ′ = standard deviation of the natural logarithms of the times-to-failure.

 

The lognormal pdf can be obtained by recognizing that, for equal probabilities under the normal and lognormal pdfs, the corresponding incremental areas must also be equal, or:

 

f(T) dT = f(T′) dT′

 

Taking the derivative yields,

 

dT′ = dT/T

 

Substitution yields,

 

f(T) = f(T′)/T

 

and thus,

 

f(T) = (1/(T σ′ √(2π))) e^(-(1/2)((ln T - μ′)/σ′)²)

 

Statistical Properties Summary

The Mean or MTTF

•      The mean of the lognormal distribution, T̄, is given by:

 

(12)    

T̄ = e^(μ′ + σ′²/2)

 

•      The mean of the natural logarithms of the times-to-failure, μ′, in terms of T̄ and σ_T is given by:

 

μ′ = ln(T̄) - (1/2) ln((σ_T/T̄)² + 1)

 

The Standard Deviation

•      The standard deviation of the lognormal distribution, σ_T, is given by:

 

(13)    

σ_T = √( e^(2μ′ + σ′²) (e^(σ′²) - 1) )

 

•      The standard deviation of the natural logarithms of the times-to-failure, σ′, in terms of T̄ and σ_T is given by:

 

σ′ = √( ln((σ_T/T̄)² + 1) )
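These conversion formulas invert the mean and standard deviation relations exactly. A quick round-trip sketch (the values μ′ = 5 and σ′ = 0.5 are illustrative, not from the text):

```python
import math

mu_p, sig_p = 5.0, 0.5  # illustrative log-mean and log-std

# Forward: Eqns. (12) and (13) give the lognormal mean and std. deviation.
mean = math.exp(mu_p + sig_p ** 2 / 2)
std  = math.sqrt(math.exp(2 * mu_p + sig_p ** 2) * (math.exp(sig_p ** 2) - 1))

# Inverse: recover mu' and sigma' from the mean and standard deviation.
sig_back = math.sqrt(math.log((std / mean) ** 2 + 1))
mu_back  = math.log(mean) - 0.5 * math.log((std / mean) ** 2 + 1)

print(mu_back, sig_back)  # recovers 5.0 and 0.5
```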

 

The Median

•      The median of the lognormal distribution is given by:

 

(14)    

T̃ = e^(μ′)

 

The Mode

The mode of the lognormal distribution is given by:

 

T̂ = e^(μ′ - σ′²)
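As a numerical illustration of how these three measures relate, the sketch below evaluates Eqns. (12) and (14) and the mode formula for illustrative parameter values (μ′ = 5, σ′ = 0.5, not taken from the text):

```python
import math

mu_p, sig_p = 5.0, 0.5  # illustrative log-mean and log-std

mean   = math.exp(mu_p + sig_p ** 2 / 2)  # Eqn. (12)
median = math.exp(mu_p)                   # Eqn. (14)
mode   = math.exp(mu_p - sig_p ** 2)

# Right skew: mode < median < mean for any sigma' > 0.
print(mode, median, mean)
```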

 

Reliability Function

For the lognormal distribution, the reliability for a mission of time T, starting at age 0, is given by:

 

R(T) = ∫_T^∞ f(t) dt

 

or,

 

R(T) = ∫_{z(T)}^∞ (1/√(2π)) e^(-z²/2) dz, where z(T) = (ln T - μ′)/σ′

 

There is no closed-form solution for the lognormal reliability function. Solutions can be obtained via the use of standard normal tables.
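In practice, the table lookup can be replaced by a numerical evaluation of the standard normal cdf, using Φ(z) = (1 + erf(z/√2))/2. A minimal sketch (the parameter values are illustrative, not from the text):

```python
import math

def lognormal_reliability(T, mu_p, sig_p):
    """R(T) = 1 - Phi((ln T - mu')/sigma'), Phi = standard normal cdf."""
    z = (math.log(T) - mu_p) / sig_p
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 1.0 - phi

r = lognormal_reliability(100.0, 5.0, 0.5)
print(f"R(100) = {r:.4f}")
```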

 

Lognormal Failure Rate

The lognormal failure rate is given by:

 

λ(T) = f(T)/R(T)
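The ratio f(T)/R(T) can be evaluated numerically; a self-contained sketch with illustrative parameters (μ′ = 5, σ′ = 0.5, not from the text):

```python
import math

def lognormal_pdf(T, mu_p, sig_p):
    z = (math.log(T) - mu_p) / sig_p
    return math.exp(-0.5 * z * z) / (T * sig_p * math.sqrt(2 * math.pi))

def lognormal_rel(T, mu_p, sig_p):
    z = (math.log(T) - mu_p) / sig_p
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lognormal_failure_rate(T, mu_p, sig_p):
    # lambda(T) = f(T) / R(T)
    return lognormal_pdf(T, mu_p, sig_p) / lognormal_rel(T, mu_p, sig_p)

lam = lognormal_failure_rate(100.0, 5.0, 0.5)
print(f"lambda(100) = {lam:.5f} per hr")
```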

 

This subchapter includes the following topics:

 

•      Characteristics of the Lognormal Distribution

•      Calculating the Parameters of the Lognormal Distribution

 

See Also:

Life Distributions

 

Characteristics of the Lognormal Distribution

•      The lognormal distribution is a distribution skewed to the right.

 

•      The pdf starts at zero, increases to its mode, and decreases thereafter.

 

[pic]

 

The characteristics of the lognormal distribution can be exemplified by examining the two parameters, the log-mean (μ′) and the log-std (σ′), and the effect they have on the pdf.

 

Looking at the Log-mean (μ′)

•      The parameter μ′, the log-mean life (or the MTTF′ in terms of the logarithm of the T's), is also the scale parameter, and is a unitless number.

 

•      For the same σ′, the pdf's skewness increases as μ′ increases.

 

[pic]

 

Looking at the Log-std (σ′)

•      The parameter σ′, the standard deviation of the T's in terms of their logarithm (i.e. of the T′'s), is also the shape parameter, not the scale parameter as in the normal pdf. It is a unitless number and assumes only positive values.

 

•      The degree of skewness increases as σ′ increases, for a given μ′.

 

•      For σ′ values significantly greater than 1, the pdf rises very sharply in the beginning, i.e. for very small values of T near zero, and essentially follows the ordinate axis, peaks out early, and then decreases sharply like an exponential pdf or a Weibull pdf with 0 < β < 1.