Washington University in St. Louis



INTRODUCTION

National Semiconductor, located in Santa Clara, California, manufactures a variety of semiconductor technologies and prides itself on strong quality control standards. If you visit the company’s website, national.com, you will see an entire section describing National Semiconductor’s commitment to quality. In one of its sections, Metrics: Reliability, the company provides failure rate data for various parts of the wafer fabrication process.

Figure 1. National Semiconductor data[i]

|Sample |FITS (provided) |FITS (calculated) |Sample |FITS (provided) |FITS (calculated) |
|1 |4 |4.871 |18 |6 |6.613 |
|2 |7 |7.912 |19 |4 |4.646 |
|3 |2 |2.225 |20 |3 |3.484 |
|4 |3 |2.732 |21 |2 |1.831 |
|5 |6 |6.589 |22 |3 |3.157 |
|6 |10 |11.615 |23 |3 |3.744 |
|7 |5 |6.040 |24 |36 |45.297 |
|8 |3 |2.970 |25 |3 |3.926 |
|9 |6 |0.711 |26 |8 |9.151 |
|10 |6 |6.786 |27 |10 |12.242 |
|11 |5 |5.147 |28 |3 |2.684 |
|12 |10 |11.615 |29 |11 |13.225 |
|13 |3 |3.061 |30 |14 |16.777 |
|14 |3 |3.184 |31 |6 |7.365 |
|15 |10 |12.653 |32 |5 |6.313 |
|16 |8 |9.388 |33 |10 |12.460 |
|17 |2 |1.342 |34 |3 |2.970 |

Using a paired t-test, I obtained a t statistic of -2.695. For the null hypothesis to be retained, meaning that my calculations are statistically the same as National Semiconductor’s, the confidence level must be around 99.8%[xiii]. On this basis, I assumed that my FITS calculation was valid. (There are two samples, 9 and 24, where the calculated FITS differed substantially from the provided FITS. Still, I decided to use all of the data at the beginning of the analysis and then see whether the statistical analysis changed as I removed outliers.)
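As a minimal illustration of this comparison, a paired t-test can be sketched as follows. This assumes the two value columns of Figure 1 are the provided and calculated FITS, and shows only the first eight sample pairs; the t = -2.695 reported above uses all 34 pairs.

```python
# Minimal sketch: paired t-test of provided vs. calculated FITS.
# Assumes the two value columns in Figure 1 are "provided" and "calculated";
# only the first eight sample pairs are shown here for brevity.
import numpy as np
from scipy import stats

provided = np.array([4, 7, 2, 3, 6, 10, 5, 3])
calculated = np.array([4.871, 7.912, 2.225, 2.732, 6.589, 11.615, 6.040, 2.970])

t_stat, p_value = stats.ttest_rel(provided, calculated)  # paired, two-sided
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# With all 34 pairs, the text reports t = -2.695 on 33 degrees of freedom.
```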

To continue the analysis, I calculated FITS at the following confidence levels:

| |Confidence Level[xiv] | | | | |
| |30% |50% |60% |70% |95% |
|no errors |0.713 |1.39 |1.83 |2.41 |5.99 |
|1 error* |2.2 |3.36 |4.045 |4.878 |9.49 |

*Sample 25 (from Figure 1) had one reject, so its chi-square values were calculated with four degrees of freedom (2r + 2, with r = 1 reject); all other samples had zero rejects and therefore two degrees of freedom.
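The entries in this table match chi-square quantiles with 2r + 2 degrees of freedom, which is what the chi-square failure rate formula in the cited Vigrass note[viii] uses. A minimal sketch of that calculation follows; `device_hours` is a hypothetical placeholder for a process’s accumulated device-hours, not a figure from the data.

```python
# Sketch of the chi-square FIT calculation (FIT = chi2(CL, 2r+2) / (2T) * 1e9,
# per the cited Vigrass method). device_hours is a hypothetical placeholder.
from scipy.stats import chi2

def fit_upper_bound(confidence: float, rejects: int, device_hours: float) -> float:
    """Failure rate in FITs (failures per 1e9 device-hours) at a confidence level."""
    dof = 2 * rejects + 2          # zero rejects -> 2 dof; one reject -> 4 dof
    return chi2.ppf(confidence, dof) / (2.0 * device_hours) * 1e9

# Reproduce the chi-square numerators behind the table above:
for cl in (0.30, 0.50, 0.60, 0.70, 0.95):
    print(cl, round(chi2.ppf(cl, 2), 3), round(chi2.ppf(cl, 4), 3))
```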

After calculating the FITS for different confidence levels, I had to determine how to analyze the data. Because the sample size for each process was different, I had to use quality control equations that adjust for varying sample sizes across the different processes. Also, the data provided by National Semiconductor did not include the standard deviation of each process, which the control chart calculations require. Because there was no simple way to determine the variation of each process, I estimated it by calculating the standard deviation of the entire sample and using that value as the estimate for each process.

The center line for each control chart was calculated using[xv]:

$\text{center line} = \sum_i (n_i \times \text{FITS}_i) \big/ \sum_i n_i$

where $n_i$ is the sample size of process $i$. Then, the standard deviation for the entire system was calculated using[xvi]:

$\bar{s} = \sqrt{\dfrac{\sum_i (n_i - 1)\, s_i^2}{\sum_i n_i - 34}}$

where $s_i$ is the standard deviation of process $i$ and 34 is the number of processes.

While incorporating degrees of freedom, this equation essentially takes a weighted average of the standard deviation of each individual process, weighted by the individual sample sizes.

After determining the control chart average and standard deviation, I could calculate the upper and lower control limits[xvii]:

$\text{UCL}, \text{LCL} = \text{control chart average} \pm A_3\,\bar{s}$ (with $A_3$ evaluated at each sample’s size $n$)

$c_4 \approx \dfrac{4(n-1)}{4n-3}$ [xviii]

$A_3 = \dfrac{3}{c_4 \sqrt{n}}$ [xix]

$A_3$ and $c_4$ are standard constants used in building control charts.
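Putting these formulas together, a minimal sketch of the limit calculation might look like the following. The arrays are hypothetical placeholders, not National Semiconductor’s data, and the zero floor on the LCL reflects the adjustment described later for negative limits.

```python
# Sketch of the control-limit calculation described above: variable sample
# sizes, pooled standard deviation, and per-sample c4/A3 constants.
import numpy as np

n    = np.array([50, 60, 55, 70])      # hypothetical sample sizes, one per process
xbar = np.array([3.4, 5.1, 2.9, 6.0])  # hypothetical per-process FITS
s    = np.array([1.2, 1.2, 1.2, 1.2])  # per-process std devs (here: one pooled estimate)

m = len(n)
center = np.sum(n * xbar) / np.sum(n)                       # weighted grand average
s_bar  = np.sqrt(np.sum((n - 1) * s**2) / (np.sum(n) - m))  # pooled std dev

c4 = 4 * (n - 1) / (4 * n - 3)   # control-chart constant (approximation)
A3 = 3 / (c4 * np.sqrt(n))       # factor for the x-bar limits

ucl = center + A3 * s_bar
lcl = np.maximum(center - A3 * s_bar, 0.0)  # failure rates cannot be negative
```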

As previously mentioned, National Semiconductor’s website says the 50% confidence level should provide a good approximation for the FITS[xx], so I constructed this control chart first. However, instead of using the 60% Arrhenius figures for the upper control limit, I used the control chart equations above to calculate the upper and lower control limits.

[Control chart: FITS at the 50% confidence level]

This data is out of control because multiple FITS fall outside the upper and lower control limits. I therefore decided to analyze the data at the other confidence levels noted earlier (confidence level given in parentheses).

[Control charts: FITS at the other confidence levels]

Using my hypothesis to analyze the data, the system of processes is out of control on all of the charts. Regardless of the confidence level used, the data on the control charts indicate that the manufacturing processes statistically have too many errors per one billion manufacturing hours. While each process individually may be in control, there appears to be a problem when the manufacturing processes are considered as a system. It should be noted, however, that the 95% FITS control chart has only a few points out of control. After removing the outliers, samples 6, 15, and 24, the following control chart resulted:

[Control chart: 95% FITS with outliers (samples 6, 15, and 24) removed]

Because I had removed the outliers, I had to recompute the control chart statistics, which ultimately made the 95% control chart even more out of control.

Since all of the control charts are out of control, I wanted to see how I could adjust the charts or calculations in order to bring them into control. To achieve this goal, I attempted to adjust the assumed standard deviation used for the original National Semiconductor data. After several changes, I finally found a standard deviation that put the data in control. The problem is that this standard deviation was 35.99 FITS, or thirteen times the standard deviation of the data. Qualitatively, the problem with this standard deviation is that it is, quite simply, much too high. National Semiconductor’s engineers surely keep the standard deviation of each step of the supply chain as small as possible; with a standard deviation of 35.99 FITS, the company would have difficulty competing. This result therefore supports the conclusion that there may be a quality control problem in the data from the manufacturing processes.

It is possible to analyze the National Semiconductor data using other quality control methods. In addition to the FITS rates, the data from Figure 1 also yield mean time to failure (MTTF) information. The MTTF tells us how many hours, on average, a manufacturing process runs before it produces an error. Similar to the MTTF is the mean time between failures (MTBF), which tells how much time, on average, passes between errors of a process. The MTBF is equal to the MTTF plus the mean time to recovery (MTTR)[xxi]. If we assume that the MTTR is negligible, then the MTTF is approximately equal to the MTBF[xxii]. By making this assumption, I was able to apply another method to analyze the National Semiconductor data. According to Douglas Montgomery, Nelson (1994) developed a method to study low-defect data, such as processes with high MTBF[xxiii]. Because I assumed that the MTTF approximately equals the MTBF, I could use Nelson’s approximation. I also had to make sure that the MTTF data had an exponential distribution. The following histograms show the distribution of MTTF for the manufacturing processes. While the distribution is not perfectly exponential, it is skewed enough to indicate that the data may be at least approximately exponentially distributed, so I decided that I could use Nelson’s transformation. I used both the given data and the calculated 50% data, since the latter is supposed to represent a good estimate of the FITS.

To convert FITS to MTTF, I used the following equation:

MTTF = 10^9/FITS[xxiv]

[Histograms: MTTF distribution (given data) and MTTF distribution (50% data)]

Because the MTTF data for both charts are approximately exponentially distributed, I transformed the MTTF:

Transformed MTTF $= \text{MTTF}^{0.2777}$ [xxv]
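In code, the conversion and the transformation are one line each. A minimal sketch follows; the exponent 0.2777 is 1/3.6, per Montgomery’s description of Nelson’s method, and the FITS values are taken from Figure 1 purely for illustration.

```python
# Sketch: FITS -> MTTF, then Nelson's power transformation (t**(1/3.6)),
# which brings approximately exponential data close to normality.
import numpy as np

fits = np.array([4.871, 7.912, 2.225, 2.732, 6.589])  # a few values from Figure 1

mttf = 1e9 / fits                 # MTTF in device-hours: MTTF = 10^9 / FITS
transformed = mttf ** (1 / 3.6)   # Nelson's transformation, exponent ~0.2777
print(transformed)
```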

After transforming the data, I checked that the new data were approximately normally distributed. The histograms below show the transformed MTTF data. Clearly, the exponential shape is gone and the data are approximately normal, particularly the 50% adjusted data. I therefore assumed that the transformed MTTF data are normal in order to use Nelson’s formula.

[Histograms: transformed MTTF distribution (given data) and transformed MTTF distribution (50% data)]

As a final note, it is important to understand that Nelson’s approximation was originally meant for studying the time between errors of one specific manufacturing process. I therefore had to assume that Nelson’s transformation could also be used across a system of manufacturing processes.

Because the original MTTF data are exponentially distributed, the standard deviation of each sample is the same as its MTTF. I therefore applied Nelson’s transformation to both the MTTF and the standard deviation. Because the sample size differed for each process, I used the same formulas as in the prior quality control analysis to calculate the control chart average and standard deviation.

Control chart average $= \sum_i (n_i \times \bar{x}_i) \big/ \sum_i n_i$, where $\bar{x}_i$ is here the (transformed) MTTF of process $i$[xxvi].

Control chart standard deviation $= \sqrt{\dfrac{\sum_i (n_i - 1)\, s_i^2}{\sum_i n_i - 34}}$ [xxvii]

Using the following formula to calculate the upper and lower control limits, I was able to construct control charts for the given MTTF data and the adjusted MTTF data:

$\text{UCL}, \text{LCL} = \text{control chart average} \pm A_3 \times \text{control chart standard deviation}$ (for each sample)[xxviii]

[Control charts: MTTF data and adjusted MTTF data]

Regardless of whether I analyze the MTTF data or the adjusted MTTF data, the National Semiconductor process data is out of control under my assumptions and hypothesis. This further suggests that the manufacturing process is out of control across multiple steps and that the company can improve its manufacturing quality.

Despite these findings, I believed that I could study the National Semiconductor data further to test my initial hypothesis. The data set I am working with covers only part of the company’s entire manufacturing supply chain, and while my results do indicate that the processes are out of control, they do not show exactly where along the supply chain the problems are located. After looking over the data in Figure 1, I realized that I could use quality control statistics to analyze subgroups. Three manufacturing groups, CMOS, CS, and VIP, each have at least four types of processes, so I decided to analyze these groups using quality control statistics. I once again used both the given data and the calculated 50% data, since the latter is supposed to represent a good estimate of the FITS.

For each subgroup I analyzed the data using FITS, MTTF, and adjusted MTTF. I used the following equations to build the control charts:

Control chart average $= \sum_i (n_i \times \bar{x}_i) \big/ \sum_i n_i$ [xxix]

Control chart standard deviation $= \sqrt{\dfrac{\sum_i (n_i - 1)\, s_i^2}{\sum_i n_i - m}}$, where $m = 4$ or $5$* [xxx]

$\text{UCL}, \text{LCL} = \text{control chart average} \pm A_3 \times \text{control chart standard deviation}$ (for each sample)[xxxi]

*Two of the subgroups had five processes, and the third had four.
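As a sketch, the subgroup computation simply repackages the earlier recipe with m = 4 or 5. All numbers below are placeholders, not the actual CMOS, CS, or VIP data.

```python
# Sketch: control limits for one subgroup (4 or 5 processes). Placeholder data.
import numpy as np

def control_limits(n, xbar, s):
    """Center line, UCL array, and LCL array for variable sample sizes."""
    m = len(n)
    center = np.sum(n * xbar) / np.sum(n)
    s_bar = np.sqrt(np.sum((n - 1) * s**2) / (np.sum(n) - m))
    c4 = 4 * (n - 1) / (4 * n - 3)
    A3 = 3 / (c4 * np.sqrt(n))
    return center, center + A3 * s_bar, np.maximum(center - A3 * s_bar, 0.0)

# A hypothetical five-process subgroup:
n = np.array([40, 55, 60, 45, 50])
xbar = np.array([3.1, 4.8, 2.5, 6.2, 3.9])
center, ucl, lcl = control_limits(n, xbar, np.full(5, 1.3))
```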

Given Data

[Control charts: FITS, MTTF, and adjusted MTTF for the CMOS, CS, and VIP subgroups]

50% Data

[Control charts: FITS, MTTF, and adjusted MTTF for the CMOS, CS, and VIP subgroups]

The subgroup analysis of the given data and the 50% data further shows that the manufacturing process is out of control across the provided data. Each of the subgroups is out of control for each of the three tests (FITS, MTTF, adjusted MTTF), so the problematic process cannot be isolated to one group.

I noticed from the FITS calculations that, as the confidence level increased, the lower control limits became negative. Because error rates cannot be negative, I set each of these values to zero. From the FITS calculations, the 95% level did not improve the results; the data were still out of control. I wondered, however, whether the subgroup analysis would yield different results at the 95% level. At this confidence level, I once again calculated the MTTF using the following equation:

MTTF = 10^9/FITS[xxxii]

Using the same analysis and formulas as in my previous calculations, I produced the following control charts.

[Control charts: 95% confidence subgroup analysis for CMOS, CS, and VIP]

According to the control charts, some of the performance improves slightly. For instance, in the VIP FITS control chart, two of the data points are within the control limits and the other two are on the edges of the limits. Still, at this high confidence level, the subgroups are out of control.

CONCLUSION

Therefore, I believe that there are two ways to consider my results. From one perspective, I may have found another way to analyze the National Semiconductor data. Even though I had to make some assumptions, the consistently out-of-control quality control charts indicate that something is wrong across the manufacturing supply chain, and the subgroup analysis shows that the problem is not isolated to any particular manufacturing station.

However, there is another way to consider my data. The following control chart models the data as National Semiconductor recommends, with the 50% FITS as the average rate estimate, the 60% FITS as the upper control limit, and 0 as the lower control limit.

[Control chart: 50% FITS center line, 60% FITS upper control limit, lower control limit at zero]

This control chart is in control. Because this chart produced in-control data while all of my charts, regardless of the model and confidence level, yielded out-of-control data, I am led to believe that my hypothesis of modeling the FITS data using control chart equations is an unsuccessful way of analyzing this data. Thus, the best way to model this data is to use the Arrhenius equation to calculate both the average and the upper control limit. Using National Semiconductor’s procedures, there appears to be no statistical variation when analyzing FITS across the production line.
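Under that reading, the recommended chart is built directly from the chi-square calculation rather than from control chart constants. A minimal sketch, with a hypothetical device-hours figure standing in for a real process total:

```python
# Sketch of the chart National Semiconductor's procedure implies: the 50%
# chi-square FIT as the center line, the 60% FIT as the UCL, zero as the LCL.
from scipy.stats import chi2

def fit_bound(confidence: float, rejects: int, device_hours: float) -> float:
    return chi2.ppf(confidence, 2 * rejects + 2) / (2.0 * device_hours) * 1e9

device_hours = 5e8                         # hypothetical total device-hours
center = fit_bound(0.50, 0, device_hours)  # ~1.39 FITs for this T
ucl = fit_bound(0.60, 0, device_hours)     # ~1.83 FITs for this T
lcl = 0.0
```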

Instead of saying that the Arrhenius equation produced an upper confidence level, I assumed that the Arrhenius equation calculated an average failure rate, so that we can be statistically confident (at a certain level) that the actual failure rate is the one calculated with the Arrhenius formula. If my calculations had yielded statistically in-control charts, my results might have indicated that the National Semiconductor data could be analyzed another way. Even though my data may not lead to a new way to study quality control, the analysis was still worth performing because companies and researchers are always looking for new ways to improve existing methods. If nothing else, my study showed the value of trying something different and seeing whether it can be applied to current procedures.

-----------------------

[i] “Quality Network.” Available online.

[ii] “How to Calculate FITs and MTBF.” Available online.

[iii] Ibid.

[iv] “Metal Migration.” Available online.

[v] Ibid.

[vi] Ibid. Please refer to chart titles: “Acceleration Factors for Common Junction Temperatures and Common Activation Energies.”

[vii] Ibid.

[viii] Vigrass, William J. “Calculation of Semiconductor Failure Rates.” Available: rel.docs/rel/calculation_of_semiconductor_failure_rates.pdf.

[ix] “Quality Network.” Available online.

[x] “Reliability Programs.” Available online.

[xi] Ibid.

[xii] Vigrass, William J. “Calculation of Semiconductor Failure Rates.” Available: rel.docs/rel/calculation_of_semiconductor_failure_rates.pdf.

[xiii] Because there are 34 data points, I used 33 degrees of freedom and assumed a two-sided test. According to the t-table, at 33 degrees of freedom the critical t-score at a significance level of 0.01 is 2.445; at the 0.001 significance level it is 2.733. For the null hypothesis to be retained, the critical value must be greater than the observed absolute value of 2.695; this first occurs at the 0.001 significance level. Therefore, the confidence level at which we can still say that the data are not statistically different is approximately 99.8% (1 − 2 × 0.001).

[xiv] “Chi-Square Distribution Calculator: Online Statistical Table.” Available online.

[xv] Montgomery, Douglas C. Introduction to Statistical Quality Control. 5th Edition. John Wiley & Sons, 2005. Page 227.

[xvi] Ibid.

[xvii] Montgomery, Douglas C. Introduction to Statistical Quality Control. 5th Edition. John Wiley & Sons, 2005. Page 225.

[xviii] Montgomery, Douglas C. Introduction to Statistical Quality Control. 5th Edition. John Wiley & Sons, 2005. Page 725.

[xix] Ibid.

[xx] “Reliability Programs.” Available online.

[xxi] “Theoretical Definitions and Alternative Uses for MTBF.” Available online.

[xxii] Ibid.

[xxiii] Montgomery, Douglas C. Introduction to Statistical Quality Control. 5th Edition. John Wiley & Sons, 2005. Page 304.

[xxiv] “Reliability Programs.” Available online.

[xxv] Montgomery, Douglas C. Introduction to Statistical Quality Control. 5th Edition. John Wiley & Sons, 2005. Page 304.

[xxvi] Montgomery, Douglas C. Introduction to Statistical Quality Control. 5th Edition. John Wiley & Sons, 2005. Page 227.

[xxvii] Ibid.

[xxviii] Montgomery, Douglas C. Introduction to Statistical Quality Control. 5th Edition. John Wiley & Sons, 2005. Page 225.

[xxix] Montgomery, Douglas C. Introduction to Statistical Quality Control. 5th Edition. John Wiley & Sons, 2005. Page 227.

[xxx] Ibid.

[xxxi] Montgomery, Douglas C. Introduction to Statistical Quality Control. 5th Edition. John Wiley & Sons, 2005. Page 225.

[xxxii] “Reliability Programs.” Available online.
