
Measurement System Analysis and Destructive Testing


Measurement system analysis is a vital component of many quality improvement initiatives. It is important to assess the ability of a measurement system to detect meaningful differences in process variables. In many instances, it is possible to design an experiment in which the level of variation attributed to the operators, the parts and the operator by part interaction effect can be assessed. This is possible when all operators have the ability to measure each part, and it is typically the experimental design assessed during the measure phase of the define-measure-analyze-improve-control (DMAIC) methodology. In many measurement systems, however, the part being measured is affected in some manner. As such, you cannot assume all operators in the study can assess the same part and reasonably obtain similar results. Consider destructive testing, for example: a part characteristic such as tensile strength, impact strength or the burst pressure of a vessel is measured as the part is destroyed.

Once the measurement is obtained for a particular part, that part is no longer available for additional measurements with the same or different operators. There are statistical methods available to estimate the repeatability and reproducibility (R&R) components in destructive scenarios if a key, and perhaps controversial, assumption is made. The assumption is that it is possible to identify a batch of parts enough alike that it is reasonable to consider them the same part. This means the measurement characteristic of interest is identical for each part in the group: the batch is homogeneous. This assumption is important because the observed within-batch variability is used to estimate the repeatability of the measurement system. If this assumption is reasonable, you can consider conducting an R&R study. The homogeneous batch size is an important consideration in designing and analyzing a destructive R&R study. A more traditional or crossed design and analysis may be appropriate when the batch is large enough to assign at least two parts from each batch to each operator.
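The role the homogeneous-batch assumption plays can be sketched numerically. Under that assumption, the spread within each batch is attributed entirely to measurement repeatability, so a pooled within-batch standard deviation serves as the repeatability estimate. The specimen values below are invented for illustration, not taken from the article:

```python
import numpy as np

# Hypothetical batches: three specimens cut from each of three ingots.
# Under the homogeneous-batch assumption, spread within a batch is
# attributed entirely to measurement repeatability.
batches = [
    [101.2, 100.8, 101.0],
    [98.9, 99.4, 99.1],
    [100.3, 100.1, 99.8],
]

within_batch_vars = [np.var(b, ddof=1) for b in batches]

# Equal batch sizes, so pooling reduces to averaging the variances.
sd_repeatability = float(np.sqrt(np.mean(within_batch_vars)))
print(round(sd_repeatability, 4))
```

If the batches are not truly homogeneous, real part-to-part differences leak into this number and inflate the apparent repeatability, which is exactly the caveat the article raises later.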

Douglas Gorman, Minitab, and Keith …, Minitab

Figure 1. Nested Model Design (operators, with ingots 1 through 15 nested within operator and specimens nested within ingot)

This is because each operator can test each batch multiple times (the batch is crossed with the operator). When the batch can be crossed with the operator, this experimental design allows estimation of the operator by batch interaction. When the homogeneous batch is small and multiple parts from the batch cannot be given to each operator, an appropriate way to deal with the situation is to use a nested or hierarchical model. The model, the assumptions behind it and the interpretation of results are discussed using the following example. Consider a measurement system in which operators are required to measure the impact strength of metal specimens. The impact test measures the energy required to break a notched metal bar of specified geometry. The test specimens are prepared from ingots randomly selected from a wider population of ingots.
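The nested structure in Figure 1 can be written out as a plain enumeration. This is only a sketch of the layout implied by the figure (three operators, five ingots per operator, three specimens per ingot), with a hypothetical coding of ingots 1 through 15:

```python
# Nested layout: each ingot is tested by exactly one operator, and each
# ingot yields three specimens. Coding the ingots 1-15 overall means
# ingots 1-5 go to operator 1, 6-10 to operator 2 and 11-15 to operator 3.
layout = [
    (operator, ingot, specimen)
    for operator in (1, 2, 3)
    for ingot in range((operator - 1) * 5 + 1, (operator - 1) * 5 + 6)
    for specimen in (1, 2, 3)
]

print(len(layout))                        # 45 planned measurements
print(max(i for _, i, _ in layout))       # 15 distinct ingots in total
```

Because no ingot appears under more than one operator, there is no way to form operator-by-ingot cells for a crossed analysis; the ingot factor only has meaning inside its operator.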

The experimenters believe specimens created from the same ingot are more homogeneous than specimens created from different ingots. However, only three specimens can be prepared from each because of the ingot's size. Due to the destructive nature of the test and the small batch size, it is not possible for each operator to measure each ingot multiple times. Therefore, you cannot use a crossed gage R&R study. Instead, you must use the nested gage R&R study, which is available in Minitab, release 13. In this example there are three randomly selected operators in the study. Each operator is given five randomly selected ingots. Since it is possible to obtain three test samples from each ingot, each operator will record 15 measurements, leading to a total of 45 observations in the study. The statistical model for this nested design is:

strength_ijk = μ + operator_i + ingot_j(i) + ε_(ij)k

where i = 1, 2, 3; j = 1, 2, 3, 4, 5; k = 1, 2, 3. For more information on the mathematical theory and practical application of nested designs, refer to Design and Analysis of Experiments¹ and Statistics for …² For this setup, the ingots are coded one through five.

Of course, with the nested model in the above formula, the same ingot would not be measured across each operator. We may think, however, in terms of ingot one for operator one and ingot one for operator two. The arbitrary way in which we can code the 15 ingots under each operator (see Figure 1) indicates this model is nested, not crossed.

Results

Executing the commands stat > quality tools > gage R&R study (nested), you can complete the dialog box shown in Figure 2. Following the usual guidelines for the total R&R percentage study variation, the results suggest this measurement system needs improvement (see Table 1). A measurement system is generally considered unacceptable when this metric exceeds 30%, and in this case it does. The largest relative contributor to the measurement system variability is the reproducibility error of the operators.

Six Sigma Forum Magazine, August 2002, p. 17

Table 1. Standard Deviation Results (source; standard deviation, SD; study variation, 6 × SD; percentage study variation, %SV)

Figure 2. Nested Gage R&R Study Dialog Box

The default Minitab graphical output is investigated in the next section to guide improvement efforts with regard to the operator effect. Note the standard deviations and study percentages in Table 1 do not add up to the totals in the final row. This is because the variances are additive, but the standard deviations are not. For example, 3² + 4² = 5², but 3 + 4 ≠ 5. Looking at the corresponding analysis of variance results in Table 2, you will see the operator effect is statistically significant, with a p-value below the chosen α level. You can, therefore, reject the null hypothesis H0: σ²operator = 0 in favor of H1: σ²operator > 0. The ingot-to-ingot differences also contribute a relatively large amount of variability to the study. It is certainly desirable to have the largest component of variation in the study be due to batch or ingot differences.

Table 2. Analysis of Variance Output (source; degrees of freedom, DF; sum of squares, SS; mean square, MS; F-ratio, F; p-value, P)

Figure 3. Graphical Results From Nested Gage R&R Analysis (components of variation; R-chart by operator; X-bar chart by operator; by ingot (operator); by operator)

This ingot-to-ingot effect (within operators) is also statistically significant at the chosen level of significance. This result is important because it shows that even though the measurement system needs improvement, it is capable of detecting differences among ingots under each operator. Table 1 also shows the percentage study metric associated with the repeatability (what is not explained by the operator or ingots) is small relative to the others. With destructive R&R results, regardless of whether you use a crossed or nested model, the repeatability estimate actually contains within-batch variability.
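The arithmetic connecting Tables 1 and 2 can be sketched as follows: mean squares from a balanced nested ANOVA yield method-of-moments variance components, the variances add to a total, standard deviations are recovered by square roots (which is why the SD column does not sum), and %SV is a ratio of standard deviations. All numeric inputs below are hypothetical, not the article's values:

```python
import math

def nested_variance_components(ms_operator, ms_ingot, ms_error,
                               n_ingots=5, n_specimens=3):
    """Method-of-moments estimates from a balanced two-stage nested ANOVA.

    Expected mean squares:
      E[MS_error]    = v_error
      E[MS_ingot]    = v_error + n_specimens * v_ingot
      E[MS_operator] = v_error + n_specimens * v_ingot
                       + n_ingots * n_specimens * v_operator
    """
    v_error = ms_error
    v_ingot = max((ms_ingot - ms_error) / n_specimens, 0.0)
    v_operator = max((ms_operator - ms_ingot) / (n_ingots * n_specimens), 0.0)
    return v_operator, v_ingot, v_error

# Hypothetical mean squares.
v_op, v_ingot, v_err = nested_variance_components(500.0, 50.0, 5.0)
print((v_op, v_ingot, v_err))            # (30.0, 15.0, 5.0)

# Variances are additive; standard deviations are not
# (3^2 + 4^2 = 5^2, but 3 + 4 != 5).
v_gage = v_op + v_err                    # reproducibility + repeatability
v_total = v_gage + v_ingot
percent_sv = 100.0 * math.sqrt(v_gage) / math.sqrt(v_total)
print(percent_sv > 30.0)                 # True here: flagged as inadequate
```

With these made-up mean squares the total gage %SV exceeds the 30% guideline, reproducing the kind of verdict reported in Table 1.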

This means the repeatability estimate is likely inflated to the extent that the homogeneous batch assumption is violated. A measurement system found inadequate due mostly to repeatability error should raise questions about the homogeneous batch assumption. In essence, the information in the R&R table (see Table 1) tells us the ingot-to-ingot variation is larger than the combined repeatability and within-ingot variation. This supports the previous claim that specimens created from the same ingot are more homogeneous than those created from different ingots. Table 1 also indicates the operators are contributing almost as much variability to the study as the different ingots are.

Graphical Results

The R-chart in Figure 3 shows the level of variation exhibited within each ingot appears to be relatively constant. Since the range in this scenario actually represents the combined variability from the repeatability of the measurement system and the within-ingot variability, this chart would be helpful in identifying whether certain operators have difficulty consistently preparing and testing specimens, as well as in identifying specific ingots that were not homogeneous.

From the X-bar chart in Figure 3 you can see the ingot averages vary much more than the control limits. This is a desirable result, as the control limits are based on the combined repeatability and within-ingot variations. It indicates the between-ingot differences will likely be detected over the repeatability error. The chart also makes it apparent that the averages for operator one appear generally higher than those for operators two and three, but you cannot automatically lay blame on operator one. Instead, you should design an additional study to assess procedurally what the operators might do differently with regard to the important aspects of obtaining the measurement. For example, you may wish to inspect how the test specimens are prepared or how they are fixtured in the testing device. Remember, it is also possible the randomization procedure was ineffective in ensuring a representative sample of ingots was given to each operator and, by chance alone, operator one happened to get the three highest strength ingots out of the 15.
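The X-bar chart limits discussed here are conventionally computed from the average within-subgroup range using standard SPC chart constants. A minimal sketch, with subgroups of three specimens per ingot and an invented grand mean and average range:

```python
# Standard Xbar-chart factor A2 by subgroup size (standard SPC tables).
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}

def xbar_control_limits(grand_mean, r_bar, subgroup_size):
    """Xbar-chart limits: grand mean +/- A2 * average range (Rbar).

    With subgroups formed from specimens within an ingot, Rbar reflects
    repeatability plus within-ingot variation, so ingot averages falling
    outside these limits signal detectable between-ingot differences.
    """
    a2 = A2[subgroup_size]
    return grand_mean - a2 * r_bar, grand_mean + a2 * r_bar

# Hypothetical grand mean and average range for the 15 ingot subgroups.
lcl, ucl = xbar_control_limits(grand_mean=100.0, r_bar=4.0, subgroup_size=3)
print(round(lcl, 3), round(ucl, 3))  # 95.908 104.092
```

Because the limits are driven only by within-subgroup spread, points outside them are desirable in this context: they show the system can distinguish ingots despite its measurement error.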

The by ingot (operator) and by operator graphs in Figure 3 also indicate the prevalence of operator one's generally recording higher values for the five ingots measured, in comparison with operators two and three.

Design and Execution Are Important

By making use of an appropriate experimental design structure, it is possible to assess the performance of a measurement system when destructive testing is required. It is necessary to use a nested approach when homogeneous batch sizes are limited and each batch can only be tested multiple times by one operator. When using an R&R approach to assess a destructive measurement system, the results are not as straightforward as those in a nondestructive study. Specifically, repeatability variation is indistinguishable from the within-batch variation. If a destructive measurement system is deemed unacceptable from a repeatability standpoint, the homogeneous batch assumption should be questioned.

