Gage R&R Attribute

Attribute R&R specifically concerns attribute characteristics, i.e. characteristics that are classified into categories rather than measured on a numerical scale, such as compliance with a specification or the presence or absence of a defect. 

The MSA (Measurement System Analysis) standard provides guidelines and methodologies for assessing and improving the reliability of a measurement system, whether for continuous or attribute measurements. For attribute characteristics, MSA offers specific methods for assessing repeatability and reproducibility, notably the attribute Gage R&R method based on Cohen's Kappa indicator. 

According to the MSA standard, to carry out a Gage R&R attribute study, the following steps are generally recommended: 

  • Selecting attribute characteristics: identify the attribute characteristics you wish to evaluate. Example: appearance. 
  • Defining categories: clearly define the categories or classifications for each attribute characteristic, for example conforming/non-conforming (CF/NCF), OK/KO, etc. 
  • Operator selection: select the operators who will perform the measurements. These operators should be representative of those who will use the measurement system in practice. 
  • Data collection: measure the attribute characteristics on a sample of parts using the measurement system in question. This step requires an expert to characterize the samples before the Gage R&R study. Each operator then evaluates each part, usually several times (see the sketch after this list). 
  • Data analysis: use appropriate statistical methods to break down the total variation into components attributable to repeatability (variation for the same operator) and reproducibility (variation between operators). 
  • Interpretation of results: evaluate the proportion of total variation attributable to repeatability and reproducibility. If these variation components are small, i.e. operators agree with themselves, with each other and with the expert reference, the measurement system is considered reliable for the attribute characteristics. 
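
As an illustration of the data collection and analysis steps, here is a minimal sketch in Python. The part identifiers, operator names, decisions and the helper function are hypothetical; a real study would typically involve 2–3 operators, several dozen parts and 2–3 trials per operator.

```python
# Minimal sketch of an attribute Gage R&R data layout (hypothetical data).
# Each part has an expert reference decision; each operator judges each part twice.
reference = {  # expert characterization carried out before the study
    "P01": "OK", "P02": "KO", "P03": "OK", "P04": "OK", "P05": "KO",
}

# appraisals[operator][part] = list of decisions, one per trial
appraisals = {
    "Operator A": {"P01": ["OK", "OK"], "P02": ["KO", "KO"], "P03": ["OK", "KO"],
                   "P04": ["OK", "OK"], "P05": ["KO", "KO"]},
    "Operator B": {"P01": ["OK", "OK"], "P02": ["KO", "OK"], "P03": ["OK", "OK"],
                   "P04": ["OK", "OK"], "P05": ["KO", "KO"]},
}

def agreement_rates(appraisals, reference):
    """Per-operator repeatability (agreement between own trials) and
    agreement with the expert reference (all trials correct)."""
    for operator, parts in appraisals.items():
        repeat = sum(len(set(trials)) == 1 for trials in parts.values()) / len(parts)
        accurate = sum(all(d == reference[p] for d in trials)
                       for p, trials in parts.items()) / len(parts)
        print(f"{operator}: repeatability {repeat:.0%}, agreement with expert {accurate:.0%}")

agreement_rates(appraisals, reference)
```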

Cohen's Kappa 

Cohen's kappa coefficient, often referred to simply as "kappa", is a statistical measure used to assess the agreement between two observers or measurement methods when classifying items into discrete categories. It is widely used in medical research, epidemiology, psychology and other disciplines where agreement between observers or methods is important. 

The kappa coefficient takes into account both the actual agreement between observers and the agreement that could be due to chance. It thus corrects the raw agreement rate by accounting for the probability of agreement occurring by chance. 

The kappa coefficient is calculated using the formula:

Kappa = \frac{P_{0}-P_{e}}{1-P_{e}}

  • P_0 is the proportion of agreement observed between the observers. 
  • P_e is the proportion of agreement expected by chance, computed from the marginal proportions of each observer's classifications. 

The kappa coefficient can vary from -1 to 1, where:

  • Kappa = 1 indicates perfect agreement between observers. 
  • Kappa = 0 indicates an agreement equivalent to that which could be obtained by chance. 
  • Kappa = -1 indicates complete disagreement between observers. 
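
To make the formula concrete, here is a minimal Python sketch that computes Cohen's kappa for two observers classifying the same parts. The operator decision lists are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two observers classifying the same items."""
    n = len(rater_a)

    # P_0: observed proportion of agreement
    p0 = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # P_e: chance agreement, from each observer's marginal proportions per category
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    pe = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))

    return (p0 - pe) / (1 - pe)

# Hypothetical decisions of two operators on the same 10 parts
# (OK = conforming, KO = non-conforming)
operator_1 = ["OK", "OK", "KO", "OK", "KO", "OK", "OK", "KO", "OK", "OK"]
operator_2 = ["OK", "KO", "KO", "OK", "KO", "OK", "OK", "OK", "OK", "OK"]
print(f"Kappa = {cohens_kappa(operator_1, operator_2):.2f}")  # -> Kappa = 0.47
```

In a Gage R&R attribute study this calculation is typically applied to each pair of operators, and to each operator against the expert reference.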

Efficiency:

The efficiency of an attribute measurement method quantifies the ratio of correct decisions to the total number of opportunities for a decision.

Efficiency = \frac{\text{number of correct decisions}}{\text{total number of opportunities for a decision}}

Error rate:

The error rate expresses the frequency with which a non-conforming part is erroneously considered conforming by the operator.

Error rate = \frac{\text{number of "conforming" decisions given that the part is "non-conforming"}}{\text{number of opportunities for a non-conforming part}}

False alarm rate:

The false alarm rate expresses the frequency with which a conforming part is erroneously considered non-conforming by the operator.

False alarm rate = \frac{\text{number of "non-conforming" decisions given that the part is "conforming"}}{\text{number of opportunities for a conforming part}}
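
These three indicators can be computed directly from an operator's decisions and the expert reference. A minimal Python sketch, reusing the same hypothetical OK/KO coding as above (OK = conforming, KO = non-conforming):

```python
def decision_metrics(decisions, reference):
    """Efficiency, error rate and false alarm rate for paired (decision, reference) data."""
    pairs = list(zip(decisions, reference))
    n_total = len(pairs)
    n_nc = sum(ref == "KO" for _, ref in pairs)   # opportunities on non-conforming parts
    n_c = sum(ref == "OK" for _, ref in pairs)    # opportunities on conforming parts

    correct = sum(dec == ref for dec, ref in pairs)
    misses = sum(dec == "OK" and ref == "KO" for dec, ref in pairs)        # bad part accepted
    false_alarms = sum(dec == "KO" and ref == "OK" for dec, ref in pairs)  # good part rejected

    return {
        "efficiency": correct / n_total,
        "error rate": misses / n_nc if n_nc else 0.0,
        "false alarm rate": false_alarms / n_c if n_c else 0.0,
    }

# Hypothetical example: one operator's decisions vs. the expert reference on 10 parts
reference = ["OK", "OK", "KO", "OK", "KO", "OK", "OK", "KO", "OK", "OK"]
decisions = ["OK", "KO", "KO", "OK", "OK", "OK", "OK", "KO", "OK", "OK"]
print(decision_metrics(decisions, reference))
# -> efficiency = 0.8, error rate ≈ 0.33, false alarm rate ≈ 0.14
```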