MINIMIZING ERROR IN CALIBRATING SPECTROMETERS Part 3. Continuous Maintenance – Determine the Optimum Number of Factors and Identify Outliers That Degrade Model Performance

In every instance of process or laboratory measurement, it pays to understand the sources of error and to minimize them where possible. From the sampling system, to the instrument or analyzer settings, to the method development used for interpreting the signal, to the maintenance of the calibration over time, each stage contributes to the error profile. In the case of optical spectroscopy, care needs to be exercised in the initial setup and data processing method development, but once set, these remain constant for the life of the system. Minimizing error then falls to routine calibration maintenance, which requires constant monitoring of the process results.

In Part 1, one suggestion was to improve the precision of the reference method by running duplicate samples and averaging them or adopting the median value. However, we can take advantage of the many samples collected during the calibration process, and explicit averaging may not be necessary. Consider a typical calibration as shown in the figure. The scatter of points around the regression line is produced by errors in both the spectrometer/sampling system and the reference method. In a Gaussian distribution, the best answer is at the apex. If we rotate the regression line to a vertical orientation, we essentially map the points onto a Gaussian, with the regression line passing through its apex. This means that the best estimate of the true value, at any point along the calibration range, lies on the regression line itself: the regression effectively averages the reference-method errors across all of the calibration samples.
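A minimal simulation can make this concrete. The sketch below (not from the article; all variable names, noise levels, and sample counts are illustrative assumptions) generates a set of calibration samples whose reference values carry more noise than the instrument response, fits an ordinary least-squares line, and compares the error of individual reference values against the error of estimates read from the regression line.

import numpy as np

rng = np.random.default_rng(42)

n_samples = 100
true_conc = rng.uniform(0.0, 10.0, n_samples)    # true analyte levels (assumed range)
ref_sigma = 0.30                                 # reference-method error (assumed)
inst_sigma = 0.05                                # spectrometer/sampling error (assumed)

reference = true_conc + rng.normal(0.0, ref_sigma, n_samples)  # noisy lab values
response = true_conc + rng.normal(0.0, inst_sigma, n_samples)  # instrument signal

# Ordinary least-squares fit of reference value against instrument response
slope, intercept = np.polyfit(response, reference, 1)
predicted = slope * response + intercept

# Error of single reference values vs. error of regression-line estimates,
# both measured against the simulated truth
rmse_reference = np.sqrt(np.mean((reference - true_conc) ** 2))
rmse_regression = np.sqrt(np.mean((predicted - true_conc) ** 2))

print(f"RMSE of individual reference values: {rmse_reference:.3f}")
print(f"RMSE of regression-line estimates:   {rmse_regression:.3f}")

Under these assumed noise levels, the regression-line estimates come out markedly closer to the truth than the individual reference values, because the fit pools information from all of the samples rather than relying on any single noisy measurement.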

Here is the link to the full presentation on Minimizing Error in Calibrating Spectrometers from the ATC 2025 Conference. It is also available as a YouTube video, from 00:00 to 22:00.