MINIMIZING ERROR IN CALIBRATING SPECTROMETERS – Part 4. Conclusion and Understanding True Calibration Error

In presenting this information over the last two decades, one commonly hears that “the calibration can only be as good as the reference value”; this statement is not true. Because of the precision of optical spectroscopy, assuming the analyzer physics is appropriate to the task, a well-calibrated spectrometer will outperform most laboratory reference methods. In addition, there are ways to get a better estimate of the true error of the analysis; adjusting the apparent error to correct for errors in the reference value is the appropriate measure of the quality of the analysis.

Best practices in the laboratory include assessing the precision of the reference value. Here, we run a given sample several, even many, times to find the standard deviation of the measurements. Combining this value with the degrees of freedom (essentially the number of repeat assessments) allows you to estimate the actual error in your spectral analysis. Comparing observed and corrected errors for several petroleum laboratory measurements shows that the true error is typically between 50 and 75% of the error reported in the calibration step.
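Because the spectrometer's error and the reference method's error are independent, they combine in quadrature, so the reference contribution can be subtracted from the apparent calibration error. A minimal sketch of that correction (the function name and the numbers are illustrative, not taken from the presentation):

```python
import math

def corrected_model_error(apparent_rmse, ref_sd):
    """Estimate the spectrometer's true error by removing the
    reference method's contribution; independent errors add in
    quadrature.  Returns 0.0 if the reference error dominates."""
    diff = apparent_rmse ** 2 - ref_sd ** 2
    return math.sqrt(diff) if diff > 0 else 0.0

# Hypothetical numbers: apparent calibration error of 0.40 units,
# reference standard deviation of 0.30 units
print(round(corrected_model_error(0.40, 0.30), 3))  # 0.265
```

With these illustrative inputs, the corrected error (about 0.26) is roughly 66% of the apparent error, consistent with the 50–75% range quoted above.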

Here is the link to the full presentation on Minimizing Error in Calibrating Spectrometers from the ATC 2025 Conference. It is also available as a YouTube video (00:00 to 22:00).

MINIMIZING ERROR IN CALIBRATING SPECTROMETERS – Part 3. Continuous Maintenance – Determine the Optimum Number of Factors and Identify Outliers That Degrade Model Performance

In every instance of process or laboratory measurement it pays to understand the source of errors and to minimize those errors where possible. From the sampling system to the instrument or analyzer settings, to the method development used for interpreting the signal, to the maintenance of the calibration over time, all have something to contribute to the error profile. In the case of optical spectroscopy, care needs to be exercised in the initial setup and data processing method development, but once set, these remain constant for the life of the system. Minimizing error then falls to the routine calibration maintenance, which requires constant monitoring of the process results.

In Part 1, one suggestion was to improve the precision of the reference method by running duplicate samples and averaging or adopting the median value. However, we can take advantage of the many samples collected during the calibration process, and averaging may not be necessary. Consider a typical calibration as shown in the figure. The distribution of points around the regression line is controlled by errors both in the spectrometer/sampling system and in the reference method. In a Gaussian distribution, the best answer is at the apex. If we rotate the regression line to a vertical orientation, we are essentially mapping the points onto a Gaussian. This means that the best estimate for any sample lies on the regression line itself.
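The averaging effect of the regression can be illustrated with a small simulation: a line fitted through many noisy reference values sits closer to the true relationship than any individual reference measurement does. A sketch under assumed noise levels (all values hypothetical):

```python
import random

random.seed(0)

# Hypothetical calibration set: true property values paired with
# noisy reference measurements (assumed reference SD of 0.5 units)
true_vals = [i * 0.5 for i in range(40)]
ref_vals = [t + random.gauss(0, 0.5) for t in true_vals]

# Ordinary least-squares fit of reference value against true value
n = len(true_vals)
mx = sum(true_vals) / n
my = sum(ref_vals) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(true_vals, ref_vals))
         / sum((x - mx) ** 2 for x in true_vals))
intercept = my - slope * mx

# Predictions taken from the fitted line pool all 40 samples, so on
# average they land closer to the truth than the raw reference values
line_err = sum(abs(slope * t + intercept - t) for t in true_vals) / n
ref_err = sum(abs(r - t) for r, t in zip(ref_vals, true_vals)) / n
print(line_err < ref_err)
```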

Here is the link to the full presentation on Minimizing Error in Calibrating Spectrometers from the ATC 2025 Conference. It is also available as a YouTube video (00:00 to 22:00).

MINIMIZING ERROR IN CALIBRATING SPECTROMETERS – Part 2. Method Development – Choosing Preprocessing, Model Complexity, and Algorithm Selection

Choosing the best method determines how well future spectra will be processed. The calibration procedure for spectroscopic models begins with selecting appropriate preprocessing methods. Different analyzer technologies (e.g., NIR vs. Raman), spectrometer manufacturers, and chemometric algorithms can influence the optimal choices. To evaluate preprocessing methods, Root Mean Squared Error of Cross Validation (RMSECV) is commonly used, offering a reliable measure of combined bias and precision error.

In one example, various combinations of preprocessing techniques (e.g., derivatives, scatter correction, normalization) were tested. The results showed that some combinations significantly outperformed others, with the best approaches selected based on minimizing RMSECV with the fewest model factors.
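As a rough illustration of how RMSECV is computed, the sketch below applies leave-one-out cross-validation to a simple univariate least-squares model standing in for a full chemometric model; the data and helper names are hypothetical:

```python
import math

def rmsecv(xs, ys, fit, predict):
    """Leave-one-out RMSECV: refit the model with each sample held
    out, predict the held-out sample, and pool the squared errors."""
    errs = []
    for i in range(len(xs)):
        x_tr, y_tr = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        model = fit(x_tr, y_tr)
        errs.append((predict(model, xs[i]) - ys[i]) ** 2)
    return math.sqrt(sum(errs) / len(errs))

def fit_ls(xs, ys):
    """Univariate least-squares stand-in for a chemometric model."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return b, my - b * mx

def predict_ls(model, x):
    b, a = model
    return b * x + a

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 1.9, 3.2, 3.9, 5.1]
print(round(rmsecv(xs, ys, fit_ls, predict_ls), 3))  # 0.182
```

In practice the same loop is run for each candidate preprocessing combination, and the one with the lowest RMSECV at the fewest factors wins.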

Partial Least Squares (PLS) regression is the standard algorithm due to its widespread integration and effectiveness in handling errors from both spectroscopy and lab sources. However, PLS assumes linearity, which doesn’t always apply. In such cases, alternative methods like Locally Weighted PLS (LWR-PLS) can provide better performance by addressing non-linearity through localized modeling.
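The local-modeling idea behind LWR-PLS can be sketched in one dimension: weight the calibration samples by their closeness to the query spectrum and fit a local line. This is a simplified stand-in for the concept, not the actual LWR-PLS algorithm, and all data here are hypothetical:

```python
import math

def lwr_predict(xs, ys, x_query, bandwidth=1.0):
    """Locally weighted regression: fit a weighted line using mainly
    the training samples near the query (Gaussian kernel weights)."""
    w = [math.exp(-((x - x_query) ** 2) / (2 * bandwidth ** 2)) for x in xs]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    num = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    den = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    b = num / den if den else 0.0
    return my + b * (x_query - mx)

# Non-linear data that a single global line would fit poorly
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [x ** 2 for x in xs]
print(round(lwr_predict(xs, ys, 2.5, bandwidth=0.8), 2))
```

Because each prediction uses its own locally fitted model, the approach tracks curvature that a single global linear model would miss.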

Finally, method development is a one-time effort. Once a spectral data processing method is chosen, it generally remains fixed throughout the instrument’s use.

Here is the link to the full presentation on Minimizing Error in Calibrating Spectrometers from the ATC 2025 Conference. It is also available as a YouTube video (00:00 to 22:00).

MINIMIZING ERROR IN CALIBRATING SPECTROMETERS – Part 1. Accuracy vs Precision: Misconceptions of Error That Influence Decisions

The terms “accuracy” and “precision” are often confused. Accuracy refers to how close a result is to a known value, while precision indicates the consistency of repeated results. Ideal measurements are both accurate and precise. However, it’s possible to have high precision with poor accuracy, or low precision that still averages out to an accurate result over many trials.

Application to Octane Measurement:
In octane rating analysis, precision is a challenge, especially with the reference octane engine, which is not always precise despite being used to define accuracy. Essentially, the octane engine represents the “not precise, maybe accurate” portion of the picture. Repeated measurements of the same gasoline sample show a normal distribution of values with a standard deviation of ~0.25 octane units, meaning 95% of results fall within ±0.5 units.

The calibration process, then, is charged with mapping a very precise spectrum to a far less precise reference value, and the calibration must account for this. One can run multiple engine tests on the same sample and use the average value to reduce error; the error decreases with the square root of the number of runs. But when the reference is imprecise, there is more we can do.
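The square-root relationship is easy to quantify. Using the engine repeatability of ~0.25 octane units cited above:

```python
import math

REF_SD = 0.25  # engine repeatability from the article, octane units

def standard_error(n_runs, sd=REF_SD):
    """Error of the average of n independent engine runs shrinks
    with the square root of the number of runs."""
    return sd / math.sqrt(n_runs)

for n in (1, 2, 4, 9):
    print(n, round(standard_error(n), 3))  # 4 runs halve the error
```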

Here is the link to the full presentation on Minimizing Error in Calibrating Spectrometers from the ATC 2025 Conference. It is also available as a YouTube video (00:00 to 22:00).

APACT 2025 Conference. Don’t Miss It.

Venue: Hilton Glasgow, 1 William Street, Glasgow, G3 8HT

Date: Sep 23 – 25, 2025

https://apact.co.uk/


The APACT meeting is one of the most dynamic forums for integrating detailed knowledge of analytical chemistry with strategies for managing the shift from data-centric to information-centric processes.

The combination of process-focused academics and a diverse set of industry people makes the APACT meeting both informative and enjoyable.  The team at CPACT is second to none at organizing and conducting a tight meeting.

Chemometrics & Advanced Data Analysis
Tuesday Sept 23, 2025
11:00 – 11:25
Chemometrics versus machine learning
Brian Rohrback
Infometrix, Inc.