IFPAC 2023 – The Multiverse of Challenges for Spectral Libraries

IFPAC 2023 Conference Short Course and Paper

Time to be announced
Bethesda North Marriott Hotel and Conference Center

Presented by:
Brian Rohrback, Ph.D., MBA, President, Infometrix, Inc.


There are challenges when considering application-specific libraries of optical spectra. For most industrial quality control applications, no standard set of spectra is available, because the process is typically tied to a set of (unique) analytes mixed in varying proportions. Add in changes from ingredient suppliers, seasonal variations, and shifts in unit operations, and there is no pinpoint target for assessing quality. Luckily, we have more than a half century of experience processing data like this using chemometrics and, under its newer moniker, machine learning. But handling process libraries is not just a simple application of an appropriate algorithm; there are challenges to consider in all aspects of sample collection, handling instrument drift, and ensuring consistency across operators. An outline of best practices needs to include how to match laboratory reference data to spectral data, an unbiased mechanism for selecting validation samples, an optimal mechanism for model construction, establishing standards for quality reports, tracking model performance over time, handling process or ingredient transitions, and much more. A systematic approach to building, maintaining, and benefiting from an application-specific spectral library is presented as part of the USP effort to establish appropriate standard practices.
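As one illustrative sketch of an unbiased mechanism for splitting a spectral library into calibration and validation sets (not necessarily the procedure the short course recommends), the classic Kennard-Stone maximin rule selects calibration samples that span the measured space, leaving the remainder as a validation set chosen without operator bias:

```python
import math

def kennard_stone(samples, k):
    """Select k calibration samples by the Kennard-Stone (maximin) rule:
    seed with the two most distant spectra, then repeatedly add the sample
    farthest from all samples already chosen.  The unselected remainder
    can serve as an unbiased validation set."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    n = len(samples)
    # Seed with the mutually most distant pair of samples.
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda p: dist(samples[p[0]], samples[p[1]]))
    chosen = [i0, j0]
    while len(chosen) < k:
        remaining = [i for i in range(n) if i not in chosen]
        # Pick the sample whose nearest chosen neighbor is farthest away.
        nxt = max(remaining,
                  key=lambda i: min(dist(samples[i], samples[j]) for j in chosen))
        chosen.append(nxt)
    return chosen  # indices of calibration samples; the rest -> validation

# Toy example: four two-channel "spectra" (real spectra have many channels).
spectra = [(0.0, 0.0), (0.1, 0.1), (1.0, 1.0), (0.5, 0.4)]
cal = kennard_stone(spectra, 3)
```

Because the split is deterministic given the data, every operator working from the same library arrives at the same calibration/validation partition.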

Register at www.IFPACglobal.org/attendee-registration.

IFPAC 2020 – Autonomous Calibration and Optimizing Chromatographic Interpretation

IFPAC 2020
Feb 23-26, 2020
Bethesda, MD

See abstracts below for papers being presented at the IFPAC 2020 conference. Join us or contact us for more information.



Autonomous Calibration
Brian Rohrback – Infometrix
Randy Pell – Infometrix
Scott Ramos – Infometrix

The use of chemometrics in processing spectroscopic data is far from new; the processing of NIR data in petroleum refineries dates to the early 1980s and, in the food industry, well before that. Although computers have improved in performance, speeding up the calibration process, the procedures being followed have not changed significantly since the 1980s. Intriguingly, we have made decisions at the corporate level that work against each other: we are installing more spectrometers while at the same time reducing the staff who calibrate and maintain them. A change in approach is mandated. In the spirit of automation, there are tools from both the chemometrics and the general statistics realms that can be applied to simplify the work involved in optimizing a calibration. Robust statistical techniques require some parameter set-up, but once established for an application, they are often usable in every other instance of that application. The result is a one-pass means of selecting optimal samples for a calibration problem that, in turn, simplifies the assignment of model rank. This approach thus solves two problems at once: sample selection and rank assignment.
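The assignment of model rank mentioned above is commonly automated from a cross-validated error curve. As a hedged sketch (the rule and its tolerance are illustrative assumptions, not Infometrix's published procedure), one can pick the smallest number of latent variables whose cross-validated PRESS is within a tolerance of the global minimum, favoring parsimonious models over marginal error gains:

```python
def select_rank(press, tolerance=0.05):
    """Choose a model rank (number of latent variables) from a
    cross-validated PRESS curve: return the smallest rank whose PRESS
    falls within `tolerance` (fractional) of the global minimum.
    Preferring the smaller rank guards against overfitting."""
    best = min(press)
    for rank, value in enumerate(press, start=1):
        if value <= best * (1 + tolerance):
            return rank

# Hypothetical PRESS values for ranks 1..5 of a PLS model.
press = [10.0, 4.0, 2.1, 2.0, 2.05]
rank = select_rank(press)
```

With the example curve, the global minimum is at rank 4, but rank 3 is within 5% of it, so the rule returns the more parsimonious rank 3; set `tolerance=0.0` to recover the plain minimum-PRESS choice.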


Optimizing Chromatographic Interpretation
Brian Rohrback – Infometrix, Inc.

The heartbeat of the process environment is in the data we collect, but we are not always efficient in translating our data streams into actionable information. The richest source of process information comes from spectrometers and chromatographs and, for many applications, these prove to be the cheapest, most adaptable, and most reliable technologies available. Chromatography has a rich history, and the role of chemometrics is well defined but rarely placed into routine practice. This paper will provide a retrospective of routine processing solutions that have solved problems in pharmaceutical, clinical, food, environmental, chemical, and petroleum applications. It also discusses how to use techniques borrowed from other fields to produce more consistent and objective GC results, automate the translation of raw traces into real-time information streams, and create databases that can be used across plant sites or even across industries.
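One reason GC results are inconsistent across instruments and sites is retention-time drift, and correcting it is a typical first step before traces can be pooled into a shared database. As a toy stand-in for the alignment methods used in practice (such as correlation-optimized warping; this sketch is an assumption for illustration, not the paper's method), an integer retention shift can be estimated by maximizing the overlap between a trace and a reference:

```python
def best_shift(reference, trace, max_shift=5):
    """Estimate the integer retention-time shift that best aligns a
    chromatographic trace to a reference, by maximizing the dot-product
    overlap of the two signals over a window of candidate shifts."""
    def overlap(shift):
        # Dot product of the overlapping region after shifting `trace`.
        if shift >= 0:
            pairs = zip(reference[shift:], trace)
        else:
            pairs = zip(reference, trace[-shift:])
        return sum(r * t for r, t in pairs)
    return max(range(-max_shift, max_shift + 1), key=overlap)

# Synthetic example: the trace's peak elutes two points earlier
# than the reference's, so the best alignment shift is +2.
reference = [0, 0, 0, 5, 9, 5, 0, 0]
trace = [0, 5, 9, 5, 0, 0, 0, 0]
shift = best_shift(reference, trace)
```

Real implementations warp the time axis non-uniformly rather than applying a single global shift, but the objective, automated nature of the correction is the same.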