ISA 2020 – Rethinking Calibration for Process Spectrometers II

The Long Beach Convention Center
Long Beach, CA
1:30pm, April 27th

Brian Rohrback
Infometrix, Inc.
Will Warkentin
Chevron Richmond Refinery

KEYWORDS
Best Practices, Calibration, Cloud Computing, Database, Gasoline Blending, Optical Spectroscopy, PLS, Process Control

ABSTRACT
Optical spectroscopy is a favored technology for measuring chemistry and is ubiquitous in the hydrocarbon processing industry. In a previous paper, we focused on a generic, machine-learning approach that addressed the primary bottlenecks of mustering data, automating analyzer calibration, and tracking data and model performance over time. The gain in efficiency has been considerable, and the fact that the approach does not disturb any legacy systems (i.e., it requires no changes or alterations to any analyzer or software in place) made deployment simple.

We also standardized a calibration procedure that adheres to best practices, archives all data and models, provides ease of access, and delivers the models in any format. What remains is to assess the speed of processing and the quality of the models. To that end, a series of calibration experts was tasked with model optimization, with the work restricted to selecting the proper samples to include in the computation and setting the number of factors in PLS. The time spent and the quality of the models were then compared. The automated system performed the work in minutes rather than hours, and the quality of its predictions at least matched the best experts and significantly exceeded that of the average expert. The conclusion is that a large amount of recoverable giveaway can be avoided through automation of this process and the consistency it brings to PLS model construction.
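
For readers who want a concrete picture of the optimization task described above, the following is a minimal sketch of choosing the PLS factor count by cross-validation. It assumes scikit-learn, and the arrays X (spectra) and y (laboratory reference values) are hypothetical stand-ins; this illustrates the task the experts and the automated system both faced, not the automated system's actual algorithm.

```python
# Minimal sketch: choosing the PLS factor count by cross-validation.
# Assumes scikit-learn; X (n_samples x n_wavelengths) and y (lab
# reference values) are hypothetical stand-ins for a calibration set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def select_pls_rank(X, y, max_factors=15, cv=10):
    """Return (best rank, RMSECV curve) over 1..max_factors."""
    rmsecv = []
    for n in range(1, max_factors + 1):
        pred = cross_val_predict(PLSRegression(n_components=n), X, y, cv=cv)
        rmsecv.append(float(np.sqrt(np.mean((np.ravel(pred) - np.ravel(y)) ** 2))))
    return int(np.argmin(rmsecv)) + 1, rmsecv
```

In practice, one often prefers the smallest rank whose RMSECV is within a small tolerance of the minimum, since the global minimum can ride on noise.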

INTRODUCTION
There is a great deal of mundane work tied to assembling the spectra and laboratory reference values needed for quality calibration work. There is also insufficient guidance on the model construction task itself. How much time should be spent on it? How do we best assess whether a spectrum-reference pair is an outlier? How many cycles of regression-sample elimination make sense? Where do we cross over from improving the model by adding PLS factors to overfitting and incorporating destabilizing noise?
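
One common, automatable answer to the outlier and elimination questions is sketched below: fit a PLS model, drop samples whose cross-validated residuals exceed a robust cutoff, and repeat for a bounded number of cycles. The scaled-MAD cutoff of 3.0, the two-cycle cap, and the scikit-learn usage are illustrative assumptions, not recommendations from this work.

```python
# Hedged sketch of iterative regression-sample elimination on
# cross-validated PLS residuals. The scaled-MAD cutoff (3.0) and the
# two-cycle cap are illustrative assumptions, not recommendations.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def eliminate_outliers(X, y, n_factors=5, cycles=2, cut=3.0):
    """Return indices of samples retained after elimination cycles."""
    X, y = np.asarray(X), np.ravel(y)
    keep = np.arange(len(y))
    for _ in range(cycles):
        pred = np.ravel(cross_val_predict(
            PLSRegression(n_components=n_factors), X[keep], y[keep], cv=10))
        resid = pred - y[keep]
        mad = 1.4826 * np.median(np.abs(resid - np.median(resid)))
        ok = np.abs(resid - np.median(resid)) < cut * mad
        if ok.all():
            break          # no outliers flagged; stop early
        keep = keep[ok]
    return keep
```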

For more information or the full paper, contact us.

2020 AIChE Spring Meeting and 16th Global Congress on Process Safety

Mar 31, 1:52pm
Hilton Americas and George R. Brown Convention Center, Houston, TX

See abstract below for presentation at the 2020 AIChE Spring Meeting. Join us or contact us for more information.

Harnessing Big Data Approaches and AI in the Chemical Processing Industry
Brian Rohrback – Infometrix

The term Big Data implies a systematic approach to extracting information from multiple, byte-dense data sources. Effective extraction of this information leads to improvements in decision making at all levels of the chemical, petrochemical, and petroleum industries. To accomplish anything in the Big Data space, we need to combine traditional approaches in statistics, database organization, pattern recognition, and chemometrics with some newer concepts tied to better understanding of data mining, neuro-computing, and machine learning. In order for industry to achieve the goals that this form of AI promises, we need to approach the issues with more than just words.

This is a summary of a multi-company, multi-industry, hydrocarbon processing consortium established seven years ago to re-evaluate how the calibration process for sensors and analyzers could be managed more efficiently. The focus spans optical spectrometers, chromatographs, and process sensors, independently and in combination. The idea is to enable a shift from current practices to approaches that take advantage of the computational power at our fingertips. It was critical to prioritize solutions that are non-disruptive, utilize legacy systems, and lessen the workload rather than layer on additional requirements. The result is that a choice of tools to consume the data and generate actionable, process-specific information is now in hand. The analyzers already in place, optical spectrometers in particular, represent the low-hanging fruit.

IFPAC 2020 – Autonomous Calibration and Optimizing Chromatographic Interpretation

Feb 23-26, 2020
Bethesda, MD

See abstracts below for papers being presented at the IFPAC 2020 conference. Join us or contact us for more information.

Autonomous Calibration
Brian Rohrback – Infometrix
Randy Pell – Infometrix
Scott Ramos – Infometrix

The use of chemometrics to process spectroscopic data is far from new; the processing of NIR data in petroleum refineries dates to the early 1980s, and in the food industry well before that. Although computers have improved in performance, speeding up the calibration process, the procedures being followed have not changed significantly since the 1980s. Intriguingly, we have made decisions at the corporate level that work against each other: we are installing more spectrometers while at the same time reducing the staffing for spectrometer calibration and maintenance. A change in approach is mandated. In the spirit of automation, there are tools from both the chemometrics and general statistics realms that can be applied to simplify the work involved in optimizing a calibration. Robust statistical techniques require some set-up of parameters, but once established for an application, they are often usable in every other instance of that application. The result is a one-pass means of selecting the optimal samples for a calibration problem and, in turn, a simpler assignment of model rank. This approach solves two problems at once: which samples to include and how many factors to retain.
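
To make the idea concrete, here is a minimal illustration of a one-pass robust sample screen, assuming scikit-learn and scipy. The PCA score space, the Minimum Covariance Determinant estimator, and the chi-squared cutoff are illustrative choices, not the procedure presented in the paper.

```python
# Illustrative one-pass robust sample screen (not the consortium's
# procedure): score the spectra in PCA space, then flag samples whose
# robust Mahalanobis distance exceeds a chi-squared cutoff.
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import PCA
from sklearn.covariance import MinCovDet

def one_pass_screen(X, n_scores=5, alpha=0.975):
    """Return a boolean mask over samples; True = keep the sample."""
    scores = PCA(n_components=n_scores).fit_transform(np.asarray(X))
    d2 = MinCovDet(random_state=0).fit(scores).mahalanobis(scores)
    return d2 < chi2.ppf(alpha, df=n_scores)
```
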
Optimizing Chromatographic Interpretation
Brian Rohrback – Infometrix, Inc.

The heartbeat of the process environment is in the data we collect, but we are not always efficient at translating our data streams into actionable information. The richest sources of process information are spectrometers and chromatographs, and, for many applications, these prove to be the cheapest, most adaptable, and most reliable technologies available. In chromatography there is a rich history, and the role of chemometrics is well defined but rarely put into routine practice. This paper will provide a retrospective of routine processing solutions that have solved problems in pharmaceutical, clinical, food, environmental, chemical, and petroleum applications. It also discusses how to use techniques borrowed from other fields to provide more consistent and objective GC results, automate the translation of raw traces into real-time information streams, and create databases that can be used across plant sites or even across industries.
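
As a hedged example of the "raw traces into real-time information streams" step, here is a minimal peak-detection sketch assuming scipy. The height and spacing thresholds are arbitrary placeholders; a production pipeline would add baseline correction and retention-time alignment across runs.

```python
# Minimal sketch of turning a raw chromatographic trace into a peak
# table, assuming scipy; thresholds are arbitrary placeholders. Real
# pipelines add baseline correction and retention-time alignment.
import numpy as np
from scipy.signal import find_peaks
from scipy.integrate import trapezoid

def peak_table(trace, min_height=0.01, min_distance=10):
    """Return (apex index, height, area) for each detected peak."""
    trace = np.asarray(trace, dtype=float)
    apex, props = find_peaks(trace, height=min_height,
                             distance=min_distance, width=1)
    left = props["left_ips"].astype(int)             # peak start (approx.)
    right = np.ceil(props["right_ips"]).astype(int)  # peak end (approx.)
    areas = [trapezoid(trace[l:r + 1]) for l, r in zip(left, right)]
    return list(zip(apex.tolist(), props["peak_heights"].tolist(), areas))
```
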
Rethinking Calibration for Process Spectrometers

Title:
Rethinking Calibration for Process Spectrometers

Authors:
Will Warkentin, Chevron
Brian Rohrback, Infometrix

Abstract:
Optical spectroscopy is a great source of process chemistry knowledge. It has the advantages of speed, sensitivity, and simple safety requirements. As one of the very few analyzer technologies that can measure chemistry, it has become a workhorse in the hydrocarbon processing industry. What if we could put a spectroscopy system in place and have it handle the application and communicate results as soon as it is turned on? Then, if predictions do not match legacy standards, the system dials itself in or calls for help. And we are not constrained on either the hardware or the software front. In this paper, we address the primary bottlenecks of mustering data, automating analyzer calibration, and tracking data and model performance over time.
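
The "dials itself in or calls for help" behavior can be pictured as a simple monitoring rule. The sketch below is an illustration under invented assumptions (the window size and limits are placeholders), not the system described in the paper.

```python
# Assumption-laden sketch of the "dials itself in or calls for help"
# idea: track bias and RMSEP of analyzer predictions against lab
# reference values over a rolling window. Window size and limits are
# invented for illustration; they are not values from the paper.
import numpy as np

def model_health(pred, lab, window=30, bias_limit=0.2, rmsep_limit=0.5):
    """Return 'ok', 'recalibrate', or 'call_for_help'."""
    e = np.asarray(pred[-window:], float) - np.asarray(lab[-window:], float)
    bias, rmsep = e.mean(), float(np.sqrt((e ** 2).mean()))
    if rmsep > 2 * rmsep_limit:
        return "call_for_help"   # drift beyond an automated fix
    if abs(bias) > bias_limit or rmsep > rmsep_limit:
        return "recalibrate"     # trigger an automated model update
    return "ok"
```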

Keywords:
Best Practices, Calibration, Cloud Computing, Database, Optical Spectroscopy, PLS, Process Control