ISA 2020 – Rethinking Calibration for Process Spectrometers II

The Long Beach Convention Center
Long Beach, CA
1:30pm, April 27th

 

Brian Rohrback
Infometrix, Inc.
Will Warkentin
Chevron Richmond Refinery

 

KEYWORDS
Best Practices, Calibration, Cloud Computing, Database, Gasoline Blending, Optical Spectroscopy, PLS, Process Control

ABSTRACT
Optical spectroscopy is a favored technology to measure chemistry and is ubiquitous in the hydrocarbon processing industry. In a previous paper, we focused on a generic, machine-learning approach that addressed the primary bottlenecks of mustering data, automating analyzer calibration, and tracking data and model performance over time. The gain in efficiency has been considerable, and the fact that the approach does not disturb any of the legacy systems (i.e., no changes or alterations to any analyzer or software in place) made deployment simple.

We also standardized a procedure for performing calibrations that adheres to best practices, archives all data and models, provides easy access, and delivers the models in any format. What remains is to assess the speed of processing and the quality of the models. To that end, a group of calibration experts was tasked with model optimization, restricting the work to selecting the proper samples to include in the computation and setting the number of factors in PLS. The amount of time spent and the quality of the models were then compared. The automated system performed the work in minutes rather than hours, and the quality of its predictions at least matched that of the best experts and was significantly better than that of the average expert. The conclusion is that there is a large amount of giveaway that can be recovered through automation of this process and the consistency it brings to PLS model construction.
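As a rough illustration of one part of this workflow, the sketch below selects a PLS rank by cross-validation; it assumes the spectra `X` and laboratory reference values `y` are already assembled as NumPy arrays and is not the automated system evaluated in the paper.

```python
# Hedged sketch: choose a PLS factor count by minimizing cross-validated RMSE.
# X: (n_samples x n_wavelengths) spectra; y: laboratory reference values.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def select_pls_rank(X, y, max_factors=15, cv=10):
    """Return (rank, RMSECV) for the factor count with the lowest RMSECV."""
    best_rank, best_rmsecv = 1, np.inf
    for k in range(1, min(max_factors, X.shape[1]) + 1):
        y_cv = cross_val_predict(PLSRegression(n_components=k), X, y, cv=cv)
        rmsecv = float(np.sqrt(np.mean((np.ravel(y) - np.ravel(y_cv)) ** 2)))
        if rmsecv < best_rmsecv:
            best_rank, best_rmsecv = k, rmsecv
    return best_rank, best_rmsecv
```

In practice, a more conservative stopping rule (for example, the first rank after which RMSECV stops improving meaningfully) is often preferred to the global minimum, precisely to avoid the overfitting discussed below.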

INTRODUCTION
There is a lot of mundane work tied to the assembly of spectra and laboratory reference values to enable quality calibration work. There is also insufficient guidance when it comes to the model construction task. How much time should be spent on this task? How do we best assess whether a spectrum-reference pair is an outlier? How many cycles of regression-sample elimination make sense? Where do we switch over from improving the model by adding PLS factors to overfitting and incorporating destabilizing noise?
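One hedged way to make the outlier and elimination-cycle questions concrete is the bounded residual-screening loop sketched below; the factor count, z-score cutoff, and cycle limit are illustrative placeholders rather than recommended settings.

```python
# Hedged sketch of a bounded regression-sample elimination cycle:
# fit PLS, flag samples with large cross-validated residuals, drop them, repeat.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def eliminate_outliers(X, y, n_factors=5, z_cut=3.0, max_cycles=3, cv=10):
    """Return indices of the samples retained for calibration."""
    keep = np.arange(len(y))
    for _ in range(max_cycles):
        y_cv = np.ravel(cross_val_predict(
            PLSRegression(n_components=n_factors), X[keep], y[keep], cv=cv))
        resid = y[keep] - y_cv
        z = np.abs(resid - resid.mean()) / resid.std(ddof=1)
        inliers = z < z_cut
        if inliers.all():        # no further samples flagged; stop early
            break
        keep = keep[inliers]
    return keep
```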

For more information or the full paper, contact us.

2020 AIChE Spring Meeting and 16th Global Congress on Process Safety

Mar 31, 1:52pm
Hilton Americas and George R. Brown Convention Center, Houston, TX

See abstract below for presentation at the 2020 AIChE Spring Meeting. Join us or contact us for more information.

Harnessing Big Data Approaches and AI in the Chemical Processing Industry
Brian Rohrback – Infometrix

The term Big Data implies a systematic approach to extracting information from multiple, byte-dense data sources. Effective extraction of this information leads to improvements in decision making at all levels of the chemical, petrochemical, and petroleum industries. To accomplish anything in the Big Data space, we need to combine traditional approaches in statistics, database organization, pattern recognition, and chemometrics with some newer concepts tied to better understanding of data mining, neuro-computing, and machine learning. In order for industry to achieve the goals that this form of AI promises, we need to approach the issues with more than just words.

This is a summary of a multi-company, multi-industry, hydrocarbon processing consortium, established seven years ago to re-evaluate how the calibration process for sensors and analyzers could be managed more efficiently. The focus spans optical spectrometers, chromatographs, and process sensors, independently and in combination. The idea is to enable a shift from current practices to approaches that take advantage of the computational power at our fingertips. It was critical to prioritize solutions that are non-disruptive, utilize legacy systems, and lessen the workload rather than layer on additional requirements. The result is that a choice of tools to consume the data and generate actionable, process-specific information is now in hand. The analyzers in place, optical spectrometers in particular, represent the low-hanging fruit.

Last call. Upgrade Pirouette before July 1, 2020

Hello fellow Pirouette users.

If you are still using a legacy version of Pirouette, note that the discounted pricing to upgrade to the current version, Pirouette 4.5, will no longer be available after July 1st; after that date, the cost will be the full retail price of Pirouette v4.5. Older versions (e.g., Pirouette 4.0) were designed and implemented for older Windows environments and have become less compatible with current Windows operating systems. If you are still using an older system that has not been upgraded to Windows 10, products like Pirouette 4.0 may still work; however, we are no longer fixing bugs or implementing enhancements. If you have an older version, we recommend you take advantage of the current upgrade rate before it goes to full price. If you have any questions, or need additional information or a quote, email us at sales@infometrix.com.

Chemometrics-enhanced Classification of Source Rock Samples Using their Bulk Geochemical Data: Southern Persian Gulf Basin

Chemometrics-enhanced Classification of Source Rock Samples Using their Bulk Geochemical Data: Southern Persian Gulf Basin, co-authored by Infometrix's Scott Ramos, has recently been published. See the abstract below and contact us if you have any questions.

Abstract

Chemometric methods can enhance geochemical interpretations, especially when working with large datasets. With this aim, exploratory hierarchical cluster analysis (HCA) and principal component analysis (PCA) methods are used herein to study the bulk pyrolysis parameters of 534 samples from the Persian Gulf basin. These methods are powerful techniques for identifying the patterns of variations in multivariate datasets and reducing their dimensionality. By adopting a “divide-and-conquer” approach, the existing dataset could be separated into sample groupings at family and subfamily levels. The geochemical characteristics of each category were defined based on loadings and scores plots. This procedure greatly assisted the identification of key source rock levels in the stratigraphic column of the study area and highlighted the future research needs for source rock analysis in the Persian Gulf basin.

Keywords: Chemometric Classification, Source Rock Geochemistry, Rock-Eval Pyrolysis Data, HCA, PCA.
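For readers who want a concrete starting point, a minimal sketch of this kind of exploratory PCA/HCA workflow is given below; it assumes the bulk pyrolysis parameters sit in a pandas DataFrame `df` with one row per sample and is an illustration of the general approach, not the published workflow.

```python
# Hedged sketch: autoscale the pyrolysis parameters, compute PCA scores and
# loadings for scores/loadings plots, and cut a Ward dendrogram into families.
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

def explore_groupings(df, n_components=3, n_families=4):
    Z = StandardScaler().fit_transform(df.values)   # autoscale each parameter
    pca = PCA(n_components=n_components).fit(Z)
    scores = pca.transform(Z)                       # for scores plots
    loadings = pca.components_                      # for loadings plots
    tree = linkage(Z, method="ward")                # hierarchical clustering
    families = fcluster(tree, t=n_families, criterion="maxclust")
    return scores, loadings, families
```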

IFPAC 2020 – Autonomous Calibration and Optimizing Chromatographic Interpretation

IFPAC 2020
Feb 23-26, 2020
Bethesda, MD

See abstracts below for papers being presented at the IFPAC 2020 conference. Join us or contact us for more information.


Autonomous Calibration
Brian Rohrback – Infometrix
Randy Pell – Infometrix
Scott Ramos – Infometrix

The use of chemometrics in processing spectroscopic data is far from new; the processing of NIR data in petroleum refineries dates to the early 1980s, and in the food industry well before that. Although computers have improved in performance, speeding up the calibration process, the procedures being followed have not changed significantly since the 1980s. Intriguingly, we have made decisions at the corporate level that work against each other: we are installing more spectrometers while at the same time reducing the staffing for spectrometer calibration and maintenance. A change in approach is mandated. In the spirit of automation, there are tools from both the chemometrics and the general statistics realms that can be applied to simplify the work involved in optimizing a calibration. Robust statistical techniques require some set-up of parameters, but once established for an application, they are often usable in every other instance of that application. The result is a one-pass means of selecting optimal samples for a calibration problem that, in turn, simplifies the assignment of model rank. This approach solves two problems: which samples to include in the calibration and how many factors to retain in the model.
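As a sketch of what a one-pass sample screen of this general kind might look like (not the authors' method), the example below flags calibration candidates by robust Mahalanobis distance in PCA score space; the component count and chi-square cutoff are assumptions chosen for illustration.

```python
# Hedged sketch: robust one-pass screening of calibration samples in X-space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.covariance import MinCovDet
from scipy.stats import chi2

def robust_sample_screen(X, n_components=5, alpha=0.975):
    """Return indices of samples inside a robust chi-square distance cutoff."""
    scores = PCA(n_components=n_components).fit_transform(X)
    d2 = MinCovDet(random_state=0).fit(scores).mahalanobis(scores)
    cutoff = chi2.ppf(alpha, df=n_components)   # squared-distance threshold
    return np.where(d2 <= cutoff)[0]
```

Once the parameters are fixed for an application, the same screen can be reapplied unchanged to new calibration sets, which is the one-pass character referred to above.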

 

Optimizing Chromatographic Interpretation
Brian Rohrback – Infometrix, Inc.

The heartbeat of the process environment is in the data we collect, but we are not always efficient in translating our data streams into actionable information. The richest sources of process information are spectrometers and chromatographs and, for many applications, these prove to be the cheapest, most adaptable, and most reliable technologies available. In chromatography, there is a rich history and the chemometrics role is well defined but rarely placed into routine practice. This paper will provide a retrospective of routine processing solutions that have solved problems in pharmaceutical, clinical, food, environmental, chemical, and petroleum applications. It also discusses how to use techniques borrowed from other fields to provide more consistent and objective GC results, automate the translation of raw traces into real-time information streams, and create databases that can be used across plant sites or even across industries.
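As one hedged illustration of turning a raw trace into an information stream, the snippet below detects peaks and integrates their areas with SciPy; the `trace` signal, sampling rate, and thresholds are assumptions, and production GC processing (baseline correction, retention-time alignment, calibration) involves considerably more.

```python
# Hedged sketch: extract (retention time, area) pairs from a detector trace.
import numpy as np
from scipy.signal import find_peaks, peak_widths

def summarize_trace(trace, rate_hz, min_height=0.02, min_prominence=0.01):
    """Return a list of (retention time in seconds, peak area) tuples."""
    trace = np.asarray(trace, dtype=float)
    peaks, _ = find_peaks(trace, height=min_height, prominence=min_prominence)
    _, _, left, right = peak_widths(trace, peaks, rel_height=0.99)
    areas = [trace[int(l):int(r) + 1].sum() / rate_hz   # rectangular integration
             for l, r in zip(left, right)]
    times = peaks / rate_hz                              # sample index -> seconds
    return list(zip(times.tolist(), areas))
```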