Efficient Calibration Process and Big Data

View the latest talks on Big Data and calibration process efficiency.


Harnessing Big Data – AIChE 2020

Big Data implies a systematic approach to extracting information from multiple, byte-dense data sources. Effective extraction of this information leads to improvements in decision making at all levels of industry. Here, we combine traditional approaches in statistics, database organization, pattern recognition, and chemometrics with some newer concepts tied to data mining, neurocomputing, and machine learning. The cost is low and the benefits are high.

The Multivariate Process Paradigm – SciX 2020

This is a summary of a chemical processing consortium, established eight years ago to re-evaluate how the calibration process for sensors and analyzers could be managed more efficiently. The focus is on optical spectrometers to enable a shift from current practices to approaches that take advantage of the computational power at our fingertips. It was critical to prioritize solutions that are non-disruptive, utilize legacy systems, and lessen the workload rather than layer on additional requirements. The result is a choice of tools available to consume the data and generate actionable, process-specific information.

Deconvoluting Mixed Petroleum and the Effect of Oil and Gas-Condensate Mixes on Identifying Petroleum Systems – AAPG ACE 2020

Watch the recent virtual talk by Ken Peters at AAPG ACE on using Pirouette’s unmixing algorithm to evaluate oil production.


Two points made in the talk are:

  • Ratios cannot be used as the input variables; concentrations must be used instead.
  • The alternating least squares algorithm performs well, untangling mixed sources accurately (a generic sketch follows below).
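
For readers who want to experiment with the idea, here is a minimal alternating least squares unmixing sketch in NumPy. It is a generic illustration of the technique, not Pirouette’s implementation; the function name, the iteration count, and the non-negativity-by-clipping shortcut are all assumptions.

```python
import numpy as np

def als_unmix(D, n_sources, n_iter=200, seed=0):
    """Alternating least squares unmixing, reduced to its core loop.

    D holds one sample per row; per the talk's first point, the
    columns must be concentrations, not ratios, because linear
    mixing is additive in concentration space.
    """
    rng = np.random.default_rng(seed)
    # Random non-negative starting guess for the source profiles
    S = rng.random((n_sources, D.shape[1]))
    for _ in range(n_iter):
        # Fix S, solve D ~= C @ S for the contributions C
        C = np.clip(D @ np.linalg.pinv(S), 0.0, None)
        # Fix C, solve for the source profiles S
        S = np.clip(np.linalg.pinv(C) @ D, 0.0, None)
    return C, S  # C: per-sample source contributions; S: end-member profiles
```

In line with the first bullet above, D should hold raw concentrations; ratios are not additive under mixing, which breaks the linear model that ALS relies on.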

2020 AIChE Spring Meeting and 16th Global Congress on Process Safety

Aug 19, 2020
Virtual Meeting

See the abstract below for the presentation at the 2020 AIChE Spring Meeting. Join us, or contact us for more information.


Harnessing Big Data Approaches and AI in the Chemical Processing Industry
Brian Rohrback – Infometrix

The term Big Data implies a systematic approach to extracting information from multiple, byte-dense data sources. Effective extraction of this information leads to improvements in decision making at all levels of the chemical, petrochemical, and petroleum industries. To accomplish anything in the Big Data space, we need to combine traditional approaches in statistics, database organization, pattern recognition, and chemometrics with newer concepts from data mining, neurocomputing, and machine learning. For industry to achieve the goals that this form of AI promises, we need to approach the issues with more than just words.

This is a summary of a multi-company, multi-industry, hydrocarbon processing consortium, established seven years ago to re-evaluate how the calibration process for sensors and analyzers could be managed more efficiently. The focus spans optical spectrometers, chromatographs, and process sensors, independently and in combination. The idea is to enable a shift from current practices to approaches that take advantage of the computational power at our fingertips. It was critical to prioritize solutions that are non-disruptive, utilize legacy systems, and lessen the workload rather than layer on additional requirements. The result is a choice of tools, now in hand, to consume the data and generate actionable, process-specific information. The analyzers in place, optical spectrometers in particular, represent the low-hanging fruit.

ISA 2020 – Rethinking Calibration for Process Spectrometers II

The Long Beach Convention Center
Long Beach, CA
1:30pm, April 27th


Brian Rohrback
Infometrix, Inc.
Will Warkentin
Chevron Richmond Refinery


KEYWORDS
Best Practices, Calibration, Cloud Computing, Database, Gasoline Blending, Optical Spectroscopy, PLS, Process Control

ABSTRACT
Optical spectroscopy is a favored technology for measuring chemistry and is ubiquitous in the hydrocarbon processing industry. In a previous paper, we focused on a generic, machine-learning approach that addressed the primary bottlenecks of mustering data, automating analyzer calibration, and tracking data and model performance over time. The gain in efficiency has been considerable, and the fact that the approach does not disturb any legacy systems (i.e., no changes or alterations to any analyzer or software in place) made deployment simple.
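
The abstract does not detail the monitoring mechanics, but a rolling error statistic over prediction residuals is one simple way to picture what “tracking model performance over time” can look like. The sketch below is an illustration only; the window size and the idea of using it as a recalibration cue are assumptions, not values from the paper.

```python
import numpy as np

def rolling_rmse(residuals, window=30):
    """Rolling RMSE over time-ordered prediction residuals.

    residuals: lab reference value minus model prediction, in time
    order. A sustained rise in the rolling RMSE is a simple cue to
    revisit the calibration; the window size here is an assumption.
    """
    r = np.asarray(residuals, dtype=float)
    out = np.full(r.size, np.nan)  # undefined until a full window exists
    for i in range(window - 1, r.size):
        out[i] = np.sqrt(np.mean(r[i - window + 1 : i + 1] ** 2))
    return out
```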

We also standardized a procedure for calibrations that adheres to best practices, archives all data and models, provides ease of access, and delivers the models in any format. What remained was to assess the speed of processing and the quality of the models. To that end, a series of calibration experts was tasked with model optimization, restricting the work to selecting the proper samples to include in the computation and setting the number of factors in PLS. The amount of time and the quality of the models were then compared. The automated system performed the work in minutes rather than hours, and the quality of its predictions at least matched the best experts and significantly exceeded the average expert. The conclusion is that a large amount of recoverable giveaway can be avoided through the automation of this process and the consistency it brings to PLS model construction.
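
The paper reports the automated system’s results rather than its internals. As a generic illustration of one of the two optimization knobs it mentions, setting the number of PLS factors, here is a cross-validation sketch in Python with scikit-learn; the function name, factor cap, and fold count are assumptions, not the system’s actual settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def pick_n_factors(X, y, max_factors=15, cv=10):
    """Pick the PLS factor count with the lowest cross-validated RMSE.

    X: spectra (samples x wavelengths); y: laboratory reference values.
    Cross-validation penalizes factors that only fit noise, which is
    the overfitting concern the introduction below raises.
    """
    best_n, best_rmse = 1, np.inf
    for n in range(1, max_factors + 1):
        scores = cross_val_score(
            PLSRegression(n_components=n), X, y,
            cv=cv, scoring="neg_root_mean_squared_error")
        rmse = -scores.mean()
        if rmse < best_rmse:
            best_n, best_rmse = n, rmse
    return best_n, best_rmse
```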

INTRODUCTION
There is a lot of mundane work tied to the assembly of spectra and laboratory reference values needed to enable quality calibration work. There is also insufficient guidance when it comes to the model construction task. How much time should be spent on this task? How do we best assess whether a spectrum-reference pair is an outlier? How many cycles of regression and sample elimination make sense? Where do we switch over from improving the model by adding PLS factors to overfitting and incorporating destabilizing noise?
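
The full paper takes up these questions in detail. As a hedged illustration of the outlier question only, a common screen combines leverage in the PLS score space with calibration residuals; the thresholds below are conventional rules of thumb, not values from the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def flag_suspect_pairs(X, y, n_factors=5, lev_mult=3.0, z_cut=3.0):
    """Flag spectrum-reference pairs worth a second look.

    A pair is flagged when its leverage in the PLS score space is
    well above average (an unusual spectrum) or its calibration
    residual is an extreme z-score (a questionable lab value).
    """
    pls = PLSRegression(n_components=n_factors).fit(X, y)
    T = pls.x_scores_                       # sample scores in factor space
    H = T @ np.linalg.pinv(T.T @ T) @ T.T   # hat matrix from the scores
    leverage = np.diag(H)
    resid = np.asarray(y).ravel() - pls.predict(X).ravel()
    z = (resid - resid.mean()) / resid.std(ddof=1)
    flags = (leverage > lev_mult * leverage.mean()) | (np.abs(z) > z_cut)
    return np.where(flags)[0]  # indices of suspect samples
```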

For more information or the full paper, contact us.

Last call. Upgrade Pirouette before July 1, 2020

Hello fellow Pirouette users.

If you are still using a legacy version of Pirouette, the upgrade price to the current version, Pirouette 4.5, will no longer be available after July 1st; after that date, the cost will be the full retail price of Pirouette v4.5. Older versions (e.g., Pirouette 4.0) were designed and implemented for older Windows environments and have become less compatible with current Windows operating systems. If you are still using an older system that has not been upgraded to Windows 10, products like Pirouette 4.0 may still work; however, we are no longer fixing bugs or implementing enhancements for them. If you have an older version, we recommend you take advantage of the current upgrade rate before it goes to full price. If you have any questions or need additional information or a quote, email us at sales@infometrix.com.