The centerpiece of the Ai-Metrix system is a SQL Server database. This part of the system acts as a conductor: it manages the addition of spectra and metadata, initiates the model creation process, processes new data to assess model quality, and drives reporting, either through interactive web-based dashboards or as standard electronic reports.
Intelligent Data Assembly
The first step in the routine handling of data is to match spectra and reference values, if this is not done external to the Ai-Metrix system. During initial setup of a spectrometer’s database, we evaluate the output of reference values and create a custom file reader for that instrument’s metadata. To simplify the job of the end user, this file does not need to be cleaned up manually; the SQL read process will match values hidden in a data dump to the appropriate spectra.
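The matching step can be sketched as follows. This is an illustrative pairing routine, not the actual Ai-Metrix implementation; the field names (`sample_id`, `hour`) and the time-window rule are assumptions made for the example.

```python
# Hypothetical sketch of the spectrum/reference matching step, assuming each
# spectrum and each reference value carries a shared sample ID and a timestamp.
# Field names are illustrative, not the actual Ai-Metrix schema.

def match_records(spectra, references, max_gap_hours=24):
    """Pair each spectrum with a reference value that shares its sample ID
    and falls within max_gap_hours of the acquisition time. Spectra with no
    partner are returned separately, awaiting a match."""
    refs_by_id = {}
    for ref in references:
        refs_by_id.setdefault(ref["sample_id"], []).append(ref)

    matched, unmatched_spectra = [], []
    for spec in spectra:
        candidates = refs_by_id.get(spec["sample_id"], [])
        close = [r for r in candidates
                 if abs(r["hour"] - spec["hour"]) <= max_gap_hours]
        if close:
            matched.append((spec, close[0]))
        else:
            unmatched_spectra.append(spec)
    return matched, unmatched_spectra
```

In practice this logic would live in the SQL read process itself; the sketch only shows the pairing rule.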
Because SQL Server watches the shared folder on your desktop, the data load completes in a few minutes and pushes a visualization onto the Tableau web host. The input data will fall into one of three categories:
1) Matched spectra and reference values available for calibration;
2) Spectra or reference values awaiting a partner; or
3) Matched spectra and reference values reserved for future validation.
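The three-way routing above can be expressed as a simple rule. In this sketch, the `hold_for_validation` flag is an assumption about how samples might be reserved, not an actual Ai-Metrix field name.

```python
# Illustrative routing rule for the three input-data categories listed above.
CALIBRATION = "matched, available for calibration"
PENDING = "awaiting a partner"
VALIDATION = "matched, reserved for future validation"

def categorize(has_spectrum, has_reference, hold_for_validation=False):
    """Assign a record to one of the three Ai-Metrix input categories."""
    if has_spectrum and has_reference:
        return VALIDATION if hold_for_validation else CALIBRATION
    return PENDING
```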
AI Calibration Process
Infometrix will provide you with an Excel workbook created to correspond to the model requests you will make. This spreadsheet gives you control of the constraints placed on the model, tied to date range and data demographics. Once set up, you only need to press a button to request a series of models; an INI-format file is created and sent to the SQL Server for processing. There is no limit on how many models you can request; they are processed in the order received.
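A request file of this kind might look like the following. The section and key names here are illustrative assumptions about the layout, not the actual Ai-Metrix request schema.

```ini
; Hypothetical model-request file generated from the Excel workbook.
; Keys shown reflect the constraints described above (date range and
; data demographics); actual names may differ.
[request]
property = Density
date_start = 2023-01-01
date_end = 2023-06-30

[constraints]
sample_subset = all
max_pls_factors = 10
```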
When the first data is received, Infometrix scientists review the historical approach to calibration and process a starter model using Infometrix best practices. The quality of this model can then be compared to any historical model using cross validation.
Our modeling process is a five-step procedure designed to identify and eliminate outliers and to set the appropriate number of PLS factors for the model. The flow begins with a three-step robust statistical sample evaluation to establish an inlier set of samples optimized for modeling. This data trim is followed by a two-step PLS run to optimize model complexity. The procedure is repeated for every reference value being modeled, and the processing details are summarized in a report returned to your desktop.
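The two stages above can be sketched at a high level. The trim shown uses a median-absolute-deviation cut as a stand-in for the actual three-step robust evaluation, and the factor pick uses a common parsimony heuristic on cross-validation errors; neither is the exact Infometrix procedure.

```python
# High-level sketch of the modeling flow: a robust trim to an inlier set,
# then a choice of PLS factor count from cross-validation errors.
import statistics

def robust_trim(values, cutoff=3.5):
    """Keep samples whose MAD-based robust z-score is within the cutoff."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-12
    return [v for v in values if abs(0.6745 * (v - med) / mad) <= cutoff]

def pick_factors(cv_errors):
    """Choose the smallest factor count whose cross-validation error is
    within 5% of the global minimum (a parsimony heuristic)."""
    best = min(cv_errors.values())
    return min(k for k, e in cv_errors.items() if e <= 1.05 * best)
```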
With the input of new data, the SQL system will calculate a standard error of prediction based on the most recent model built, which can be used as a guide to determine whether a new model is needed. This ensures that a model is created only when an improvement over the old one is expected. The succession of created models drives a visualization tracking performance over time.
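As a reference point, the standard error of prediction (SEP) in its conventional chemometric form is the RMS of the bias-corrected residuals between model predictions and new reference values; the sketch below assumes that definition, since the document does not specify the exact formula used.

```python
# Conventional SEP: RMS of bias-corrected residuals between predictions
# from the most recent model and newly received reference values.
import math

def sep(predicted, reference):
    """Standard error of prediction over paired prediction/reference lists."""
    residuals = [p - r for p, r in zip(predicted, reference)]
    bias = sum(residuals) / len(residuals)
    n = len(residuals)
    return math.sqrt(sum((e - bias) ** 2 for e in residuals) / (n - 1))
```

A rising SEP on incoming data would be the trigger suggesting a new model build.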
A user needs only a couple of minutes to request a new set of models. Processing time varies with the number of spectra in the database and the number of properties requested, ranging from a few minutes to several hours.
In its entirety, the Ai-Metrix system is designed to turn the average modeling task from the hours-to-days world into the seconds-to-minutes realm, at the same model quality level or higher.