A framework for evaluating the maturity level of machine learning explanations

Published

August 6, 2024

Talk given at JSM 2024

Scientific computational models are designed and implemented to represent known physical relationships of engineered systems. In situations where mechanistic equations are unknown or the computational burden is prohibitive, machine learning (ML) techniques are increasingly employed in place of, as complements to, or as surrogates for classic computational models to uncover these relationships and manage computational cost. We refer to this fusion of traditional mathematical models with machine learning models as scientific machine learning (SciML). When SciML is used in high-consequence applications, the ability to interpret the model is essential for assessment and understanding. However, many ML models are not inherently interpretable. Explainability techniques are intended to provide insight into “black box” ML models, but, as with the models themselves, it is imperative that explanations used in high-consequence applications are accurate and meaningful. For this reason, we propose that ML explanations used to aid SciML deployed in support of high-consequence decisions be assessed via a framework of maturity level requirements. We draw inspiration from the Predictive Capability Maturity Model (PCMM) currently used at Sandia National Labs to assess the credibility of scientific computational models. The PCMM was developed specifically to assess scientific computational models for high-consequence applications and thus provides a solid framework on which to build SciML maturity levels. In this talk, we will review the PCMM and discuss our work toward developing maturity level requirements for ML explanations. While our efforts are focused on SciML, we believe such evaluation requirements are relevant to ML more broadly, and we hope to provoke further discussion on the assessment of ML explainability.
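
As a concrete illustration of the kind of post-hoc explanation the abstract refers to, the sketch below applies SHAP attributions to a toy surrogate model of a simple physical relationship. The projectile example, the random-forest surrogate, and the use of the `shap` library are illustrative assumptions for this page, not methods or code from the talk.

```python
# Minimal, hypothetical sketch of explaining a "black box" SciML surrogate.
# The physics, data, and model choices here are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy "physics": projectile range as a function of launch speed and angle.
X = rng.uniform([5.0, 0.1], [50.0, 1.4], size=(500, 2))  # speed (m/s), angle (rad)
y = X[:, 0] ** 2 * np.sin(2 * X[:, 1]) / 9.81             # range (m)

# A surrogate model standing in for the mechanistic computational model.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Post-hoc explanation: per-feature SHAP attributions for individual predictions.
explainer = shap.Explainer(surrogate, X)
explanation = explainer(X[:10])
print(explanation.values.shape)  # (10, 2): one attribution per input feature
```

Whether attributions like these are accurate and meaningful enough to support high-consequence decisions is exactly the kind of question the proposed maturity level requirements are meant to address.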

SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525. SAND2024-01386C.

Slides GitLab