
Geneia Conversations: Five Reasons AI Interpretability Is Important

August 4, 2021
Chief data scientist Fred Rahmanian joins the podcast to discuss interpretability.

In the newest episode of Geneia Conversations: Redefining Healthcare, chief data scientist Fred Rahmanian discusses the five reasons model interpretability is important, the difference between interpretability and explainability, and how we address interpretability for models created by the Geneia Data Intelligence Lab.

Model Explainability and Interpretability

Model interpretability – our ability to determine the cause and effect of a model’s predictions – and model explainability – our ability to understand which features in the model are important to its performance – are critical, especially in healthcare. The extent to which clinicians and care managers have confidence in a model’s output, and can explain why, for example, Geneia’s opioid abuse and overdose model* predicted a patient is at high risk for opioid misuse, is critical to the continued adoption of AI in healthcare.

As Fred explains in our new podcast, model interpretability helps to:

  1. Facilitate debugging
  2. Detect potential bias
  3. Understand recourse for those who are adversely impacted by a model’s prediction
  4. Assess when to trust a model’s prediction
  5. Vet the model to determine if it’s ready to deploy

Fred also shares the techniques the Geneia Data Intelligence Lab uses to address interpretability in the creation of our models. For starters, our data scientists build inherently interpretable models with techniques such as regression and fewer variables. For example, the opioid predictive model uses 22 variables, whereas others use as many as 200 data points to achieve comparable predictive accuracy. When that’s not possible, we use post hoc techniques such as Shapley values and LIME (Local Interpretable Model-agnostic Explanations).
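To make the Shapley value idea concrete, here is a minimal, illustrative sketch – not Geneia’s implementation – that computes exact Shapley values for a single prediction by enumerating feature coalitions. The toy linear "risk" model and its feature names are hypothetical; absent features are substituted with a baseline value. In practice, libraries approximate this because exact enumeration grows exponentially with the number of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one prediction.

    For each feature i, average the change in the model's output from
    adding i to every possible coalition S of the other features,
    weighted by |S|! * (n - |S| - 1)! / n!. Features outside the
    coalition are replaced by their baseline value.
    """
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in features]
                without_i = [x[j] if j in S else baseline[j] for j in features]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Toy linear score over three hypothetical features
# (e.g. claim count, prescription days, prior diagnoses) -- illustrative only.
weights = [0.5, 0.2, 1.0]
predict = lambda v: sum(w * f for w, f in zip(weights, v))

patient = [4.0, 30.0, 1.0]
baseline = [1.0, 10.0, 0.0]
phi = shapley_values(predict, patient, baseline)
```

A useful property to check: the values sum to `predict(patient) - predict(baseline)`, so each feature's contribution accounts for its share of the prediction's deviation from the baseline, which is what makes the attribution easy to explain to a clinician.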

To learn more about model explainability and interpretability, listen now.

*Patent pending for the Opioid Abuse and/or Overdose Model. Predictive models, by their very nature, contain certain assumptions. This is not an attempt to practice medicine or provide specific medical advice, and it should not be used to make a diagnosis or to replace or overrule a qualified healthcare provider’s judgment. Certain data used in these studies were supplied by International Business Machines Corporation. Any analysis, interpretation, or conclusion based on these data is solely that of the authors and not International Business Machines Corporation.