Imagine your mother – let’s call her Abigail – is having knee replacement surgery next month. Most knee and hip replacement patients experience some post-surgical pain, which is why orthopedic surgeons are the third-highest prescribers of opioid medication.
As part of her pre-surgical prep, Abigail’s orthopedic surgeon uses Geneia’s opioid model to determine her risk of opioid misuse. The model identifies her as a patient at high risk of opioid misuse.
When the surgeon shares his recommendation for alternative pain remedies, your mother wants to know why she will not receive an opioid prescription. He explains that people can become addicted to opioids in as few as five days, and because she is at higher risk for misuse, he wants to avoid opioids altogether. The conversation prompts him to dig deeper and understand exactly why the model flagged her as high risk for opioid abuse or overdose.
Model interpretability – the degree to which a human can understand the cause of a decision – allows the surgeon to explain why the opioid model predicted she was at high risk for opioid misuse. On one hand, your mother is 66 years old and has no history of mental health disorders, which would mark her as low risk by most screening tools. However, she has had five opioid fills in the past six months, a diagnosis of back pain, and two benzodiazepine prescriptions, all of which contribute to increased risk.
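To make the idea concrete, here is a minimal sketch of this kind of per-feature risk attribution. It assumes a simple linear (logistic-style) scoring model; the feature names, weights, and bias are purely illustrative and are not Geneia’s actual opioid model, which the article does not describe in detail.

```python
import math

# Hypothetical weights: positive values push predicted risk up,
# negative values push it down. All numbers here are made up for
# illustration only.
WEIGHTS = {
    "opioid_fills_6mo": 0.40,        # per fill in the past six months
    "back_pain_dx": 0.80,            # back pain diagnosis present
    "benzodiazepine_rx": 0.60,       # per benzodiazepine prescription
    "mental_health_history": 0.90,   # history of mental health disorder
    "age_over_65": -0.30,            # older age lowers risk in this sketch
}
BIAS = -3.0

def risk_breakdown(patient):
    """Return each feature's contribution to the score and the
    overall predicted probability of misuse."""
    contributions = {f: WEIGHTS[f] * patient.get(f, 0) for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-logit))
    return contributions, probability

# A patient like Abigail: five opioid fills, back pain, two
# benzodiazepine prescriptions, no mental health history, over 65.
abigail = {
    "opioid_fills_6mo": 5,
    "back_pain_dx": 1,
    "benzodiazepine_rx": 2,
    "mental_health_history": 0,
    "age_over_65": 1,
}

contribs, prob = risk_breakdown(abigail)
for feature, value in sorted(contribs.items(), key=lambda kv: -kv[1]):
    print(f"{feature:24s} {value:+.2f}")
print(f"predicted misuse probability: {prob:.2f}")
```

Because each feature’s contribution is an explicit term in the score, a clinician can see that the opioid fills, back pain, and benzodiazepines drive the prediction up while age pulls it down – the same kind of explanation the surgeon gives Abigail. Real interpretability tooling (for example, per-prediction feature attributions for more complex models) generalizes this idea.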
The bottom line: physicians and their patients ‘won’t trust decisions made by AI and machine learning if they don’t have at least a general understanding of how they were made’ – and that is even more true when it comes to people’s health. Interpretability is therefore critical to the continued adoption of AI in healthcare.
In the newest edition of Geneia Conversations: Redefining Healthcare, data scientist Andrew Fairless discusses model interpretability, why it’s important, and how we address interpretability for models created by the Geneia Data Intelligence Lab.