Getting Started with AI: A Q&A with Geneia's CTO

March 28, 2018
Fred Rahmanian, Chief Technology Officer



So what is AI? Really?

“Artificial intelligence is getting computers to do things that traditionally require human intelligence, like reasoning, problem solving, common sense knowledge, learning, vision, speech and language understanding, planning, decision making and so on.”

Pedro Domingos, Professor, University of Washington
Author or Co-author of more than 200 technical publications in machine learning, data mining and more

The field of artificial intelligence (AI) is about 50 years old. For the first three decades, AI researchers focused mostly on ‘knowledge engineering’. For instance, if I wanted an AI system to diagnose appendicitis, I had to interview doctors and program the doctors’ knowledge of diagnosing appendicitis into the computer in the form of rules. As one can imagine, this approach was not scalable. In fact, venture capital funding for AI almost completely disappeared in the 80s and 90s.
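To make the contrast with what follows concrete, here is a toy sketch of that rule-based, ‘knowledge engineering’ style. The rule, its inputs and its thresholds are invented for illustration only; they are not taken from clinical guidance or from any real expert system.

```python
# Toy illustration of the old "knowledge engineering" approach:
# the diagnostic rule is written by hand rather than learned from data.
# All thresholds and feature names are hypothetical.
def diagnose_appendicitis(temp_f, right_lower_quadrant_pain, white_blood_cell_count):
    # Hand-coded rule elicited from (hypothetical) clinician interviews.
    if right_lower_quadrant_pain and temp_f > 100.4 and white_blood_cell_count > 11.0:
        return "suspect appendicitis"
    return "appendicitis unlikely"

print(diagnose_appendicitis(101.2, True, 13.5))   # -> suspect appendicitis
print(diagnose_appendicitis(98.6, False, 6.0))    # -> appendicitis unlikely
```

Every new symptom, exception or edge case meant another interview and another hand-written rule, which is exactly why the approach did not scale.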

It seems like we hear about AI almost daily now. So what changed?

Researchers found a new way of achieving AI called machine learning. Machine learning is a subfield of AI. Instead of trying to program the computers to do things, the computers program themselves by learning from data. 

So in the case of the medical diagnosis example, we give a machine learning algorithm a large number of patient records, and the computer learns medical knowledge from the data. It can then make medical diagnoses, sometimes even better than a human.
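As a minimal sketch of this idea, the snippet below trains a classifier on a handful of made-up patient records using scikit-learn (my choice for illustration; the post does not prescribe a library). The feature names, values and labels are entirely synthetic.

```python
# Minimal sketch of "learning a diagnosis from data" with scikit-learn.
# The feature names and the records are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a (hypothetical) patient: [age, white_blood_cell_count, pain_score]
X = np.array([
    [23, 14.2, 8],
    [54,  6.1, 2],
    [31, 15.8, 9],
    [45,  7.0, 3],
    [19, 13.5, 7],
    [62,  5.5, 1],
])
# 1 = appendicitis, 0 = not appendicitis (the "correct answers" the model learns from)
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# The trained model can now score a new, unseen patient record.
new_patient = np.array([[27, 14.9, 8]])
print(model.predict(new_patient))        # predicted diagnosis (0 or 1)
print(model.predict_proba(new_patient))  # probability of each diagnosis
```

No diagnostic rules were programmed by hand; the model inferred the relationship between the features and the label from the examples it was given.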

So why has machine learning become so popular in the last few years?

As one would expect, machine learning requires data from which to learn. More data typically results in better AI. We have seen an explosion of data in many domains, including healthcare. Our ability to cost-effectively capture and process large amounts of data has also improved greatly. The success of machine learning comes from the combination of Big Data, improvements in computing power and better algorithms.

Is it really that simple? More data coupled with better computers means better AI?

Well, almost. As it turns out, training machines to learn new concepts faces some of the same challenges as human learning.

The bias-variance tradeoff is one of the biggest challenges in machine learning. When training a machine, it is quite possible for it to start memorizing the right answers rather than learning the underlying concept. This is often referred to as variance. For sure, it is okay for third graders to memorize the multiplication table, but memorization does not help when solving a multivariate calculus problem. The technical definition of variance is the error from sensitivity to small fluctuations in the training set. In other words, will our AI perform worse when presented with data that is different from what was used to train it? You may also hear this referred to as over-fitting.

Bias, on the other hand, is when the AI hasn’t learned enough. This usually happens when it did not have enough data from which to learn. It is like studying for a biology final but reading only two of 10 chapters. For the sake of completeness, bias is the error from erroneous assumptions in the learning algorithm. This is often referred to as under-fitting.

A good data scientist spends much time and energy finding the right balance between bias and variance in the machine learning algorithm.
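One common way to look for that balance is to compare error on the training data with error on data held back for validation. The sketch below, using scikit-learn and synthetic data, only illustrates the pattern: high error on both sets suggests bias (under-fitting), while low training error paired with much higher validation error suggests variance (over-fitting).

```python
# Rough illustration of under- vs over-fitting, not a production workflow.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 1, 60)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=60)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

for degree in (1, 4, 15):  # too simple, about right, too flexible
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    val_err = mean_squared_error(y_val, model.predict(X_val))
    # High error on both sets -> bias (under-fitting).
    # Low training error but much higher validation error -> variance (over-fitting).
    print(f"degree={degree:2d}  train MSE={train_err:.3f}  validation MSE={val_err:.3f}")
```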

So what are some of these terms I hear about such as deep learning, supervised learning, semi-supervised learning and more?

Deep learning is one of the methods used in machine learning. Its roots go back a long time, but it has grown so much recently that it has become almost a subfield of AI in its own right.


Deep learning is a form of neural network that has achieved much success in the fields of imaging and vision. As it relates to healthcare, it has been used successfully for reading and interpreting medical images. As it relates to population health and care management, the adoption of deep learning has lagged. My guess is this is due to the ‘black box’ nature of deep learning. Interpretability of deep learning results is still challenging. Nevertheless, we are starting to see some research in the areas of deep learning for population health.[i]
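For readers who want to see what a deep learning model looks like in code, here is a toy feed-forward network in Keras, one of the libraries mentioned in the sandbox section later in this post. The data, architecture and outcome are entirely synthetic; this is not a clinical model of any kind.

```python
# Toy feed-forward neural network in Keras on synthetic tabular data.
import numpy as np
from tensorflow import keras

# 200 synthetic "patients", each described by 20 numeric features,
# with a made-up binary outcome to predict.
rng = np.random.RandomState(42)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # probability of the outcome
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Predict the outcome probability for one new synthetic record.
print(model.predict(rng.normal(size=(1, 20))))
```

The ‘black box’ concern mentioned above comes from the stacked layers of learned weights: unlike a hand-written rule, there is no single line of logic to point to when explaining an individual prediction.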

Supervised, semi-supervised, unsupervised and reinforcement learning are different types of machine learning algorithms:

  • Supervised algorithms learn from a training data set, similar to someone supervising the learning process. We use data with correct answers to teach the algorithm to make predictions.
  • Unsupervised algorithms, unlike supervised algorithms, do not have access to the correct answers. They are left to learn from the data on their own, usually by finding similar groups in the data or by discovering rules that explain it.
  • As one would guess, semi-supervised learning is something in between supervised and unsupervised learning. It addresses the expense of ‘labeling’ data sets with the correct answers so they can be used to train algorithms.
  • Reinforcement learning algorithms learn by trial and error, receiving rewards or penalties for the actions they take.
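To make the supervised/unsupervised distinction concrete, the sketch below runs one algorithm of each kind on the same synthetic table, assuming scikit-learn; the features and the ‘correct answers’ are invented labels, not real patient data.

```python
# Supervised vs. unsupervised learning on the same synthetic records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.RandomState(1)
X = rng.normal(size=(100, 5))        # 100 records, 5 numeric features
labels = (X[:, 0] > 0).astype(int)   # the "correct answers" (e.g., diagnosed / not)

# Supervised: the algorithm sees the correct answers during training.
clf = RandomForestClassifier(random_state=1).fit(X, labels)
print("supervised predictions:", clf.predict(X[:3]))

# Unsupervised: no answers are given; the algorithm looks for similar groups.
clusters = KMeans(n_clusters=2, random_state=1, n_init=10).fit_predict(X)
print("unsupervised cluster assignments:", clusters[:3])
```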

So how is Geneia taking advantage of AI?

Geneia has used artificial intelligence throughout our Theon® analytics, insights and care management platform in the form of risk factors, risk numbers, propensity to engage in clinical programs, and propensity to enroll in insurance plans. The information is presented in a manner that is interpretable by the user, so the underlying AI is not necessarily obvious.  Data generated by AI is used to produce suspect and missing Hierarchical Condition Category (HCC) codes, among other things. 
 
Other potential uses for AI include:
  • Patient stratification - Can we train an AI to identify patients before they become high risk or chronic? For instance, we know when a COPD patient reaches stage 3 or above because that patient’s care becomes very costly. It would be better to identify markers that will let us predict if a patient is about to reach stage 3, so we can help prevent her from progressing.
  • Treatment variation – Can we train an AI to identify variation in treatment using just claims data? This allows us to identify:
  1. Non-use of evidence-based and data-based approaches to clinical decision-making
  2. Disparate outcomes that often result from inappropriate variation
  3. Either unanticipated or suboptimal outcomes, and
  4. Higher utilization, costs and waste
  • 30-day hospital readmission – Can we train an AI to predict the probability a patient will be readmitted for any reason within 30 days of discharge? (A minimal sketch of this kind of model follows this list.)
  • Provider teaming – In the absence of referral information, can we train an AI using claims data to:
  1. Find questionable relationships among providers by applying social network techniques to provider relationships
  2. Identify unusual or questionable referral patterns
  3. Optimize the provider network along various aspects, e.g. cost or outcomes
  • Medication adherence – Using only prescription claims, can we train an AI to identify:
  1. The mode of engagement that will improve adherence for each patient
  2. Patients who will not continue to adhere

 This is probably one of the hardest AI tasks in population health today because:

  1. It involves understanding human behavior
  2. Not everyone responds in the same way
  3. Every patient may need multiple modes of engagement
  4. Patients’ behavior will change over time, and so will the way they can be engaged
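Returning to the 30-day readmission question flagged above, here is a minimal, hypothetical sketch of how such a model might be trained from claims-derived features using scikit-learn. The feature names, data and coefficients are invented for illustration; they are not Geneia’s actual model inputs.

```python
# Hypothetical sketch: predict 30-day readmission from claims-derived features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(7)
n = 500
# Invented features per discharge: [age, length_of_stay, prior_admissions, chronic_condition_count]
X = np.column_stack([
    rng.randint(20, 90, n),
    rng.randint(1, 15, n),
    rng.poisson(1.0, n),
    rng.poisson(2.0, n),
])
# Synthetic label: readmitted within 30 days (1) or not (0)
logits = 0.05 * X[:, 1] + 0.6 * X[:, 2] + 0.3 * X[:, 3] - 2.5
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=7)
model = GradientBoostingClassifier(random_state=7).fit(X_train, y_train)

# Probability of readmission for each held-out discharge, plus a ranking metric.
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
```

In practice the probabilities would be used to rank discharges so that care managers can focus outreach on the patients most likely to return.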

Geneia Sandbox

Lastly, at Geneia we have created a fully managed sandbox designed from the ground up to support the data science workflow within our organization. The Geneia sandbox facilitates and encourages collaboration among data scientists and data engineers. It is completely language agnostic, supporting R, Python, Julia, Java and SAS. The workbench comes with some of the most advanced machine learning libraries and frameworks in the industry, such as H2O, MXNet, TensorFlow, Keras, and caret.

 


[i] Rajkomar, A., Oren, E., Chen, K., Dai, A. M., Hajaj, N., Liu, P. J., Dean, J. (2018). Scalable and Accurate Deep Learning for Electronic Health Records. Retrieved from http://arxiv.org/abs/1801.07860

Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., Webster, D. R. (2016). Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA, 316(22), 2402. https://doi.org/10.1001/jama.2016.17216

Miotto, R., Li, L., Kidd, B. A., & Dudley, J. T. (2016). Deep Patient: An Unsupervised Representation to Predict the Future of Patients from the Electronic Health Records. Scientific Reports, 6(1), 26094. https://doi.org/10.1038/srep26094

