Amazon Web Services boosts machine learning to treat depression


Data scientists from Takeda and the ConvergeHEALTH by Deloitte institute are applying artificial intelligence to pools of patient data to test how treatment-resistant depression responds to medication.

Pharmaceutical company Takeda and research and development data science institute ConvergeHEALTH by Deloitte have partnered to study patient datasets to better understand the etiology, progression, and most effective therapies for difficult diseases.

Using insurance claims information including diagnoses, medical procedures, and prescriptions, they ran linear and non-linear models on disease datasets like treatment-resistant depression. The goal was to identify data factors with the highest impact on predicting patient outcomes.

By combining the right data and the right questions, the organizations improved the predictive power of deep learning models, allowing for the analysis of wider and more complex data sets and a better understanding of patient trajectories.

They also identified the potential to apply these machine learning techniques to other difficult-to-diagnose diseases, to determine which patients are more prone to these illnesses and the best courses for personalized treatment.

“In severe depression, patients often go through multiple medications before finding one that works,” said Dan Housman, chief technology officer at ConvergeHEALTH by Deloitte. “This testing process can be challenging for patients and their psychiatrists.”

The medication is typically “prescribed after other medications did not work in what is deemed a treatment-resistant patient,” he added. “We’re interested in looking at depression patients and their journey between treatments to better understand which patients may fall into the treatment-resistant category and when a certain switch will be sustained without further switches.”

The organizations are using claims data sets with machine learning to build predictive models to determine the patients who may be resistant and the medications or classes of depression medications for patients to switch between.

With effective predictive models, they can work to adjust guidelines or provide digital diagnostic tools that look at patient histories to identify who would likely benefit from switching to a product earlier or potentially using it as a first-line treatment.

“The benefit to the patient is a shorter journey to a drug that will keep them well and less time struggling with their depression,” Housman explained. “The benefit to Takeda is to be able to build tools both with guidelines or decision support systems to help physicians find the patients who can benefit from our products.”

“Predicting who will likely fail or succeed with a drug is a very challenging problem to determine given the many nuances in medical records,” he added.

Patient histories include related temporal events, comorbidities, diagnostic pathways, and procedures. So the organizations worked through data science and machine learning, ultimately testing deep learning methods to gauge the predictability of medication switches and to see whether they could isolate patterns useful for practicing medicine.

“We’re generally looking to use AI, machine learning, and deep learning to demonstrate that we can predict a future event in a data set with good accuracy, while also looking to understand the factors or patterns in the data that are important for driving that prediction,” Housman said.

“We used traditional machine learning models that are able to identify among the thousands of potential features in a patient record both which ones are most predictive and given the ensemble of many features what the prediction is of an event happening,” he added.

The data scientists do this by turning the data they have available into training and test data sets. The training data allows them to hone models. The test data allows them to see how those models perform against data where they already know the results but don’t provide them to the algorithm. This allows them to measure how accurate the models are.
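The holdout procedure described above can be sketched in a few lines of Python; the cohort, the single feature, and the threshold "model" below are invented purely for illustration, not drawn from the Takeda/Deloitte work:

```python
import random

random.seed(0)

# Toy synthetic cohort: one numeric feature per patient (say, a count of
# prior prescriptions) and a 0/1 label (1 = patient switched medication).
# Patients who eventually switch tend to have higher feature values.
labels = [random.random() < 0.5 for _ in range(200)]
patients = [(random.gauss(3.0 if y else 1.0, 1.0), int(y)) for y in labels]

# Shuffle, then hold out 20 percent as a test set the model never sees
# during training.
random.shuffle(patients)
split = int(0.8 * len(patients))
train, test = patients[:split], patients[split:]

# "Train" a trivial model: put the decision threshold midway between the
# mean feature value of each class in the training data.
mean1 = sum(x for x, y in train if y == 1) / max(1, sum(y for _, y in train))
mean0 = sum(x for x, y in train if y == 0) / max(1, sum(1 - y for _, y in train))
threshold = (mean0 + mean1) / 2

# Score on the held-out test set: the true labels are known to us for
# measurement, but were never shown to the model while training.
correct = sum((x > threshold) == bool(y) for x, y in test)
accuracy = correct / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The same shape carries over to real tooling: only the training portion informs the model, and accuracy is always reported on the untouched holdout.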

“These prediction systems such as random forest are already very powerful tools but they fall short in certain key areas,” Housman said. “One of those areas is in looking at the timeline of a patient’s record.”

“Recurrent neural network deep learning algorithms had demonstrated great utility in helping to recognize patterns in natural language because they can learn not just from the words in the sentence, but also the relative order of one word relative to the others,” he explained. “So we used deep learning through recurrent neural networks to obtain better scores on our tests, presumably by being able to factor the order of patient events.”
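The order sensitivity that makes recurrent networks attractive here can be shown with a toy single-unit recurrent cell; the weights and event codes below are arbitrary illustrations, not the team's model:

```python
import math

def rnn_forward(events, w_in=0.5, w_rec=0.8):
    """Minimal single-unit recurrent cell: the hidden state carries a
    running summary of everything seen so far, so the order of events
    changes the final output."""
    h = 0.0
    for e in events:
        h = math.tanh(w_in * e + w_rec * h)
    return h

def bag_of_events(events):
    """Order-free baseline: a plain sum, standing in for models that
    treat a patient record as an unordered set of features."""
    return sum(events)

# Two patient timelines containing the same events in different order,
# e.g. codes for "diagnosis", "first prescription", "switch".
timeline_a = [1.0, 2.0, 3.0]
timeline_b = [3.0, 2.0, 1.0]

print(rnn_forward(timeline_a), rnn_forward(timeline_b))     # outputs differ
print(bag_of_events(timeline_a), bag_of_events(timeline_b)) # outputs equal
```

The recurrent cell distinguishes the two timelines while the bag-of-events baseline cannot, which is exactly the property Housman credits for the better test scores.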

Historically it’s been difficult, expensive, and time-consuming to run machine learning experiments on large data sets because low-cost, high-performance computing was not available on demand. To build these capabilities, the organizations needed to solve a number of scaling problems with computing and also needed tools for executing the analyses.

“We leveraged the Amazon Web Services computing systems including GPU on-demand servers in order to build and train the models,” Housman said. “To manage the creation of pipelines and execution of machine learning models we used Deloitte’s Deep Miner tools and Amazon’s underlying SageMaker tools for managing the execution of the machine learning jobs.”

“The analytical tools, data availability, and scalable computational infrastructure have brought the cost of doing data science experiments like these within reach for many projects that previously would have been too expensive to consider,” he added.

Housman said the results of the application of the various artificial intelligence methods were promising.

“AUC, the area under the curve, scores that manage the matrix of true positives, true negatives, false positives, and false negatives for predictive power are what we use to determine the effectiveness of our models,” he explained. “An AUC score of 50 percent would mean our model was close to random at getting a prediction right, which is not a good model. A score of 100 percent would mean the model was perfect.”
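The score Housman describes can be computed directly from the pairwise-ranking definition of AUC; the labels and model scores below are invented for illustration:

```python
def auc(labels, scores):
    """Area under the ROC curve via its ranking definition: the
    probability that a randomly chosen positive case is scored higher
    than a randomly chosen negative case (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
perfect = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]  # positives always ranked higher
useless = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]  # no separation at all

print(auc(labels, perfect))  # 1.0
print(auc(labels, useless))  # 0.5
```

A model with no discriminating power scores 0.5 (Housman's "close to random") and a model that ranks every eventual switcher above every non-switcher scores 1.0.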

The researchers said they were encouraged that the models using different techniques demonstrated increasing predictive power. In treatment-resistant depression, they found that AUC went from a low of 55.1 percent with traditional linear models to 90.2 percent using RNN deep learning models.

“We were able to look at the key features among hundreds of thousands of features and could see that most of the features related to known etiology of the disease but also some unknown correlations that we can investigate further,” Housman said.

“The encouraging factor is that we know that the deep learning algorithms can use temporal patterns to predict treatment switches,” he added. “But we don’t know what those patterns are yet because the deep learning models are opaque.”

