Blog from April 2019

Samara Rosenfeld 


In the U.S., more than 7 million patients have undiagnosed Type 2 diabetes mellitus. But a recent study found that by applying machine learning to data that already exist in a patient’s electronic health record (EHR), large populations of patients at high risk of the condition can be identified with 88% sensitivity.

What’s more, the machine learning model had a positive predictive value of 68.6%.

Chaitanya Mamillapalli, M.D., endocrinologist at Springfield Clinic in Illinois, and Shaun Tonstad, principal and software architect at Clarion Group in Illinois, along with their research team, aimed to evaluate a machine learning model to screen EHRs and identify potential patients with undiagnosed Type 2 diabetes mellitus.

Mamillapalli told Inside Digital Health™ that the team extracted data from an EHR at the Springfield Clinic. The data extracted was based on non-glucose parameters, including age, gender, race, body mass index, blood pressure, creatinine, triglycerides, family history of diabetes and tobacco use.

The team had an initial sample size of 618,022 subjects, but only 85,719 subjects had complete records.

After extracting the data, the subjects were equally split into training and validation datasets.
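An equal split like this can be sketched in a few lines. The record fields below are hypothetical stand-ins for the non-glucose parameters the team extracted; this is an illustration of the split, not the study's actual code.

```python
import random

def split_half(records, seed=42):
    """Shuffle records and split them into equal training and validation halves."""
    rng = random.Random(seed)
    shuffled = records[:]          # copy so the original order is preserved
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# Toy stand-ins for extracted EHR records (the real ones carried fields such as
# age, BMI, blood pressure, creatinine, triglycerides, family history, tobacco use)
records = [{"id": i, "bmi": 20 + i % 15} for i in range(10)]
train, validation = split_half(records)
```

With an even number of complete records, each half gets exactly 50% of the subjects.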

In the training group, the model was trained on those parameters using a decision jungle binary classifier to learn whether a subject is at risk of diabetes.

The validation set classified the risk of the disease from the extracted non-glycemic parameters.

The validation subject probabilities were then compared to how the team defined Type 2 diabetes mellitus — random glucose greater than 140 mg/dL and/or HbA1c greater than 6.5%.

The predictive accuracy was also measured with area under the curve for the receiver operating characteristic curve and F1-score.

In the dataset, the model identified more than 23,000 true positives and 3,250 false negatives.
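Those counts are consistent with the reported sensitivity: sensitivity (recall) is the share of actual positives the model catches, TP / (TP + FN).

```python
true_positives = 23_000
false_negatives = 3_250

# Sensitivity (recall): fraction of actual diabetes cases the model flagged
sensitivity = true_positives / (true_positives + false_negatives)
print(f"{sensitivity:.1%}")  # → 87.6%, i.e. roughly the 88% reported
```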

If the machine learning model is deployed in the back end of the EHR, physicians will be prompted if a patient’s health data shows that the patient is at high-risk of diabetes and should be screened, Mamillapalli said.

Mamillapalli said that patients generally go undiagnosed for four to six years before receiving a formal diagnosis of Type 2 diabetes mellitus. He told us that because of this, the patient is exposed to complications, which could cost up to $33 billion per year in the U.S.

But identifying the condition as early as possible could decrease the risk of complications.

However, screening rates for diabetes are still only about 50%.

In a written statement to Inside Digital Health™, Mamillapalli wrote that “using an automated, scalable electronic model, we can deploy this tool to screen large chunks of the population.”

Mamillapalli said that the second phase of the team’s research is to change the algorithm slightly to diagnose prediabetes, which affects 90 million people, but is only diagnosed in 10% of that population.

“As the predictive accuracy is improved, this machine learning model may become a valuable tool to screen large populations of patients for undiagnosed (Type 2 diabetes mellitus),” the authors wrote.

The findings of the study, titled “Development and validation of a machine learning model to predict diabetes mellitus diagnoses in a multi-specialty clinical setting,” were presented at the American Association of Clinical Endocrinologists in California.


Bill Siwicki


The VEVA tool has helped improve caregiver productivity and efficiency – and providers appreciate that it reduces the number of clicks per query.


Vanderbilt University Medical Center in Nashville, Tennessee, is one of the largest academic medical centers in the Southeast, serving the primary and specialty healthcare needs for patients throughout Tennessee and the mid-south.

THE PROBLEM

Like many other healthcare organizations, Vanderbilt's caregivers have felt the administrative burden of clinical documentation and labor-intensive healthcare technologies. Caregivers found that the day-to-day practice of medicine was hampered by IT workflows that got in the way of patient care rather than improving it.

Querying and entering patient information via keyboard and mouse, for example, proved to be an inefficient use of the caregivers' expertise and was taking them away from engaging with their patients at the bedside.

PROPOSAL

In 2011, when Apple debuted Siri, and in 2016, when the Amazon Echo became prolific, it also became clear that advances in artificial intelligence and natural language processing had matured to the point where communicating naturally with technology was no longer science-fiction, said Dr. Yaa Kumah-Crystal, core design advisor at Vanderbilt University and assistant professor of biomedical informatics and pediatric endocrinology at Vanderbilt University Medical Center.

"We knew we could leverage these technologies to entirely bypass the keyboard and mouse and instead empower our providers to use their voice to interact with the EHR," she said.

Vanderbilt partnered with Nuance Communications at that point to develop a voice user interface prototype for the electronic health record. They called it the Vanderbilt EHR Voice Assistant, or VEVA.

This virtual tool enables caregivers to interact with the EHR using natural speech. In this way, caregivers can easily retrieve information from the EHR to better understand the patient story when delivering care, she said. Today, VEVA, linked with the EHR, has been tested by more than 20 caregivers.

MARKETPLACE

Vanderbilt's VEVA EHR voice assistant is a homegrown technology, built with the help of a vendor, Nuance. As a result, it is difficult to point to direct examples of other such products in the marketplace. But there are other vendors on the market with different types of voice assistant technology, such as IBM, Infor, Oracle, Salesforce and SAP.

MEETING THE CHALLENGE

In partnership with Nuance, Vanderbilt's team of medical informaticists, clinicians and software engineers used AI and natural language processing to build a voice interface prototype.

"Our providers can launch the VEVA application through the EHR in the context of a patient encounter," Kumah-Crystal explained. "FHIR resources are used to retrieve relevant content about the patient and render it back to the provider. For example, the provider can ask VEVA a question or say, 'Tell me about this patient.'

"VEVA applies its natural language understanding engine to translate that voice command into text and presents relevant results to the provider, such as a summary of the most recent patient visit," she said.
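The article doesn't detail VEVA's internals, but the FHIR retrieval step described here typically amounts to REST searches against the EHR's FHIR endpoint. A minimal sketch of building such search URLs; the base URL, patient ID and helper name are placeholders, not Vanderbilt's actual integration.

```python
def fhir_search_url(base_url, resource, **params):
    """Build a FHIR REST search URL for a given resource type and parameters."""
    query = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return f"{base_url}/{resource}?{query}" if query else f"{base_url}/{resource}"

BASE = "https://ehr.example.org/fhir"  # placeholder FHIR endpoint

# "Tell me about this patient" could fan out to searches like these:
encounters = fhir_search_url(BASE, "Encounter", patient="12345",
                             _sort="-date", _count="1")   # most recent visit
labs = fhir_search_url(BASE, "Observation", patient="12345",
                       category="laboratory")             # lab results
```

The responses (FHIR Bundles) would then be summarized and rendered back to the provider as speech and on-screen content.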

"VEVA does not only answer the question posed but, using natural language understanding, infers the intent behind the question and provides additional context."  Dr. Yaa Kumah-Crystal, Vanderbilt University Medical Center

Providers also can ask specific questions about recent diagnoses, lab test results and medications.

"VEVA does not only answer the question posed but, using natural language understanding, infers the intent behind the question and provides additional context," she noted. "If the provider asks about the patient's weight, for example, the system not only provides the current weight, but will also mention the previous weight, degree of change and other trend information. This information is presented as voice replies, textual information and on-screen graphics."
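The trend reply described here, current value, previous value and degree of change, can be sketched as a small computation over the two most recent observations. This is a hypothetical helper for illustration, not VEVA's actual code.

```python
def weight_trend(observations):
    """Summarize the two most recent weights as VEVA-style trend context.

    observations: list of (date_string, weight_kg) pairs, assumed oldest-first.
    """
    current_date, current = observations[-1]
    _, previous = observations[-2]
    change = current - previous
    direction = "up" if change > 0 else "down" if change < 0 else "unchanged"
    return (f"Current weight is {current} kg as of {current_date}, "
            f"{direction} {abs(change):.1f} kg from the previous visit.")

print(weight_trend([("2019-01-10", 82.0), ("2019-04-02", 79.5)]))
```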

In other words, the VEVA assistant is designed to support busy caregivers and their workflows, diminishing the administrative and information retrieval burden of navigating unintuitive graphical user interfaces, she added.

RESULTS

By testing the voice assistant's functionality in the caregiver workflow, Vanderbilt caregivers have realized enhancements in the delivery of care. They are armed with efficient, simple ways to retrieve valuable patient information, which helps them better understand the patient's story in order to manage care, Kumah-Crystal said.

"VEVA has demonstrated the ability to improve caregiver productivity and efficiency," she explained. "In particular, providers can save time by reducing the number of clicks per query, which translates to a 15 percent improvement in task-time savings."

Finally, VEVA rates well in system usability testing and has improved the providers' workflow experiences by enabling them to interact with the EHR using simple, intuitive, natural language queries, she added.

ADVICE FOR OTHERS

"We recommend that organizations start with a baseline assessment of both the amount of time providers spend on administrative tasks, as well as the impact this creates on workflows and caregiver productivity," Kumah-Crystal advised. "Without this baseline, it can be difficult to measure progress and success of the virtual assistant or any voice-powered technology."

Next, do the required homework and create a business case for workflow optimization, she said. Explore advances in machine learning, AI and natural language processing, and learn more about prototypes that are available and working in other organizations, she said. Find out about vendors that can leverage these technologies to make voice interaction with EHRs a reality, she added.

"A cross-functional team is essential as well," she suggested. "Include subject matter experts across a range of disciplines, including caregivers, clinical informaticists, software engineers and information theorists. Build an overarching model and make sure you take into consideration the providers' workflow, information needs and so on."

Additionally, build a prototype while users test and provide their feedback, she said.

"You want to understand information theory and map queries and content to satisfy providers' needs," she concluded. "Find out how you might 'break' the technology to uncover the commands it can't handle, for example, so you can overcome those before you make the solution widely available."


Samara Rosenfeld 


A new machine learning algorithm was highly accurate in determining whether a patient is likely to have a cholesterol-raising genetic disease that can cause early heart problems, according to the results of a study conducted by researchers at the Stanford University School of Medicine.

The algorithm was 88 percent accurate in identifying familial hypercholesterolemia (FH) in one data sample and 85 percent accurate in another.

In the study published in npj Digital Medicine, Joshua Knowles, M.D., Ph.D., assistant professor of cardiovascular medicine at Stanford, and his research team created an algorithm using data from Stanford’s FH clinic to learn what distinguishes an FH patient in an electronic health record (EHR).

The algorithm was trained to pick up on a combination of family history, current prescriptions, lipid levels, lab tests and more to understand what signals the disease.

The foundation of the algorithm was built using data from 197 patients who had FH and 6,590 patients who did not, so the program could learn the difference between positive and negative results.

Once the algorithm was trained, the research team initially ran it on a set of roughly 70,000 new de-identified patient records. The team reviewed 100 patient charts from the patients flagged and found that the algorithm had detected patients who had FH with 88 percent accuracy.

Knowles and his partner, Nigam Shah, MBBS, Ph.D., associate professor of medicine and biomedical data science at Stanford, collaborated with Geisinger Healthcare System to further test the algorithm.

The algorithm was tested on 466 patients with FH and 5,000 patients without FH, and the predictions came back with 85 percent accuracy.

Shah said that he and Knowles knew that a lot of the Geisinger patients had an FH diagnosis confirmed with genetic sequencing.

“So that’s how we convinced ourselves that yes, this indeed works,” he said.

FH is an underdiagnosed genetic condition that leads to an increased risk of coronary artery disease if untreated. A patient with FH faces 10 times the risk of heart disease of someone with normal cholesterol. The condition can lead to death or a heart attack, and there are clear benefits of timely management, yet it is estimated that less than 10 percent of those with FH in the U.S. have been diagnosed.

Early diagnosis and treatment of FH can neutralize the threat of the condition. And one diagnosis could help multiple people because FH is genetic, making it likely that other relatives have it too.

Lead author Juan Banda, Ph.D., former research scientist at Stanford, wrote that when the algorithm is applied broadly to screen FH, it is possible to identify thousands of undiagnosed patients with the condition. This could lead to more effective therapy and screening of their families, Banda wrote.


Mike Miliard


From brain-computer interfaces to nanorobotics, a new report from Frost & Sullivan explores leading edge developments and disruptive tech.


A new study from Frost & Sullivan takes stock of some of the rapid-fire developments in the world of patient monitoring, which is expanding its capabilities by leaps and bounds with the maturation of sensors, artificial intelligence and predictive analytics.

WHY IT MATTERS
"Patient monitoring has evolved from ad hoc to continuous monitoring of multiple parameters, causing a surge in the amount of unprocessed and unorganized data available to clinicians for decision-making," according to F&S researchers. "To extract actionable information from this data, healthcare providers are turning to big data analytics and other analysis solutions."

The ability of such analytics to both assess patients in the moment and point toward their potential future condition had health systems investing more than $566 million in the technology during 2018, the report notes.

But data-crunching is only the beginning of what hospitals and healthcare providers will need to be prepared to manage in the years ahead if they hope to take full advantage of fast-evolving patient monitoring technology.

Wearables and embedded biosensors – such as continuous glucose monitors, blood pressure monitors, pulse oximeters and ECG monitors – are an obvious place to start, as health systems look to manage chronic conditions and population health, both in and out of the hospital.

But many more advances are already starting to gain traction, such as smart prosthetics and smart implants. "These are crucial for patient management post-surgery or rehabilitation," researchers said, as "they help in measuring the key parameters to support monitoring and early intervention to avoid readmission or complexities."

Other innovations set for big growth are digital pills and nanorobots, which can help monitor medication adherence. In addition, advanced materials and smart fabrics are opening new frontiers in wound management and cardiac monitoring, the report notes. And brain-computer interfaces can enable direct monitoring and measurement of key health metrics to assess patients' psychological, emotional and cognitive state.

THE LARGER TREND
In a recent interview with Healthcare IT News, digital health pioneer Dr. Eric Topol, founder and director of Scripps Research Translational Institute, was asked which developments in AI and mobile technology he thought would be most transformative in the year ahead.

"Longer term, the biggest thing of all is remote monitoring and getting rid of hospital rooms," said Topol. "And there, the opportunity is extraordinary. Because obviously the main cost in healthcare is personnel. And if you don't have hospital rooms, you have a whole lot less personnel. So setting up surveillance centers with remote monitoring – which can be exquisite and very inexpensive with the right algorithms, when it's validated – would be the biggest single way to improve things for patients, because they're in the comfort of their own home."

The value of patient monitoring is recognized at the federal level too. Centers for Medicare and Medicaid Services Administrator Seema Verma has called for expansion of reimbursement for remote care, with CMS seeking to "make sure home health agencies can leverage innovation to provide state-of-the-art care," she said.

ON THE RECORD
"In the future, patient monitoring data will be combined with concurrent streams from numerous other sensors, as almost every life function will be monitored and its data captured and stored," said Sowmya Rajagopalan, global director of Frost & Sullivan's Advanced Medical Technologies division. "The data explosion can be harnessed and employed through technologies such as Artificial Intelligence (AI), machine learning, etc., to deliver targeted, outcome-based therapies."

Rajagopalan added that, "as mHealth rapidly gains traction, wearables, telehealth, social media and patient engagement are expected to find adoption among more than half of the population in developed economies by 2025. The patient monitoring market is expected to be worth more than $350 billion by 2025, as the focus is likely to move beyond device sales to solutions."


Samara Rosenfeld


Machine learning algorithms using administrative data can be valuable and feasible tools for more accurately identifying opioid overdose risk, according to a new study published in JAMA Network Open. 

Wei-Hsuan Lo-Ciganic, Ph.D., College of Pharmacy at the University of Florida, Gainesville, along with her research team, found that machine learning algorithms performed well for risk prediction and stratification of opioid overdose — especially in identifying low-risk subgroups with minimal risk of overdose.

Lo-Ciganic told Inside Digital Health™ that machine learning algorithms outperformed the traditional approach because the algorithms take into account more complex interactions and can identify hidden relationships that traditionally go unseen.

The researchers used prescription drug and medical claims for a 5 percent random sample of Medicare beneficiaries between January 2011 and December 2015. The team identified fee-for-service adult beneficiaries without cancer who were U.S. residents and received at least one opioid prescription during the study period.

The team compiled 268 opioid overdose predictor candidates, including total and mean daily morphine milligram equivalent, cumulative and continuous duration of opioid use and total number of opioid prescriptions overall and by active ingredient.

The cohort was randomly and equally divided into training, testing and validation samples. Prediction algorithms were developed and tested for opioid overdose using five commonly used machine-learning approaches: multivariate logistic regression, least absolute shrinkage and selection operator-type regression, random forest, gradient boosting machine and deep neural network.

Prediction performance was compared with the 2019 Centers for Medicare and Medicaid Services opioid safety measures, which are meant to identify high-risk individuals and opioid use behavior in Medicare recipients.

To determine the extent to which patients predicted to be high-risk exhibited higher overdose rates than those predicted to be low-risk, the researchers compared the C-statistic and precision-recall curves across the different methods using the DeLong test.

Low-risk patients had a predicted score below the optimized threshold, medium-risk patients had a score between the threshold and the top 10th percentile, and high-risk patients scored in the top 10th percentile.
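That three-way stratification can be expressed directly. The threshold and scores below are arbitrary stand-ins for the study's optimized cutoff and real predicted probabilities.

```python
def stratify(scores, threshold):
    """Bucket predicted overdose-risk scores into low/medium/high.

    high   = top 10% of scores (at or above the 90th percentile)
    medium = below the top 10% but at or above the optimized threshold
    low    = below the threshold
    """
    ranked = sorted(scores)
    cutoff_90 = ranked[int(0.9 * len(ranked))]  # simple 90th-percentile estimate
    labels = []
    for s in scores:
        if s >= cutoff_90:
            labels.append("high")
        elif s >= threshold:
            labels.append("medium")
        else:
            labels.append("low")
    return labels

labels = stratify([0.01, 0.05, 0.2, 0.4, 0.95, 0.03, 0.02, 0.08, 0.6, 0.9],
                  threshold=0.1)
```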

Based on the findings, the deep neural network and gradient boosting machine performed the best, with the deep neural network having a C-statistic of 0.91 and the gradient boosting machine having a C-statistic of 0.90.
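The C-statistic reported here is the probability that a randomly chosen overdose case receives a higher predicted score than a randomly chosen non-case. A brute-force version on toy labels and scores (illustrative only; the study's models were far larger):

```python
def c_statistic(labels, scores):
    """C-statistic (AUC): P(score of random positive > score of random negative),
    counting ties as half. Brute force, O(n^2) — fine for illustration."""
    positives = [s for y, s in zip(labels, scores) if y == 1]
    negatives = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

# Toy example: positives mostly score higher than negatives
auc = c_statistic([1, 1, 0, 0, 1, 0], [0.9, 0.8, 0.3, 0.4, 0.35, 0.2])
```

A C-statistic of 0.5 would mean the model ranks cases no better than chance; the study's 0.90-0.91 indicates strong separation.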

With the gradient boosting machine algorithm, 77.6 percent of the sample were categorized as low-risk, while 11.4 percent were medium-risk and 11 percent were high-risk. And with the deep neural network algorithm, 76.2 percent of people were predicted to be at low risk, and 99.99 percent of those individuals did not have an overdose.

Lo-Ciganic said that with the promising results of the study, the next step would be to develop software to be incorporated into health systems — or an electronic health record — to see if the algorithms can be applied in real-world settings to help clinicians identify high-risk individuals.