In the U.S., more than 7 million patients have undiagnosed Type 2 diabetes mellitus. But a recent study found that by applying machine learning to data that already exist in a patient’s electronic health record (EHR), patients at high risk of the condition can be identified with 88% sensitivity.
What’s more, the machine learning model had a positive predictive value of 68.6%.
Chaitanya Mamillapalli, M.D., endocrinologist at Springfield Clinic in Illinois, and Shaun Tonstad, principal and software architect at Clarion Group in Illinois, along with their research team, aimed to evaluate a machine learning model to screen EHRs and identify potential patients with undiagnosed Type 2 diabetes mellitus.
Mamillapalli told Inside Digital Health™ that the team extracted data from an EHR at the Springfield Clinic. The extracted data were based on non-glucose parameters, including age, gender, race, body mass index, blood pressure, creatinine, triglycerides, family history of diabetes and tobacco use.
The team had an initial sample size of 618,022 subjects, but only 85,719 subjects had complete records.
After the data were extracted, the subjects were split equally into training and validation datasets.
In the training group, the machine learning model was trained with the decision jungle binary classifier algorithm to learn, from those parameters, whether a subject is at risk of diabetes.
On the validation set, the trained model then classified each subject’s risk of the disease from the extracted non-glycemic parameters.
The validation subject probabilities were then compared to how the team defined Type 2 diabetes mellitus — random glucose greater than 140 mg/dL and/or HbA1c greater than 6.5%.
Predictive accuracy was also measured by the area under the receiver operating characteristic (ROC) curve and the F1 score.
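The train-then-validate workflow the study describes can be sketched as follows. This is an illustrative example on synthetic data, not the study's code: decision jungles (a Microsoft algorithm) are not available in scikit-learn, so a random forest stands in for the actual classifier, and the features and labels below are random stand-ins for the real EHR parameters.

```python
# Sketch of the study's pipeline: equal train/validation split,
# a tree-ensemble binary classifier, and AUROC/F1 evaluation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-ins for the nine non-glucose parameters (age, BMI, BP, ...)
X = rng.normal(size=(n, 9))
# Synthetic "diabetic" label (the study labeled by a glucose/HbA1c rule)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n) > 1.0).astype(int)

# Equal split into training and validation sets, as in the study
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

probs = model.predict_proba(X_val)[:, 1]   # per-subject risk probability
preds = (probs >= 0.5).astype(int)          # thresholded risk classification
print("AUROC:", roc_auc_score(y_val, probs))
print("F1:   ", f1_score(y_val, preds))
```

The equal split mirrors the study's design; in practice the threshold on the predicted probability would be tuned to trade sensitivity against positive predictive value.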
In the dataset, the model identified more than 23,000 true positives and 3,250 false negatives.
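Those counts are consistent with the headline metrics. Using the article's rounded figures (so the results are approximate):

```python
# Checking the reported sensitivity and PPV against the confusion counts.
tp = 23_000   # true positives (rounded figure from the article)
fn = 3_250    # false negatives

# Sensitivity (recall): share of actual diabetics the model flagged
sensitivity = tp / (tp + fn)
print(f"sensitivity ≈ {sensitivity:.1%}")   # ≈ 87.6%, matching the reported 88%

# PPV runs the other way: given PPV = 68.6%, the implied false-positive count is
ppv = 0.686
fp = tp * (1 / ppv - 1)
print(f"implied false positives ≈ {fp:,.0f}")
```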
If the machine learning model is deployed in the back end of the EHR, physicians will be prompted when a patient’s health data show that the patient is at high risk of diabetes and should be screened, Mamillapalli said.
Mamillapalli said that patients generally go four to six years before receiving a formal diagnosis of Type 2 diabetes mellitus. He told us that because of this, the patient is exposed to complications, which could cost up to $33 billion per year in the U.S.
But identifying the condition as early as possible could decrease the risk of complications.
However, screening rates for diabetes remain at only about 50%.
In a written statement to Inside Digital Health™, Mamillapalli wrote that “using an automated, scalable electronic model, we can deploy this tool to screen large chunks of the population.”
Mamillapalli said that the second phase of the team’s research is to change the algorithm slightly to diagnose prediabetes, which affects 90 million people, but is only diagnosed in 10% of that population.
“As the predictive accuracy is improved, this machine learning model may become a valuable tool to screen large populations of patients for undiagnosed (Type 2 diabetes mellitus),” the authors wrote.
The findings of the study, titled “Development and validation of a machine learning model to predict diabetes mellitus diagnoses in a multi-specialty clinical setting,” were presented at the American Association of Clinical Endocrinologists in California.
The VEVA tool has helped improve caregiver productivity and efficiency – and providers appreciate that it reduces the number of clicks per query.
Vanderbilt University Medical Center in Nashville, Tennessee, is one of the largest academic medical centers in the Southeast, serving the primary and specialty healthcare needs for patients throughout Tennessee and the mid-south.
Like many other healthcare organizations, Vanderbilt's caregivers have felt the administrative burden of clinical documentation and labor-intensive healthcare technologies. Caregivers found that the day-to-day practice of medicine was hampered by IT workflows that interfered with patient care rather than improving it.
Querying and entering patient information via keyboard and mouse, for example, proved to be an inefficient use of the caregivers' expertise and was taking them away from engaging with their patients at the bedside.
When Apple debuted Siri in 2011, and when the Amazon Echo became widespread in 2016, it became clear that advances in artificial intelligence and natural language processing had matured to the point where communicating naturally with technology was no longer science fiction, said Dr. Yaa Kumah-Crystal, core design advisor at Vanderbilt University and assistant professor of biomedical informatics and pediatric endocrinology at Vanderbilt University Medical Center.
"We knew we could leverage these technologies to entirely bypass the keyboard and mouse and instead empower our providers to use their voice to interact with the EHR," she said.
Vanderbilt partnered with Nuance Communications at that point to develop a voice user interface prototype for the electronic health record. They called it the Vanderbilt EHR Voice Assistant, or VEVA.
This virtual tool enables caregivers to interact with the EHR using natural speech. In this way, caregivers can easily retrieve information from the EHR to better understand the patient story when delivering care, she said. Today, VEVA, linked with the EHR, has been tested with more than 20 caregiver users.
Vanderbilt's VEVA EHR voice assistant is a homegrown technology, built with the help of a vendor, Nuance. As a result, it is difficult to point to direct examples of other such products in the marketplace. But other vendors on the market offer different types of voice assistant technology, including IBM, Infor, Oracle, Salesforce and SAP.
MEETING THE CHALLENGE
In partnership with Nuance, Vanderbilt's team of medical informaticists, clinicians and software engineers used AI and natural language processing to build a voice interface prototype.
"Our providers can launch the VEVA application through the EHR in the context of a patient encounter," Kumah-Crystal explained. "FHIR resources are used to retrieve relevant content about the patient and render it back to the provider. For example, the provider can ask VEVA a question or say, 'Tell me about this patient.'
"VEVA applies its natural language understanding engine to translate that voice command into text and presents relevant results to the provider, such as a summary of the most recent patient visit," she said.
Providers also can ask specific questions about recent diagnoses, lab test results and medications.
"VEVA does not only answer the question posed but, using natural language understanding, infers the intent behind the question and provides additional context," she noted. "If the provider asks about the patient's weight, for example, the system not only provides the current weight, but will also mention the previous weight, degree of change and other trend information. This information is presented as voice replies, textual information and on-screen graphics."
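The weight lookup Kumah-Crystal describes maps naturally onto a FHIR Observation search, which she notes VEVA uses to retrieve patient content. The snippet below is an illustrative sketch, not VEVA's actual code: the search URL in the comment and the sample bundle are hypothetical, and 29463-7 is the standard LOINC code for body weight.

```python
# Sketch of the FHIR retrieval behind a "what is this patient's weight?" query.
# A real client would GET something like:
#   {base}/Observation?patient={id}&code=http://loinc.org|29463-7&_sort=-date&_count=2
# which returns a Bundle of the two most recent body-weight Observations.

# Hypothetical Bundle, shaped like a FHIR server's response (newest first):
bundle = {
    "resourceType": "Bundle",
    "entry": [
        {"resource": {"resourceType": "Observation",
                      "valueQuantity": {"value": 82.4, "unit": "kg"}}},
        {"resource": {"resourceType": "Observation",
                      "valueQuantity": {"value": 84.1, "unit": "kg"}}},
    ],
}

weights = [e["resource"]["valueQuantity"]["value"] for e in bundle["entry"]]
current, previous = weights          # newest first, per _sort=-date
change = current - previous
# The trend information VEVA would read back alongside the current value:
print(f"Current weight {current} kg, previous {previous} kg, "
      f"change {change:+.1f} kg")
```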
In other words, the VEVA assistant is designed to support busy caregivers and their workflows, diminishing the administrative and information retrieval burden of navigating unintuitive graphical user interfaces, she added.
By testing the voice assistant's functionality in the caregiver workflow, Vanderbilt caregivers have realized enhancements in the delivery of care. They are armed with efficient, simple ways to retrieve valuable patient information, which helps them better understand the patient's story in order to manage care, Kumah-Crystal said.
"VEVA has demonstrated the ability to impact on caregiver productivity and efficiency," she explained. "In particular, providers can save time by reducing the number of clicks per query, which translates to a 15 percent improvement in task-time savings."
Finally, VEVA rates well in system usability testing and has improved the providers' workflow experiences by enabling them to interact with the EHR using simple, intuitive, natural language queries, she added.
ADVICE FOR OTHERS
"We recommend that organizations start with a baseline assessment of both the amount of time providers spend on administrative tasks, as well as the impact this creates on workflows and caregiver productivity," Kumah-Crystal advised. "Without this baseline, it can be difficult to measure progress and success of the virtual assistant or any voice-powered technology."
Next, do the required homework and create a business case for workflow optimization, she said. Explore advances in machine learning, AI and natural language processing, and learn more about prototypes that are available and working in other organizations, she said. Find out about vendors that can leverage these technologies to make voice interaction with EHRs a reality, she added.
"A cross-functional team is essential as well," she suggested. "Include subject matter experts across a range of disciplines, including caregivers, clinical informaticists, software engineers and information theorists. Build an overarching model and make sure you take into consideration the providers' workflow, information needs and so on."
Additionally, build a prototype and have users test it and provide feedback, she said.
"You want to understand information theory and map queries and content to satisfy providers' needs," she concluded. "Find out how you might 'break' the technology to uncover the commands it can't handle, for example, so you can overcome those before you make the solution widely available."