Although health IT developers and public health systems were caught off guard by COVID-19, both should be ready for the next wave or a future pandemic.
Developing, implementing, and assessing a plan for EHR systems and public health information systems will require improvements in health IT, governance, and overall strategy, according to a study published in the Journal of the American Medical Informatics Association (JAMIA).
COVID-19 response efforts have included collecting and analyzing individual- and community-level data drawn from healthcare organizations' EHRs, public health departments, and socioeconomic indicators. But those resources have not been deployed uniformly across healthcare organizations, the researchers stated.
“The current state of COVID-19 data reflects a patchwork of uncoordinated, temporary fixes to a historically neglected public safety function,” wrote the researchers. “As the US enters its second decade of nationally-coordinated digital infrastructure for healthcare delivery and to modernize patient care, COVID-19 has demonstrated that this infrastructure is inadequate to respond to public health emergencies.”
Researchers analyzed the COVID-19 response efforts of 15 health organizations that experienced delays in understanding, predicting, and mitigating the spread of COVID-19. The research group focused on EHR data. They also outlined the current health IT infrastructure, such as data registries and clinical data networks, and the data ecosystem challenges relevant to the current pandemic.
Through that analysis, the research team identified a number of steps that could help organizations mitigate the current pandemic, which most experts say is in its third wave. The researchers’ recommendations may also help in future public health crises.
Health IT infrastructure needs to support public health functions that leverage EHR systems and associated patient data, but such infrastructure cannot be developed and implemented overnight, the researchers noted.
Additionally, having better control of the timeliness of data analysis will be essential. Because analytic methods do not always give real-time results, it is easy to overlook or underuse EHR data.
“While public health tools for horizon scanning, disease surveillance, epidemiological modeling, capacity planning, ‘hot spotting,’ and targeted intervention strategies (such as isolation or contact tracing in the case of a transmissible pathogen) use as much available data as possible, the speed with which these data are collected, organized and analyzed is slow,” researchers explained.
Researchers also found that the public health information infrastructure does not currently support large-scale integration. As a result, health organizations have struggled to report information during the pandemic, because doing so requires separate data submissions to multiple agencies.
“Unless COVID-19 data initiatives are coordinated and systems are interoperable, much effort and money will be spent into each initiative individually: these initiatives will compete with each other, will only provide partial answers, and will still not properly support public health decision making for this and the next pandemic, and for other diseases that have a large national impact,” explained study authors.
If developers create new health IT simply to fill current COVID-19 data needs, it may not be usable in a future pandemic, the researchers cautioned.
Researchers said the value of improving technology, governance, and overall strategy can be weighed through cost-benefit analysis, but stakeholders must adapt quickly on all three fronts, because the cost of optimizing an existing health IT system can be overwhelming for health organizations.
“We call all stakeholders to act now to build a coordinated system of data sharing to combat COVID-19, and to prepare for the inevitable next pandemic,” wrote researchers.
“Successful implementation of the measures outlined in this article will enable evidence-based approaches to coordinate testing and contact tracing, predict needed resources and prepare accordingly (so “non-essential” healthcare services will not need to be shut down unnecessarily), conduct basic, preventive or therapeutic research, and provide a trusted, factual basis for answering public health questions of critical importance for this pandemic and other health conditions,” concluded the research team.
Integrating social determinants of health (SDOH) data into the EHR can help providers and researchers gain insight on COVID-19.
The Gravity Project, a community-led HL7 Fast Healthcare Interoperability Resources (FHIR) Accelerator, published an implementation and recommendation guide for social determinants of health (SDOH) data and terminology, with a focus on food insecurity, housing instability and quality, and transportation access.
Research shows that identifying a patient’s SDOH data and integrating it into the EHR is crucial to addressing significant health issues. Studies suggest social and environmental factors account for as much as 80 to 90 percent of the modifiable contributors to health outcomes.
Once identified, SDOH data can create opportunities to offer social services and interventions for high-risk individuals.
Health systems across the country are attempting to integrate SDOH data into patient health records. Yet most health systems face obstacles, such as interoperability gaps, when trying to bring SDOH data into their respective EHRs, so there is still relatively little information about what healthcare can do with SDOH data.
With this publication, The Gravity Project developed data elements and standards to gather, exchange, and utilize SDOH data across screening, diagnosis, planning, and intervention platforms.
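The Gravity Project's actual profiles are defined in its FHIR Implementation Guide; as a rough illustration of the kind of structured data element involved, here is a minimal sketch (in Python, with illustrative codes, not the official Gravity profile) of a food-insecurity screening result shaped as an HL7 FHIR R4 Observation:

```python
# Minimal sketch of an SDOH screening result as a FHIR R4 Observation.
# This is NOT the official Gravity Project profile; the codes shown are
# illustrative placeholders chosen for the example.
def make_sdoh_observation(patient_ref, loinc_code, display, answer_text):
    """Build a FHIR R4 Observation dict for one SDOH screening item."""
    return {
        "resourceType": "Observation",
        "status": "final",
        # FHIR categorizes SDOH screening results as social history
        "category": [{
            "coding": [{
                "system": "http://terminology.hl7.org/CodeSystem/observation-category",
                "code": "social-history",
                "display": "Social History",
            }]
        }],
        # What was screened for (LOINC code for the screening question)
        "code": {
            "coding": [{"system": "http://loinc.org",
                        "code": loinc_code, "display": display}]
        },
        "subject": {"reference": patient_ref},
        # The screening answer, kept as plain text in this sketch
        "valueCodeableConcept": {"text": answer_text},
    }

obs = make_sdoh_observation(
    "Patient/example",        # hypothetical patient reference
    "88124-3",                # illustrative LOINC code for food insecurity
    "Food insecurity",
    "At risk",
)
```

Encoding the domain (food insecurity), the question, and the answer as coded fields rather than free text is what lets the data travel intact across the screening, diagnosis, planning, and intervention platforms the project targets.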
Founded by the University of California San Francisco (UCSF) Social Interventions Research and Evaluation Network (SIREN) in 2018, Gravity Project consists of over 1,000 healthcare stakeholders. These stakeholders include academic and federal food insecurity experts, community-based organizations, payers, patients, providers, and health IT vendors.
The spread of COVID-19 highlighted the importance of SDOH data collection and integration, making it an area of extreme focus for providers and laboratories.
COVID-19 data from OCHIN, an Oregon-based nonprofit health information network, reported Black patients were 2.5 times more likely than White patients to have a COVID-19 diagnosis observed in the EHR. Additionally, Hispanic patients were two times as likely as Caucasian patients to have a COVID-19 diagnosis listed.
The researchers also noted that patients experiencing homelessness or housing insecurity were almost two times more likely to test positive for COVID-19.
“The Gravity Project’s work to document and integrate social risk in clinical care has never been more urgent than now,” said Tom Giannulli, chief medical information officer of the American Medical Association (AMA).
“With COVID-19, doctors see the intersection of social determinants and health status daily. The AMA is proud to contribute our expertise and to sponsor Gravity’s critical work.”
Gravity Project aims to expand the way healthcare cares for all individual and community needs by capturing and exchanging SDOH data.
Regenstrief Institute, the ICD-10 Coordination and Maintenance Committee, and SNOMED International will help Gravity Project translate its consensus data recommendations on food insecurity into standardized codes for integration.
Gravity Project pointed to Regenstrief’s addition of standardized COVID-19 codes for laboratory testing and clinical observations to the Logical Observation Identifiers Names and Codes (LOINC) dataset as the gold standard of data integration.
Looking forward as a separate HL7 FHIR Accelerator project, Gravity Project is gathering the healthcare community’s consensus on data elements and developing a FHIR Implementation Guide for health IT professionals to use as a guide for 2021 implementations.
By the time the Office of the National Coordinator for Health Information Technology’s (ONC) and Centers for Medicare & Medicaid Services’ (CMS) interoperability rules take effect in January 2021, Gravity Project will have data ready for integration on food, housing, and transportation.
“Highmark remains focused on the health and vitality of the communities we serve,” said Deborah Donovan, executive committee member of The Gravity Project and vice president of Social Determinants of Health Strategy and Operations at Highmark.
“The Gravity Project’s development of data standards and exchange of SDOH data will be critical to our ability to understand the social needs of our members, patients and communities, and make decisions that best support our customers.”
From genetic sequencing to symptom tracking to vaccine development, machine learning algorithms have been instrumental in helping uncover hidden clues about the novel coronavirus, says Cris Ross.
In his opening keynote Tuesday at the HIMSS Machine Learning & AI for Healthcare Digital Summit, Mayo Clinic CIO Cris Ross enumerated some of the many ways artificial intelligence has been crucial to our evolving understanding of COVID-19.
Way back in March, for instance, researchers were already using an AI algorithm – trained on data from the 2003 SARS outbreak – for "a recurrent neural network to predict numbers of new infections over time," he said. "Even from the beginning of COVID-19, artificial intelligence is one of the tools that scientists have been using to try and respond to this urgent situation."
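Ross did not share the model's details, but the general shape of such a forecaster can be sketched as a toy recurrent cell: each day's case count updates a hidden state, which then produces the next-day prediction. The weights below are hand-set and purely illustrative; a real model would learn them from historical outbreak data such as the 2003 SARS records.

```python
import math

# Toy recurrent cell (hand-set weights, purely illustrative): each day's
# new-case count updates a hidden state; the final state yields a forecast.
def rnn_forecast(case_counts, w_in=0.5, w_rec=0.8, w_out=2.0):
    h = 0.0
    for x in case_counts:
        # Recurrent update: new state depends on today's input AND prior state
        h = math.tanh(w_in * x + w_rec * h)
    # Read out the predicted next value from the final hidden state
    return w_out * h

recent = [0.1, 0.2, 0.4, 0.7]  # normalized daily new infections
prediction = rnn_forecast(recent)
```

The recurrence is the point: because each state folds in all prior days, the model can pick up momentum in an accelerating outbreak rather than treating each day independently.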
And just this past month, Boston-based nference – whose clinical-analytics platform is used by Mayo Clinic – sifted through genetic data from 10,967 samples of novel coronavirus. Along the way, researchers discovered "a snippet of DNA code – a particular one that was distinct from predecessor coronaviruses," said Ross. "The effect of that sequence was it mimics a protein that helps the human body regulate salt and fluid balances.
"That wasn't something that they went looking for," he said. "They simply discovered it in a large dataset. It's since been replicated and used to support other research to discover how genetic mutations and other factors are present in COVID-19 that help, both with the diagnosis of the disease, but also its treatment."
Many other now commonly understood characteristics of the novel coronavirus – the loss of smell it can cause, its effects on blood coagulation – were discovered using AI.
Around the world, algorithms are being put to work to "find powerful things that help us diagnose, manage and treat this disease, to watch its spread, to understand where it's coming next, to understand the characteristics around the disease and to develop new therapies," said Ross. "It's certainly being used in things like vaccine development."
At the same time, there are already some signs that "we need to be careful around how AI is used," he said.
For example, the risk of algorithmic bias is very real.
"We know that Black and Hispanic patients are infected and die at higher rates than other populations. So we need to be vigilant for the possibility that that fact about the genetic or other predisposition that might be present in those populations could cause us to develop triage algorithms that might cause us to reduce resources available to Black or Hispanic patients because of one of the biases introduced by algorithm development."
The profusion of data since the pandemic began has allowed advanced models to be purpose-built at speed – and has also enabled surprise findings along the way.
Sure, "some of the models that are being built that are labeled AI are really just fancy regression models," said Ross. "But in a way, does it really matter? In any case, [they're] ways to use data in powerful ways to discover new things, ... drive new insights, and to bring advantages to all of us dealing with this disease."
It's notable too that the big datasets needed for AI and machine learning "simply didn't exist in the pre-electronic health record days," he added.
"Just imagine where we would have been if it was a decade ago and we were trying to battle COVID-19 with data that had to be abstracted from paper files, ... manila folders, and a medical records room someplace," said Ross.
"The investments we've made to digitize healthcare have paid off. We've learned that the downstream value of data that's contained in electronic health records systems is incredibly powerful."
The eHealth Exchange exchanged over 250 million more patient documents per year following its gateway technology integration.
In the early part of the 21st century, US citizens were in a radically different place regarding patient data privacy concerns and healthcare.
Although patient data security and privacy anxieties remain today, federal agencies and healthcare organizations faced a separate set of fears at the turn of the century, when health technology was first being integrated into care.
“There were definitely some fears,” Jay Nakashima, executive director of eHealth Exchange, said in an interview with EHRIntelligence. “First, there was the fear of healthcare data being broadly breached. Then there was the fear of some sort of an entity out of Washington, DC that maintained a central location housing all patient health information.”
It was out of those fears the Office of the National Coordinator for Health IT (ONC) and the Nationwide Health Information Network (NHIN) conceived of the eHealth Exchange in 2006 to securely exchange patient health data across the country.
Nakashima described the structure of the HIE as a “federated network.”
“A federated network means each healthcare organization needed to create and maintain a pipeline to every other healthcare organization within the eHealth Exchange with which it wanted to establish a patient data exchange,” he explained.
In 2009, the eHealth Exchange first exchanged data between the Veterans Health Administration (VHA) and Kaiser Permanente. Within two years, the network had added 23 participants, and by 2012 The Sequoia Project took the reins and fully supported the eHealth Exchange.
Now, the HIE connects to 75 percent of all US hospitals, over 60 regional or state HIEs, and four government agencies. It also connects 120 million patients across the country.
And most recently, eHealth Exchange can boast progress with its new gateway technology that simplifies connectivity for participants through a single streamlined connection.
“For example, Mayo Clinic has a patient that goes to Stanford in California because they are on vacation or working in the area,” Nakashima explained. “Our job is to create one to five or even 10 direct connections from Mayo Clinic to Stanford Health Care.”
If a health organization maintains fewer than 10 connections, the burden is manageable for the HIE. However, when hundreds of point-to-point connections accumulate between health systems, maintaining them becomes onerous for the HIE and its participants.
“The eHealth Exchange implemented a centralized technology, called gateway technology,” he continued. “It's a single on-ramp or a single connection to the country. Our providers and other healthcare organizations can create one connection to the eHealth Exchange. Then we route their transactions to providers all across the country so they do not have to have a high number of connections.”
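The arithmetic behind that design choice is straightforward. A back-of-the-envelope sketch (participant counts purely illustrative) of how connection counts grow under each model:

```python
# Point-to-point: every pair of organizations that wants to exchange data
# needs its own maintained connection, so links grow quadratically.
def point_to_point_connections(n_orgs):
    return n_orgs * (n_orgs - 1) // 2  # number of distinct pairs

# Gateway ("single on-ramp"): each organization maintains exactly one
# connection to the hub, which routes transactions onward.
def gateway_connections(n_orgs):
    return n_orgs

# With 60 fully meshed participants, point-to-point needs 1,770 links;
# a gateway needs 60.
p2p = point_to_point_connections(60)
hub = gateway_connections(60)
```

This is why the gateway frees up the IT resources Nakashima mentions: each new participant adds one connection to maintain instead of one per exchange partner.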
Using this new structure, eHealth Exchange exchanged roughly 550 million clinical documents per year, up almost 300 million transactions from the old format about a year and a half earlier.
“Data is flowing much more frequently and our customers aren't having to spend as much money on creating and maintaining all of those point-to-point connections,” Nakashima said. “This means they are able to free up significant health IT resources to work on more valuable tasks.”
Furthermore, the new structure helps health organizations expand their national footprint and implement innovative capabilities, such as real-time content quality validation and a national record locator service.
The new approach also helps organizations prepare for regulatory changes, including the ONC interoperability rule and the Trust Exchange Framework and Common Agreement (TEFCA).
Looking forward to 2021, Nakashima said he expects to see more “data pushing,” rather than “data pulling” from health organizations.
This is a more proactive approach to health information exchange, Nakashima explained.
When a patient arrives for her afternoon appointment, her data will already be available at that time and place, rather than the provider having to pull it once she arrives.
Pulling patient data at the last moment could result in mismatched patient data and potential patient safety issues.
“The vast majority of our participants query every night,” Nakashima said. “An organization will say they have over 100 surgeries and 400 appointments the next day and the system will automatically query the night before, or a couple of hours before an appointment or a surgery, to pull that information and have it available in the EHR system for the clinician.”
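The pattern Nakashima describes amounts to a scheduled pre-fetch: walk tomorrow's schedule overnight and queue a network query for each patient so records are cached in the EHR before arrival. A minimal sketch, where `query_network` is a hypothetical stand-in for a real HIE query API:

```python
from datetime import datetime, timedelta

# Sketch of the nightly pre-fetch pattern. `appointments` is a list of
# dicts with "patient_id" and a datetime "time"; `query_network` is a
# hypothetical callable standing in for a real HIE query API.
def prefetch_records(appointments, query_network, now=None):
    now = now or datetime.now()
    tomorrow = (now + timedelta(days=1)).date()
    fetched = []
    for appt in appointments:
        # Only pull records for patients on tomorrow's schedule
        if appt["time"].date() == tomorrow:
            fetched.append(query_network(appt["patient_id"]))
    return fetched  # results would be cached in the EHR before arrival
```

Running this the night before, rather than querying at check-in, is what closes the gap between "data pulling" and the proactive "data pushing" Nakashima expects to see more of.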
Furthermore, eHealth Exchange participants can also set up push notifications to public health agencies across their respective state and county, and even across the country.
“Most participants have their EHR configured to automatically report when, for example, a patient tests positive for Hepatitis B, to automatically push a report to the county and state public health agencies, and then potentially the [Centers for Disease Control and Prevention] CDC.”
Nakashima expects patient data exchange to continue to develop and improve in 2021 and beyond.