
Blog from December, 2018

Ashley Lyles


Simple intervention cut monitoring time by 17% in randomized trial


Electronic health record (EHR) alerts triggered when a telemetry order exceeded the recommended duration safely reduced cardiac monitoring in a cluster-randomized clinical trial.

The EHR notification cut telemetry monitoring by 8.7 hours per hospitalization compared with no notification (P=0.001), with no significant difference between groups in emergency calls (6.0% vs 5.6%, P=0.90) or urgent medical events, reported Nader Najafi, MD, of the University of California San Francisco, and colleagues in JAMA Internal Medicine.

The effect on telemetry duration was "notably smaller" than seen in other multicomponent quality improvement interventions, Najafi's group wrote.

However, it "was achieved without a concomitant educational or audit and feedback campaign, without human resources dedicated to monitoring telemetry use, and without an increase in adverse events as measured by rapid-response or medical emergency activation," they noted, so it would be "less costly and more scalable."

The study assessed 1,021 patients. The intervention group had a mean age of 64.5 years and was 45% women; the control group had a mean age of 63.8 years and was 46% women.

The 12 general medicine service teams, four hospitalist teams and eight house-staff teams, were cluster-randomized at the team level to receive or not receive pop-up alerts on their screens during daytime order entry whenever a patient outside the ICU had an active telemetry order that did not meet the American Heart Association's indication-specific best-practice standards (with a few local modifications).
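In pseudocode terms, the trigger is simple: compare the age of each active telemetry order against an indication-specific recommended duration. The sketch below illustrates the idea; the indication names and hour limits are hypothetical placeholders, not the actual AHA values or UCSF's implementation.

```python
from datetime import datetime, timedelta

# Hypothetical indication-specific recommended durations (hours);
# placeholders, not the actual AHA best-practice values.
RECOMMENDED_HOURS = {
    "chest_pain_ruled_out": 24,
    "post_pci": 48,
    "arrhythmia_monitoring": 72,
}

def should_alert(indication: str, order_start: datetime, now: datetime,
                 in_icu: bool, daytime: bool) -> bool:
    """Fire a pop-up only for non-ICU patients, during daytime order entry,
    when the active telemetry order has outlived its recommended duration."""
    if in_icu or not daytime:
        return False
    limit = RECOMMENDED_HOURS.get(indication)
    if limit is None:
        return False  # unknown indication: stay silent
    return now - order_start > timedelta(hours=limit)

# Example: a 30-hour-old order for a 24-hour indication triggers the alert.
start = datetime(2018, 12, 1, 8, 0)
print(should_alert("chest_pain_ruled_out", start, start + timedelta(hours=30),
                   in_icu=False, daytime=True))  # True
```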

When physicians received a telemetry notification, they discontinued telemetry monitoring 62% of the time, disregarded the notification 7% of the time, reordered telemetry 21% of the time, and responded to the alert but continued the current course 11% of the time, the investigators found.

Mean telemetry duration per hospitalization was 41.3 hours with the intervention versus 50.0 hours among controls, a reduction of 17%.
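That headline figure follows directly from the two means:

$$\frac{50.0 - 41.3}{50.0} = \frac{8.7}{50.0} \approx 0.17$$

which also matches the 8.7-hour absolute reduction quoted above.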

The investigators acknowledged the limitations of their work: the results might not generalize to other settings, as the study was conducted at a single medical center, and the recommended telemetry durations were based partly on local expert opinion, making them more lenient than national practice guidelines.

"Finally, the preintervention mean telemetry hours at the UCSF Medical Center general medicine service was already lower than the baseline in prior studies,which may have limited the effect size of this intervention," the researchers wrote.


Samara Rosenfeld


A recurrent neural network (RNN) provided significantly better accuracy than the clinical reference tools in predicting severe complications during critical care after cardiothoracic surgery, a new study found.
 
Alexander Meyer, MD, of the department of cardiothoracic and vascular surgery at the German Heart Center Berlin, and his team used deep learning methods to predict several severe complications (mortality, renal failure with a need for renal replacement therapy, and postoperative bleeding leading to operative revision) in post-cardiosurgical care in real time.
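The paper's exact architecture isn't reproduced here, but a minimal sketch of the general approach is a recurrent network that consumes a per-patient time series of measurements and emits a risk score at each step; the feature count, hidden size, and GRU choice below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ComplicationRNN(nn.Module):
    """Toy recurrent classifier: per-time-step risk of one complication
    (e.g., postoperative bleeding) from a stream of EHR measurements."""
    def __init__(self, n_features: int = 20, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features) -> (batch, time) risk scores in (0, 1)
        out, _ = self.rnn(x)
        return torch.sigmoid(self.head(out)).squeeze(-1)

# Example: risk trajectories for 4 patients over 48 hourly steps.
model = ComplicationRNN()
vitals = torch.randn(4, 48, 20)  # placeholder measurements
risk = model(vitals)             # shape (4, 48)
print(risk[:, -1])               # most recent risk per patient
```

Because the recurrence carries state forward, the same model can be queried at each new measurement, which is what makes real-time prediction on a live data stream feasible.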

“For all tasks, the RNN approach provided significantly better accuracy levels than the respective clinical reference tool,” the researchers wrote.

Mortality was predicted most accurately, with a 90 percent positive predictive value (PPV) and 85 percent sensitivity. Renal failure prediction had an 87 percent PPV and 94 percent sensitivity.
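For readers less familiar with these metrics: PPV is the share of flagged patients who actually develop the complication, while sensitivity is the share of actual cases the model flags. A quick sketch, with made-up confusion-matrix counts chosen to roughly mirror the mortality figures:

```python
def ppv(tp: int, fp: int) -> float:
    """Positive predictive value: flagged cases that were real."""
    return tp / (tp + fp)

def sensitivity(tp: int, fn: int) -> float:
    """Sensitivity (recall): real cases that were flagged."""
    return tp / (tp + fn)

# Hypothetical counts, not from the study:
tp, fp, fn = 90, 10, 16
print(round(ppv(tp, fp), 2))          # 0.9
print(round(sensitivity(tp, fn), 2))  # 0.85
```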

The deep learning model also achieved area under the curve (AUC) scores that surpassed the clinical reference tools, especially soon after admission.
 
Postoperative bleeding was the most difficult of the three complications to predict, with a PPV of 87 percent and a sensitivity of 74 percent.
 
The team studied electronic health record (EHR) data from 11,492 adults who had undergone major open-heart surgery from January 2000 through December 2016 at a German tertiary care center for cardiovascular diseases.
 
Patients' data were examined for the 24 hours following the initial assessment, and patients were labeled accordingly if any complication occurred.
 
Researchers measured the accuracy and timeliness of the deep learning model’s forecasts and compared predictive quality to established standard-of-care clinical reference tools.
 

Meyer told Healthcare Analytics News™ that one of the major findings was that the system outperformed all three preexisting benchmarks. He added that the approach can work on a real-time, uncurated clinical data stream.

With this information, physicians in emergency care units can intervene immediately if a patient is experiencing complications.
 
“Health systems should openly embrace this technology and ideally try to make use of it,” Meyer said.
 
At the very least, health systems can push for the regulatory groundwork and development needed to put this technology to use.
 
In a clinical setting, technology like this is difficult to implement and generally demands a financial incentive.
 
Hospitals can work with researchers and companies to push this technology forward and seek support from policymakers to help provide the financial means to attain these tools.