With doctors’ schedules already full and clinician burden an ongoing concern, an EHR “nudge” that prompts medical assistants to set up and order cancer screenings, which doctors then sign once they see the patient, may be an effective intervention.
Clinician burden is high, reports confirm, and when compounded by challenges like limited patient engagement and cumbersome EHR use, key elements of care can slip through the cracks. Preventive screenings, including cancer screenings, may not happen as regularly as they should.
But new data, published in JAMA Network Open, suggest an EHR nudge could address many of those problems. According to study authors Mitesh Patel, MD, and Esther Hsiang, MD, pushing EHR nudges out to medical assistants – not physicians themselves – could help close gaps in care that pervade the patient experience and drive up cancer screening rates.
The EHR was a natural place to start, the researchers said. More than 90 percent of clinicians and health systems now use EHRs, making them an accessible platform for this kind of intervention.
“This was a project that was initiated actually by the primary care practices,” said Patel. “We had done a pilot a few years ago showing that a slightly different version of the nudge could potentially work. So, we worked with them to improve the design and then this was rolled out at three other practices and compared to the control groups here.”
Rather than targeting physicians, the researchers directed the nudges to medical assistants, a design choice meant to account for physician burnout and the EHR complexity that often bogs physicians down.
“Providers, especially primary care providers in the outpatient primary care setting, are expected to do so many different things in terms of addressing patient problems and remembering health maintenance screening, including cancer screenings, often in increasingly shorter and shorter visits,” Hsiang said in an interview with EHRIntelligence.com.
Pushing the nudges out to medical assistants was a key strategy for addressing that burden, she continued.
“Just to try to relieve some of that burden, one way to think about it is to what degree can the use of emerging technologies or increased implementation of technologies help to flag some of those things more automatically to help address the issues of health maintenance themselves,” explained Hsiang.
And, ultimately, this approach had positive results.
The researchers, who hailed from the Perelman School of Medicine at the University of Pennsylvania, found a 22 percent increase in breast cancer screening orders and a 14 percent increase in colorectal cancer screening orders. Overall, 88 percent of the patients eligible for breast cancer screening and 82 percent of those eligible for colorectal cancer screening had a screening ordered as a result of the nudges.
But there is room for improvement – and further research – going forward. For one, there are questions about whether these nudges could be sent to other types of providers.
Patel said the standard approach is to deliver such alerts to the physicians themselves. The novelty of this study was to target medical assistants instead, saving physicians’ time.
“So instead of physicians responding to alerts, physicians could have conversations with patients about cancer screening,” explained Patel. “It was less time dealing with alerts and more time talking to patients. But, like I said, the standard approach is to alert doctors.”
There’s also the question of patients actually receiving preventive screenings.
Although the percentage of cancer screenings ordered increased, there were minimal changes in the rates of patients who followed through within one year and completed their screenings.
Both authors believe patient-centered nudges should be next, suggesting a path forward for future research. Although the nudges drove a major increase in the percentage of doctors ordering the tests, there was little change on the patient side, Patel noted. Delivering nudges directly to patients, potentially through a smartphone or tablet, could help address the patient engagement barriers keeping preventive care access low, Hsiang added.
“In this study we found that physicians were ordering these tests appropriately, more so after the nudges were implemented, but the patient completion rates did not increase,” concluded Hsiang. “And I think we have several different hypotheses for what's driving that, but better analyzing it, understanding the different factors that are causing the patients not to get the cancer screenings done is particularly important.”
Machine learning models using radiomics can help radiologists classify renal cell carcinomas (RCCs), according to new findings published in the American Journal of Roentgenology.
“CT is gradually evolving into a useful imaging tool in renal mass differential diagnosis,” wrote Xue-Ying Sun, First Affiliated Hospital with Nanjing Medical University in China, and colleagues. “Several studies have reported that the use of enhancement threshold levels could help to distinguish RCC subtypes and discriminate RCC from benign oncocytoma with 77–84% accuracy. However, the differential diagnosis of renal mass-forming lesions is still difficult, and a variety of imaging findings have been described with different performance results reported.”
To see how machine learning can help improve such classification, the authors examined contrast-enhanced CT (CECT) scans showing 254 RCCs. A team of radiologists manually segmented the lesions so that a full radiologic-radiomic analysis could be performed. A machine learning model was then trained to classify renal masses using 10-fold cross-validation, and the model’s performance was compared to that of four veteran radiologists.
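The paper does not publish its code, and the actual pipeline used clinical radiomic features rather than anything shown here. Purely as an illustrative sketch of what a 10-fold cross-validation evaluation looks like, the following pure-Python toy uses made-up "radiomic" feature vectors and a minimal nearest-centroid classifier standing in for the study's ML model (all names, data, and the classifier itself are hypothetical, not from the study):

```python
import random
import statistics

# Hypothetical stand-in for radiomic feature vectors: each "lesion" is
# (features, label), where label 1 = ccRCC and 0 = another subtype.
random.seed(0)

def make_lesion(label):
    # Two made-up features with class-dependent means (illustrative only).
    base = 1.0 if label else 0.0
    return ([random.gauss(base, 0.5), random.gauss(-base, 0.5)], label)

data = [make_lesion(i % 2) for i in range(100)]

def nearest_centroid_predict(train, x):
    """Classify x by its closer class centroid (a toy classifier)."""
    centroids = {}
    for label in (0, 1):
        feats = [f for f, lbl in train if lbl == label]
        centroids[label] = [sum(col) / len(feats) for col in zip(*feats)]
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

def ten_fold_accuracy(data, k=10):
    """Shuffle once, split into k folds, train on k-1 folds, test on the
    held-out fold, and average the per-fold accuracies."""
    rows = data[:]
    random.shuffle(rows)
    folds = [rows[i::k] for i in range(k)]
    accuracies = []
    for i in range(k):
        test = folds[i]
        train = [r for j, fold in enumerate(folds) if j != i for r in fold]
        correct = sum(1 for x, y in test
                      if nearest_centroid_predict(train, x) == y)
        accuracies.append(correct / len(test))
    return statistics.mean(accuracies)

print(f"mean 10-fold accuracy: {ten_fold_accuracy(data):.2f}")
```

The point of the k-fold scheme is that every lesion is scored exactly once while the model never sees its own test cases during training, which is why it is a common evaluation choice for modest sample sizes like the 254 lesions here.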
Overall, when differentiating clear cell RCCs (ccRCCs) from papillary RCCs (pRCCs) and chromophobe RCCs (chrRCCs), the four radiologists achieved a sensitivity that ranged from 73.7% to 96.8% and specificity that ranged from 48.4% to 71.9%. The team’s ML model had a sensitivity of 90% and specificity of 89.1% for that same scenario.
When differentiating ccRCCs from fat-poor angioleiomyolipomas and oncocytomas, the radiologists achieved a sensitivity that ranged from 73.7% to 96.8% and specificity that ranged from 52.8% to 88.9%. The ML model had a sensitivity of 86.3% and specificity of 83.3%.
Finally, when differentiating pRCCs and chrRCCs from fat-poor angioleiomyolipomas and oncocytomas, the radiologists achieved a sensitivity that ranged from 28.1% to 60.9% and a specificity that ranged from 75% to 88.9%. The ML model had a sensitivity of 73.4% and specificity of 91.7%.
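The sensitivity and specificity figures reported above come from standard confusion-matrix arithmetic: sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP). A minimal sketch, using hypothetical labels rather than the study's data:

```python
def sensitivity_specificity(y_true, y_pred, positive=1):
    """Compute sensitivity (true-positive rate) and specificity
    (true-negative rate) from paired binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec

# Hypothetical ground truth and predictions: 1 = ccRCC, 0 = pRCC/chrRCC.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
preds = [1, 1, 1, 0, 0, 0, 1, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, preds)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # prints sensitivity=0.80, specificity=0.80
```

Reporting both numbers matters here because, as the third scenario shows, a model (or reader) can trade one for the other: the radiologists' low sensitivity (28.1% to 60.9%) paired with decent specificity means they missed many pRCCs/chrRCCs while rarely mislabeling the benign masses.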
“Our results show that routine expert-level radiologist interpretation of CT images had obviously large variances with relatively low accuracy in differentiation of benign and malignant solid renal masses, whereas the radiologic-radiomic ML approach provides an assessment of their ability to aid standardization of CECT interpretation,” the authors concluded. “Our radiologic-radiomic ML model, comprising quantitative radiomic features and a priori radiologic hallmarks that are different from a DL black-box algorithm, was able to significantly reduce the misclassification of renal mass lesions. Considering the interpretability of our radiologic-radiomic ML model, we believe that the radiologic-radiomic ML approach could be a potential adjunct to expert-level radiologist interpretation of CT images for improving interreader concordance and diagnostic performance in routine clinical assessment of renal masses.”