Alert fatigue has been one of the most challenging side effects of digitization in the healthcare sector, so much so that it has consistently been named one of the top health technology hazards in recent years.
Alert fatigue refers to the desensitization of clinicians and nurses to the audio-visual warnings issued by smart patient monitoring devices and computerized provider order entry (CPOE) systems. It is caused by the sheer number and irrelevance of alerts delivered to healthcare workers, sometimes exceeding 60,000 alerts in a single day.
Alert fatigue not only erodes healthcare workers' trust in decision support systems, but it also compromises the quality of care providers deliver to their patients. In fact, alert fatigue is estimated to cause 200 deaths and 500 adverse events annually.
When urgent and irrelevant notifications arrive in the same stream, clinicians must actively scan them for relevance. They take no action on roughly 90% of alarms, and some even try to dismiss them as quickly as possible.
Clinical decision support systems (CDSS), CPOEs, and monitoring devices are deployed to improve care delivery outcomes by notifying healthcare workers of significant events and allergic responses, and by preventing prescription errors. However, when the alerts they deliver do not conform to the ‘principles of alert appropriateness’, they generate more noise than signal.
This issue stems partly from digital system design and partly from the difficulty of implementing these systems effectively. Healthcare processes involve many workers, from clinicians across multiple sub-specialties to support staff. Poor implementations and a lack of effective controls lead to incidents where a topical steroid warning for a patient is delivered to a surgeon in the operating room. The expectation that such alerts should simply be ignored then becomes part of the problem.
Alert fatigue is, at its core, a classification problem. If the “critical” alerts delivered by CDS systems can be separated into those that are genuinely critical and those that are not, the noise can be removed from the flood of notifications. Non-critical notifications can then be grouped together and routed to the right clinicians, reducing the total number each clinician receives in a typical workday.
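The triage-and-batch idea above can be sketched in a few lines. This is a minimal illustration, not a production design: the alert fields, severity labels, and recipient names are all hypothetical, and a real system would derive criticality from a learned model rather than a single flag.

```python
from collections import defaultdict

# Hypothetical alert records; field names and values are illustrative assumptions.
alerts = [
    {"id": 1, "type": "drug-drug interaction", "severity": "high", "recipient": "dr_smith"},
    {"id": 2, "type": "duplicate order", "severity": "low", "recipient": "dr_smith"},
    {"id": 3, "type": "topical steroid warning", "severity": "low", "recipient": "dr_jones"},
    {"id": 4, "type": "allergy contraindication", "severity": "high", "recipient": "dr_jones"},
    {"id": 5, "type": "duplicate order", "severity": "low", "recipient": "dr_smith"},
]

def triage(alert_stream):
    """Split alerts into critical ones (delivered immediately) and
    non-critical ones (batched per recipient to reduce interruptions)."""
    critical = [a for a in alert_stream if a["severity"] == "high"]
    batched = defaultdict(list)
    for a in alert_stream:
        if a["severity"] != "high":
            batched[a["recipient"]].append(a)
    return critical, dict(batched)

critical, batched = triage(alerts)
```

Here the two high-severity alerts would pass through immediately, while the three low-severity ones are grouped by recipient for periodic delivery.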
So far, few attempts have been made to apply AI to these strategies. Closed-loop systems have been implemented, in which clinicians actively provide feedback on each alert so that irrelevant ones do not fire again. However, such systems merely shift the burden from one type of action to another.
The real breakthrough would be a self-learning system that assesses the criticality of an alert from its attributes and from how clinicians respond to it. For instance, low-priority alerts typically include topical steroid application warnings, duplicate order notifications, and low-value contraindication alarms. Such notifications can be filtered, grouped, and delivered to clinicians with visual markers indicating their evaluated criticality. Building this entails analyzing the medication alerts that fire at a healthcare organization and identifying the top alerts by volume. These are then contextualized with patient data drawn from Electronic Health Record (EHR) systems to understand the impact of alert changes and the root cause that triggered each alert. The findings can then be fed back into the alerting system to optimize alert volume, and integrated with Best Practice Advisories (BPAs) to further tune alert frequency.
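The first analysis step described above, identifying the top alerts by volume, reduces to a frequency count over the alert log. A minimal sketch, assuming an illustrative log of alert-type strings (the alert names and counts here are invented for the example):

```python
from collections import Counter

# Hypothetical alert log drawn from a CDSS audit trail (illustrative data).
alert_log = [
    "duplicate order", "topical steroid warning", "duplicate order",
    "drug-drug interaction", "duplicate order", "topical steroid warning",
]

def top_alerts_by_volume(log, n=2):
    """Rank alert types by how often they fire; the highest-volume
    types are the first candidates for contextualization and tuning."""
    return Counter(log).most_common(n)

top = top_alerts_by_volume(alert_log)
# [('duplicate order', 3), ('topical steroid warning', 2)]
```

In practice the ranked types would then be joined against EHR data to evaluate root causes before any suppression rule is changed.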
In one study, prediction models based on artificial neural networks (ANNs) performed best among the algorithms compared. The implementation sought to predict clinician responses to disease medication-related CDSS notifications; the ANN achieved an accuracy of 0.885, with 87% sensitivity and 83% specificity. While these efforts are recent, the rapid adoption of cloud and modern software architectures in healthcare points to a significant opportunity for improving care quality.
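For readers less familiar with the metrics cited, sensitivity and specificity are simple ratios over the confusion matrix. The sketch below computes them; the raw counts are illustrative numbers chosen only so the results match the rates reported above, not data from the study itself.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): the share of true positives caught.
    Specificity = TN / (TN + FP): the share of true negatives kept."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts (not from the cited study) that reproduce 87% / 83%.
sens, spec = sensitivity_specificity(tp=87, fn=13, tn=83, fp=17)
# sens == 0.87, spec == 0.83
```

For an alert-suppression model, sensitivity measures how many genuinely critical alerts are still surfaced, while specificity measures how much irrelevant noise is correctly suppressed, so both matter when tuning the threshold.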
Two key issues must be considered in the design of such systems: data privacy and the explainability of alert suppression and classification algorithms. In the EU and the USA, current regulations already require anonymization techniques and impose explainability requirements on healthcare analytics solutions. The latter can prove challenging for AI techniques like neural networks, where the reasoning behind a black-box model's decisions must be explained.
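One common way to meet explainability requirements is to favor inherently interpretable models, such as a linear scoring model whose per-feature contributions can be reported alongside each decision. The sketch below illustrates the idea; the feature names, weights, and threshold are all assumptions invented for this example, not a validated clinical model.

```python
import math

# Illustrative linear alert-suppression scorer; weights are assumptions.
WEIGHTS = {"severity_score": 1.8, "duplicate_flag": -2.1, "override_rate": -1.5}
BIAS = 0.2

def predict_with_explanation(features):
    """Return the probability that an alert should be shown, together with
    each feature's additive contribution to the logit. Unlike a black-box
    network, every decision decomposes into auditable per-feature terms."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    prob = 1.0 / (1.0 + math.exp(-logit))
    return prob, contributions

prob, why = predict_with_explanation(
    {"severity_score": 0.2, "duplicate_flag": 1.0, "override_rate": 0.9}
)
```

Here a frequently overridden duplicate-order alert gets a low display probability, and `why` records that the duplicate flag and override rate drove the decision, which is the kind of audit trail regulators ask for.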
Alert fatigue is not only a critical technology hazard that degrades the quality of care; it also wastes clinicians' valuable time while eroding their trust in digital systems. The time is ripe to eliminate this long-standing issue in healthcare technology with AI, but providers must pay close attention to data privacy and explainability before deploying such systems in the real world.