Explainable AI in the analysis of medical time series
Camelia Oprea
Chair of Embedded Software (Computer Science 11) - RWTH Aachen University
In intensive care units (ICUs), the comprehensive monitoring of patients generates substantial volumes of multivariate, high-resolution time series data. Machine learning, particularly deep learning, has significantly advanced the analysis of these datasets for various clinical applications. However, the "black box" nature of many of the algorithms employed has sparked ongoing debate about their utility in critical healthcare settings. Central concerns include the trustworthiness of decisions made by machine learning algorithms and the question of liability when hazardous decisions occur. In response to these challenges, a new field is emerging: explainable AI. It encompasses methods designed to provide human-understandable rationales for the decision-making processes of algorithms. How effective such methods are in assisting end users and in addressing liability concerns remains an open research question. Moreover, relying on "black box" systems in patient treatment is not unprecedented: numerous approved medications operate through mechanisms that are not fully elucidated, and they often entail side effects.

This talk will introduce the concept of explainable AI, showcasing concrete examples related to the detection of complications in intensive care time series. It will also explore various dimensions of deploying machine learning technologies in medical contexts.
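To make the idea of a human-understandable rationale concrete, the sketch below illustrates one simple family of post-hoc explanation methods: occlusion (perturbation) analysis, which works with any black-box predictor by masking parts of the input and measuring how the output changes. This is a generic, minimal illustration, not the specific method presented in the talk; the model, signal, and window size are hypothetical stand-ins.

```python
# A minimal sketch of occlusion-based attribution for a time-series
# classifier. All names and values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def predict_risk(x: np.ndarray) -> float:
    """Toy 'black box': maps a multivariate signal to a risk score.

    Stands in for a trained deep model; here just a fixed linear
    read-out over the flattened (time, channel) input plus a sigmoid.
    """
    w = np.linspace(-1.0, 1.0, x.size)  # fixed, arbitrary weights
    return 1.0 / (1.0 + np.exp(-w @ x.ravel()))

def occlusion_attribution(x: np.ndarray, window: int = 10) -> np.ndarray:
    """Score each time window by how much masking it shifts the output.

    Large absolute scores mark time spans the model relies on; this
    requires no access to the model's internals.
    """
    baseline = predict_risk(x)
    scores = np.zeros(x.shape[0])
    for start in range(0, x.shape[0], window):
        occluded = x.copy()
        occluded[start:start + window, :] = 0.0  # mask one time window
        scores[start:start + window] = baseline - predict_risk(occluded)
    return scores

# 120 time steps of 3 monitored channels (e.g. heart rate, SpO2, blood pressure)
signal = rng.standard_normal((120, 3))
attribution = occlusion_attribution(signal)
print("most influential time step:", int(np.abs(attribution).argmax()))
```

The resulting per-time-step scores can be plotted alongside the original vital signs, giving clinicians a visual indication of which segments of the recording drove the model's prediction.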