
Medical imaging is a critical component of clinical decision-making, patient diagnosis, treatment planning, intervention, and therapy. However, a shortage of qualified radiologists places an increasing burden on healthcare practitioners, underscoring the need for reliable automated methods for interpreting medical images that reduce the time spent on routine cases and support radiologists on more complex ones. Despite the development of novel computational techniques, automatic interpretation of medical images remains challenging because the patterns of interest are subtle and nuanced and because images are affected by noise and varying acquisition conditions. One promising way to improve the reliability and accuracy of automated medical image analysis is interactive machine learning (IML), which integrates human expertise into the model training process. However, IML methods often lack compelling explanations that help users understand how a model processes an image.

To overcome this limitation, this study introduces a novel approach that leverages active learning (AL) to iteratively query high-uncertainty samples while using explanations from a prototypical part network to improve model classification. The proposed approach relies on prototypical parts, learned snapshots of image regions, and determines the class of an unlabeled image from which of these parts are present in it. Interaction occurs both during prototype selection and during the AL phase, where a set of decision rules weighs the contributions of prototypical parts to identify which combinations are most representative of the unlabeled images queried by the AL component. The resulting explainable interactive machine learning (XIL) framework empowers medical experts to steer the model's training process, enabling more efficient and personalized learning through explanation and interaction.
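The mechanics described above, presence scores from prototypical parts feeding a rule-weighted classifier, with predictive uncertainty used to select the next images for expert review, can be sketched in a few lines. The sketch below is illustrative only, not the proposed implementation: all names (prototype_scores, class_weights, query_most_uncertain, etc.) are hypothetical, and the cosine-similarity and entropy choices are assumptions standing in for the actual network and decision rules.

```python
# Minimal sketch: prototype-presence scoring + uncertainty-based AL querying.
# Assumes images are already embedded as patch feature maps and prototypical
# parts as feature vectors; all shapes and names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def prototype_scores(feature_map: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Presence score of each prototypical part in one image.

    feature_map: (H*W, D) patch embeddings of the image.
    prototypes:  (P, D) prototypical-part vectors.
    Returns a (P,) vector: each prototype's best cosine match over all patches.
    """
    fm = feature_map / np.linalg.norm(feature_map, axis=1, keepdims=True)
    pr = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = fm @ pr.T                  # (H*W, P) patch-prototype similarities
    return sim.max(axis=0)           # strongest evidence per prototype

def class_probabilities(scores: np.ndarray, class_weights: np.ndarray) -> np.ndarray:
    """Combine per-prototype presence scores into class probabilities.

    class_weights: (C, P) contribution of each prototype to each class; this
    is the layer an expert's decision rules would edit.
    """
    logits = class_weights @ scores
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def query_most_uncertain(unlabeled_maps, prototypes, class_weights, n_queries=5):
    """Uncertainty sampling: rank unlabeled images by predictive entropy."""
    entropies = []
    for fm in unlabeled_maps:
        p = class_probabilities(prototype_scores(fm, prototypes), class_weights)
        entropies.append(-np.sum(p * np.log(p + 1e-12)))
    # Highest-entropy images are shown to the expert together with their
    # most-activated prototypical parts as the explanation.
    return np.argsort(entropies)[-n_queries:]

# Toy usage: 20 unlabeled images, 8x8 feature maps of dim 64, 10 prototypes, 3 classes.
unlabeled = [rng.normal(size=(64, 64)) for _ in range(20)]
prototypes = rng.normal(size=(10, 64))
weights = rng.normal(size=(3, 10))
print(query_most_uncertain(unlabeled, prototypes, weights))
```

In this sketch, the two interaction points of the framework correspond to editing the prototype set (rows of `prototypes`) and adjusting how prototype combinations vote for a class (rows of `class_weights`), while the entropy ranking plays the role of the AL query step.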