Introduction to ProtoRadNet and AI in radiology
Artificial intelligence is rapidly transforming modern healthcare, especially in radiology, where imaging forms the backbone of diagnosis. A significant advancement in this area is ProtoRadNet, an explainable AI model developed by Prateek Sarangi, Dr Riya Agarwal and Dr Tanmay Basu at IISER Bhopal and published in the journal Artificial Intelligence in Medicine in December 2025. The model is designed to interpret radiology images such as MRI, CT scans and chest X-rays with high accuracy and transparency, analysing scans in a way that closely resembles how trained radiologists read medical images. By highlighting suspicious regions and explaining the reasoning behind its predictions, the work marks a significant step toward trustworthy and clinically usable AI in radiology.
Rising imaging burden and radiologist shortage
Radiology plays a crucial role in detecting brain tumours, strokes, lung diseases and several neurological conditions. MRI, CT scans and X-rays are indispensable tools in modern medicine. However, the availability of trained radiologists has not kept pace with the rapidly increasing demand for imaging studies. In India, the shortage of radiologists remains a major concern. Estimates suggest that there is approximately one radiologist for every one lakh population, and most of these specialists are concentrated in metropolitan areas. This imbalance often results in delays in reporting and increased workload on available radiologists, particularly affecting smaller cities and rural regions.
Development of ProtoRadNet at IISER Bhopal
ProtoRadNet was developed by researchers at IISER Bhopal to address both the shortage of radiology specialists and the lack of transparency in conventional AI systems. Traditional deep learning models often function as black boxes, producing results without explaining the reasoning behind them. This lack of interpretability limits their acceptance in clinical settings where accountability and understanding of diagnostic reasoning are essential. ProtoRadNet was designed as an explainable AI system that highlights suspicious regions in medical scans and explains why those regions appear abnormal, closely mimicking the reasoning process of a radiologist.
How ProtoRadNet works like a radiologist
The model uses a prototype-based learning approach. Instead of providing only a final diagnosis, ProtoRadNet identifies meaningful visual patterns within an image and compares them with previously learned disease prototypes. When analysing a new scan, the system highlights suspicious areas and presents similar patterns from previously diagnosed cases that influenced its conclusion. This approach resembles how radiologists interpret scans by comparing them with past cases and identifying characteristic abnormalities. By showing visual evidence and reasoning, the model allows clinicians to verify its findings and use it as a decision support tool rather than a replacement for human expertise.
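The paper's exact architecture is not reproduced here, but the core prototype-matching idea described above can be sketched in a few lines. In this simplified illustration (the function name `prototype_scores` and the use of plain NumPy arrays in place of a trained CNN backbone are assumptions for demonstration), each spatial patch of a feature map is compared against learned prototype vectors, and the best-matching patch per prototype gives both an evidence score and a location that can be highlighted on the scan:

```python
import numpy as np

def prototype_scores(feature_map, prototypes):
    """Score each learned prototype against all patches of a feature map.

    feature_map: (H, W, D) array of patch embeddings (e.g. from a CNN).
    prototypes:  (P, D) array of learned disease-pattern prototypes.
    Returns (scores, locations): for each prototype, the highest cosine
    similarity found anywhere in the image, and the (row, col) of that
    best-matching patch -- the region the model would highlight.
    """
    H, W, D = feature_map.shape
    patches = feature_map.reshape(-1, D)
    # Normalise so the dot product becomes cosine similarity.
    p_norm = patches / np.linalg.norm(patches, axis=1, keepdims=True)
    q_norm = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = p_norm @ q_norm.T              # (H*W, P) similarity matrix
    scores = sims.max(axis=0)             # evidence strength per prototype
    best = sims.argmax(axis=0)            # flat index of best patch
    locations = [divmod(int(i), W) for i in best]
    return scores, locations
```

A classifier head would then combine these per-prototype scores into a diagnosis, while the locations provide the visual explanation shown to the radiologist.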
Accuracy and testing of the AI model
The development and testing of ProtoRadNet took approximately one and a half years. During the initial evaluation phase, the AI system was tested on publicly available datasets and achieved an accuracy of around 92 percent in detecting abnormalities across different imaging datasets. After this phase, the model underwent clinical evaluation. Five radiologists in Bhopal used the system to analyse more than 190 radiology reports during testing. These real world evaluations demonstrated that the model could assist in identifying suspicious findings and provide explainable outputs that aligned with radiologists’ reasoning.
Performance across multiple imaging modalities
ProtoRadNet has been evaluated across multiple imaging modalities including MRI, CT scans and chest X-rays. It has been tested on datasets related to brain tumours, Alzheimer's disease and lung abnormalities. The model demonstrated strong performance across these datasets and maintained high interpretability while delivering competitive accuracy and F1 scores compared to existing state-of-the-art deep learning systems. Its ability to perform without requiring extensive pixel-level annotations makes it practical for real-world deployment where annotated datasets are often limited.
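For readers unfamiliar with the F1 score mentioned above: it balances precision (how many flagged scans are truly abnormal) against recall (how many abnormal scans are caught), which matters in medical imaging where missed findings and false alarms carry different costs. A minimal illustration of how these metrics are computed for binary labels (not the authors' evaluation code):

```python
def accuracy_and_f1(y_true, y_pred):
    """Compute accuracy and F1 for binary labels (1 = abnormal, 0 = normal)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, f1
```

A model can achieve high accuracy on an imbalanced dataset simply by predicting "normal" for everything, which is why F1 is reported alongside accuracy in studies like this one.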
Importance of explainable AI in healthcare
One of the biggest barriers to AI adoption in medicine has been the lack of transparency in conventional systems. Doctors are unlikely to rely on algorithms that cannot explain their reasoning, especially when clinical and medico-legal responsibilities are involved. ProtoRadNet addresses this issue by making its decision-making process visible. It highlights suspicious regions in scans and explains why those regions indicate possible disease. This transparency allows clinicians to cross-verify findings, improving confidence and trust in AI-assisted diagnosis.
Potential impact on healthcare systems
The introduction of explainable AI models such as ProtoRadNet could significantly improve healthcare delivery, particularly in regions with limited specialist availability. With only one radiologist per one lakh population in India and most specialists located in urban areas, patients in smaller towns often face delays in imaging reports. AI assisted systems can help provide preliminary interpretations, prioritise urgent cases and reduce reporting delays. In emergencies such as stroke or brain haemorrhage, faster interpretation can directly influence treatment outcomes and survival.
Challenges and future implementation
Despite promising results, widespread clinical implementation of ProtoRadNet will require regulatory approvals, integration into hospital workflows and further validation in real-world clinical settings. Issues related to data privacy, algorithm bias and medico-legal responsibility must also be addressed. AI in radiology is expected to function as a supportive tool that enhances human expertise rather than replacing radiologists. Future developments may integrate clinical history, laboratory data and imaging into unified decision support systems.
Conclusion
ProtoRadNet represents an important advancement in explainable artificial intelligence for radiology. Developed over one and a half years and tested on public datasets with an accuracy of around 92 percent, followed by evaluation on more than 190 real clinical reports, the model demonstrates both reliability and transparency. By combining diagnostic accuracy with visible reasoning, ProtoRadNet bridges the gap between advanced deep learning and clinical trust. With proper validation and implementation, such AI systems could play a major role in addressing radiologist shortages, improving diagnostic speed and enhancing patient care across healthcare systems.
Source: Sarangi P, Agarwal R, Basu T. ProtoRadNet: Prototypical patches of Convolutional Features for Radiology Image Classification Network. Artificial Intelligence in Medicine. 2025 Dec 3:103324.