The Explainable Medical Diagnosis Assistant is a proof-of-concept AI-powered clinical support system designed to assist doctors in diagnosing diseases from medical images, with a primary focus on pneumonia detection from chest X-rays.
The core mission of this project is not to replace medical professionals, but to augment clinical decision-making by providing a fast, transparent, and explainable second opinion. The system directly addresses the “black box” problem in medical AI by combining visual explanations, confidence scores, clinical summaries, and AI-driven patient guidance.
Medical AI systems often produce highly accurate predictions but fail to explain why a decision was made. In high-stakes domains like healthcare, this lack of interpretability is a major barrier to real-world adoption.
This project transforms medical AI from a black box into a glass box by:
- Explaining model decisions visually and textually
- Highlighting regions of interest in medical images
- Generating clinician-friendly summaries
- Providing supportive AI-driven patient guidance
Goal: Make medical AI trustworthy, interpretable, and clinically useful.
The Problem:
Traditional AI systems may output a diagnosis such as “PNEUMONIA” without any justification. Doctors cannot rely on predictions that lack transparency and evidence.
Our Solution:
This system:
- Predicts the disease together with a confidence score
- Highlights critical regions in the X-ray using saliency maps
- Generates an AI-based clinical explanation
- Suggests general patient care guidance
Explainable AI (XAI) focuses on answering:
“Why did the model make this decision?”
| Black Box AI | Explainable AI (Glass Box) |
|---|---|
| Only predictions | Predictions + explanations |
| No transparency | Visual and textual reasoning |
| Hard to trust | Clinically interpretable |
XAI is essential in healthcare, finance, and legal systems where decisions carry serious consequences.
A Saliency Map is a heatmap that highlights image regions most influential to the model’s decision.
Highlighter Analogy:
Just as important text is highlighted in a document, saliency maps highlight critical lung regions that influenced pneumonia detection.
- 🔴 Red / Yellow → High importance
- 🔵 Blue / Green → Low importance
These maps enable clinicians to visually verify AI reasoning.
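As a rough illustration, a gradient-based saliency map of this kind could be computed with Captum (the library that xai/captum_utils.py is built around). The sketch below is a minimal example under stated assumptions, not the project's exact code: model is assumed to be the trained CNN classifier and x a preprocessed X-ray tensor.

```python
import torch
from captum.attr import Saliency

def compute_saliency_map(model: torch.nn.Module, x: torch.Tensor, target_class: int):
    """Return an (H, W) heatmap in [0, 1]; x is a preprocessed (1, C, H, W) X-ray tensor."""
    model.eval()
    saliency = Saliency(model)
    # Gradient of the target-class score with respect to the input pixels
    attributions = saliency.attribute(x, target=target_class)
    # Collapse the channel dimension and normalize to [0, 1] so it renders as a heatmap
    heatmap = attributions.abs().max(dim=1)[0].squeeze(0)
    heatmap = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
    return heatmap.detach().cpu().numpy()  # high values correspond to the red/yellow regions above
```

Captum's LayerGradCam provides the Grad-CAM variant mentioned later in the workflow; the same normalize-and-overlay pattern applies.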
The system produces a professional, human-readable clinical summary using a Large Language Model (LLM).
The summary:
- Interprets prediction confidence
- Explains saliency map findings
- Uses medical-style language
- Encourages specialist verification
This bridges the gap between AI output and clinical understanding.
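As a rough sketch, the summary prompt might be assembled from the model outputs as shown below; the field names and wording are illustrative assumptions rather than the actual prompt engineering in xai/text_generator.py.

```python
def build_summary_prompt(diagnosis: str, confidence: float, salient_region: str) -> str:
    # Illustrative template; the real prompt in xai/text_generator.py may differ.
    return (
        "You are assisting a radiologist. Write a short, professional clinical summary.\n"
        f"Model prediction: {diagnosis}\n"
        f"Prediction confidence: {confidence:.1%}\n"
        f"Saliency map finding: strongest activation over the {salient_region}.\n"
        "Interpret the confidence level, explain what the highlighted region suggests, "
        "use clinical language, and note that a specialist must verify the finding."
    )

# Example: build_summary_prompt("PNEUMONIA", 0.93, "right lower lung field")
```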
Patients often ask, “What should I do next?”
This module provides general, non-diagnostic care suggestions when pneumonia is detected.
These suggestions do not replace professional medical advice.
When pneumonia is detected, the system may recommend:
- 🏥 Seek immediate medical consultation
- 💊 Follow prescribed antibiotics or antiviral treatment
- 💧 Stay hydrated and rest adequately
- 🌡️ Monitor symptoms such as fever or breathlessness
- 🚭 Avoid smoking and air pollution
- 📅 Attend follow-up checkups as advised
These suggestions are:
- Context-aware
- Generated dynamically
- Intended for supportive guidance only
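In practice these suggestions can be produced by prompting the LLM with the diagnosis; a minimal rule-based sketch of the same idea, with an assumed function name and wording, looks like this:

```python
def patient_guidance(diagnosis: str) -> list[str]:
    # Supportive, non-diagnostic suggestions only; never a substitute for medical advice.
    if diagnosis.upper() != "PNEUMONIA":
        return ["No condition-specific guidance; follow your doctor's routine advice."]
    return [
        "Seek immediate medical consultation.",
        "Follow prescribed antibiotic or antiviral treatment.",
        "Stay hydrated and rest adequately.",
        "Monitor symptoms such as fever or breathlessness.",
        "Avoid smoking and air pollution.",
        "Attend follow-up checkups as advised.",
    ]
```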
Complete MLOps pipeline:
- Data preprocessing
- Model training
- Explainability (XAI)
- LLM integration
- Web-based deployment
Dual explainability:
- Visual: saliency / Grad-CAM maps
- Textual: LLM-generated summaries
Combining both makes the model's reasoning easier for clinicians to verify and trust.
LLM integration:
- Uses Llama 3 via the Groq API
- Prompt-engineered for the medical context
- Generates professional clinical explanations
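A minimal sketch of the Groq call, assuming the official groq Python client, an API key in the GROQ_API_KEY environment variable, and an illustrative model id (the actual integration lives in xai/text_generator.py):

```python
import os
from groq import Groq  # pip install groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

def generate_clinical_summary(prompt: str) -> str:
    # Low temperature keeps the wording conservative and repeatable.
    response = client.chat.completions.create(
        model="llama3-8b-8192",  # illustrative Llama 3 model id on Groq
        messages=[
            {"role": "system", "content": "You are a careful clinical assistant."},
            {"role": "user", "content": prompt},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content
```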
Class imbalance handling:
- Uses weighted loss functions
- Improves minority-class (pneumonia) detection
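For example, a weighted cross-entropy loss gives the rarer class a larger share of the gradient; the class counts below are placeholders, not the project's actual statistics:

```python
import torch
import torch.nn as nn

# Placeholder counts for (NORMAL, PNEUMONIA); use the real training-split counts.
class_counts = torch.tensor([4000.0, 1000.0])

# Inverse-frequency weights: the rarer class contributes more to the loss.
weights = class_counts.sum() / (len(class_counts) * class_counts)

criterion = nn.CrossEntropyLoss(weight=weights)
# Inside the LightningModule's training_step: loss = criterion(logits, labels)
```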
Workflow:
1. Image Upload
   - Doctor uploads chest X-ray
   - frontend/app.py / frontend/app_gradio.py
2. Image Preprocessing
   - Resize and normalize (steps 2-5 are sketched in code after this workflow)
   - data_prep/augmentations.py
3. Model Prediction
   - CNN outputs diagnosis and confidence
   - models/lightning_model.py
4. Explainability (Grad-CAM)
   - Generates saliency heatmap
   - xai/captum_utils.py
5. Visualization
   - Overlay heatmap on original X-ray
   - xai/visualizer.py
6. Clinical Summary Generation
   - LLM produces explanation
   - xai/text_generator.py
7. AI Patient Guidance
   - Supportive care suggestions generated
8. Unified Dashboard Display
   - Original image
   - Saliency map
   - Diagnosis & confidence
   - Clinical summary
   - Patient guidance
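To make steps 2-5 concrete, here is a minimal inference sketch. The 224x224 input size, ImageNet normalization statistics, class names, and output file are assumptions rather than the project's exact settings, and compute_saliency_map refers to the saliency sketch shown earlier.

```python
import numpy as np
import torch
import matplotlib.pyplot as plt
from PIL import Image
from torchvision import transforms

# Step 2: resize and normalize (input size and statistics are assumptions)
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def diagnose(model, image_path, class_names=("NORMAL", "PNEUMONIA")):
    image = Image.open(image_path).convert("RGB")
    x = preprocess(image).unsqueeze(0)  # shape (1, 3, 224, 224)

    # Step 3: CNN prediction with a softmax confidence score
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    pred_idx = int(probs.argmax())
    diagnosis, confidence = class_names[pred_idx], float(probs[pred_idx])

    # Step 4: saliency heatmap (compute_saliency_map is sketched earlier)
    heatmap = compute_saliency_map(model, x, pred_idx)

    # Step 5: overlay the heatmap on the resized X-ray
    plt.imshow(np.array(image.resize((224, 224))), cmap="gray")
    plt.imshow(heatmap, cmap="jet", alpha=0.4)  # red/yellow marks the most influential regions
    plt.axis("off")
    plt.savefig("overlay.png", bbox_inches="tight")

    return diagnosis, confidence
```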
This project demonstrates how Deep Learning, Explainable AI, and Large Language Models can be combined to build transparent, trustworthy medical AI systems.
It goes beyond simple classification by delivering:
- Interpretability
- Clinical relevance
- Ethical AI design
- Real-world applicability
Not just an AI model, but a clinically aware AI assistant.
Future enhancements:
- Multi-disease detection
- EHR integration
- FDA-compliant validation
- Multimodal data fusion (reports + images)
- Real-time hospital deployment