Explainable Deep Ensemble Meta-Learning Framework for Brain Tumor Classification Using MRI Images
Cancers
Abstract
Background: Brain tumors can severely impair neurological function, leading to symptoms such as headaches, memory loss, motor coordination deficits, and visual disturbances. Without early detection, severe cases may cause permanent cognitive damage or become life-threatening.
Methods: To address this, we propose an interpretable deep ensemble model for tumor detection in Magnetic Resonance Imaging (MRI) that integrates pre-trained Convolutional Neural Networks (EfficientNetB7, InceptionV3, and Xception) through a soft voting ensemble. Within a stacking architecture, a Light Gradient Boosting Machine (LightGBM) meta-learner further improves prediction accuracy and robustness. Hyperparameters are tuned with Optuna, and overfitting is mitigated through batch normalization, L2 weight decay, dropout, early stopping, and extensive data augmentation.
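As a concrete illustration of this pipeline, the sketch below combines the three pre-trained backbones by soft voting (averaging their class probabilities) and stacks their probability outputs as features for a LightGBM meta-learner. The number of classes, input size, regularization settings, and all variable names are illustrative assumptions, not the paper's exact configuration; per-branch training, augmentation, and fine-tuning are omitted for brevity.

    import numpy as np
    import lightgbm as lgb
    from tensorflow.keras import applications, layers, models, regularizers

    NUM_CLASSES = 4            # assumption: e.g., glioma, meningioma, pituitary, no tumor
    IMG_SHAPE = (299, 299, 3)  # assumption: one shared input size for all branches

    def build_branch(backbone_fn, name):
        """Wrap a pre-trained backbone with a small regularized classification head."""
        base = backbone_fn(include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
        x = layers.GlobalAveragePooling2D()(base.output)
        x = layers.BatchNormalization()(x)
        x = layers.Dropout(0.4)(x)
        out = layers.Dense(NUM_CLASSES, activation="softmax",
                           kernel_regularizer=regularizers.l2(1e-4))(x)
        return models.Model(base.input, out, name=name)

    branches = [
        build_branch(applications.EfficientNetB7, "efficientnetb7"),
        build_branch(applications.InceptionV3, "inceptionv3"),
        build_branch(applications.Xception, "xception"),
    ]

    def soft_vote(branch_models, x):
        """Soft voting: average the class probabilities predicted by each branch."""
        return np.mean([m.predict(x, verbose=0) for m in branch_models], axis=0)

    def stack_features(branch_models, x):
        """Stacking: concatenate per-branch probabilities as meta-features."""
        return np.hstack([m.predict(x, verbose=0) for m in branch_models])

    # Meta-learner trained on stacked probabilities from a held-out split
    # (X_val, y_val, X_test are placeholders for that split):
    # meta = lgb.LGBMClassifier(objective="multiclass")
    # meta.fit(stack_features(branches, X_val), y_val)
    # y_pred = meta.predict(stack_features(branches, X_test))

Stacking over the branches' probability vectors keeps the meta-feature space small (three probabilities per class) while still letting the meta-learner weigh the backbones differently per class.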
Results: Extensive data augmentation and regularization improve generalization, and Explainable Artificial Intelligence (XAI) methods are incorporated to strengthen clinical trust: Gradient-Weighted Class Activation Mapping++ (Grad-CAM++) highlights the MRI regions most influential to each prediction, while Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) provide complementary interpretability.
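For the localization step, the minimal sketch below computes a plain Grad-CAM heatmap with tf.GradientTape; Grad-CAM++, as used in the paper, additionally re-weights the gradients with higher-order terms, but the overall workflow of highlighting influential MRI regions is the same. The model, image, and last_conv_layer_name arguments are placeholders.

    import numpy as np
    import tensorflow as tf

    def gradcam_heatmap(model, image, last_conv_layer_name, class_idx=None):
        """Return a heatmap over the last conv feature map for the chosen class."""
        grad_model = tf.keras.models.Model(
            model.inputs,
            [model.get_layer(last_conv_layer_name).output, model.output],
        )
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(image[np.newaxis, ...])
            if class_idx is None:
                class_idx = tf.argmax(preds[0])          # explain the predicted class
            class_score = preds[:, class_idx]
        grads = tape.gradient(class_score, conv_out)      # d(score) / d(feature maps)
        weights = tf.reduce_mean(grads, axis=(0, 1, 2))   # channel weights (GAP of grads)
        cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
        cam = tf.nn.relu(cam)                             # keep positive evidence only
        return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

    # LIME and SHAP can complement the heatmap with instance-level attributions,
    # e.g. lime.lime_image.LimeImageExplainer().explain_instance(...) and
    # shap.GradientExplainer(model, background).shap_values(images).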
Conclusions: This transparent and reproducible framework aims to serve as a foundation for the research community, supporting the development of reliable tools for screening, review, and follow-up while reducing diagnostic workload and prioritizing patient safety.
Keywords
Brain tumor detection, MRI images, deep learning, explainable artificial intelligence, ensemble learning, meta-learning, Grad-CAM++, SHAP, LIME
Highlights
- Integrates EfficientNetB7, InceptionV3, and Xception via a soft voting ensemble
- Light Gradient Boosting Machine (LightGBM) as the meta-learner in a stacking architecture
- Hyperparameter optimization with the Optuna framework (see the tuning sketch after this list)
- Explainable AI methods: Grad-CAM++, LIME, and SHAP for clinical interpretability
- Comprehensive regularization: batch normalization, L2 weight decay, dropout, early stopping
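A minimal Optuna tuning sketch follows. The search space (dropout rate, L2 weight decay, learning rate) and the train_and_evaluate helper are assumptions for illustration; the paper's exact search space is not reproduced here.

    import optuna

    def objective(trial):
        dropout = trial.suggest_float("dropout", 0.2, 0.6)
        l2_decay = trial.suggest_float("l2_decay", 1e-6, 1e-3, log=True)
        lr = trial.suggest_float("lr", 1e-5, 1e-3, log=True)
        # train_and_evaluate is a hypothetical helper that trains one branch with
        # these settings (plus early stopping) and returns validation accuracy.
        return train_and_evaluate(dropout=dropout, l2_decay=l2_decay, lr=lr)

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=50)
    print(study.best_params)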
Citation
@article{kakon2025,
author = {Kakon, S. and Chakrabarty, S. and Al Sazid, Z. and A. Begum, I. and Abdus Samad, Md and S. M. S. Hosen, A.},
title = {Explainable {Deep} {Ensemble} {Meta-Learning} {Framework} for {Brain} {Tumor} {Classification} {Using} {MRI} {Images}},
journal = {Cancers},
volume = {17},
number = {17},
pages = {2853},
date = {2025-08-30},
url = {https://www.mdpi.com/2072-6694/17/17/2853},
doi = {10.3390/cancers17172853},
langid = {en}
}