Interpretable attention-based mechanisms for medical waste sorting with Meta learning

dc.contributor.author Kikome, Christine
dc.contributor.author Wagisha, Emmanuel
dc.contributor.author Okumu, Geoffrey
dc.date.accessioned 2025-11-26T08:18:12Z
dc.date.available 2025-11-26T08:18:12Z
dc.date.issued 2025
dc.description A project report submitted to the Department of Computer Science in partial fulfilment of the requirements for the Degree of Bachelor of Science in Computer Science of Makerere University. en_US
dc.description.abstract Improper disposal and poor management of medical waste, stemming from reliance on traditional color-coded classification systems, pose significant environmental and health risks. The limited availability of labeled medical waste image datasets and the opaque nature of current deep learning models hinder their adoption in critical healthcare applications requiring transparency in decision-making processes. To address the risks posed by improper medical waste disposal and the limitations of traditional classification methods, this study developed and evaluated three meta-learning models, namely Model-Agnostic Meta-Learning (MAML), Prototypical Networks (ProtoNet), and Reptile, as well as conventional deep learning models including EfficientNetB0, Visual Geometry Group (VGG), DenseNet, and Vision Transformers. These models were trained on a custom medical waste image dataset to enhance classification performance and interpretability in few-shot learning scenarios. MAML, ProtoNet, and Reptile achieved classification accuracies of 100%, 99.71%, and 65.62% respectively, while EfficientNetB0 reached 99.40% accuracy. The models were then agnostically inspected for transparency using Gradient-weighted Class Activation Mapping (Grad-CAM), which revealed that features such as color, shape, and surface texture were the most important for decision-making in well-represented classes like sharps and infectious waste. In contrast, features associated with chemical waste were found to be least important, and that class was often misclassified, owing to its limited data. Gradient attention weights were assigned to input features to derive visual explanations, where regions with strong color intensity and distinctive morphology stood out as the most influential in model predictions.
These explainable AI (XAI) visualizations significantly enhanced the interpretability of the deep and meta-learning models, supporting better generalization and trustworthiness, especially in healthcare contexts demanding transparent decision-making. The top-performing model was deployed as an Application Programming Interface (API) accessible from any mobile or desktop device. The API, which leverages meta-learning and attention-based mechanisms, can be integrated into IoT-enabled devices to automate medical waste classification. This approach demonstrates the potential of combining meta-learning with attention-based feature extraction and explainable AI to deliver accurate, interpretable, and scalable solutions for medical waste classification in real-world environments. en_US
dc.identifier.citation Kikome, C., Wagisha, E. & Okumu, G. (2025). Interpretable attention-based mechanisms for medical waste sorting with Meta learning (Unpublished undergraduate dissertation). Makerere University, Kampala, Uganda. en_US
dc.identifier.uri http://hdl.handle.net/20.500.12281/21246
dc.language.iso en en_US
dc.publisher Makerere University en_US
dc.subject Medical Wastes en_US
dc.subject Machine Learning en_US
dc.subject Meta Learning en_US
dc.subject Model Agnostic Explanations en_US
dc.subject Prototypical Networks en_US
dc.subject Reptile Networks en_US
dc.subject Healthcare en_US
dc.title Interpretable attention-based mechanisms for medical waste sorting with Meta learning en_US
dc.type Thesis en_US