Show simple item record

dc.contributor.author	Nuwagaba, Raymond
dc.date.accessioned	2022-11-21T06:28:30Z
dc.date.available	2022-11-21T06:28:30Z
dc.date.issued	2022-03-03
dc.identifier.citation	Nuwagaba, Raymond. (2022). Deep Learning for Cervical Cancer Screening. (Unpublished undergraduate dissertation). Makerere University, Kampala, Uganda.	en_US
dc.identifier.uri	http://hdl.handle.net/20.500.12281/13591
dc.description	A research report submitted to the College of Engineering, Design and Art in partial fulfillment of the requirements for the award of the degree of Bachelor of Telecommunications Engineering of Makerere University.	en_US
dc.description.abstract	Traditional classification of cervical cancer types depends largely on the pathologist's experience and suffers from low accuracy. Colposcopy is a critical component of cervical cancer prevention; in conjunction with precancer screening and treatment, it has played an essential role in lowering the incidence of and mortality from cervical cancer over the last 50 years. However, as workloads increase, visual screening leads to misdiagnosis and low diagnostic efficiency. Within deep learning, medical image processing using convolutional neural network (CNN) models has shown superior performance for classifying cervical cancer types. This project applies deep learning CNN architectures to detect cervical cancer from colposcopy images. The project was divided into two tasks: task one, with a dataset containing images of three cervix types (type 1, type 2 and type 3), and task two, with a dataset containing images of cancerous (positive) and non-cancerous (negative) cervixes. The datasets were obtained using a colposcope. In task one, the dataset was trained using the deep learning CNN architectures YOLOv5 and YOLOv4; YOLOv4 gave the better performance, with a mAP of 0.646 compared to 0.283 for YOLOv5. In task two, we trained both a classification model and an object detection model. Under the classification model, the Xception model performed better, with a training accuracy of 97.13%, validation accuracy of 89.01% and test accuracy of 91.3%, compared to the Inception V3 model, which gave a training accuracy of 88.21%, validation accuracy of 78.1% and test accuracy of 75.9%. Under the object detection model, we trained only YOLOv4, which gave a mAP of 0.879. In conclusion, with more data and more GPU time, the accuracy of our models can be improved.	en_US
dc.language.iso	en	en_US
dc.publisher	Makerere University	en_US
dc.subject	Artificial Intelligence	en_US
dc.subject	Machine Learning	en_US
dc.subject	Convolutional Neural Network	en_US
dc.title	Deep Learning for Cervical Cancer Screening	en_US
dc.type	Thesis	en_US

