
dc.contributor.author: Akwero Lapir, Catherine
dc.date.accessioned: 2023-01-23T08:08:48Z
dc.date.available: 2023-01-23T08:08:48Z
dc.date.issued: 2022-10-21
dc.identifier.citation: Akwero Lapir, Catherine. (2022). Deep learning for cervical cancer lesion segmentation in mobile colposcopy images. (Unpublished undergraduate dissertation). Makerere University, Kampala, Uganda.
dc.identifier.uri: http://hdl.handle.net/20.500.12281/14639
dc.description: A research report submitted to the College of Engineering Design and Art in partial fulfillment of the requirements for the award of the degree of Bachelor of Science in Electrical Engineering of Makerere University.
dc.description.abstract: Cervical cancer develops in a woman's cervix, the entrance to the uterus from the vagina. Its primary cause is persistent infection with high-risk types of human papillomavirus (HPV), an extremely common family of viruses transmitted through sexual contact. According to the World Health Organization, cervical cancer is the fourth leading cause of cancer death in women. According to a report published by the International Agency for Research on Cancer, an estimated 604,000 women worldwide were diagnosed with cervical cancer in 2020, and 342,000 died of the disease. Cervical cancer is most common in low- and middle-income countries, which account for about 90% of deaths. In Uganda, cervical cancer has the highest incidence and mortality rate of any cancer. Manual analysis of colposcopy images is time-consuming, error-prone, and requires highly skilled labor. We therefore proposed developing a more efficient and rapid deep learning model for image segmentation of cervical precancerous lesions. The project followed this procedure: first, we collected our datasets from Marconi Laboratory and performed exploratory data analysis to better understand them; we then processed the data, including augmentation with a data generator. For model development, three backbones (ResNet13, VGG16, and MobileNet) were combined with the U-Net architecture and trained for 300 epochs. Quantitatively, the MobileNet + U-Net model outperformed the others, with a dice coefficient of 0.9866 and an intersection over union of 0.9666, while the ResNet + U-Net model gave the best qualitative predictions, with a dice coefficient of 0.9242 and an intersection over union of 0.8712. The model was made available on a web platform for easy access. The model's performance was tested on 50 images, though some of the predictions were not very accurate. Overall, our model offers a more dependable lesion segmentation method for actual clinical practice. Colposcopy images should be collected and made available to researchers, and the model should be implemented in hospitals to assist medical personnel, particularly gynecologists. Finally, while our study focuses only on automatic segmentation of cervical lesions, additional requirements such as classification of the cervical transformation zone and suggested locations for tissue biopsies should be considered in actual clinical practice. (An illustrative setup sketch follows this record.)
dc.language.iso: en
dc.publisher: Makerere University
dc.subject: Deep learning
dc.subject: cervical cancer lesion
dc.subject: mobile colposcopy images
dc.title: Deep learning for cervical cancer lesion segmentation in mobile colposcopy images
dc.type: Thesis
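
As an illustration of the modelling step summarised in the abstract above, the sketch below shows how the three backbone U-Nets and the dice/IoU metrics could be set up. It is a minimal, hypothetical example: it assumes TensorFlow/Keras and the open-source segmentation_models library, neither of which is named in this record, along with an assumed input resolution, and it uses 'resnet34' as a stand-in for the ResNet variant, whose exact depth is not given here.

```python
# Hypothetical sketch only: the backbone choices and metric definitions mirror
# the abstract; the library (segmentation_models on tf.keras), the input size,
# and the 'resnet34' depth are assumptions, not details from the dissertation.
import tensorflow as tf
import segmentation_models as sm

sm.set_framework('tf.keras')

def dice_coefficient(y_true, y_pred, smooth=1e-6):
    """Dice = 2|A∩B| / (|A| + |B|); 1.0 means the predicted lesion mask
    overlaps the ground-truth mask exactly."""
    y_true_f = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred_f = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)

def iou(y_true, y_pred, smooth=1e-6):
    """Intersection over union (Jaccard index) for binary lesion masks."""
    y_true_f = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred_f = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    union = tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) - intersection
    return (intersection + smooth) / (union + smooth)

# One U-Net per encoder backbone, each with ImageNet-pretrained weights.
models = {}
for backbone in ['vgg16', 'resnet34', 'mobilenet']:
    model = sm.Unet(backbone,
                    encoder_weights='imagenet',
                    input_shape=(256, 256, 3),   # assumed input resolution
                    classes=1,
                    activation='sigmoid')        # binary lesion mask
    model.compile(optimizer='adam',
                  loss=sm.losses.DiceLoss(),
                  metrics=[dice_coefficient, iou])
    models[backbone] = model

# Training would then follow the abstract's setup, e.g. 300 epochs fed by an
# augmenting data generator:
# models['mobilenet'].fit(train_gen, validation_data=val_gen, epochs=300)
```

In this arrangement each backbone supplies the U-Net encoder, so the three models can be trained identically and compared on the same dice coefficient and intersection-over-union metrics reported in the abstract.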

