
    Deep Learning for Cervical Cancer Screening.

    View/Open
    Undergraduate Dissertation (5.707Mb)
    Date
    2022-03-03
    Author
    Nuwagaba, Raymond
    Abstract
Traditional classification of cervical cancer type depends largely on the pathologist's experience and offers limited accuracy. Colposcopy is a critical component of cervical cancer prevention. In conjunction with precancer screening and treatment, colposcopy has played an essential role in lowering the incidence of, and mortality from, cervical cancer over the last 50 years. However, with increasing workloads, visual screening leads to misdiagnosis and low diagnostic efficiency. Medical image processing using convolutional neural network (CNN) models has shown its superiority for the classification of cervical cancer type in the field of deep learning. This project proposes deep learning CNN architectures to detect cervical cancer from colposcopy images. The project was divided into two tasks: task one, with a dataset containing images of three cervix types (type 1, type 2 and type 3), and task two, with a dataset containing images of cancerous (positive) and non-cancerous (negative) cervixes. The datasets were obtained using a colposcope. In task one the dataset was trained using the deep learning CNN architectures YOLOv5 and YOLOv4. On this dataset YOLOv4 performed better, with an mAP of 0.646, compared to YOLOv5, which gave an mAP of 0.283. In task two we trained both a classification model and an object detection model. For classification, the Xception model performed better, with a training accuracy of 97.13%, a validation accuracy of 89.01% and a test accuracy of 91.3%, compared to the Inception V3 model, which gave a training accuracy of 88.21%, a validation accuracy of 78.1% and a test accuracy of 75.9%. For object detection, we trained only YOLOv4, which gave an mAP of 0.879424. In conclusion, with more data and more GPU time, the accuracy of our models can be improved.
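    The sketch below illustrates the kind of transfer-learning setup described for the task-two classification model (Xception on positive vs. negative colposcopy images). It is a minimal illustration, not the dissertation's actual code: it assumes a TensorFlow/Keras workflow and hypothetical data/train and data/val directories with one subfolder per class.

    # Illustrative sketch only: binary cervix classification by transfer learning
    # on an ImageNet-pretrained Xception backbone. Directory layout
    # (data/train/{positive,negative}, data/val/{positive,negative}) is assumed.
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import Xception

    IMG_SIZE = (299, 299)  # Xception's default input resolution
    BATCH = 32

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/train", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "data/val", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")

    # Frozen pretrained backbone with a small binary classification head.
    base = Xception(include_top=False, weights="imagenet", pooling="avg")
    base.trainable = False

    model = models.Sequential([
        layers.Rescaling(1.0 / 127.5, offset=-1),  # Xception expects inputs in [-1, 1]
        base,
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # positive vs. negative cervix
    ])

    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=10)

    After this frozen-backbone stage, a common refinement is to unfreeze the top Xception layers and fine-tune at a lower learning rate; the reported accuracies in the abstract would come from evaluating the trained model on a held-out test set.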
    URI
    http://hdl.handle.net/20.500.12281/13591
    Collections
    • School of Engineering (SEng.) Collections
