Machine learning mobile application
Date: 2024-05
Authors
Suubi, John Trevor
Nantanda, Jamilah
Mulungi, Steven Junior
Nabwire, Esther
Abstract
The Sign Language Converter project aims to develop a mobile application that bridges the communication gap between the Deaf and Hard of Hearing (DHH) community and those who do not use sign language. Utilizing advanced computer vision and machine learning technologies, the application converts sign language gestures into text or speech and vice versa. This innovative solution supports real-time, accurate translation, enhancing accessibility and fostering inclusive communication. The project leverages a modular architecture to ensure scalability, maintainability, and ease of development. Key modules include User Authentication and Profile Management, Gesture Capture and Recognition, Speech Conversion, and a Learning Module. The application will utilize Firebase for robust authentication, authorization, and database management, ensuring secure and scalable user data handling. Furthermore, Google Analytics will be employed to track user activities, providing valuable insights into usage patterns and aiding in continuous improvement of the application. To gather comprehensive requirements, we engaged with the DHH community, specifically targeting the Mulago School for the Deaf in Uganda. This engagement provided critical insights into user needs and preferences, shaping the development of user-friendly and effective features. The project is expected to significantly enhance communication for the DHH community, providing a reliable, user-friendly tool for translating sign language into text or speech, and vice versa. By fostering better understanding and interaction, the Sign Language Converter aims to contribute to a more inclusive society.
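As a hedged illustration only (the report does not specify the recognition algorithm or any libraries), the core idea of the Gesture Capture and Recognition module — mapping captured hand landmarks to a sign label — can be sketched as a nearest-neighbour match over position- and scale-normalized landmark coordinates. All function names, labels, and data points below are hypothetical:

```python
import numpy as np

def normalize(landmarks):
    """Center landmarks on the first point (e.g. the wrist) and scale to
    unit size, so matching is invariant to hand position and distance."""
    pts = np.asarray(landmarks, dtype=float)
    pts = pts - pts[0]                     # translate: wrist at origin
    scale = np.linalg.norm(pts, axis=1).max()
    return pts / scale if scale > 0 else pts

def recognize(landmarks, templates):
    """Return the sign label whose stored template is closest
    (Euclidean distance) to the normalized input landmarks."""
    query = normalize(landmarks)
    return min(templates,
               key=lambda label: np.linalg.norm(query - templates[label]))

# Hypothetical 3-point "hand shapes": one template per sign,
# then a slightly noisy capture of the first shape.
templates = {
    "hello":     normalize([[0, 0], [1, 0], [1, 1]]),
    "thank_you": normalize([[0, 0], [0, 1], [-1, 1]]),
}
print(recognize([[0, 0], [2.1, 0], [2.0, 2.1]], templates))  # prints "hello"
```

A production system would instead feed real camera landmarks (e.g. from a hand-tracking library) into a trained classifier, but the normalize-then-match structure above captures the module's role in the pipeline.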