Implementation and Design of a Luganda Voice-Controlled Elevator System
Abstract
In recent years, voice-activated systems have become a cornerstone of modern human-computer
interaction, yet many such systems lack support for indigenous languages like Luganda. This
project focuses on the implementation and design of a voice-controlled elevator system that
understands and responds to spoken commands in Luganda, one of Uganda’s major local
languages. Leveraging machine learning and natural language processing (NLP) techniques, the
system performs intent recognition to interpret user commands, such as requesting a specific floor
or opening and closing the elevator door.
The proposed solution integrates a Luganda speech-to-intent pipeline, combining automatic
speech recognition (ASR) with a supervised intent classification model. Given the low-resource
nature of Luganda, a custom dataset was developed through audio data collection and manual
annotation. The model was trained to recognize key elevator-related intents and deployed on an
embedded system equipped with a microphone interface. The result is a functional prototype that
enables hands-free, voice-based control of an elevator in a local language.
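The speech-to-intent pipeline described above maps an ASR transcript to one of a small set of elevator intents. The sketch below is a deliberately simplified, keyword-based stand-in for the supervised classifier the project trained; the intent labels and Luganda keywords are illustrative placeholders (assumed here for demonstration), not the project's actual vocabulary or model.

```python
# Minimal keyword-based intent matcher: a simplified stand-in for the
# supervised intent classification model described in the abstract.
# All keywords and intent names below are hypothetical examples.

INTENT_KEYWORDS = {
    "go_to_floor_1": ["emu"],    # placeholder keyword (Luganda numeral "one")
    "go_to_floor_2": ["bbiri"],  # placeholder keyword ("two")
    "open_door": ["ggulawo"],    # placeholder keyword (imperative "open")
    "close_door": ["ggalawo"],   # placeholder keyword (imperative "close")
}

def classify_intent(transcript: str) -> str:
    """Map an ASR transcript to an elevator intent via keyword lookup.

    Returns "unknown" when no keyword matches, so the elevator
    controller can ignore unrecognized speech safely.
    """
    tokens = transcript.lower().split()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in tokens for kw in keywords):
            return intent
    return "unknown"
```

In the deployed system this lookup would be replaced by the trained classifier operating on ASR output, but the interface is the same: transcript in, intent label out, with an explicit "unknown" fallback for out-of-vocabulary commands.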