This research is dedicated to promoting the inclusion of individuals with hearing disabilities by addressing their communication needs through the development of a sign language translation system. Accurately predicting the gestures of deaf and hard-of-hearing individuals is essential for breaking down barriers and enabling smooth communication in daily life. To this end, we propose a three-phase method comprising data preparation and cleaning, modeling, and model combination. For modeling, we leverage a random forest with data augmentation, recurrent neural networks (RNNs), and a voting system that combines the best-performing models. Our approach is evaluated on the 'Australian Sign Language signs' dataset, a valuable resource for sign language recognition. By combining these methods, we aim to achieve strong accuracy, precision, recall, and F1-score in predicting Australian Sign Language signs. Our work also lays a foundation for future research, encouraging the exploration of further supervised modeling techniques to improve these results. We envision that integrating the RNN, the random forest with data augmentation, and the voting system will advance sign language translation, empowering individuals with hearing disabilities to engage fully in their personal and professional lives with improved accessibility and inclusivity.
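To make the combination step concrete, the following is a minimal sketch of soft voting between two base models. All names and data here are illustrative: it uses a synthetic dataset rather than the 'Australian Sign Language signs' data, and an `MLPClassifier` as a lightweight stand-in for the RNN, since the abstract does not specify the exact architecture.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for extracted sign-gesture features
# (the actual study uses the 'Australian Sign Language signs' dataset).
X, y = make_classification(n_samples=400, n_features=20,
                           n_informative=10, n_classes=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Two base models: a random forest and, as an assumption for this
# sketch, an MLP standing in for the RNN described in the abstract.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
nn = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                   random_state=0)
rf.fit(X_train, y_train)
nn.fit(X_train, y_train)

# Soft voting: average each model's class probabilities, then
# predict the class with the highest averaged score.
avg_proba = (rf.predict_proba(X_test) + nn.predict_proba(X_test)) / 2
voted = np.argmax(avg_proba, axis=1)
accuracy = (voted == y_test).mean()
print(f"ensemble accuracy: {accuracy:.3f}")
```

Averaging predicted probabilities (rather than hard majority votes) lets a confident model outweigh an uncertain one, which is one common way a voting system combines best-performing models.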