Accurate localization of sensor nodes is a key challenge in Wireless Sensor Networks (WSNs) across many applications, yet traditional localization methods are prone to several types of error. This research examines a machine learning (ML) approach for predicting the Average Localization Error (ALE) in WSNs using two powerful ML models: K-Nearest Neighbors Regression (KNNR), which is lightweight and easy to interpret, and the Light Gradient Boosting Machine (LGBM), which can model complex relationships among features. Furthermore, the Walrus Optimization Algorithm (WaOA), a nature-inspired optimizer that efficiently fine-tunes the parameters of ML models to improve their prediction accuracy, is applied to both models, yielding the hybrid LGWO (LGBM with WaOA) and KNWO (KNNR with WaOA) models. On the test set, the LGWO model outperformed the traditional models, achieving an RMSE of 0.066 and an R² of 0.980, compared with 0.131 and 0.915, respectively, for KNNR. During the testing phase, the LGWO model also demonstrated the highest performance on the Mean Squared Error (MSE) metric, achieving a value of 0.004, while the KNWO model ranked third with an MSE of 0.015. Similarly, in the validation phase, the LGWO model achieved the best Relative Absolute Error (RAE), with a value of 2.799; the second-best validation performance was recorded by the LGBM model, with an RAE of 3.931. Delivering the lowest prediction error and the highest accuracy across the entire training, validation, and testing processes, the LGWO model proves robust and reliable.
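
To make the described pipeline concrete, the sketch below outlines the hybrid modelling idea in Python. It is illustrative only: synthetic regression data stands in for the WSN localization dataset (the actual feature set is not given here), and because WaOA has no standard library implementation, a plain random search over LGBM hyperparameters serves as a hedged placeholder for the metaheuristic tuning step.

```python
# Minimal sketch of the abstract's pipeline, under two assumptions:
# (1) make_regression() stands in for the WSN/ALE dataset;
# (2) random search stands in for the Walrus Optimization Algorithm (WaOA),
#     which would explore the same space with population-based update rules.
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Stand-in data: in the study, X would hold WSN features and y the ALE target.
X, y = make_regression(n_samples=500, n_features=4, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.25, random_state=0
)

def rmse(y_true, y_pred):
    """Root Mean Squared Error, one of the metrics reported in the abstract."""
    return float(np.sqrt(mean_squared_error(y_true, y_pred)))

# Baseline KNNR model with default neighborhood size.
knn = KNeighborsRegressor(n_neighbors=5).fit(X_train, y_train)

# Placeholder for WaOA tuning: sample candidate hyperparameters and keep the
# configuration with the lowest validation RMSE.
best_rmse, best_model = np.inf, None
for _ in range(20):
    params = {
        "num_leaves": int(rng.integers(15, 64)),
        "learning_rate": float(rng.uniform(0.01, 0.3)),
        "n_estimators": int(rng.integers(100, 500)),
    }
    model = LGBMRegressor(**params, verbose=-1).fit(X_train, y_train)
    score = rmse(y_val, model.predict(X_val))
    if score < best_rmse:
        best_rmse, best_model = score, model

# Report test-set RMSE and R², mirroring the abstract's evaluation metrics.
for name, model in [("KNNR", knn), ("tuned LGBM (LGWO stand-in)", best_model)]:
    pred = model.predict(X_test)
    print(f"{name}: RMSE={rmse(y_test, pred):.3f}, R2={r2_score(y_test, pred):.3f}")
```

The design point the sketch captures is the division of labor in the hybrid models: the base learner (LGBM or KNNR) fits the ALE regression, while an outer optimizer searches the hyperparameter space against a validation metric; WaOA simply replaces the naive random sampler shown here with a more directed, nature-inspired search.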