Consensus algorithms simplify decision-making and conflict-resolution processes, while machine learning offers powerful methods for data management and inference. The main objective of this paper is to introduce a method that combines these two fields to enhance the accuracy of machine-learning models. In the proposed approach, the predictions produced by the trained machine-learning models are made to converge through two consensus layers. In the first layer, the models pull their values closer to one another and are ranked by their proximity to the actual final value: models with a more positive influence on the others gain higher coefficients and therefore attract the other models more strongly in subsequent iterations, whereas models with a negative influence lose effect as their coefficients are reduced. In the second layer, three deduction approaches are considered for obtaining the final value: the hard consensus method uses majority voting, the soft consensus method assigns the same weight to every model, and the consensus-with-trust-factor method sets each weight according to the model's closeness to the actual value. We combined four machine-learning models for a regression task and evaluated our approach on three Kaggle datasets using the R2 score and statistical tests. We also compared our approach with a stacking-regression ensemble, which our model outperformed, and demonstrated how the models affect one another as they approach the actual final value. Overall, the results show that using consensus methods alongside machine-learning models can improve their accuracy: the hard consensus method outperformed the other models in most cases, while the soft consensus and trust-factor methods achieved better performance in fewer cases.
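The three second-layer deduction approaches can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names are invented, the inverse-error weighting for the trust factor is an assumption (the text only says weights follow closeness to the actual value), and the tolerance-based grouping is one plausible way to apply majority voting to real-valued regression outputs.

```python
import numpy as np

def soft_consensus(preds):
    """Soft consensus: every model gets the same weight (simple mean)."""
    return float(np.mean(preds))

def trust_factor_consensus(preds, actual):
    """Trust-factor consensus: weight each model by its closeness to the
    actual value. Inverse absolute error is an assumed weighting scheme."""
    preds = np.asarray(preds, dtype=float)
    weights = 1.0 / (np.abs(preds - actual) + 1e-9)  # closer -> heavier
    weights /= weights.sum()
    return float(np.dot(weights, preds))

def hard_consensus(preds, tol=0.5):
    """Hard consensus via majority voting: group predictions that agree
    within `tol`, then average the largest group. The grouping rule is
    an assumption for adapting voting to regression."""
    preds = np.asarray(preds, dtype=float)
    best, best_size = preds, 0
    for p in preds:
        group = preds[np.abs(preds - p) <= tol]
        if len(group) > best_size:
            best, best_size = group, len(group)
    return float(np.mean(best))

# Toy predictions from four models; one is an outlier.
preds = [2.9, 3.1, 3.0, 5.8]
print(soft_consensus(preds))               # plain average, dragged by the outlier
print(trust_factor_consensus(preds, 3.0))  # pulled toward the accurate models
print(hard_consensus(preds))               # average of the agreeing majority
```

The contrast between the three outputs mirrors the paper's finding: hard consensus discards the dissenting model entirely, while soft consensus lets it shift the result.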