In today's digital world, there is an overwhelming amount of opinionated data on the Web. However, effectively analyzing this data is a resource-intensive endeavor: curating high-quality training datasets requires substantial time and financial investment. To mitigate this problem, this paper compares data augmentation models for aspect-based sentiment analysis. Specifically, we analyze the effect of several BERT-based data augmentation methods on the performance of the state-of-the-art HAABSA++ model. We consider the following data augmentation models: EDA-adjusted (baseline), BERT, Conditional-BERT, BERTprepend, and BERTexpand. Our findings show that incorporating data augmentation techniques can significantly improve the out-of-sample accuracy of the HAABSA++ model. In particular, our results highlight the effectiveness of BERTprepend and BERTexpand, which increase the test accuracy from 78.56% to 79.23% on the SemEval 2015 dataset and from 82.62% to 84.47% on the SemEval 2016 dataset, respectively.
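To make the EDA baseline mentioned above concrete, the sketch below implements two of EDA's four standard operations (random swap and random deletion) in plain Python. This is a minimal illustration of the general EDA technique, not the paper's EDA-adjusted implementation; the function names and parameter defaults are our own choices, and the synonym-replacement and random-insertion operations (which require a thesaurus such as WordNet) are omitted.

```python
import random


def random_swap(words, n_swaps, rng):
    """Return a copy of `words` with `n_swaps` random pairs of positions exchanged."""
    words = words.copy()
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words


def random_deletion(words, p, rng):
    """Drop each word independently with probability `p`; never return an empty sentence."""
    kept = [w for w in words if rng.random() > p]
    return kept if kept else [rng.choice(words)]


def eda_augment(sentence, n_aug=2, p_delete=0.1, seed=0):
    """Generate augmented variants of `sentence` via random swap and random deletion.

    Produces 2 * n_aug new sentences (one swap and one deletion variant per round).
    """
    rng = random.Random(seed)
    words = sentence.split()
    augmented = []
    for _ in range(n_aug):
        augmented.append(" ".join(random_swap(words, 1, rng)))
        augmented.append(" ".join(random_deletion(words, p_delete, rng)))
    return augmented
```

Because the operations only permute or drop existing tokens, every augmented sentence keeps the original sentiment label, which is what makes such low-cost augmentation attractive when labeled data is scarce.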