Sentiment analysis has become a central task in natural language processing owing to its wide range of applications. This paper presents a comprehensive review of sentiment analysis, covering its definition, challenges, and applications. The main approaches to sentiment analysis are discussed, tracing their evolution and limitations, with particular attention to recent advances driven by transformer models and transfer learning. Detailed reviews of eight prominent transformer models (BERT, RoBERTa, XLNet, ELECTRA, DistilBERT, ALBERT, T5, and GPT) are provided, examining their architectures and their roles in sentiment analysis. In the experimental section, the performance of these eight models is compared across 22 datasets. The results show that T5 consistently achieves the best performance on multiple datasets, demonstrating its flexibility and generalization ability. XLNet excels at detecting irony and product-related sentiment, while ELECTRA and RoBERTa lead on certain datasets, reflecting strengths in specific domains. BERT and DistilBERT often rank lowest, suggesting that, despite their computational efficiency, they struggle with complex sentiment tasks.