Facial expressions play a crucial role in interpersonal communication, conveying a wide range of emotions and intentions. Identifying the facial regions that contribute most to different expressions offers insight into the underlying mechanics of emotion recognition and supports the development of more accurate and efficient computer vision systems. In this paper, a deep learning technique is proposed for locating the facial regions that contribute most strongly to various facial expressions. The technique trains a convolutional neural network (CNN) on a large facial expression database to learn discriminative features; by analysing the learned representations, the facial regions with the highest activation and influence in expressing specific emotions can be identified. The effectiveness of the approach is evaluated on publicly available facial expression datasets, such as the Facial Expression Recognition and Analysis (FERA) Challenge dataset. Through extensive deep learning experiments, the proposed method is shown to accurately localize the facial regions around the eyes and eyebrows that contribute significantly to expressions of joy, sorrow, anger, surprise, fear, and disgust. The results indicate that the model identifies these crucial regions across diverse environmental conditions and head poses, highlighting its robustness and practical applicability. The ability to identify expression-contributing regions of the face has numerous applications, including human-computer interaction, affective computing, and social robotics. The findings presented in this research contribute to the understanding of facial expression analysis and provide a framework for future research in emotion classification.
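To make the region-localization idea concrete, the following is a minimal illustrative sketch (not the paper's actual implementation) of class-activation-map-style analysis: the feature maps of a CNN's final convolutional block are combined, weighted by the classifier weights for one emotion class, into a heatmap whose peak marks the most influential facial region. All shapes, weights, and the random inputs below are hypothetical placeholders standing in for a trained network's activations.

```python
import numpy as np

# Hypothetical stand-ins for a trained CNN's internals: 8 feature maps
# of size 7x7 from the final conv block, and one classifier weight per
# map for a single target emotion class (e.g., "joy").
rng = np.random.default_rng(0)
feature_maps = rng.random((8, 7, 7))   # (channels, H, W)
class_weights = rng.random(8)          # dense-layer weights for the class

# Class activation map: weighted sum of feature maps over channels.
heatmap = np.tensordot(class_weights, feature_maps, axes=1)  # (7, 7)
heatmap = np.maximum(heatmap, 0.0)     # keep positive evidence only
heatmap /= heatmap.max()               # normalize to [0, 1]

# Nearest-neighbour upsampling back to an assumed 56x56 input crop.
upsampled = np.kron(heatmap, np.ones((8, 8)))

# The most activated cell approximates the contributing facial region
# (for joy, typically around the eyes/eyebrows in an aligned face crop).
peak_row, peak_col = np.unravel_index(upsampled.argmax(), upsampled.shape)
print(upsampled.shape, (peak_row, peak_col))
```

In a real pipeline the feature maps and weights would come from the trained model (e.g., via framework hooks), and the heatmap would be overlaid on the input face image to visualize which regions drive each expression.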