Understanding action recognition in egocentric videos has emerged as a vital research topic with numerous practical applications. Due to the limited scale of egocentric data collection, learning robust, learning-based action recognition models remains difficult. Transferring knowledge learned from large-scale exocentric data to egocentric data is challenging because of the differences between videos across views. This work introduces a novel cross-view learning approach to action recognition (CVAR) that effectively transfers knowledge from the exocentric to the egocentric view. First, we present a novel geometric-based constraint on the self-attention mechanism in Transformers, derived from analyzing the camera positions between the two views. Then, we propose a new cross-view self-attention loss, learned on unpaired cross-view data, that enforces the attention mechanism to learn to transfer knowledge across views. Finally, to further improve the performance of our cross-view learning approach, we present new metrics that effectively measure the correlations between videos and attention maps. Experimental results on standard egocentric action recognition benchmarks, i.e., Charades-Ego, EPIC-Kitchens-55, and EPIC-Kitchens-100, have shown our approach's effectiveness and state-of-the-art performance.
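To give a concrete flavor of the idea of a cross-view self-attention loss on unpaired data, the following is a minimal NumPy sketch. It is not the paper's formulation: the function names, the use of plain dot-product self-attention, and the choice to compare view-level mean attention maps (rather than any paired frame correspondence) are all illustrative assumptions made here.

```python
import numpy as np

def self_attention(x):
    """Plain dot-product self-attention map for features x of shape
    (tokens, dim); returns a (tokens, tokens) row-stochastic matrix.
    Illustrative stand-in for a Transformer attention head."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)

def cross_view_attention_loss(exo_feats, ego_feats):
    """Hypothetical cross-view loss: since exo/ego clips are unpaired,
    compare aggregate attention statistics of each view instead of
    token-to-token correspondences (an assumption of this sketch)."""
    a_exo = self_attention(exo_feats).mean(axis=0)
    a_ego = self_attention(ego_feats).mean(axis=0)
    return float(((a_exo - a_ego) ** 2).sum())
```

Minimizing such a loss would push the egocentric branch's attention statistics toward those of the exocentric branch, which is the general spirit of enforcing knowledge transfer through the attention mechanism.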