In recent years, the field of action recognition from human skeletal data using spatio-temporal graph convolution models has made significant progress. However, current methods tend to prioritize spatial graph convolution, underutilizing the valuable temporal information in skeletal data. This limits their ability to capture complex patterns, particularly in the time series, and ultimately degrades recognition accuracy. To address these issues, this article introduces an attention-based semantic-guided multistream graph convolution network (ASMGCN), which extracts deep features from skeletal data more fully. Specifically, ASMGCN incorporates a novel temporal convolutional module featuring an attention mechanism and a multiscale residual network, which dynamically adjusts the weights between skeleton graphs at different time points, enabling better capture of relational features. In addition, semantic information is introduced into the loss function, enhancing the model's ability to distinguish similar actions. Furthermore, the coordinate information of different joints within the same frame is explored to generate new relative-position features, termed the centripetal and centrifugal streams, based on the center of gravity. These features are integrated with the original position and motion features of the skeleton (joints and bones), enriching the inputs to the GCN. Experimental results on the NW-UCLA, NTU RGB+D (NTU60), and NTU RGB+D 120 (NTU120) datasets demonstrate that ASMGCN outperforms other state-of-the-art (SOTA) human action recognition (HAR) methods, signifying its potential to advance skeleton-based action recognition.
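As a concrete illustration of the multistream inputs described above, the sketch below derives centripetal and centrifugal streams from the per-frame center of gravity, alongside bone and motion features. It is a minimal sketch under stated assumptions: the interpretation of the centripetal/centrifugal streams as joint-to-center and center-to-joint offsets, the toy parent list `PARENTS`, the helper name `build_streams`, and the (N, C, T, V) tensor layout are illustrative choices, not the authors' exact formulation.

```python
import torch

# Hypothetical parent list for a toy 5-joint skeleton; real datasets
# (e.g., NTU RGB+D with 25 joints) define their own bone topology.
PARENTS = [0, 0, 1, 2, 3]

def build_streams(joints: torch.Tensor) -> dict:
    """Derive per-frame input streams from raw joint coordinates.

    joints: (N, C, T, V) -- batch, coordinate channels, frames, joints.
    The centripetal/centrifugal definitions are one plausible reading
    of the description in the text, given here only for illustration.
    """
    # Center of gravity of the skeleton in every frame.
    center = joints.mean(dim=-1, keepdim=True)          # (N, C, T, 1)

    streams = {
        "joint": joints,
        # Vectors pointing from each joint toward the center of gravity.
        "centripetal": center - joints,
        # Vectors pointing from the center of gravity toward each joint.
        "centrifugal": joints - center,
        # Bone vectors: each joint minus its (assumed) parent joint.
        "bone": joints - joints[..., PARENTS],
    }
    # Motion streams: frame-to-frame differences, zero-padded at the end.
    for name in ("joint", "bone"):
        motion = torch.zeros_like(streams[name])
        motion[:, :, :-1] = streams[name][:, :, 1:] - streams[name][:, :, :-1]
        streams[name + "_motion"] = motion
    return streams

# Example: a batch of 2 clips, 3-D coordinates, 16 frames, 5 joints.
streams = build_streams(torch.randn(2, 3, 16, 5))
print({k: tuple(v.shape) for k, v in streams.items()})
```

Each resulting stream keeps the same (N, C, T, V) shape, so all of them can feed the same GCN backbone and be fused at the score level, as in typical multistream skeleton pipelines.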