Because life forms are articulated at joints, dynamic skeletons have been studied extensively as a representation for action recognition. Owing to their focus on modelling the skeleton itself, older methods could not adequately capture the full range of motion at the more erratic skeletal joints, and even with spatial methods for unstructured data, recognising many simultaneous motions remains challenging. This article proposes a CCGS for skeleton-based action recognition that represents skeletons naturally on their joints and thereby captures the spatiotemporal variation in the data. Since the representation of the joints is crucial to motion convolution, we propose a motion regression that learns the shape statistically from numerous data points. To achieve sparsity in the underlying CCGS for an efficient representation, we model the skeletons spatially and formulate an optimisation problem with a slip-critical joint. Each joint is connected, strongly or weakly, to the other joints in the same frame and to key joints in previously processed action frames. The coordinates of the skeleton sequence and the positions of the joints are then fed to the tuned motion-caused graph so that features based on Chebyshev, spatial, and angular approximations of the motion can be learned. The article also examines how well the motion-feature approximation represents variation. Experimental results show that the proposed joint regression is effective and that CCGS achieves state-of-the-art performance on the most popular datasets, with accuracy improvements of up to 2% over existing methods.
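The abstract names Chebyshev approximations among the motion features learned on the joint graph but does not define CCGS's convolution. As a point of reference only, the sketch below shows a standard Chebyshev graph convolution (ChebNet-style) applied to a skeleton's joint adjacency matrix; the function names, the toy five-joint chain, and the feature sizes are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def normalized_laplacian(adj):
        # Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
        deg = adj.sum(axis=1)
        d_inv_sqrt = np.zeros_like(deg)
        nz = deg > 0
        d_inv_sqrt[nz] = deg[nz] ** -0.5
        return np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

    def chebyshev_conv(adj, x, weights):
        """Order-K Chebyshev graph convolution (assumes K = len(weights) >= 2).

        adj     : (N, N) joint adjacency matrix of the skeleton graph
        x       : (N, F_in) per-joint features, e.g. 3D joint coordinates
        weights : list of K (F_in, F_out) matrices, one per polynomial order
        """
        lap = normalized_laplacian(adj)
        # Rescale as L~ = 2L/lambda_max - I; lambda_max = 2 is the usual
        # upper bound for the normalized Laplacian, giving L~ = L - I.
        lap_scaled = lap - np.eye(adj.shape[0])
        t_prev, t_curr = x, lap_scaled @ x          # T_0(L~)x and T_1(L~)x
        out = t_prev @ weights[0] + t_curr @ weights[1]
        for k in range(2, len(weights)):
            t_next = 2 * lap_scaled @ t_curr - t_prev   # Chebyshev recurrence
            out += t_next @ weights[k]
            t_prev, t_curr = t_curr, t_next
        return out

    # Example: a hypothetical 5-joint chain skeleton with 3D coordinates as input
    rng = np.random.default_rng(0)
    adj = np.zeros((5, 5))
    for i in range(4):
        adj[i, i + 1] = adj[i + 1, i] = 1.0
    x = rng.standard_normal((5, 3))                             # (joints, xyz)
    weights = [rng.standard_normal((3, 8)) for _ in range(3)]   # K = 3, F_out = 8
    features = chebyshev_conv(adj, x, weights)                  # -> shape (5, 8)

An order-K filter of this kind aggregates information from joints up to K hops apart, which is why such convolutions can relate a joint to non-adjacent joints within the same frame, as the strong/weak connections described above require.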