The constrained, coordinated movements of the joints form the core of pedestrian gait, and these traits are expressed externally as the overall, synchronized motion of the body. Identifying features of such coordinated motion therefore substantially improves the discriminative power of gait recognition. To this end, we propose a gait feature mining method that fuses multi-semantic information, exploiting the complementary strengths of silhouette and skeleton data through a dual-branch network designed to extract coordinated constraint features from both modalities. For the silhouette branch, we construct a silhouette posture graph that uses 2D skeleton data to compensate for occluded regions of the silhouette, together with a local micro-motion constraint module whose feature-map fusion extracts fine-grained limb-coordination features. For the skeleton branch, we design a global motion graph convolution operator that superimposes the motion constraint relations of physically non-adjacent joints onto the adjacency matrix of the human skeleton graph, capturing both local and global limb motion constraints. In addition, a constraint attention module dynamically emphasizes the feature channels corresponding to significant coordinated motions, enriching the representation of these key patterns. The network was trained and evaluated on the CASIA-B dataset, and the experimental results confirm the method's effectiveness, showing high recognition accuracy and strong stability across viewing angles and walking conditions.
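The idea of superimposing constraint relations between physically separate joints onto the skeleton graph's adjacency matrix can be sketched as follows. This is a minimal illustration, not the paper's implementation: the joint indices, the choice of constraint edges (e.g. linking an arm joint to the opposite leg), and the single normalized graph-convolution step are all assumptions made for the example.

```python
import numpy as np

# Hypothetical 6-joint skeleton (indices are illustrative, not the paper's):
# 0: left wrist, 1: left elbow, 2: torso, 3: right hip, 4: right knee, 5: right ankle
NUM_JOINTS = 6
physical_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
# Assumed cross-limb "motion constraint" edges between physically non-adjacent
# joints, modeling e.g. the coordination of an arm with the opposite leg:
constraint_edges = [(0, 5), (1, 4)]

def build_adjacency(edges, extra_edges, n=NUM_JOINTS):
    """Adjacency with self-loops, augmented by non-physical constraint edges."""
    a = np.eye(n)
    for i, j in edges + extra_edges:
        a[i, j] = a[j, i] = 1.0
    return a

def graph_conv(x, a, w):
    """One symmetrically normalized graph convolution: D^{-1/2} A D^{-1/2} X W."""
    d = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d @ a @ d @ x @ w

rng = np.random.default_rng(0)
x = rng.standard_normal((NUM_JOINTS, 3))   # per-joint input features (e.g. coordinates)
w = rng.standard_normal((3, 8))            # learnable projection weights
a = build_adjacency(physical_edges, constraint_edges)
out = graph_conv(x, a, w)
print(out.shape)  # (6, 8)
```

With the augmented adjacency, one convolution step already mixes features between the wrist (joint 0) and the opposite ankle (joint 5), which a purely physical skeleton graph would only achieve after several propagation steps.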