As wearable devices gain popularity, gesture recognition technology is becoming increasingly important. Merely identifying gesture categories is insufficient for devices operating in complex environments; a significant challenge lies in enabling devices to perform gesture recognition tasks autonomously and efficiently, particularly when complex decision-making is required. To address this, this paper introduces an offline reinforcement learning model based on implicit relationship constraints, termed the Relationship Transformer Guided Generative Policy Network (RelTrans), designed for complex gesture decision-making tasks. The model includes an Implicit Constraint-Constructing Network (ICCN) that extracts relationship information from immediate rewards, without relying on predefined reward values, to guide the Generative Policy Network (GPN) in predicting action sequences. Additionally, it integrates a knowledge distillation-based Soft-Bias loss function, which not only allows the GPN to leverage the ICCN's implicit constraints but also controls the GPN's self-generalization, ensuring effective information exchange and coordinated network updates. These advances enable the model to comprehend and adapt to higher-level decision-making and reasoning under varying environmental conditions, enhancing the adaptability and applicability of gesture recognition technology across a broad spectrum of application areas. Extensive experiments on 19 subtasks from the D4RL offline benchmark suite demonstrate that RelTrans matches or surpasses the performance of previous state-of-the-art approaches on a variety of autonomous decision-making tasks.
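To make the Soft-Bias idea concrete, the sketch below shows one plausible form such a loss could take; it is an illustrative assumption, not the paper's published formulation. The function name `soft_bias_loss`, the tensor names (`gpn_logits`, `icc_logits`, `actions`), and the specific combination of a temperature-softened KL distillation term (the standard knowledge-distillation recipe) with a cross-entropy anchor on the offline data (to control self-generalization) are all hypothetical choices consistent with the abstract's description.

```python
# Illustrative sketch only: the exact Soft-Bias loss is not given in this
# section, so the names and the KL + cross-entropy combination below are
# assumptions modeled on standard knowledge distillation.
import torch
import torch.nn.functional as F


def soft_bias_loss(gpn_logits, icc_logits, actions, beta=0.5, tau=2.0):
    """Hypothetical Soft-Bias loss.

    Distills the ICCN's implicit-constraint distribution (teacher) into the
    GPN (student), while a supervised term limits how far the GPN's own
    generalization can drift from the offline dataset.

    gpn_logits: (B, T, A) action logits from the Generative Policy Network
    icc_logits: (B, T, A) relationship-derived logits from the ICCN
    actions:    (B, T)    logged action indices from the offline dataset
    beta:       weight balancing distillation against self-generalization
    tau:        softmax temperature, as in standard knowledge distillation
    """
    # Distillation term: KL(teacher || student) on temperature-softened
    # logits; the teacher is detached so gradients update only the GPN.
    teacher = F.softmax(icc_logits.detach() / tau, dim=-1)
    student_log = F.log_softmax(gpn_logits / tau, dim=-1)
    distill = F.kl_div(student_log, teacher, reduction="batchmean") * tau**2

    # Self-generalization control: cross-entropy against the logged actions
    # keeps the GPN anchored to behavior seen in the offline data.
    ce = F.cross_entropy(
        gpn_logits.reshape(-1, gpn_logits.size(-1)), actions.reshape(-1)
    )

    return beta * distill + (1.0 - beta) * ce
```

Under this reading, `beta` trades off how strongly the ICCN's implicit constraints bias the GPN against how tightly the GPN is held to the offline behavior, which matches the abstract's claim that the loss both transfers constraint information and controls self-generalization.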