Multi-task reinforcement learning is a key current direction in reinforcement learning: a single network can accomplish multiple tasks and, unlike single-task learning, integrate information across tasks. However, how to effectively share parameters across tasks within the network remains an open question. To address this problem, this paper proposes a Soft Parallel Recombination Network, which shares task information across network layers rather than only between adjacent layers, thereby enhancing the network's information-sharing capability. Specifically, the multi-task setting in this paper comprises various manipulator control tasks in the Meta-world environment, such as pick-and-place, push, and stacking. To achieve optimal performance, a weight network is introduced that automatically determines the optimal path for each task by outputting the probability of each module being selected. The proposed method efficiently learns the relationships between tasks from the parallel recombination network and determines the optimal path for each task through the weight network. Further, the method estimates the weight relationship between the current training samples and the current policy, which, combined with the parallel recombination network, improves training efficiency. The proposed Soft Parallel Recombination Network is combined with the SAC algorithm (PRSAC) and validated on the Meta-world multi-task training platform; experimental results demonstrate that the method significantly outperforms existing baseline algorithms in both sample efficiency and final performance.
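To make the routing idea concrete, the following is a minimal NumPy sketch of soft module selection: a weight network (represented here by per-layer routing logits; in the paper it would condition on the task) outputs a probability for each module, the layer output is the probability-weighted sum of module outputs, and earlier activations are fed forward so sharing is not limited to adjacent layers. All dimensions, the averaging of earlier activations, and the random logits are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over module logits
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n_layers, n_modules, d = 2, 3, 4          # hypothetical sizes, for illustration only

# each layer holds n_modules parallel modules (here: simple linear maps)
W = rng.normal(size=(n_layers, n_modules, d, d))

# stand-in for the weight network's output: one logit per module per layer
route_logits = rng.normal(size=(n_layers, n_modules))

def forward(x):
    layer_outputs = [x]                   # keep earlier activations for cross-layer sharing
    h = x
    for layer in range(n_layers):
        p = softmax(route_logits[layer])  # probability of each module being selected
        # cross-layer input: average of all earlier activations (a simplification)
        h_in = sum(layer_outputs) / len(layer_outputs)
        # soft recombination: probability-weighted sum of module outputs
        h = sum(p[m] * np.tanh(W[layer, m] @ h_in) for m in range(n_modules))
        layer_outputs.append(h)
    return h

out = forward(rng.normal(size=d))
```

Because the selection is soft (probabilities rather than a hard argmax), the whole path remains differentiable, so the routing weights could be trained jointly with the modules by the underlying RL objective.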