Existing multi-agent reinforcement learning (MARL) algorithms focus primarily on maximizing global returns or encouraging cooperation between agents, often overlooking the weak ties between them. In multi-agent environments, the quality of the information exchanged is crucial for optimal policy learning. To this end, we propose a novel MARL framework that integrates weak-tie theory with graph modeling to form a weak-tie modeling module. We use the distribution of tie strengths, together with a dominant agent computed from the tie graph, to control the information exchange between agents. Our method is evaluated against various baseline models in different multi-agent environments. Experimental results show that our method significantly improves agents' adversarial win rates and rewards, and reduces agents' combat losses in confrontation. Our method provides insights into how to reduce information redundancy in the training of large-scale agents.
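To make the weak-tie mechanism concrete, the sketch below illustrates one plausible reading of the abstract: tie strengths form a weighted graph, the dominant agent is the node with the highest weighted degree, and weak ties below a quantile of the tie-strength distribution are pruned from message passing. This is a minimal illustrative sketch, not the paper's actual module; all function names, the normalization scheme, and the weighted-degree criterion for the dominant agent are assumptions.

```python
import numpy as np


def tie_strength_graph(interactions: np.ndarray) -> np.ndarray:
    """Normalize a symmetric interaction-count matrix into tie strengths in [0, 1].

    interactions[i, j] counts how often agents i and j exchanged information;
    how tie strength is actually measured in the paper is assumed here.
    """
    ties = interactions / (interactions.max() + 1e-8)
    np.fill_diagonal(ties, 0.0)  # no self-ties
    return ties


def dominant_agent(ties: np.ndarray) -> int:
    """Pick the agent with the largest total tie strength (weighted degree).

    Weighted-degree centrality is one plausible way to derive a dominant
    agent from the tie graph, assumed for illustration.
    """
    return int(ties.sum(axis=1).argmax())


def exchange_mask(ties: np.ndarray, weak_quantile: float = 0.5) -> np.ndarray:
    """Gate information exchange using the tie-strength distribution.

    mask[i, j] == True means agent i receives messages from agent j.
    Ties below the given quantile of nonzero strengths are pruned as weak,
    but every agent still hears from the dominant agent so information
    can propagate across the group.
    """
    threshold = np.quantile(ties[ties > 0], weak_quantile)
    mask = ties >= threshold
    mask[:, dominant_agent(ties)] = True  # dominant agent broadcasts to all
    np.fill_diagonal(mask, False)
    return mask


# Toy usage: 4 agents with random interaction counts.
rng = np.random.default_rng(0)
counts = rng.integers(0, 10, size=(4, 4))
counts = (counts + counts.T) // 2  # symmetrize pairwise counts
ties = tie_strength_graph(counts.astype(float))
print("dominant agent:", dominant_agent(ties))
print("exchange mask:\n", exchange_mask(ties))
```

Under this reading, pruning sub-threshold ties is what would reduce information redundancy at scale, while the dominant-agent broadcast keeps the communication graph connected.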