Multipath routing remains a challenging problem in traffic engineering (TE), as existing solutions cannot cope with evolving network dynamics and stringent quality-of-service (QoS) requirements. To address this issue, multiagent deep reinforcement learning (MADRL) offers a promising means of deriving more elaborate multipath routing strategies. However, prevalent MADRL-based solutions remain impractical because they fail to ensure both scalability and QoS awareness. In this paper, we leverage the emerging hybrid knowledge-defined networking (KDN) architecture and propose a collaborative MADRL-based multipath routing algorithm. Two novel mechanisms, namely parallel agent replication and periodic policy synchronization, are devised in the agent design to ensure the practicality of the proposed method. In addition, an efficient communication mechanism is established to facilitate multiagent collaboration by enabling scalable observation and reward exchange. Featuring a multiagent twin-actor-critic (MA-TAC) learning structure and a proximal policy optimization (PPO)-based training process, the proposed algorithm alternates between scheduled execution and training phases for practical deployment. We compare the performance of the proposed method with that of several benchmark methods. Extensive simulation results demonstrate that it achieves significantly better scalability, QoS awareness, and stability than the benchmarks under various environment settings.