Automatic summarization of judgment documents is a challenging task because the documents are long and the salient information within them is dispersed. The prevailing approach to summarizing lengthy documents combines extractive and abstractive summarization models. However, current extractive models struggle to capture all essential details precisely because pertinent information is scattered throughout judgment documents, and existing abstractive models still suffer from "hallucination", generating content that is not faithful to the source. In this work, we propose a novel hybrid legal summarization method that incorporates legal domain knowledge into both the extractive and the abstractive model. The method consists of two parts: (1) the rhetorical role of each sentence is identified by sentence-level sequence labeling, and this rhetorical information is injected into a WoBERT-based extractive model through conditional normalization, so that key sentences are identified both precisely and completely; (2) the pre-trained model RoFormer is combined with a Seq2Seq architecture to build a long-text abstractive model, and prior knowledge from external resources and from the document itself is introduced into the decoding process to improve the faithfulness and coherence of the generated summary. In addition, a contrastive learning strategy is employed during training to enhance the robustness of the abstractive model. Experimental results on the CAIL2020 dataset show that the proposed model outperforms the baseline methods; furthermore, our method surpasses GPT and other LLMs in processing judgment documents.
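
To make the conditioning mechanism in part (1) concrete, the sketch below shows one common way to implement conditional normalization: a LayerNorm whose scale and shift are modulated by a condition vector, here an embedding of a sentence's rhetorical role. This is an illustrative PyTorch sketch, not the paper's released code; the number of role labels, the embedding width, and the hidden size are assumed values, and the paper's exact conditioning variant may differ.

```python
import torch
import torch.nn as nn

class ConditionalLayerNorm(nn.Module):
    """LayerNorm whose scale and shift are modulated by a condition vector
    (here: an embedding of the sentence's rhetorical role)."""

    def __init__(self, hidden_size: int, cond_size: int, eps: float = 1e-12):
        super().__init__()
        self.ln = nn.LayerNorm(hidden_size, eps=eps)
        # Zero-initialised projections: at the start of training the layer
        # behaves exactly like an unconditioned LayerNorm.
        self.to_gamma = nn.Linear(cond_size, hidden_size, bias=False)
        self.to_beta = nn.Linear(cond_size, hidden_size, bias=False)
        nn.init.zeros_(self.to_gamma.weight)
        nn.init.zeros_(self.to_beta.weight)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden); cond: (batch, cond_size)
        if cond.dim() == 2:
            cond = cond.unsqueeze(1)  # broadcast the condition over seq_len
        return self.ln(x) * (1.0 + self.to_gamma(cond)) + self.to_beta(cond)

# Hypothetical usage: condition sentence encodings on predicted rhetorical roles.
role_emb = nn.Embedding(num_embeddings=7, embedding_dim=64)  # assumed 7 role labels
cln = ConditionalLayerNorm(hidden_size=768, cond_size=64)
sent_repr = torch.randn(2, 128, 768)   # WoBERT-style token encodings, 2 sentences
roles = torch.tensor([3, 5])           # one predicted role id per sentence
out = cln(sent_repr, role_emb(roles))  # (2, 128, 768)
```

Zero-initialising the two projections means training begins from plain LayerNorm behaviour, so the rhetorical signal is blended in gradually rather than disrupting the pre-trained encoder.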
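
For part (2), one plausible instantiation of introducing prior knowledge into decoding is to additively bias the next-token logits toward tokens that occur in the source document or in an external legal-term lexicon. The function name, the bonus value, and the toy vocabulary below are assumptions for illustration; the paper's exact mechanism may differ.

```python
import torch

def bias_step_logits(logits: torch.Tensor,
                     prior_token_ids: set,
                     bonus: float = 2.0) -> torch.Tensor:
    """Additively boost next-token logits for tokens drawn from prior
    knowledge (source-document tokens, legal-term lexicon), a simple
    form of prior-knowledge-guided decoding."""
    idx = torch.tensor(sorted(prior_token_ids), dtype=torch.long)
    biased = logits.clone()
    biased[..., idx] += bonus
    return biased

# Toy demonstration with random logits over a 10-token vocabulary.
logits = torch.randn(1, 10)
source_tokens = {2, 5, 7}   # token ids appearing in the judgment document
lexicon_tokens = {3}        # token ids from a legal-term dictionary
step = bias_step_logits(logits, source_tokens | lexicon_tokens)
print(step.argmax(dim=-1))  # the biased argmax favours prior-knowledge tokens
```

Biasing of this kind nudges the decoder toward wording grounded in the document and the legal vocabulary, which is one way such methods mitigate hallucination.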
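
The contrastive training strategy can likewise be sketched with a standard InfoNCE-style loss over pooled summary representations: the source representation (anchor) is pulled toward the gold summary (positive) and pushed away from perturbed or hallucinated summaries (negatives). This is a generic sketch under those assumptions, not the paper's specific objective.

```python
import torch
import torch.nn.functional as F

def summary_contrastive_loss(anchor: torch.Tensor,
                             positive: torch.Tensor,
                             negatives: torch.Tensor,
                             temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: anchor (B, H) is attracted to positive (B, H)
    and repelled from K negatives (B, K, H)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor * positive).sum(-1, keepdim=True) / temperature      # (B, 1)
    neg_sim = torch.einsum('bh,bkh->bk', anchor, negatives) / temperature  # (B, K)
    logits = torch.cat([pos_sim, neg_sim], dim=-1)                         # (B, 1+K)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)  # positive is index 0
    return F.cross_entropy(logits, labels)

# Toy check: batch of 4, 3 negatives per example, hidden size 16.
loss = summary_contrastive_loss(torch.randn(4, 16),
                                torch.randn(4, 16),
                                torch.randn(4, 3, 16))
print(loss.item())
```

Training against such negatives makes the abstractive model less sensitive to small input perturbations, which is the robustness benefit the abstract refers to.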