[Objective] In the digital age, brief titles are critical for efficient reading. However, headline generation technology is mostly applied to news rather than to other domains. Generating key-point titles in classroom scenarios can enhance comprehension and improve learning efficiency. Traditional extractive algorithms such as Lead-3 and the original TextRank algorithm fail to effectively capture the critical information of an article: they merely rank sentences by factors such as position or text similarity, overlooking keywords. To address this issue, an improved TextRank algorithm, text ranking combining keywords and sentence positions (TKSP), is proposed herein. Extractive models extract information without expanding on the original text, whereas generative models produce brief and coherent headlines; however, generative models sometimes misunderstand the source text, resulting in inaccurate and repetitive headings. To address this issue, TKSP is combined with the UniLM generative model (the UniLM-TK model) to incorporate text topic information. [Methods] Courses are collected from a MOOC platform, and audio is extracted from the teaching videos. Speech-to-text conversion is performed using an audio transcription tool. The classroom teaching text is organized, segmented by knowledge point, and manually titled to generate a dataset. Thereafter, the improved TextRank algorithm proposed herein, TKSP, is used to automatically generate knowledge-point titles. First, the algorithm applies the Word2Vec word vector model to TextRank. TKSP considers four factors influencing sentence importance: (1) Sentence position factor: The first paragraph serves as a general introduction to the knowledge point and therefore receives a higher weight; succeeding sentences receive decreasing weights based on their position. (2) Keyword count factor: Sentences containing keywords carry valuable information, and their importance increases with the number of keywords present.
The TextRank algorithm generates a keyword list from the knowledge-point content, and sentence weights are adjusted based on the number of keywords, assigning higher weights to sentences containing more keywords. (3) Keyword importance factor: Keyword weights, arranged in descending order, reflect keyword importance. Sentence weights are adjusted accordingly: a sentence containing the first keyword receives the highest weight, while sentences containing the second and third keywords receive lower weights. (4) Sentence importance factor: The first sentence containing a given keyword serves as a general introduction and is more relevant to the knowledge point; this sentence receives the highest weight, which decreases with subsequent occurrences of the keyword. These four influencing factors are integrated to establish the sentence weight calculation formula, and the top-ranked sentences are chosen by weight to create the text title. Herein, the combination of the TKSP algorithm and the UniLM model, called the UniLM-TK model, is proposed. The TKSP algorithm is employed to extract critical sentences, and the TextRank algorithm is employed to extract a topic word from the knowledge text. These are separately embedded into the model's input sequence, which undergoes Transformer block processing. The critical sentences capture the text context through self-attention, while the topic word incorporates topic information through cross-attention; the final attention formula is established by weighting and summing these representations. The attention output is further processed by a feedforward network to extract high-level features. The focused sentences extracted by TKSP effectively reduce the model's computational load and the difficulty of data processing, allowing the model to concentrate on extracting and generating key information.
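The four influence factors described above can be sketched as a single sentence-scoring routine. The abstract does not give the paper's actual formula or coefficients, so the multiplicative combination and all coefficient values below (pos_decay, kw_bonus, rank_decay, first_bonus) are illustrative assumptions, not the published TKSP weights:

```python
def sentence_weights(base_scores, keyword_counts, first_kw_rank,
                     is_first_kw_sentence,
                     pos_decay=0.85, kw_bonus=0.1,
                     rank_decay=0.5, first_bonus=0.2):
    """Illustrative combination of the four TKSP influence factors.

    base_scores          -- TextRank scores from the Word2Vec similarity graph
    keyword_counts       -- keywords contained in each sentence (factor 2)
    first_kw_rank        -- rank of the best keyword in the sentence:
                            0 = top keyword, None = no keyword (factor 3)
    is_first_kw_sentence -- True if the sentence is the first occurrence
                            of its keyword (factor 4)
    All coefficients are hypothetical placeholders.
    """
    weights = []
    for i, base in enumerate(base_scores):
        w = base
        w *= pos_decay ** i                       # (1) earlier sentences weigh more
        w *= 1.0 + kw_bonus * keyword_counts[i]   # (2) more keywords, higher weight
        if first_kw_rank[i] is not None:
            w *= 1.0 + rank_decay ** first_kw_rank[i]  # (3) top keywords count most
        if is_first_kw_sentence[i]:
            w *= 1.0 + first_bonus                # (4) first-occurrence bonus
        weights.append(w)
    return weights
```

Sentences are then sorted by these weights and the top-ranked ones form the extracted title candidates, mirroring the selection step described above.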
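The weighted sum of self-attention (over the critical-sentence tokens) and cross-attention (to the topic-word embedding) can likewise be sketched in miniature. The paper's exact attention formula, projection setup, and mixing weight are not given in the abstract; the single-head form and the `lam` coefficient below are assumptions for illustration only:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Standard scaled dot-product attention.
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

def unilm_tk_attention(H, T, Wq, Wk, Wv, lam=0.5):
    """Hypothetical sketch of the UniLM-TK attention combination.

    H   -- token representations of the TKSP key sentences
    T   -- topic-word embeddings extracted by TextRank
    lam -- illustrative mixing weight (not the paper's value)
    """
    Q = H @ Wq
    self_out = attention(Q, H @ Wk, H @ Wv)    # context from the key sentences
    cross_out = attention(Q, T @ Wk, T @ Wv)   # topic information
    return lam * self_out + (1.0 - lam) * cross_out
```

The combined representation would then pass through the feedforward network mentioned above; sharing the projections `Wk`/`Wv` between the two attention branches is a simplification made here for brevity.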
[Results] The TKSP algorithm outperformed classical extractive algorithms (namely, maximal marginal relevance, latent Dirichlet allocation, Lead-3, and TextRank) on the ROUGE-1, ROUGE-2, and ROUGE-L metrics, achieving optimal performances of 51.20%, 33.42%, and 50.48%, respectively. In the ablation experiments of the UniLM-TK model, the optimal performance was achieved by extracting seven key sentences, with the corresponding metrics reaching 73.29%, 58.12%, and 72.87%, respectively. Compared with the headings generated by the GPT-3.5 API, those generated by the UniLM-TK model were brief, clear, accurate, and more readable in summarizing the text topic. Experiments on real headings were performed using a large-scale Chinese scientific literature dataset to compare the UniLM-TK and ALBERT models; the UniLM-TK model improved the ROUGE-1, ROUGE-2, and ROUGE-L metrics by 6.45%, 3.96%, and 9.34%, respectively. [Conclusions] The effectiveness of the TKSP algorithm is demonstrated by comparison with other extractive methods, and the headings generated by UniLM-TK are shown to exhibit better accuracy and readability. © 2024 Tsinghua University. All rights reserved.