Abstractive Text Summarization for Legal Documents Using Optimization-Based Bi-LSTM with Encoder-Decoder Model

Cited by: 0
|
Authors
Aggarwal, Deepti [1 ]
Sharma, Arun [1 ]
Affiliations
[1] Indira Gandhi Delhi Tech Univ Women, IT Dept, New Delhi 110006, Delhi, India
Keywords
Feature extraction; pre-processing; encoder; decoder; abstractive summary; optimization; Bi-LSTM; attention mechanism;
DOI
10.1142/S0219622025500129
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the volume of textual data has grown rapidly, creating a valuable resource for information extraction and analysis. Owing to the high complexity and unstructured nature of legal documents, automatic text summarization (ATS) is a necessary but difficult task. ATS applies computational power to condense lengthy documents quickly; summarizing such extensive material manually is highly laborious and time-consuming for people. Accordingly, this paper presents an optimization-based deep learning (DL) model for abstractive summarization (AS). The proposed procedure comprises three stages: text pre-processing, feature extraction, and abstractive summary generation. The dataset first undergoes pre-processing, including stop-word removal, tokenization, lemmatization, and stemming. In the feature extraction phase, words and phrases are represented as vectors using Numberbatch embeddings (a combination of the Enhanced GloVe Model (EGM), Enhanced FastText (EFT), and word2vec). The Numberbatch features are then fed into a Bi-LSTM-based encoder-decoder with an attention mechanism to generate the abstractive text summary. The metaheuristic Honey Badger algorithm (HBA) is employed to optimize the network weights, increasing the effectiveness of the generated summaries as measured by ROUGE scores, and the proposed Bi-LSTM-HBA model outperforms currently used methods.
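The abstract evaluates summary quality with ROUGE scores. As a rough illustration only (not the paper's implementation, which is not given here), the simplest variant, ROUGE-1, measures unigram overlap between a candidate summary and a reference summary; the function name and example sentences below are illustrative:

```python
from collections import Counter

def rouge_1(candidate: str, reference: str) -> dict:
    """ROUGE-1: clipped unigram overlap between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Each unigram counts at most as often as it appears in both texts.
    overlap = sum(min(cand[w], ref[w]) for w in cand)
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

scores = rouge_1("the court dismissed the appeal",
                 "the appeal was dismissed by the court")
```

Here all five candidate unigrams appear in the reference (precision 1.0), but two of the seven reference unigrams are missed (recall 5/7), giving an F1 of 5/6. Published ROUGE implementations also handle n-grams, stemming, and longest-common-subsequence (ROUGE-L) variants.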
Pages: 27