Statistical models for text segmentation

Cited: 317
Authors
Beeferman, D [1 ]
Berger, A [1 ]
Lafferty, J [1 ]
Affiliation
[1] Carnegie Mellon Univ, Sch Comp Sci, Pittsburgh, PA 15213 USA
Keywords
exponential models; text segmentation; maximum entropy; inductive learning; natural language processing; decision trees; language modeling;
DOI
10.1023/A:1007506220214
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper introduces a new statistical approach to automatically partitioning text into coherent segments. The approach is based on a technique that incrementally builds an exponential model to extract features that are correlated with the presence of boundaries in labeled training text. The models use two classes of features: topicality features that use adaptive language models in a novel way to detect broad changes of topic, and cue-word features that detect occurrences of specific words, which may be domain-specific, that tend to be used near segment boundaries. Assessment of our approach on quantitative and qualitative grounds demonstrates its effectiveness in two very different domains: Wall Street Journal news articles and television broadcast news story transcripts. Quantitative results on these domains are presented using a new probabilistically motivated error metric, which combines precision and recall in a natural and flexible way. This metric is used to make a quantitative assessment of the relative contributions of the different feature types, as well as a comparison with decision trees and previously proposed text segmentation algorithms.
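The probabilistically motivated error metric mentioned in the abstract is commonly known as P_k. As a rough illustration only (not the authors' implementation), here is a minimal Python sketch under the assumptions that segmentations are given as boundary-indicator strings ('1' marks a segment boundary after that token position) and that the window size k defaults to half the mean reference segment length:

```python
def pk(ref: str, hyp: str, k: int = None) -> float:
    """Sketch of the P_k segmentation error metric (assumption: boundary
    strings of '0'/'1', one character per token gap, '1' = boundary).

    Slides a probe of width k over the text; an error is counted whenever
    the reference and hypothesis disagree on whether the two ends of the
    probe fall in the same segment.
    """
    if k is None:
        # common convention: half the mean reference segment length
        n_segments = ref.count("1") + 1
        k = max(1, round(len(ref) / n_segments / 2))
    errors = 0
    n_windows = len(ref) - k
    for i in range(n_windows):
        same_ref = "1" not in ref[i:i + k]   # same segment in reference?
        same_hyp = "1" not in hyp[i:i + k]   # same segment in hypothesis?
        if same_ref != same_hyp:
            errors += 1
    return errors / n_windows

# A perfect hypothesis scores 0; a degenerate "no boundaries" hypothesis
# is penalized in proportion to how often the probe straddles a true boundary.
reference = "00010000100"
print(pk(reference, reference))          # perfect agreement
print(pk(reference, "0" * len(reference)))  # hypothesis with no boundaries
```

Window-placement conventions vary slightly across implementations; the one above is a simple variant, not necessarily the exact formulation in the paper.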
Pages: 177 - 210
Page count: 34
Related papers
25 entries in total
  • [1] BEEFERMAN D, 1997, 35 ANN M ASS COMP LI
  • [2] Berger AL, 1996, COMPUT LINGUIST, V22, P39
  • [3] CHRISTE Y, 1995, CAH CIVILIS MEDIEVAL, V38, P4
  • [4] DellaPietra S, DellaPietra V, Lafferty J. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, 19(4): 380-393
  • [5] Friedman JH., 1984, BIOMETRICS, V40, P874, DOI 10.2307/2530946
  • [6] Gelman A, 2013, BAYESIAN DATA ANAL, DOI 10.1201/9780429258411
  • [7] Hastie T., 1990, Generalized Additive Models
  • [8] Hearst M., 1994, P 32 ANN M ASS COMP
  • [9] Hearst MA, 1997, COMPUT LINGUIST, V23, P33
  • [10] Hirschberg J., 1993, Computational Linguistics, V19, P501