Contrastive Training for Models of Information Cascades

Cited by: 0
Authors:
Xu, Shaobin [1 ]
Smith, David A. [1 ]
Affiliation:
[1] Northeastern Univ, Coll Comp & Informat Sci, 440 Huntington Ave, Boston, MA 02115 USA
Source:
THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2018
Funding:
The Andrew W. Mellon Foundation; U.S. National Institutes of Health;
Keywords:
DOI:
not available
Chinese Library Classification:
TP18 [Artificial Intelligence Theory]
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
This paper proposes a model of information cascades as directed spanning trees (DSTs) over observed documents. In addition, we propose a contrastive training procedure that exploits the partial temporal ordering of node infections in lieu of labeled training links. This combination of model and unsupervised training makes it possible to improve on models that use infection times alone and to exploit arbitrary features of the nodes and of the text content of messages in information cascades. With only basic node and time-lag features similar to previous models, the DST model trained without supervision achieves performance comparable to strong baselines on a blog network inference task. Unsupervised training with added content features achieves significantly better results, reaching half the accuracy of a fully supervised model.
Pages: 483-490
Page count: 8
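As an illustration of the approach summarized in the abstract, the sketch below shows one way such a contrastive objective could be set up: candidate cascade edges are scored from arbitrary features, and the objective contrasts the total weight of directed spanning trees whose edges respect the observed infection order against the total weight of all spanning trees, with both sums computed via the directed matrix-tree theorem. This is a minimal reconstruction under stated assumptions, not the authors' released code; the function and variable names (log_partition, contrastive_nll, edge_features, times) are hypothetical.

```python
# Minimal sketch of a contrastive objective for a directed-spanning-tree (DST)
# cascade model. Assumptions (not taken from the paper's code): linear edge
# scores, a single virtual root with uniform scores, distinct infection times,
# and the Koo et al. (2007) matrix-tree construction for sums over trees.
import numpy as np


def log_partition(edge_weights, root_weights):
    """Log of the sum of weights over all directed spanning trees.

    edge_weights[i, j] : non-negative weight of candidate edge i -> j
    root_weights[j]    : non-negative weight of the virtual root seeding node j
    """
    A = edge_weights.copy()
    np.fill_diagonal(A, 0.0)
    # Laplacian over incoming edges; first row replaced by the root weights
    # (single-root matrix-tree construction).
    L = np.diag(A.sum(axis=0)) - A
    L[0, :] = root_weights
    _sign, logdet = np.linalg.slogdet(L)
    return logdet  # sign is positive whenever some tree has positive weight


def contrastive_nll(w, edge_features, times):
    """Negative contrastive log-likelihood for one cascade.

    edge_features[i, j, :] : feature vector for candidate edge i -> j
    times[i]               : observed infection time of node i
    w                      : weight vector being learned
    """
    times = np.asarray(times, dtype=float)
    scores = np.einsum("ijk,k->ij", edge_features, w)  # linear edge scores
    all_weights = np.exp(scores)
    root_weights = np.ones(len(times))                 # uniform root scores (assumption)
    # Numerator: only trees whose every edge points from an earlier infection
    # to a later one, i.e. trees consistent with the partial temporal order.
    order_mask = (times[:, None] < times[None, :]).astype(float)
    ordered_weights = all_weights * order_mask
    return -(log_partition(ordered_weights, root_weights)
             - log_partition(all_weights, root_weights))
```

With only node and time-lag features this corresponds to the basic setting described in the abstract; content features would simply add dimensions to edge_features, and the loss could be minimized with any gradient-based or derivative-free optimizer.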
Related Papers (50 in total)
  • [1] Improved Contrastive Divergence Training of Energy-Based Models
    Du, Yilun
    Li, Shuang
    Tenenbaum, Joshua
    Mordatch, Igor
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [2] Supervised contrastive pre-training models for mammography screening
    Cao, Zhenjie
    Deng, Zhuo
    Yang, Zhicheng
    Ma, Jie
    Ma, Lan
    JOURNAL OF BIG DATA, 2025, 12 (01)
  • [3] Contrastive estimation reveals topic posterior information to linear models
    Tosh, Christopher
    Krishnamurthy, Akshay
    Hsu, Daniel
JOURNAL OF MACHINE LEARNING RESEARCH, 2021, 22
  • [4] Supervised Contrastive Pre-training for Mammographic Triage Screening Models
    Cao, Zhenjie
    Yang, Zhicheng
    Tang, Yuxing
    Zhang, Yanbo
    Han, Mei
    Xiao, Jing
    Ma, Jie
    Chang, Peng
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT VII, 2021, 12907 : 129 - 139
  • [5] Cascading disasters, information cascades and continuous time models of domino effects
    Mizrahi, Shlomo
    INTERNATIONAL JOURNAL OF DISASTER RISK REDUCTION, 2020, 49
  • [6] Contrastive Learning With Enhancing Detailed Information for Pre-Training Vision Transformer
    Liang, Zhuomin
    Bai, Liang
    Fan, Jinyu
    Yang, Xian
    Liang, Jiye
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (01) : 219 - 231
  • [7] A Batch Noise Contrastive Estimation Approach for Training Large Vocabulary Language Models
    Oualil, Youssef
    Klakow, Dietrich
    18TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2017), VOLS 1-6: SITUATED INTERACTION, 2017, : 264 - 268
  • [8] Information cascades in the laboratory
    Anderson, LR
    Holt, CA
    AMERICAN ECONOMIC REVIEW, 1997, 87 (05): : 847 - 862
  • [9] Aggregate information cascades
    Guarino, Antonio
    Harmgart, Heike
    Huck, Steffen
    GAMES AND ECONOMIC BEHAVIOR, 2011, 73 (01) : 167 - 185
  • [10] Information Cascades With Noise
Le, Tho Ngoc
    Subramanian, Vijay G.
    Berry, Randall A.
    IEEE TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING OVER NETWORKS, 2017, 3 (02): : 239 - 251