Deep Natural Language Feature Learning for Interpretable Prediction

Cited by: 0
Authors
Urrutia, Felipe [1 ,2 ]
Buc, Cristian [1 ]
Barriere, Valentin [1 ,2 ]
Affiliations
[1] Ctr Nacl Inteligencia Artificial, Macul, Chile
[2] Univ Chile, Dept Comp Sci, Santiago, Chile
Source
2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023 | 2023
Keywords
(none)
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose a general method to break down a complex main task into a set of easier intermediary sub-tasks, formulated in natural language as binary questions related to the final target task. Our method represents each example by a vector of the answers to these questions, a representation we call Natural Language Learned Features (NLLF). The NLLF is generated by a small transformer language model (e.g., BERT) trained in a Natural Language Inference (NLI) fashion on weak labels obtained automatically from a Large Language Model (LLM). We show that the LLM typically struggles with the main task under in-context learning, yet can handle these easier sub-tasks and thus produce useful weak labels for training the BERT. The NLI-style training allows the BERT to perform zero-shot inference on any binary question, not only those seen during training. We show that the NLLF vector not only improves performance when it augments any classifier, but can also serve as input to an easy-to-interpret machine learning model such as a decision tree. The resulting decision tree is interpretable yet reaches high performance, in some cases surpassing a pre-trained transformer. We successfully apply this method to two very different tasks: detecting incoherence in students' answers to open-ended mathematics exam questions, and screening abstracts for a systematic literature review of scientific papers on climate change and agroecology.
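The pipeline the abstract describes can be sketched in a few lines: each example is mapped to a binary vector of answers to natural-language sub-questions, and that vector feeds a simple interpretable classifier. The sketch below is illustrative only, assuming a stand-in keyword heuristic in place of the NLI-trained BERT (the sub-questions and the `toy_nli_scorer` function are hypothetical, not from the paper).

```python
# Illustrative sketch of the NLLF representation: answer binary
# sub-questions about an example and stack the answers into a vector.

SUB_QUESTIONS = [
    "Does the answer mention a numeric result?",
    "Does the answer contain a negation?",
]

def toy_nli_scorer(text: str, question: str) -> int:
    """Stand-in for the NLI-style BERT of the paper.

    The real model scores (text, question) pairs for entailment;
    here a keyword heuristic is used purely for illustration.
    """
    if "numeric" in question:
        return int(any(ch.isdigit() for ch in text))
    if "negation" in question:
        return int("not" in text.lower().split())
    return 0

def nllf_vector(text: str) -> list:
    """Represent an example by its binary answers to the sub-questions."""
    return [toy_nli_scorer(text, q) for q in SUB_QUESTIONS]

example = "The result is not 42"
print(nllf_vector(example))  # -> [1, 1]
```

In the paper, a vector like this would then be passed to an interpretable model (e.g., a decision tree), whose splits on individual sub-question answers can be read directly as explanations.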
Pages: 3736-3763
Page count: 28