Discrete and continuous representations and processing in deep learning: Looking forward

Cited by: 16
Authors
Cartuyvels, Ruben [1 ]
Spinks, Graham [1 ]
Moens, Marie-Francine [1 ]
Affiliations
[1] Katholieke Univ Leuven, Dept Comp Sci, Celestijnenlaan 200A, B-3001 Leuven, Belgium
Source
AI OPEN | 2021, Vol. 2
Funding
European Research Council; Research Foundation Flanders (FWO), Belgium;
Keywords
Artificial intelligence; Deep learning; Machine learning; Representation learning; Natural language processing; NEURAL-NETWORKS; LANGUAGE; CONNECTIONIST; COMPREHENSION; OBJECT; WORDS; GAME; GO;
DOI
10.1016/j.aiopen.2021.07.002
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Discrete and continuous representations of content (e.g., of language or images) have interesting properties to be explored for the understanding of, or reasoning with, this content by machines. This position paper puts forward our opinion on the role of discrete and continuous representations and their processing in the deep learning field. Current neural network models compute with continuous-valued data: information is compressed into dense, distributed embeddings. In stark contrast, humans use discrete symbols in their communication with language. Such symbols represent a compressed version of the world that derives its meaning from shared contextual information. Additionally, human reasoning involves symbol manipulation at a cognitive level, which facilitates abstract reasoning, the composition of knowledge and understanding, generalization, and efficient learning. Motivated by these insights, we argue that combining discrete and continuous representations and their processing will be essential to building systems that exhibit a general form of intelligence. We suggest and discuss several avenues for improving current neural networks with discrete elements, so as to combine the advantages of both types of representations.
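As a minimal, hypothetical illustration of the contrast the abstract draws (this sketch is not taken from the paper; the toy vocabulary, dimensions, and use of PyTorch are our own assumptions), the snippet below maps discrete symbols (token ids) onto the dense, continuous embeddings that current neural networks process:

```python
# Sketch only: discrete symbols vs. continuous representations.
# Vocabulary and embedding size are illustrative assumptions.
import torch
import torch.nn as nn

vocab = {"the": 0, "cat": 1, "sat": 2}   # discrete symbols: a finite inventory
embed = nn.Embedding(num_embeddings=len(vocab), embedding_dim=4)

token_ids = torch.tensor([vocab["the"], vocab["cat"], vocab["sat"]])
dense = embed(token_ids)                  # dense, distributed vectors

print(token_ids)    # tensor([0, 1, 2])   -- discrete, symbolic
print(dense.shape)  # torch.Size([3, 4])  -- continuous, distributed
# Mapping continuous outputs back to symbols (e.g., an argmax over logits)
# is one instance of the discrete/continuous interface the paper discusses.
```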
Pages: 143-159
Page count: 17