On simulating one-trial learning using morphological neural networks

Cited by: 5
Authors
Feng, Naiqin [1]
Sun, Bin [1]
Affiliations
[1] Zhengzhou Univ Ind Technol, Sch Informat Engn, Zhengzhou 451150, Henan, Peoples R China
Source
COGNITIVE SYSTEMS RESEARCH | 2019, Vol. 53
Keywords
Machine learning; One-trial learning; Cognitive psychology; Simulation; Morphological neural networks; Morphological associative memories; IMPLICIT; EXPLICIT; METHODOLOGY; PREDICTION; MODEL;
DOI
10.1016/j.cogsys.2018.05.003
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
"Learning once, remembering forever": this remarkable cognitive phenomenon sometimes occurs in human learning, and psychologists call it "one-trial learning". Traditional artificial neural networks can simulate the psychological phenomenon of "implicit learning", but they cannot simulate the cognitive phenomenon of "one-trial learning"; cognitive psychology therefore poses a challenge to traditional artificial neural networks. This paper explores, from both theoretical and practical perspectives, the possibility of simulating this phenomenon with morphological neural networks. It uses morphological associative memory networks to simulate "one-trial learning" for the first time and gives five practical simulation examples. Theoretical analysis and simulation experiments show that morphological associative memory networks are a highly effective machine learning method and can better simulate the cognitive phenomenon of "one-trial learning", thereby providing a theoretical basis and technological support for research in intelligent science and cognitive science. (C) 2018 Published by Elsevier B.V.
Pages: 61-70
Number of pages: 10
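The abstract's central claim rests on a simple property: a morphological associative memory is built in one algebraic pass over the training pairs, with no iterative weight updates, so storage itself is "one-trial". The sketch below is a minimal NumPy illustration of the classical Ritter-Sussner construction (min/max of pairwise differences for storage, max-plus/min-plus products for recall); it is a reading aid under that interpretation, not code from the paper, and the names build_memories, recall_W, recall_M and the toy patterns are assumptions of this sketch.

import numpy as np

def build_memories(X, Y):
    # Single-pass ("one-trial") construction of the morphological associative
    # memories W_XY and M_XY for the pattern pairs (x^k, y^k).
    # X: (n, p) array whose k-th column is the input pattern x^k.
    # Y: (m, p) array whose k-th column is the output pattern y^k.
    D = Y[:, None, :] - X[None, :, :]   # D[i, j, k] = y_i^k - x_j^k
    W = D.min(axis=2)                   # W_XY: minimum over all pairs (erosive memory)
    M = D.max(axis=2)                   # M_XY: maximum over all pairs (dilative memory)
    return W, M

def recall_W(W, x):
    # Max-plus product: y_i = max_j (W_ij + x_j).
    return (W + x[None, :]).max(axis=1)

def recall_M(M, x):
    # Min-plus product: y_i = min_j (M_ij + x_j).
    return (M + x[None, :]).min(axis=1)

# Toy demonstration: two pattern pairs stored in a single pass.
X = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])               # inputs x^1, x^2 as columns
Y = np.array([[1., 0.],
              [0., 1.]])               # outputs y^1, y^2 as columns
W, M = build_memories(X, Y)
print(recall_W(W, X[:, 0]))            # [1. 0.] -> y^1 recalled exactly
print(recall_M(M, X[:, 1]))            # [0. 1.] -> y^2 recalled exactly

In the Ritter-Sussner theory the autoassociative memories W_XX and M_XX give perfect recall of every stored pattern after this single pass, with W tolerating erosive noise and M tolerating dilative noise; that one-pass, non-iterative storage is the property the abstract identifies with "one-trial learning".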