WORKING MEMORY NETWORKS FOR LEARNING TEMPORAL ORDER WITH APPLICATION TO 3-DIMENSIONAL VISUAL OBJECT RECOGNITION

Times cited: 38
Authors
BRADSKI, G
CARPENTER, GA
GROSSBERG, S
Affiliations
[1] BOSTON UNIV,CTR ADAPT SYST,BOSTON,MA 02215
[2] BOSTON UNIV,DEPT COGNIT & NEURAL SYST,BOSTON,MA 02215
DOI
10.1162/neco.1992.4.2.270
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Working memory neural networks, called Sustained Temporal Order REcurrent (STORE) models, encode the invariant temporal order of sequential events in short-term memory (STM). Inputs to the networks may be presented with widely differing growth rates, amplitudes, durations, and interstimulus intervals without altering the stored STM representation. The STORE temporal order code is designed to enable groupings of the stored events to be stably learned and remembered in real time, even as new events perturb the system. Such invariance and stability properties are needed in neural architectures that self-organize learned codes for variable-rate speech perception, sensorimotor planning, or three-dimensional (3-D) visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described. The new model is based on the model of Seibert and Waxman (1990a), which builds a 3-D representation of an object from a temporally ordered sequence of its two-dimensional (2-D) aspect graphs. The new model, called an ARTSTORE model, consists of the following cascade of processing modules: Invariant Preprocessor --> ART 2 --> STORE Model --> ART 2 --> Outstar Network.
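The duration- and interval-invariance property claimed for the STORE working memory can be illustrated with a small numerical sketch. The Python code below is an assumption-laden approximation, not the paper's equations: the two-field structure (an input-gated field x and a slower copy field y), the shunting normalization, the feedback weight, the integration scheme, and all parameter values are introduced here only to demonstrate the qualitative behavior that the stored activity pattern ranks items by temporal order regardless of how long each item or gap lasts.

import numpy as np

def store_like_sequence(items, n_items, on_steps=200, off_steps=200,
                        feedback=0.5, rate=1.0, dt=0.1):
    """Hedged sketch of a STORE-like working memory (not the paper's model).

    While an item's input is on, field x integrates the input plus feedback
    from field y under shunting normalization by the total activity; during
    the interstimulus interval, y relaxes toward x. Run near equilibrium in
    each phase, the stored pattern ranks items by recency regardless of how
    long each phase lasts.
    """
    x = np.zeros(n_items)
    y = np.zeros(n_items)
    for item in items:
        I = np.zeros(n_items)
        I[item] = 1.0
        for _ in range(on_steps):          # input-on phase: update x only
            x += dt * rate * (I + feedback * y - x * x.sum())
        for _ in range(off_steps):         # input-off phase: update y only
            y += dt * rate * (x - y)
    return x

if __name__ == "__main__":
    seq = [2, 0, 3]
    fast = store_like_sequence(seq, n_items=4, on_steps=150, off_steps=150)
    slow = store_like_sequence(seq, n_items=4, on_steps=800, off_steps=60)
    # The amplitude ranking (most recent item largest) is the same under both
    # presentation timings, illustrating the duration-invariance claim.
    print(np.argsort(-fast))   # e.g. [3 0 2 1]
    print(np.argsort(-slow))   # same order

In this sketch a feedback weight below 1 yields a recency gradient (newest item largest); the paper's ARTSTORE cascade would then pass such a stored pattern to an ART 2 module for category learning.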
Pages: 270-286
Number of pages: 17