Authors:
Rostami, Mohammad [1]
Kolouri, Soheil [2]
McClelland, James [3]
Pilly, Praveen [2]
Affiliations:
[1] Univ Penn, Philadelphia, PA 19104 USA
[2] HRL Labs LLC, Malibu, CA USA
[3] Stanford Univ, Stanford, CA 94305 USA
Source:
THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2020 / Vol. 34
Keywords:
DOI:
None available
CLC Number:
TP18 [Artificial Intelligence Theory]
Discipline Codes:
081104; 0812; 0835; 1405
Abstract:
After learning a concept, humans can continually generalize it to new domains from only a few labeled instances, without interfering with previously learned knowledge. In contrast, learning concepts efficiently in a continual learning setting remains an open challenge for current Artificial Intelligence algorithms, as persistent model retraining is necessary. Inspired by the Parallel Distributed Processing and Complementary Learning Systems theories of learning, we develop a computational model that can efficiently expand its previously learned concepts to new domains using a few labeled samples. We couple the new form of a concept to its previously learned forms in an embedding space for effective continual learning. As a result, the model learns a generative distribution that is shared across tasks in the embedding space and models the abstract concepts. This procedure enables the model to generate pseudo-data points that replay past experiences, tackling catastrophic forgetting.
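The abstract describes generative replay in a shared embedding space: past concepts are modeled by a generative distribution, and pseudo-data sampled from it is rehearsed alongside new tasks. Below is a minimal illustrative sketch of that general idea, not the authors' implementation: the per-concept diagonal Gaussians, the fixed 16-dimensional embedding, and all names (ConceptMemory, fit_concept, replay) are assumptions introduced here for the example.

# Minimal sketch of generative replay in an embedding space (numpy only).
# NOT the paper's actual method; all names and modeling choices are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

class ConceptMemory:
    """Models each learned concept as a Gaussian in embedding space,
    so past concepts can be replayed as pseudo-data without storing
    the original samples (mitigating catastrophic forgetting)."""

    def __init__(self):
        self.stats = {}  # concept label -> (mean, variance)

    def fit_concept(self, label, embeddings):
        # Few labeled samples per concept; a diagonal covariance keeps
        # the estimate stable in the low-data regime (an assumption).
        mu = embeddings.mean(axis=0)
        var = embeddings.var(axis=0) + 1e-6
        self.stats[label] = (mu, var)

    def replay(self, n_per_concept):
        # Draw pseudo-embeddings for every previously learned concept.
        xs, ys = [], []
        for label, (mu, var) in self.stats.items():
            xs.append(rng.normal(mu, np.sqrt(var), size=(n_per_concept, mu.size)))
            ys.extend([label] * n_per_concept)
        return np.vstack(xs), np.array(ys)

# Usage: learn a concept from 5 labeled samples on task 1, then replay
# it as pseudo-data while adapting to a later task.
memory = ConceptMemory()
task1_embeddings = rng.normal(loc=2.0, scale=0.5, size=(5, 16))
memory.fit_concept("digit-3", task1_embeddings)

pseudo_x, pseudo_y = memory.replay(n_per_concept=32)
print(pseudo_x.shape, pseudo_y[:3])  # (32, 16) pseudo-data for rehearsal

In the paper's setting the generative distribution is shared across tasks and coupled with a learned encoder that maps raw inputs into the embedding space; the sketch treats the embeddings as given so it stays self-contained.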