Transfer Learning for the Visual Arts: The Multi-modal Retrieval of Iconclass Codes

Cited by: 2
Authors
Banar, Nikolay [1 ]
Daelemans, Walter [1 ]
Kestemont, Mike [1 ]
Affiliations
[1] Univ Antwerp, CLiPS, Stadscampus L,Lange Winkelstr 40-42, B-2000 Antwerp, Belgium
Source
ACM JOURNAL ON COMPUTING AND CULTURAL HERITAGE | 2023, Vol. 16, No. 2
Keywords
Iconclass; cultural heritage; transfer learning; deep learning; natural language processing; multi-modal retrieval; multi-lingual retrieval;
DOI
10.1145/3575865
CLC Classification Code
C [Social Sciences, General];
Subject Classification Code
03; 0303;
Abstract
Iconclass is an iconographic thesaurus that is widely used in the digital heritage domain to describe the subjects depicted in artworks. Each subject is assigned a unique descriptive code with a corresponding textual definition. Assigning Iconclass codes is a challenging task for computational systems, because the number of available labels is large in comparison to the limited amount of training data. Transfer learning has become a common strategy to overcome such data shortages. In deep learning, transfer learning consists of fine-tuning the weights of a deep neural network for a downstream task. In this work, we present a deep retrieval framework that can be fully fine-tuned for the task under consideration. Our work builds on a recent approach to this task that already yielded state-of-the-art performance, although it could not yet be fully fine-tuned. This approach exploits the multi-linguality and multi-modality inherent to digital heritage data. Our framework jointly processes multiple input modalities, namely textual and visual features. We extract the textual features from artwork titles in multiple languages, whereas the visual features are derived from photographic reproductions of the artworks. The definitions of the Iconclass codes, which contain useful textual information, are used as target labels instead of the codes themselves. As our main contribution, we demonstrate that our approach outperforms the state of the art by a large margin. In addition, our approach is superior to the M3P feature extractor and outperforms multi-lingual CLIP in most experiments, owing to the better quality of its visual features. Our out-of-domain and zero-shot experiments show poor results and demonstrate that Iconclass retrieval remains a challenging task. We make our source code and models publicly available to support heritage institutions in the further enrichment of their digital collections.
Pages: 16