Beyond Subspace Isolation: Many-to-Many Transformer for Light Field Image Super-Resolution

Cited by: 0
Authors
Hu, Zeke Zexi [1 ]
Chen, Xiaoming [2 ]
Chung, Vera Yuk Ying [1 ]
Shen, Yiran [3 ]
Affiliations
[1] Univ Sydney, Sch Comp Sci, Darlington, NSW 2008, Australia
[2] Beijing Technol & Business Univ, Sch Comp & Artificial Intelligence, Beijing 102488, Peoples R China
[3] Shandong Univ, Sch Software, Jinan 250100, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation;
Keywords
Transformers; Light fields; Tensors; Superresolution; Spatial resolution; Cameras; Correlation; Image reconstruction; Training; Optimization; Light field; super-resolution; image processing; deep learning;
DOI
10.1109/TMM.2024.3521795
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The effective extraction of spatial-angular features plays a crucial role in light field image super-resolution (LFSR), and the introduction of convolution and Transformers has led to significant improvements in this area. Nevertheless, due to the large 4D data volume of light field images, many existing methods opt to decompose the data into a number of lower-dimensional subspaces and apply Transformers in each subspace individually. As a side effect, these methods inadvertently restrict the self-attention mechanism to a One-to-One scheme that accesses only a limited subset of LF data, preventing comprehensive optimization over all spatial and angular cues. In this paper, we identify this limitation as subspace isolation and introduce a novel Many-to-Many Transformer (M2MT) to address it. M2MT aggregates angular information in the spatial subspace before performing the self-attention mechanism, enabling complete access to all information across all sub-aperture images (SAIs) in a light field image. Consequently, M2MT can comprehensively capture long-range correlation dependencies. With M2MT as the foundational component, we develop a simple yet effective M2MT network for LFSR. Our experimental results demonstrate that M2MT achieves state-of-the-art performance across various public datasets, and it offers a favorable balance between model performance and efficiency, yielding higher-quality LFSR results with substantially lower demands on memory and computation. We further conduct an in-depth analysis using local attribution maps (LAM) to obtain visual interpretability, and the results validate that M2MT attains a truly non-local context in both the spatial and angular subspaces, mitigating subspace isolation and acquiring effective spatial-angular representations.
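The aggregation step described in the abstract can be pictured with a minimal sketch. This is an illustrative approximation under stated assumptions, not the authors' exact M2MT layer: the tensor layout (B, U, V, H, W, C), the class name ManyToManyAttentionSketch, and the hyperparameters are hypothetical. The idea shown is that the angular views are folded into each spatial token's feature vector before self-attention, so a single attention pass can draw on content from all SAIs rather than treating each SAI in isolation.

```python
import torch
import torch.nn as nn


class ManyToManyAttentionSketch(nn.Module):
    """Hedged sketch of the many-to-many idea (not the paper's exact layer):
    angular views are aggregated into each spatial token before self-attention,
    so every token has access to information from all sub-aperture images."""

    def __init__(self, channels: int, angular: int, heads: int = 4):
        super().__init__()
        dim = channels * angular * angular          # features from all U*V views per spatial token
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, lf: torch.Tensor) -> torch.Tensor:
        # lf: (B, U, V, H, W, C) light field feature tensor (assumed layout)
        b, u, v, h, w, c = lf.shape
        # Aggregate the angular subspace into the spatial tokens:
        # (B, U, V, H, W, C) -> (B, H*W, U*V*C)
        tokens = lf.permute(0, 3, 4, 1, 2, 5).reshape(b, h * w, u * v * c)
        tokens = self.norm(tokens)
        out, _ = self.attn(tokens, tokens, tokens)  # each token attends over content from all SAIs
        # Restore the original light-field layout
        return out.reshape(b, h, w, u, v, c).permute(0, 3, 4, 1, 2, 5)


if __name__ == "__main__":
    x = torch.randn(1, 5, 5, 32, 32, 16)            # 5x5 angular views, 32x32 spatial, 16 channels
    y = ManyToManyAttentionSketch(channels=16, angular=5)(x)
    print(y.shape)                                   # torch.Size([1, 5, 5, 32, 32, 16])
```

By contrast, a subspace-isolated (One-to-One) spatial Transformer would reshape the same tensor to (B*U*V, H*W, C) and attend within each SAI separately, which is the restriction the paper identifies as subspace isolation.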
Pages: 1334-1348
Page count: 15