Space-filling Curves for Modeling Spatial Context in Transformer-based Whole Slide Image Classification

Cited by: 0
Authors
Erkan, Cihan [1 ]
Aksoy, Selim [1 ]
Affiliations
[1] Bilkent Univ, Dept Comp Engn, TR-06800 Ankara, Turkiye
Source
MEDICAL IMAGING 2023 | 2023 / Vol. 12471
Keywords
Digital pathology; space-filling curves; vision transformer; whole slide image classification
DOI
10.1117/12.2654191
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The common method for histopathology image classification is to sample small patches from large whole slide images and make predictions based on aggregations of patch representations. Transformer models provide a promising alternative with their ability to capture long-range dependencies of patches and their potential to detect representative regions, thanks to their self-attention mechanism. However, as sequence-based architectures, transformers are unable to directly capture the two-dimensional nature of images. While it is possible to work around this problem by converting an image into a sequence of patches in raster scan order, the basic transformer architecture is still insensitive to the locations of the patches in the image. The aim of this work is to make the model aware of the spatial context of the patches, because neighboring patches are likely to be part of the same diagnostically relevant structure. We propose a transformer-based whole slide image classification framework that uses space-filling curves to generate patch sequences that are adaptive to the variations in the shapes of the tissue structures. The goal is to preserve the locality of the patches so that patches that are neighbors in the one-dimensional sequence are also close to each other in the two-dimensional slide. We use positional encodings to capture the spatial arrangements of the patches in these sequences. Experiments using a lung cancer dataset obtained from The Cancer Genome Atlas show that the proposed sequence generation approach that best preserves the locality of the patches achieves 87.6% accuracy, which is higher than baseline models that use raster scan ordering (86.7% accuracy), no ordering (86.3% accuracy), and a model that uses convolutions to relate the neighboring patches (81.7% accuracy).
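The ordering idea in the abstract can be illustrated with a minimal sketch. The Python snippet below is illustrative only, not the authors' implementation: the paper considers space-filling curves in general, and the Hilbert curve used here is one standard locality-preserving choice. It orders patch grid coordinates by their distance along a Hilbert curve, so that patches adjacent in the resulting one-dimensional sequence tend to be close in the two-dimensional slide. The coordinates in patch_coords are hypothetical placeholders for the tile positions a real slide-tiling step would produce.

    def hilbert_index(n: int, x: int, y: int) -> int:
        """Distance of grid cell (x, y) along a Hilbert curve covering an
        n-by-n grid, where n is a power of two. Classic rotate/reflect
        formulation of the xy-to-distance conversion."""
        d = 0
        s = n // 2
        while s > 0:
            rx = 1 if (x & s) > 0 else 0
            ry = 1 if (y & s) > 0 else 0
            d += s * s * ((3 * rx) ^ ry)
            if ry == 0:
                # Rotate/reflect the quadrant so the sub-curve keeps a
                # consistent orientation at the next recursion level.
                if rx == 1:
                    x, y = n - 1 - x, n - 1 - y
                x, y = y, x
            s //= 2
        return d

    # Hypothetical (x, y) grid coordinates of tissue patches after tiling.
    patch_coords = [(0, 0), (3, 0), (1, 2), (2, 1), (0, 3)]
    n = 4  # smallest power-of-two grid side covering the coordinates

    # Sort patches by their position along the Hilbert curve: neighbors in
    # this 1-D sequence stay close in the 2-D slide.
    sequence = sorted(patch_coords, key=lambda xy: hilbert_index(n, *xy))
    print(sequence)  # [(0, 0), (0, 3), (1, 2), (2, 1), (3, 0)]

Standard positional encodings over the one-dimensional sequence index could then be added to the patch embeddings so the transformer sees this locality-preserving arrangement, in line with the abstract; the exact encoding scheme used in the paper is not specified in this record.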
Pages: 8