[1] Dosovitskiy A, Beyer L, Kolesnikov A, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale[J]. arXiv preprint arXiv:2010.11929, 2020.
[2] Bai B, Liang J, Zhang G, et al. Why Attentions May Not Be Interpretable?[C]. Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD '21), 2021: 25-34.
[3]
Bastings J, 2020, Arxiv, DOI [arXiv:2010.05607, 10.48550/arXiv.2010.05607]
[4] Bolya D, Fu C Y, Dai X, et al. Token Merging: Your ViT But Faster[J]. arXiv preprint arXiv:2210.09461, 2022.
[5] Caron M, Touvron H, Misra I, et al. Emerging Properties in Self-Supervised Vision Transformers[C]. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021: 9630-9640.
[6] Chefer H, Gur S, Wolf L. Transformer Interpretability Beyond Attention Visualization[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021: 782-791.
[7] Chen X, He K. Exploring Simple Siamese Representation Learning[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021: 15745-15753.
[8] Chen X, Xie S, He K. An Empirical Study of Training Self-Supervised Vision Transformers[C]. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021: 9620-9629.
[10] Haurum J B, Escalera S, Taylor G W, et al. Which Tokens to Use? Investigating Token Reduction in Vision Transformers[C]. 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2023: 773-783.