50 items in total
- [21] Homogeneous Learning: Self-Attention Decentralized Deep Learning. IEEE Access, 2022, 10: 7695-7703
- [22] Self-attention with Functional Time Representation Learning. Advances in Neural Information Processing Systems 32 (NIPS 2019), 2019, 32
- [23] Compressed Self-Attention for Deep Metric Learning. Thirty-Fourth AAAI Conference on Artificial Intelligence, the Thirty-Second Innovative Applications of Artificial Intelligence Conference and the Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, 2020, 34: 3561-3568
- [24] Route-Based Proactive Content Caching Using Self-Attention in Hierarchical Federated Learning. IEEE Access, 2022, 10: 29514-29527
- [25] An abstractive text summarization technique using transformer model with self-attention mechanism. Neural Computing and Applications, 2023, 35(25): 18603-18622
- [28] Self-Attention Networks Can Process Bounded Hierarchical Languages. 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Vol. 1, 2021: 3770-3785