18 records in total
- [1] An Empirical Evaluation of Allgatherv on Multi-GPU Systems. 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), 2018: 123-132
- [2] Empirical Performance Analysis of Collective Communication for Distributed Deep Learning in a Many-Core CPU Environment. Applied Sciences-Basel, 2020, 10(19)
- [3] Efficient Multi-GPU Memory Management for Deep Learning Acceleration. 2018 IEEE 3rd International Workshops on Foundations and Applications of Self* Systems (FAS*W), 2018: 37-43
- [4] Performance and Energy Aware Training of a Deep Neural Network in a Multi-GPU Environment with Power Capping. Euro-Par 2023: Parallel Processing Workshops, Part II, 2024, 14352: 5-16
- [5] Comprehensive techniques of multi-GPU memory optimization for deep learning acceleration. Cluster Computing, 2020, 23(3): 2193-2204
- [8] Collective Communication Performance Evaluation for Distributed Deep Learning Training. Applied Sciences-Basel, 2024, 14(12)
- [9] Parallel Computing Model and Performance Prediction based on Multi-GPU Environments. 2011 International Conference on Future Computers in Education (ICFCE 2011), Vol. I, 2011: 309-312
- [10] Multi-GPU Server Design Parameters Selection based on Empirical Observation of HPL Behavior. 2021 36th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC), 2021