High-Throughput Approximate Multiplication Models in PyTorch
Cited by: 2
Authors: Trommer, Elias [1]; Waschneck, Bernd [2]; Kumar, Akash [3]
Affiliations:
[1] Tech Univ Dresden, Infineon Technol, Dresden, Germany
[2] Infineon Technol, Dresden, Germany
[3] Tech Univ Dresden, Ctr Adv Elect Cfaed, Dresden, Germany
Source: IEEE International Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS), 2023
Keywords: neural networks; approximate computing; deep learning
DOI: 10.1109/DDECS57882.2023.10139366
Chinese Library Classification (CLC): TP301 [Theory and Methods]
Discipline Code: 081202
Abstract:
Approximate multipliers can reduce the resource consumption of neural network accelerators. To study their effects on an application, they need to be simulated during network training. We develop simulation models for a common class of approximate multipliers. Our models speed up execution by replacing time-consuming type conversions and memory accesses with fast floating-point arithmetic. Across six different neural network architectures, these models increase throughput by 2.7x over the commonly used array lookup while recreating behavioral simulation with high fidelity.
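Illustrative note: the following minimal PyTorch sketch contrasts the array-lookup simulation mentioned in the abstract with a purely floating-point functional model. The function names (lut_matmul, fp_matmul) and the simple deviation rule used to build the toy lookup table are hypothetical stand-ins, not the models derived in the paper; they only show why keeping the computation in floating point avoids the type conversions and gather-style memory accesses of the lookup path.

import torch

def lut_matmul(a_int, b_int, lut):
    # Simulate an approximate 8x8-bit unsigned multiplier via array lookup.
    # a_int: (M, K) int tensor, b_int: (K, N) int tensor, lut: (256, 256)
    # table of precomputed approximate products. The integer indexing
    # (a gather over the table) is the memory-bound step a functional
    # floating-point model avoids.
    prods = lut[a_int.unsqueeze(-1), b_int.unsqueeze(0)]  # (M, K, N)
    return prods.sum(dim=1).float()

def fp_matmul(a_int, b_int):
    # Functional model: exact floating-point product plus a closed-form
    # deviation term, so everything maps onto ordinary (fast) matmul kernels.
    a, b = a_int.float(), b_int.float()
    exact = a @ b
    # Hypothetical deviation rule, standing in for a fitted error model.
    deviation = -((a % 4.0) @ (b % 4.0))
    return exact + deviation

if __name__ == "__main__":
    torch.manual_seed(0)
    a = torch.randint(0, 256, (64, 128))
    b = torch.randint(0, 256, (128, 32))

    # Toy lookup table built from the same hypothetical deviation rule, so
    # both paths produce identical results and differ only in execution cost.
    x = torch.arange(256)
    lut = x[:, None] * x[None, :] - (x[:, None] % 4) * (x[None, :] % 4)

    print(torch.allclose(lut_matmul(a, b, lut), fp_matmul(a, b)))  # True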
Pages: 79-82 (4 pages)