Faster, Smaller, and Simpler Model for Multiple Facial Attributes Transformation

Cited by: 3
Authors
Soeseno, Jonathan Hans [1]
Tan, Daniel Stanley [1]
Chen, Wen-Yin [2]
Hua, Kai-Lung [1,3]
Affiliations
[1] Natl Taiwan Univ Sci & Technol, Dept Comp Sci & Informat Technol, Taipei 10607, Taiwan
[2] Natl Taipei Univ Educ, Dept Arts & Design, Taipei 10478, Taiwan
[3] Natl Taiwan Univ Sci & Technol, Ctr Cyber Phys Syst Innovat, Taipei 10607, Taiwan
Keywords
Facial attribute transformations; generative adversarial networks; image translation
DOI
10.1109/ACCESS.2019.2905147
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Many existing models are capable of changing hair color or facial expressions. These models are typically implemented as deep neural networks that require a large amount of computation to perform the transformations, which makes them challenging to deploy on mobile platforms. The usual setup requires an internet connection so that the processing can be done on a server; however, this limits the application's accessibility and diminishes the user experience for consumers with low internet bandwidth. In this paper, we develop a model that can simultaneously transform multiple facial attributes with a lower memory footprint and fewer computations, making it easier to run on a mobile phone. Moreover, our encoder-decoder design allows us to encode an image only once and transform it multiple times, making it faster than previous methods, where the whole image has to be processed repeatedly for every attribute transformation. Our experiments show that our results are comparable to the state-of-the-art models, with 4x fewer parameters and 3x faster execution time.
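The efficiency idea stated in the abstract, encoding an image once and reusing the cached features for every attribute transformation, can be illustrated with a minimal PyTorch sketch. The layer sizes, the five-dimensional attribute vector, and the way the attribute code is broadcast and concatenated with the cached features are illustrative assumptions, not the authors' actual network.

import torch
import torch.nn as nn

# Minimal sketch of the "encode once, transform many times" design described
# in the abstract. Layer widths and the attribute-injection scheme are
# illustrative assumptions, not the paper's exact architecture.

class Encoder(nn.Module):
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Produce a shared latent feature map that can be cached and reused.
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, feat_ch=64, attr_dim=5, out_ch=3):
        super().__init__()
        self.attr_dim = attr_dim
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat_ch * 2 + attr_dim, feat_ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat_ch, out_ch, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, feat, attr):
        # Broadcast the target attribute vector over the spatial dimensions
        # and concatenate it with the cached feature map before decoding.
        b, _, h, w = feat.shape
        attr_map = attr.view(b, self.attr_dim, 1, 1).expand(b, self.attr_dim, h, w)
        return self.net(torch.cat([feat, attr_map], dim=1))

if __name__ == "__main__":
    enc, dec = Encoder(), Decoder()
    img = torch.randn(1, 3, 128, 128)
    feat = enc(img)                                              # encode once
    blond = dec(feat, torch.tensor([[1., 0., 0., 0., 0.]]))      # transformation 1
    smiling = dec(feat, torch.tensor([[0., 0., 0., 0., 1.]]))    # transformation 2
    print(blond.shape, smiling.shape)                            # both (1, 3, 128, 128)

Because the encoder runs only once per image, each additional attribute costs just one decoder pass, which is where the speedup over methods that re-process the full image for every attribute comes from.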
Pages: 36400-36412
Page count: 13