A Novel Unsupervised Domain Adaptation Method for Inertia-Trajectory Translation of In-Air Handwriting

Cited by: 7
Authors
Xu, Songbin [1]
Xue, Yang [1,2]
Zhang, Xin [1]
Jin, Lianwen [1,2]
Affiliations
[1] South China Univ Technol, Sch Elect & Informat Engn, Guangzhou, Peoples R China
[2] Guangdong Artificial Intelligence & Digital Econ, Guangzhou, Guangdong, Peoples R China
Keywords
In-air handwriting; Bi-directional inertia-trajectory translation; Unsupervised domain adaptation; Latent-level adversarial learning; RECOGNITION; ALIGNMENT; TRACKING; MOTION
DOI
10.1016/j.patcog.2021.107939
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As a new mode of human-computer interaction, inertial-sensor-based in-air handwriting provides natural, unconstrained interaction for expressing complex and rich information in 3D space. However, most existing work focuses on in-air handwriting recognition (IAHR) and therefore suffers from the poor readability of inertial signals and the scarcity of labeled samples. To address these two problems, we use an unsupervised domain adaptation method to recover handwriting trajectories from inertial signals and to generate inertial samples from handwritten trajectories. In this paper, we propose an Air-Writing Translator model that learns the bi-directional translation between the trajectory domain and the inertial domain in the absence of paired inertial and trajectory samples. Through latent-level adversarial learning and a latent classification loss, the proposed model learns to extract domain-invariant features from inertial signals and trajectories while preserving semantic consistency during translation across the two domains. In addition, the proposed framework accepts inputs of arbitrary length and translates between different sampling rates. Experiments on two public datasets, 6DMG (an in-air handwriting dataset) and CT (a handwritten trajectory dataset), demonstrate that the proposed model achieves reliable translation between the inertial domain and the trajectory domain. Empirically, our method also yields the best results in comparison with state-of-the-art methods for IAHR. (c) 2021 Elsevier Ltd. All rights reserved.
Pages: 14
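The abstract describes an encoder-decoder architecture with a shared latent space: one sequence encoder and decoder per domain, a domain discriminator trained adversarially on latent features, and a latent classifier that preserves character semantics. Below is a minimal, hypothetical sketch of that idea in PyTorch. The GRU modules, the DANN-style gradient-reversal adversary, the channel counts, the class count, and the loss weights are all illustrative assumptions, not the paper's published design.

```python
# Minimal sketch of latent-level adversarial inertia-trajectory translation.
# All architectural choices below are assumptions for illustration only.
import torch
import torch.nn as nn

LATENT = 128

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass,
    so the encoders learn to fool the domain discriminator (DANN-style)."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad

class Encoder(nn.Module):
    """Maps a variable-length sequence (B, T, C) to per-step latent codes,
    so inputs of arbitrary length are handled naturally."""
    def __init__(self, in_ch):
        super().__init__()
        self.rnn = nn.GRU(in_ch, LATENT, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * LATENT, LATENT)
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.proj(h)                      # (B, T, LATENT)

class Decoder(nn.Module):
    """Maps latent codes back to a sequence in the target domain."""
    def __init__(self, out_ch):
        super().__init__()
        self.rnn = nn.GRU(LATENT, LATENT, batch_first=True)
        self.out = nn.Linear(LATENT, out_ch)
    def forward(self, z):
        h, _ = self.rnn(z)
        return self.out(h)                       # (B, T, out_ch)

enc_i, enc_t = Encoder(6), Encoder(2)            # assumed: 6-axis inertial, 2-D trajectory
dec_i, dec_t = Decoder(6), Decoder(2)
disc = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, 1))
clf = nn.Linear(LATENT, 62)                      # assumed number of character classes

bce, ce, mse = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss(), nn.MSELoss()

def training_losses(x_i, x_t, y_i, y_t):
    """One unpaired training step: reconstruction keeps the translation
    information-preserving, the adversarial term makes latents
    domain-invariant, and latent classification preserves semantics."""
    z_i, z_t = enc_i(x_i), enc_t(x_t)

    # Within-domain reconstruction (no paired cross-domain samples needed).
    rec = mse(dec_i(z_i), x_i) + mse(dec_t(z_t), x_t)

    # Discriminator sees gradient-reversed latents with true domain labels.
    d_i = disc(GradReverse.apply(z_i)).mean(1)   # (B, 1)
    d_t = disc(GradReverse.apply(z_t)).mean(1)
    adv = bce(d_i, torch.ones_like(d_i)) + bce(d_t, torch.zeros_like(d_t))

    # Class labels from each (unpaired) dataset supervise the shared latent.
    sem = ce(clf(z_i.mean(1)), y_i) + ce(clf(z_t.mean(1)), y_t)

    return rec + 0.1 * adv + sem                 # illustrative weighting

# Cross-domain translation at inference: trajectory -> inertial signals.
x_traj = torch.randn(4, 120, 2)                  # dummy batch
fake_inertial = dec_i(enc_t(x_traj))             # (4, 120, 6)
```

Because the encoders and decoders are recurrent and operate per time step, the same sketch also accommodates translation between sequences recorded at different sampling rates, as the abstract claims for the original framework.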