Human Silhouette and Skeleton Video Synthesis Through Wi-Fi Signals

Cited by: 8
Authors
Avola, Danilo [1 ]
Cascio, Marco [1 ]
Cinque, Luigi [1 ]
Fagioli, Alessio [1 ]
Foresti, Gian Luca [2 ]
Affiliations
[1] Sapienza Univ Rome, Dept Comp Sci, Via Salaria 113, I-00198 Rome, Italy
[2] Univ Udine, Dept Comp Sci Math & Phys, Via Sci 206, I-33100 Udine, Italy
Keywords
Human silhouette; video synthesis; Wi-Fi signal; skeleton; IMAGE SYNTHESIS; GAN; RECOGNITION; MODEL;
DOI
10.1142/S0129065722500150
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The increasing availability of wireless access points (APs) is driving human sensing applications based on Wi-Fi signals, as a support or alternative to widespread visual sensors, since such signals can address well-known vision-related problems such as illumination changes or occlusions. Indeed, image synthesis techniques that translate radio frequencies into the visible spectrum can become essential for obtaining otherwise unavailable visual data. This domain-to-domain translation is feasible because both objects and people affect electromagnetic waves, causing variations at both radio and optical frequencies. In the literature, models capable of inferring radio-to-visual feature mappings have gained momentum in the last few years, since frequency changes can be observed in the radio domain through the channel state information (CSI) of Wi-Fi APs, enabling signal-based feature extraction, e.g. amplitude. On this account, this paper presents a novel two-branch generative neural network that effectively maps radio data into visual features, following a teacher-student design that exploits a cross-modality supervision strategy. The latter conditions signal-based features in the visual domain so that visual data can be replaced entirely. Once trained, the proposed method synthesizes human silhouette and skeleton videos using exclusively Wi-Fi signals. The approach is evaluated on publicly available data, where it obtains remarkable results for both silhouette and skeleton video generation, demonstrating the effectiveness of the proposed cross-modality supervision strategy.
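The abstract mentions two technical ingredients: extracting signal-based features such as amplitude from complex-valued CSI, and conditioning radio-branch (student) features on visual-branch (teacher) features. A minimal NumPy sketch under stated assumptions — the CSI array layout (packets × subcarriers × antennas), the feature shapes, and the MSE matching loss are illustrative only, not the authors' exact formulation:

```python
import numpy as np

def csi_amplitude(csi: np.ndarray) -> np.ndarray:
    """Amplitude of complex CSI samples.

    csi: complex array, e.g. shape (packets, subcarriers, rx_antennas);
    this layout is a hypothetical example, not the paper's exact format.
    """
    return np.abs(csi)

def cross_modality_loss(student_feat: np.ndarray,
                        teacher_feat: np.ndarray) -> float:
    """Toy cross-modality supervision: mean squared error pushing
    radio-branch (student) features toward visual-branch (teacher) ones."""
    return float(np.mean((student_feat - teacher_feat) ** 2))

# Toy data: 4 packets, 30 OFDM subcarriers, 3 receive antennas.
rng = np.random.default_rng(0)
csi = rng.standard_normal((4, 30, 3)) + 1j * rng.standard_normal((4, 30, 3))

amp = csi_amplitude(csi)              # non-negative, same shape as the input
loss = cross_modality_loss(amp, amp)  # identical features give zero loss
```

In the paper's setting the teacher features would come from a visual network during training only, so that at inference the student can operate on Wi-Fi signals alone.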
Pages: 20