Personalized Auxiliary Information Presentation System for Mobile Network Based on Multimodal Information

Cited by: 1
Authors
Liu, Yuli [1 ]
Ijaz, Muhammad Fazal [2 ]
Affiliations
[1] Heze Univ, Dept Psychol, Shandong 274000, Peoples R China
[2] Sejong Univ, Dept Intelligent Mechatron Engn, Seoul 05006, South Korea
Keywords
Multimodal information; Mobile network; Personalized assistance; Fully convolutional neural network
DOI
10.1007/s11036-022-02076-5
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology];
Discipline code
0812
Abstract
Existing personalized auxiliary information presentation systems suffer from problems such as low presentation efficiency and poor emotion recognition. In this paper, multimodal information is introduced to improve the personalized auxiliary information presentation system of a mobile network. After user demand data are acquired, they are processed by an input data analysis layer and a logic service layer. Multimodal information is collected at the perception layer and transmitted to the system service layer, enabling the presentation system to identify the user's multimodal state in the mobile network. A long short-term memory (LSTM) network model is constructed to recognize the mobile network's personalized auxiliary information, which is then presented according to the user's emotion. Experimental results show that the system effectively identifies user emotion and presents personalized auxiliary information in the mobile network accordingly; the best configuration uses a learning rate of 0.02 and an optimal sequence length of 25 words.
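The abstract's recognition model is an LSTM network processing word sequences of length 25. As an illustration of the LSTM recurrence such a model relies on, the pure-Python sketch below implements a single LSTM cell and unrolls it over a 25-step sequence; all sizes, weight ranges, and the random inputs are illustrative assumptions, not the authors' implementation (which would also include training with the reported learning rate of 0.02 and an emotion classifier on top of the final hidden state).

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """Minimal LSTM cell: four gates, plain-list math, no training loop."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = random.Random(seed)
        def mat(rows, cols):
            return [[rng.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]
        # One weight matrix and bias per gate: input (i), forget (f), cell (g), output (o).
        self.W = {g: mat(hidden_size, input_size + hidden_size) for g in "ifgo"}
        self.b = {g: [0.0] * hidden_size for g in "ifgo"}

    def step(self, x, h, c):
        z = x + h  # concatenate [x_t, h_{t-1}]
        def lin(g):  # affine transform for gate g
            return [sum(w * v for w, v in zip(row, z)) + b
                    for row, b in zip(self.W[g], self.b[g])]
        i = [sigmoid(v) for v in lin("i")]        # input gate
        f = [sigmoid(v) for v in lin("f")]        # forget gate
        g = [math.tanh(v) for v in lin("g")]      # candidate cell state
        o = [sigmoid(v) for v in lin("o")]        # output gate
        c_new = [fv * cv + iv * gv for fv, cv, iv, gv in zip(f, c, i, g)]
        h_new = [ov * math.tanh(cv) for ov, cv in zip(o, c_new)]
        return h_new, c_new

# Unroll over 25 steps -- the sequence length the abstract reports as optimal.
cell = LSTMCell(input_size=4, hidden_size=8)
h, c = [0.0] * 8, [0.0] * 8
rng = random.Random(1)
for _ in range(25):
    x = [rng.uniform(-1.0, 1.0) for _ in range(4)]  # stand-in word embedding
    h, c = cell.step(x, h, c)
# h is the final hidden state; an emotion classifier would be applied to it.
```

In a trained system the final hidden state `h` would feed a softmax layer over emotion classes, and the weights would be fit by gradient descent rather than left at their random initial values.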
Pages: 2611-2621
Page count: 11