Personalized Clothing Prediction Algorithm Based on Multi-modal Feature Fusion

Times Cited: 0
Authors
Liu, Rong [1 ,2 ]
Joseph, Annie Anak [1 ]
Xin, Miaomiao [2 ]
Zang, Hongyan [2 ]
Wang, Wanzhen [2 ]
Zhang, Shengqun [2 ]
Affiliations
[1] Univ Malaysia Sarawak, Fac Engn, Kota Samarahan, Sarawak, Malaysia
[2] Qilu Inst Technol, Comp & Informat Engn, Jinan, Peoples R China
Keywords
fashion consumers; image; text data; personalized; multi-modal fusion
DOI
10.46604/ijeti.2024.13394
Chinese Library Classification (CLC)
T [Industrial Technology];
Discipline Code
08;
Abstract
With the popularization of information technology and the improvement of material living standards, fashion consumers face the daunting challenge of making informed choices from massive amounts of data. This study applies deep learning to sales data to analyze the personalized preference characteristics of fashion consumers and to predict fashion clothing categories, thereby empowering consumers to make well-informed decisions. The Visuelle dataset comprises 5,355 apparel products and 45 MB of sales records, encompassing image data, text attributes, and time-series data. The paper proposes a novel 1DCNN-2DCNN deep convolutional neural network model for the multi-modal fusion of clothing images and sales text data. The experimental findings demonstrate the strong performance of the proposed model, with accuracy, recall, F1 score, macro average, and weighted average reaching 99.59%, 99.60%, 98.01%, 98.04%, and 98.00%, respectively. A comparison with four hybrid models highlights the superiority of the proposed model in addressing personalized preferences.
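The record describes the model only at a high level: a 1D CNN branch for sales/text data and a 2D CNN branch for clothing images, fused for category prediction. The sketch below is a minimal PyTorch illustration of that kind of two-branch late fusion; the FusionNet name, all layer sizes, input shapes, and the concatenation-based fusion are assumptions for illustration, not details taken from the paper.

    import torch
    import torch.nn as nn

    class FusionNet(nn.Module):
        # Hypothetical 1D-CNN + 2D-CNN late-fusion sketch: the 2D branch
        # encodes a clothing image, the 1D branch encodes a sales/text
        # feature sequence, and the two embeddings are concatenated
        # before a linear classifier. All sizes are illustrative assumptions.
        def __init__(self, seq_channels=1, num_classes=10):
            super().__init__()
            # 2D-CNN branch; assumes 3x64x64 RGB inputs
            self.img_branch = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 32-dim embedding
            )
            # 1D-CNN branch; assumes an already-encoded sales/text sequence
            self.seq_branch = nn.Sequential(
                nn.Conv1d(seq_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),   # -> 16-dim embedding
            )
            self.head = nn.Linear(32 + 16, num_classes)  # classifier on fused features

        def forward(self, image, sequence):
            # Late fusion by feature concatenation
            fused = torch.cat([self.img_branch(image), self.seq_branch(sequence)], dim=1)
            return self.head(fused)

    # Smoke test with random tensors: a batch of 2 images and 12-step sequences
    model = FusionNet()
    logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 12))
    print(logits.shape)  # torch.Size([2, 10])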
Pages: 216-230
Page Count: 15