Improving Clothing Product Quality and Reducing Waste Based on Consumer Review Using RoBERTa and BERTopic Language Model

Cited by: 9
Authors
Alamsyah, Andry [1 ]
Girawan, Nadhif Ditertian [1 ]
Affiliations
[1] Telkom Univ, Sch Econ & Business, Bandung 40257, Indonesia
Keywords
big data; multilabel classification; natural language processing; sustainability; NEURAL-NETWORK; CLASSIFICATION;
DOI
10.3390/bdcc7040168
CLC Classification Code
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The disposability of clothing has emerged as a critical concern, precipitating waste accumulation due to product quality degradation. Such consequences exert significant pressure on resources and challenge sustainability efforts. In response, this research focuses on empowering clothing companies to elevate product excellence by harnessing consumer feedback. Beyond these insights, the research extends to sustainability by offering suggestions for refining product quality: improving material handling, gradually mitigating waste production, and cultivating longevity, thereby decreasing the volume of discarded clothes. Managing a vast influx of diverse reviews necessitates sophisticated natural language processing (NLP) techniques. Our study introduces a Robustly Optimized BERT Pretraining Approach (RoBERTa) model calibrated for multilabel classification and BERTopic for topic modeling. The model adeptly distills vital themes from consumer reviews, exhibiting high accuracy in projecting concerns across various dimensions of clothing quality. NLP's potential lies in endowing companies with insights into consumer reviews, augmented by BERTopic to facilitate immersive exploration of the harvested review topics. This research presents a thorough case for integrating machine learning to foster sustainability and waste reduction. Its contribution is notable for integrating RoBERTa and BERTopic for multilabel classification and topic modeling in the fashion industry. The results indicate that the RoBERTa model performs strongly, with a macro-averaged F1 score of 0.87 and a micro-averaged F1 score of 0.87. Likewise, BERTopic achieves a coherence score of 0.67, indicating that the model forms insightful topics.
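For readers comparing the reported scores: macro-averaged F1 computes F1 per quality label and averages the results (each label weighted equally), while micro-averaged F1 pools true/false positives and negatives across all labels before computing one F1. The sketch below is an illustrative, self-contained computation on toy multilabel data (not data or code from the paper):

```python
def f1_scores(y_true, y_pred):
    """Macro- and micro-averaged F1 for multilabel binary predictions.

    y_true, y_pred: lists of equal-length 0/1 tuples, one per sample,
    each position corresponding to one label (e.g. a clothing-quality aspect).
    """
    n_labels = len(y_true[0])
    per_label_f1 = []
    tp_all = fp_all = fn_all = 0
    for j in range(n_labels):
        # Count true positives, false positives, false negatives for label j.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t[j] == 1 and p[j] == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t[j] == 0 and p[j] == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t[j] == 1 and p[j] == 0)
        denom = 2 * tp + fp + fn
        per_label_f1.append(2 * tp / denom if denom else 0.0)
        tp_all, fp_all, fn_all = tp_all + tp, fp_all + fp, fn_all + fn
    macro = sum(per_label_f1) / n_labels          # average of per-label F1
    denom = 2 * tp_all + fp_all + fn_all
    micro = 2 * tp_all / denom if denom else 0.0  # F1 over pooled counts
    return macro, micro


# Toy example: 4 review samples, 3 quality labels each.
y_true = [(1, 0, 1), (0, 1, 0), (1, 1, 0), (1, 0, 0)]
y_pred = [(1, 0, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)]
macro, micro = f1_scores(y_true, y_pred)
```

When macro and micro F1 coincide (as with the 0.87/0.87 reported above), performance is roughly uniform across labels; a macro score well below the micro score would instead signal weak performance on rare labels.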
Pages: 21