Advancing Quality Assessment in Vertical Field: Scoring Calculation for Text Inputs to Large Language Models

Cited by: 0
Authors
Yi, Jun-Kai [1 ]
Yao, Yi-Fan [1 ]
Affiliations
[1] Beijing Informat Sci & Technol Univ, Coll Automat, Beijing 100192, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, No. 16
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
VFS evaluation algorithm; text quality; large language models; generative AI;
DOI
10.3390/app14166955
Chinese Library Classification (CLC)
O6 [Chemistry];
Discipline Code
0703;
Abstract
With the advent of Transformer-based generative AI, there has been a surge in research on large-scale generative language models, especially in natural language processing applications. These models have also demonstrated immense potential across various vertical fields, ranging from education and history to mathematics, medicine, information processing, and cybersecurity. In research on Chinese-language AI applications, the quality of the text generated by generative AI has become a central focus of attention; however, the quality of the input text remains largely overlooked. Consequently, based on vectorized comparison against vertical-field lexicons and on text structure analysis, this paper proposes three input indicators, D1, D2, and D3, that affect generation quality. Building on these indicators, we developed a text quality evaluation algorithm called VFS (Vertical Field Score) and designed an output evaluation metric named V-L (Vertical-Length). Our experiments indicate that higher-scoring input texts enable generative AI to produce more effective outputs. This particularly benefits users who apply generative AI to question answering in specific vertical fields, improving the effectiveness and accuracy of responses.
Pages: 15
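The abstract describes VFS as a score built from three input indicators (D1, D2, D3) derived from vectorized comparison against a vertical-field lexicon and from text structure analysis. The following is a minimal, illustrative Python sketch of that general idea only; the indicator definitions (lexicon coverage, bag-of-words cosine similarity, a toy structure measure), the weights, and all function names are assumptions for illustration, not the paper's actual formulas.

```python
# Illustrative sketch of a VFS-style input score (assumed formulas, not the paper's method).
# D1: coverage of a vertical-field lexicon by the input text.
# D2: cosine similarity between a bag-of-words vector of the input and the lexicon.
# D3: a crude structure indicator based on length and sentence segmentation.
import math
import re
from collections import Counter

def d1_lexicon_coverage(tokens, lexicon):
    """Fraction of lexicon terms that appear in the input (assumed definition)."""
    hits = sum(1 for term in lexicon if term in tokens)
    return hits / len(lexicon) if lexicon else 0.0

def d2_cosine_similarity(tokens, lexicon):
    """Cosine similarity between token counts and a uniform lexicon vector (assumed)."""
    text_vec = Counter(tokens)
    lex_vec = Counter(lexicon)
    dot = sum(text_vec[w] * lex_vec[w] for w in lex_vec)
    norm_t = math.sqrt(sum(c * c for c in text_vec.values()))
    norm_l = math.sqrt(sum(c * c for c in lex_vec.values()))
    return dot / (norm_t * norm_l) if norm_t and norm_l else 0.0

def d3_structure(text):
    """Toy structure indicator: rewards multi-sentence, moderately long inputs."""
    sentences = [s for s in re.split(r"[.!?。！？]", text) if s.strip()]
    length_score = min(len(text) / 200.0, 1.0)       # saturate at ~200 characters
    sentence_score = min(len(sentences) / 3.0, 1.0)  # saturate at 3 sentences
    return 0.5 * length_score + 0.5 * sentence_score

def vfs_score(text, lexicon, weights=(0.4, 0.4, 0.2)):
    """Weighted combination of D1-D3; the weights here are placeholders."""
    tokens = re.findall(r"\w+", text.lower())
    d1 = d1_lexicon_coverage(tokens, lexicon)
    d2 = d2_cosine_similarity(tokens, lexicon)
    d3 = d3_structure(text)
    w1, w2, w3 = weights
    return w1 * d1 + w2 * d2 + w3 * d3

if __name__ == "__main__":
    cyber_lexicon = ["firewall", "malware", "encryption", "vulnerability", "phishing"]
    prompt = ("Explain how a firewall and encryption work together "
              "to reduce the impact of malware. Give a short example.")
    print(f"VFS (assumed formula): {vfs_score(prompt, cyber_lexicon):.3f}")
```

In this sketch a prompt that uses more field-specific terminology and is reasonably structured scores higher, mirroring the abstract's claim that higher-scoring inputs lead to more effective outputs; the paper's validated indicator definitions and the V-L output metric are not reproduced here.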