Quantum space-efficient large language models for Prolog query translation
Cited by: 0
Authors:
Ahmed, Roshan [1]
Sridevi, S. [2]
Affiliations:
[1] Vellore Inst Technol, Sch Comp Sci & Engn, Dept AI & Robot, Chennai 600127, Tamil Nadu, India
[2] Vellore Inst Technol, Sch Comp Sci & Engn, Chennai 600127, Tamil Nadu, India
Keywords:
Word2Vec;
Large language model;
Generative AI;
Quantum computing;
Quantum machine learning;
Transfer learning;
Prolog;
DOI:
10.1007/s11128-024-04559-8
Chinese Library Classification: O4 [Physics]
Subject Classification Code: 0702
Abstract:
As large language models (LLMs) continue to grow in complexity, their size increases exponentially, echoing Moore's law. Running such models poses a significant challenge, as classical computers may lack the memory needed to store or execute their parameters. In this context, hybrid quantum machine learning offers a promising way to mitigate this issue by reducing the storage required for model parameters. Although purely quantum language models have shown success in the recent past, they remain constrained by limited features and availability. In this research we propose DeepKet, a model with a quantum embedding layer that uses the Hilbert space generated by quantum entanglement to store feature vectors, yielding a significant reduction in size. The experimental analysis evaluates open-source pre-trained models such as Microsoft Phi and CodeGen, fine-tuned to generate Prolog code for geospatial data retrieval, and investigates the effectiveness of the quantum DeepKet embedding layer by comparing its parameter count with the total parameter count of traditional models.
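Illustration (a minimal sketch, not the authors' implementation; it assumes the PennyLane library, and the dimension d, the AmplitudeEmbedding template, and the random stand-in vector are illustrative choices): amplitude encoding stores a d-dimensional feature vector in the amplitudes of ceil(log2 d) qubits, which is the sense in which a 2^n-dimensional Hilbert space can compress embedding storage.

import math
import numpy as np
import pennylane as qml

d = 8                                  # classical embedding dimension (illustrative)
n_qubits = math.ceil(math.log2(d))     # only log2(d) qubits are needed

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def embed(features):
    # Store the normalized feature vector in the state amplitudes.
    qml.AmplitudeEmbedding(features, wires=range(n_qubits), normalize=True)
    return qml.probs(wires=range(n_qubits))

vec = np.random.rand(d)                # stand-in word vector (e.g., from Word2Vec)
print(embed(vec))                      # squared amplitudes of the encoded state

For a vocabulary of size V, a classical embedding table stores V x d floating-point parameters, whereas under amplitude encoding the same d numbers occupy only ceil(log2 d) qubits per token; this exponential gap in Hilbert-space dimension is what motivates the parameter-count comparison described in the abstract.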
Pages: 20