- [1] Huang Jie, Chang Kevin Chen-Chuan, Towards reasoning in large language models: A survey [C], Proc of Findings of the Association for Computational Linguistics, pp. 1049-1065, (2023)
- [2] Mori M, MacDorman K, Kageki N., The uncanny valley [from the field] [J], IEEE Robotics & Automation Magazine, 19, 2, pp. 98-100, (2012)
- [3] Zhang Zhexin, Lei Leqi, Wu Lindong, et al., SafetyBench: Evaluating the safety of large language models with multiple choice questions, (2023)
- [4] Sun Hao, Zhang Zhexin, Deng Jiawen, et al., Safety assessment of Chinese large language models, (2023)
- [5] Deshpande A, Murahari V, Rajpurohit T, et al., Toxicity in ChatGPT: Analyzing persona-assigned language models, (2023)
- [6] Yi Xiaoyuan, Xie Xing, Unpacking the ethical value alignment in big models [J], Journal of Computer Research and Development, 60, 9, pp. 1926-1945, (2023)
- [7] Xi Zhiheng, Chen Wenxiang, Guo Xin, et al., The rise and potential of large language model based agents: A survey, (2023)
- [8] Xu Guohai, Liu Jiayi, Yan Ming, et al., CValues: Measuring the values of Chinese large language models from safety to responsibility, (2023)
- [9] Khalatbari L, Bang Yejin, Su Dan, et al., Learn what NOT to learn: Towards generative safety in chatbots, (2023)
- [10] Balestriero R, Cosentino R, Shekkizhar S., Characterizing large language model geometry solves toxicity detection and generation, (2023)