61 references in total
- [1] Ouyang L., Wu J., Jiang X., Training language models to follow instructions with human feedback, (2022)
- [2] Touvron H., Lavril T., Izacard G., LLaMA: Open and efficient foundation language models, (2023)
- [3] Bubeck S., Chandrasekaran V., Eldan R., Sparks of artificial general intelligence: Early experiments with GPT-4, (2023)
- [4] OpenAI, GPT-4 Technical Report, (2023)
- [5] Zhao W., Zhao Y., Lu X., Is ChatGPT equipped with emotional dialogue capabilities?, (2023)
- [6] Al-Qablan T.A., Mohd Noor M.H., Al-Betar M.A., et al., A survey on sentiment analysis and its applications, Neural Computing and Applications, pp. 1-35, (2023)
- [7] Sun H., Wang H., Liu J., et al., CubeMLP: An MLP-based model for multimodal sentiment analysis and depression estimation, In: Proceedings of the 30th ACM International Conference on Multimedia, pp. 3722-3729, (2022)
- [8] Baltrusaitis T., Ahuja C., Morency L.P., Multimodal machine learning: A survey and taxonomy, IEEE Transactions on Pattern Analysis and Machine Intelligence, 41, 2, pp. 423-443, (2019)
- [9] Hazarika D., Zimmermann R., Poria S., MISA: Modality-invariant and -specific representations for multimodal sentiment analysis, In: Proceedings of the 28th ACM International Conference on Multimedia, pp. 1122-1131, (2020)
- [10] Liu Y., Yuan Z., Mao H., et al., Make acoustic and visual cues matter: CH-SIMS v2.0 dataset and AV-Mixup consistent module, In: Proceedings of the 2022 International Conference on Multimodal Interaction, pp. 247-258, (2022)