Introduction: During influenza season, some patients seek medical advice through online platforms. However, due to time constraints, the informational and emotional support that physicians can provide is limited. Large language models (LLMs) can rapidly provide medical knowledge and empathy, but their capacity to offer informational support to patients with influenza, and to assist physicians in providing emotional support, remains unclear. This study therefore evaluated the quality of LLM-generated influenza advice and its emotional support performance in comparison with physician advice.

Methods: This study used 200 influenza question-answer pairs from an online health community. Data collection consisted of two parts: (1) a panel of board-certified physicians evaluated the quality of LLM advice versus physician advice; (2) physician advice was polished using an LLM, and the LLM-rewritten advice was compared with the original physician advice using the LLM module.

Results: For informational support, there was no significant difference between LLM and physician advice in the presence of incorrect information, omission of information, extent of harm, or empathy. Nevertheless, compared with physician advice, LLM advice was more likely to cause harm and more likely to be in line with medical consensus. The LLM was also able to assist physicians in providing emotional support: the LLM-rewritten advice was significantly more respectful, friendly, and empathetic than the original physician advice, and it was logically coherent. In most cases, the LLM neither added to nor omitted the original medical information.

Conclusion: This study suggests that LLMs can provide informational and emotional support for influenza patients, which may help alleviate the pressure on physicians and promote physician-patient communication.