Background: Large language models (LLMs) have substantially advanced natural language processing (NLP) capabilities but often struggle with knowledge-driven tasks in specialized domains such as biomedicine. Integrating biomedical knowledge sources such as SNOMED CT into LLMs may enhance their performance on biomedical tasks. However, the methodologies and effectiveness of incorporating SNOMED CT into LLMs have not been systematically reviewed.

Objective: This scoping review aims to examine how SNOMED CT is integrated into LLMs, focusing on (1) the types and components of LLMs being integrated with SNOMED CT, (2) which contents of SNOMED CT are being integrated, and (3) whether this integration improves LLM performance on NLP tasks.

Methods: Following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines, we searched ACM Digital Library, ACL Anthology, IEEE Xplore, PubMed, and Embase for relevant studies published from 2018 to 2023. Studies were included if they incorporated SNOMED CT into LLM pipelines for natural language understanding or generation tasks. Data on LLM types, SNOMED CT integration methods, end tasks, and performance metrics were extracted and synthesized.

Results: The review included 37 studies. Bidirectional Encoder Representations from Transformers (BERT) and its biomedical variants were the most commonly used LLMs. Three main approaches for integrating SNOMED CT were identified: (1) incorporating SNOMED CT into LLM inputs (28/37, 76%), primarily using concept descriptions to expand training corpora; (2) integrating SNOMED CT into additional fusion modules (5/37, 14%); and (3) using SNOMED CT as an external knowledge retriever during inference (5/37, 14%). The most frequent end task was medical concept normalization (15/37, 41%), followed by entity extraction or typing and classification. Only about half of the studies (19/37, 51%) provided direct performance comparisons; among these, most (17/19, 89%) reported performance improvements after SNOMED CT integration. The reported gains varied widely across different metrics and tasks, ranging from 0.87% to 131.66%. However, some studies showed either no improvement or a decline in certain performance metrics.

Conclusions: This review demonstrates diverse approaches for integrating SNOMED CT into LLMs, with a focus on using concept descriptions to enhance biomedical language understanding and generation. While the results suggest potential benefits of SNOMED CT integration, the lack of standardized evaluation methods and comprehensive performance reporting hinders definitive conclusions about its effectiveness. Future research should prioritize consistent reporting of performance comparisons and explore more sophisticated methods for incorporating SNOMED CT's relational structure into LLMs. In addition, the biomedical NLP community should develop standardized evaluation frameworks to better assess the impact of ontology integration on LLM performance.