Speech contains rich emotional information, especially in its time and frequency domains. Extracting emotional information from these domains to model a global emotional representation of speech has therefore proved successful in Speech Emotion Recognition (SER). However, this global emotion modeling, particularly in the frequency domain, mainly captures emotional correlations between frequency bands while neglecting the dynamic changes in frequency bins within local frequency bands. Related studies indicate that the energy distribution within local frequency bands contains important emotional cues, so relying solely on global modeling may fail to capture these critical local cues. To address this issue, we introduce the Adaptive Multi-Graph Convolutional Network (AMGCN) for SER, which integrates local and global analysis to capture emotional information from speech more comprehensively. The AMGCN comprises two core components: the Local Multi-Graph Convolutional Network (Local Multi-GCN) and the Global Multi-Graph Convolutional Network (Global Multi-GCN). Specifically, the Local Multi-GCN models the dynamic changes in frequency bins within local frequency bands: each frequency band has its own independent graph convolutional network, which captures local frequency-domain contextual information and avoids losing emotional information in the local frequency domain. The Global Multi-GCN then combines two distinct graph convolutions to model global time-frequency patterns: one extends the Local Multi-GCN to model emotional correlations between frequency bands, and the other connects to the initial features to exploit global temporal contextual information.
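The local-then-global structure described above can be illustrated with a minimal NumPy sketch. This is a hypothetical toy version, not the paper's implementation: the band count, bin count, feature dimension, adjacency choices (chain graph within a band, fully connected graph between bands), and mean-pooling step are all illustrative assumptions.

```python
import numpy as np

def gcn_layer(X, A, W):
    # One graph-convolution step: symmetrically normalized adjacency
    # (with self-loops) times node features times a weight matrix, then ReLU.
    A_hat = A + np.eye(A.shape[0])
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

rng = np.random.default_rng(0)
n_bands, bins_per_band, d = 4, 8, 16  # illustrative sizes

# Local stage: an independent GCN per frequency band, modeling dynamic
# changes across the frequency bins inside that band.
spec = rng.standard_normal((n_bands, bins_per_band, d))
local_out = []
for b in range(n_bands):
    # Chain adjacency linking neighboring bins within the band (assumption).
    idx = np.arange(bins_per_band)
    A_local = (np.abs(np.subtract.outer(idx, idx)) == 1).astype(float)
    W_local = rng.standard_normal((d, d)) * 0.1
    local_out.append(gcn_layer(spec[b], A_local, W_local))

# Global stage: pool each band to a single node, then model correlations
# between bands with a fully connected band-level graph (assumption).
band_nodes = np.stack([h.mean(axis=0) for h in local_out])   # (n_bands, d)
A_global = np.ones((n_bands, n_bands)) - np.eye(n_bands)
W_global = rng.standard_normal((d, d)) * 0.1
global_out = gcn_layer(band_nodes, A_global, W_global)
print(global_out.shape)  # (4, 16): one refined representation per band
```

The key design point the sketch reflects is that each band's graph convolution has its own weights and adjacency, so local bin-level dynamics are modeled independently before band-level correlations are aggregated globally.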
By combining local and global modeling, AMGCN leverages complementary information from both levels to obtain a more discriminative and robust emotional representation. The effectiveness of AMGCN has been empirically validated on three benchmark datasets, IEMOCAP, CASIA, and ABC, where it achieves accuracies of 74.25%, 49.67%, and 70.93%, respectively, surpassing existing state-of-the-art SER methods.