In recent years, deep neural networks (DNNs) have been widely applied across various fields, but their vulnerability to adversarial attacks has attracted widespread attention. Existing research has highlighted the potential of ensemble attacks, which combine the strengths of transfer-based and query-based methods, to generate highly transferable adversarial examples. However, simply aggregating the outputs of multiple models without accounting for gradient variance can yield low transferability, and static model weights or inefficient weight update strategies can inflate the number of queries required. To address these issues, this paper introduces a novel black-box ensemble attack algorithm (DSWEA) that combines the Ranking Variance Reduced (RVR) ensemble strategy with the Dynamic Surrogate Weighting (DSW) weight update strategy. RVR performs multiple internal iterations within each query to compute and accumulate unbiased gradients, which are then used to update the adversarial example; this gradient optimization mitigates the adverse effect of large gradient discrepancies between models, thereby improving the transferability of the perturbations. DSW dynamically adjusts the surrogate weights at each query iteration based on model gradient information, guiding efficient perturbation generation. We conduct extensive experiments on the ImageNet and CIFAR-10 datasets with models of diverse architectures. Our empirical results show that our method outperforms existing state-of-the-art techniques, achieving superior Attack Success Rate (ASR) and Average Number of Queries (ANQ).
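The overall loop described above (inner-iteration gradient accumulation per query, followed by a gradient-informed surrogate reweighting) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact DSWEA algorithm: the function names, the uniform weight initialization, the random smoothing of the inner iterates, and the norm-based reweighting rule are all simplifying assumptions, and the per-surrogate gradient functions stand in for real model loss gradients.

```python
import numpy as np

def ensemble_attack(x, grad_fns, queries=10, inner_iters=4, step=0.1, eps=0.5):
    """Illustrative RVR/DSW-style ensemble attack sketch (not the paper's exact method).

    x         -- clean input (NumPy array)
    grad_fns  -- one gradient function per surrogate model (stand-ins for
                 the gradients of each model's loss w.r.t. the input)
    """
    x_adv = x.copy()
    # Assumed initialization: start with uniform surrogate weights.
    weights = np.ones(len(grad_fns)) / len(grad_fns)
    for _ in range(queries):
        # RVR-style inner loop: accumulate weighted gradients over several
        # internal iterations before a single adversarial-example update.
        g_acc = np.zeros_like(x_adv)
        for _ in range(inner_iters):
            x_probe = x_adv + np.random.uniform(-0.05, 0.05, x_adv.shape)
            grads = [fn(x_probe) for fn in grad_fns]
            g_acc += sum(w * g for w, g in zip(weights, grads))
        # DSW-style reweighting (assumed rule): emphasize surrogates whose
        # gradients are currently largest in norm.
        norms = np.array([np.linalg.norm(fn(x_adv)) for fn in grad_fns])
        weights = norms / (norms.sum() + 1e-12)
        # Signed-gradient ascent step, projected into the L_inf eps-ball.
        x_adv = np.clip(x_adv + step * np.sign(g_acc), x - eps, x + eps)
    return x_adv
```

As a toy usage, one can attack quadratic "losses" L_i(x) = ||x - c_i||^2, whose gradients are 2(x - c_i); the loop then pushes the example away from the centers c_i while keeping it within the eps-ball around the clean input.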