Current deep learning-based models for image classification are effective at making decisions, but their lack of transparency can be a significant concern in high-stakes settings. To address this issue, many state-of-the-art methods define (post-hoc) explanations as visual heatmaps or image segments deemed responsible for the classifiers' outputs. However, the static nature of these explanations often fails to align with human explanatory practices. To obtain human-oriented explanations, we propose an alternative, novel form of dialogue-based interactive explanation for image classifiers: visual debates between two fictional players who interact to argue for and against the classifiers' outputs. Specifically, in our method, the players propose arguments, namely (abstract) features drawn from the classifiers' latent representations, which are in turn countered by the opposing player. We present a realization of visual debates that uses quantization to extract arguments, recurrent networks to model player behaviour, and network dissection to visualize arguments. Experimentally, we show that our visual debates satisfy the desiderata of dialecticity, convergence, and faithfulness.
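
To make the debate setup concrete, the following is a minimal sketch of the turn-taking loop in PyTorch. It is an illustration under stated assumptions, not the paper's exact implementation: the `Player` class, the codebook dimensions, the shared debate state, and the greedy argument-selection rule are all hypothetical choices standing in for the quantization and recurrent components described above.

```python
# Minimal sketch of a visual-debate loop: two recurrent players alternately
# pick quantized arguments from a codebook of latent features.
# All names and dimensions here are illustrative assumptions.
import torch
import torch.nn as nn

class Player(nn.Module):
    """Recurrent player that selects one quantized argument per turn."""
    def __init__(self, num_codes: int, code_dim: int, hidden_dim: int):
        super().__init__()
        self.rnn = nn.GRUCell(code_dim, hidden_dim)
        self.scorer = nn.Linear(hidden_dim, num_codes)

    def forward(self, codebook: torch.Tensor, h: torch.Tensor):
        logits = self.scorer(h)          # score every candidate argument
        idx = logits.argmax(dim=-1)      # greedy pick (illustrative choice)
        arg = codebook[idx]              # (batch, code_dim) chosen argument
        h = self.rnn(arg, h)             # update the debate state
        return idx, h

# Hypothetical codebook: quantized features from the classifier's latent
# space (e.g. obtained by vector-quantizing its feature maps).
num_codes, code_dim, hidden_dim, turns = 128, 32, 64, 6
codebook = torch.randn(num_codes, code_dim)
pro = Player(num_codes, code_dim, hidden_dim)   # argues for the output
con = Player(num_codes, code_dim, hidden_dim)   # argues against it

h = torch.zeros(1, hidden_dim)  # shared debate state (a design assumption)
transcript = []
for t in range(turns):
    player, name = (pro, "pro") if t % 2 == 0 else (con, "con")
    idx, h = player(codebook, h)
    transcript.append((name, idx.item()))

print(transcript)  # alternating sequence of argument indices, the "debate"
```

In this sketch both players read and write a single shared hidden state, so each counter-argument is conditioned on everything said so far; the selected indices could then be mapped back to visual concepts, e.g. via network dissection, for inspection.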