Recent studies of speech perception have suggested that vowel identification depends on dynamic cues rather than on a single 'static' spectral slice at the vowel midpoint. The experiments reported in this paper seek both to test the extent to which vowel recognition depends on dynamic information and to identify the nature of the dynamic cues on which such recognition might depend. Gaussian classification techniques, as well as several neural network architectures, were used to classify some 3000 vowels in /CVd/ citation-form Australian English words, following training on roughly the same number of vowel tokens produced by different talkers. The first set of experiments shows that when vowels are classified from three spectral slices taken at the vowel margins and midpoint, diphthongs, but not monophthongs, benefit from the additional spectral information at the vowel margins. A further experiment shows that vowels are classified no better by a time-delay neural network than by the three-slice network, in which time is not explicitly represented. At least for the citation-form Australian English vowels in this study, these results are interpreted as more consistent with a target theory than with a dynamic theory of vowel perception.
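To make the classification setup concrete, the following is a minimal sketch of Gaussian (maximum-likelihood) vowel classification from three spectral slices, of the general kind the abstract describes. The feature layout (two formants at vowel onset, midpoint, and offset), the vowel labels, and the synthetic training data are illustrative assumptions, not the paper's actual parameterization.

# A minimal sketch of Gaussian vowel classification from three spectral
# slices. Features, labels, and data below are illustrative assumptions.
import numpy as np

def fit_gaussians(X, y):
    """Estimate a full-covariance Gaussian for each vowel class."""
    params = {}
    for label in np.unique(y):
        Xc = X[y == label]
        params[label] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def log_likelihood(x, mean, cov):
    """Log density of one token x under a multivariate Gaussian."""
    d = x - mean
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + d @ np.linalg.solve(cov, d)
                   + len(x) * np.log(2 * np.pi))

def classify(X, params):
    """Assign each token to the vowel class with the highest likelihood."""
    labels = list(params)
    scores = np.array([[log_likelihood(x, *params[l]) for l in labels]
                       for x in X])
    return np.array(labels)[scores.argmax(axis=1)]

# Illustrative data: 2 formants x 3 slices = 6 features per vowel token.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 6))
y_train = rng.choice(["i:", "ae", "ai"], size=300)  # hypothetical vowel labels
params = fit_gaussians(X_train, y_train)
print(classify(X_train[:5], params))

Fitting a full-covariance Gaussian per class and classifying by maximum likelihood is equivalent to quadratic discriminant analysis; the paper's actual spectral parameterization and vowel inventory would replace the synthetic placeholders used here.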