This paper presents a novel system that exploits synchrony spectra and deep neural networks (DNNs) for automatic speech recognition (ASR) in challenging noisy environments. Synchrony spectra measure the extent to which each frequency channel in an auditory model is entrained to a particular pitch period, and they are used together with F0 estimates either in a DNN for time-frequency (T-F) mask estimation or to augment the input features of a DNN-based ASR system. The proposed approach was evaluated in the context of the CHiME-3 Challenge. Our experiments show that the synchrony spectra features work best when augmenting the input features of the DNN-based ASR system. Compared to the CHiME-3 baseline system, our best system reduces the word error rate (WER) by more than 14% absolute, achieving a WER of 18.56% on the evaluation test set.
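To make the notion of per-channel entrainment concrete, the following is a minimal, illustrative sketch of one way a synchrony-spectrum-style measure could be computed: the normalized autocorrelation of each auditory filterbank channel at the lag given by an F0 estimate. This is only an assumed formulation for illustration, not the paper's exact method, and all function and variable names are hypothetical.

```python
# Illustrative sketch (assumed formulation, not the paper's exact method):
# per-channel normalized autocorrelation at the pitch lag as a proxy for
# synchrony to the estimated pitch period.
import numpy as np


def synchrony_spectrum(channel_signals: np.ndarray, f0_hz: float, fs: int) -> np.ndarray:
    """Per-channel entrainment to the pitch period.

    channel_signals : (num_channels, num_samples) auditory filterbank outputs
    f0_hz           : pitch (F0) estimate for the current frame, in Hz
    fs              : sampling rate in Hz
    Returns one value per channel in roughly [-1, 1]; values near 1 indicate
    strong synchrony to the pitch period.
    """
    lag = int(round(fs / f0_hz))               # pitch period in samples
    x = channel_signals[:, :-lag]              # channel signal
    y = channel_signals[:, lag:]               # same signal delayed by one pitch period
    num = np.sum(x * y, axis=1)                # correlation at the pitch lag
    den = np.sqrt(np.sum(x * x, axis=1) * np.sum(y * y, axis=1)) + 1e-12
    return num / den                           # normalized autocorrelation


if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    f0 = 200.0
    # Two toy "channels": one entrained to the 200 Hz pitch, one pure noise.
    voiced = np.sin(2 * np.pi * f0 * t) + 0.1 * np.random.randn(fs)
    noise = np.random.randn(fs)
    print(synchrony_spectrum(np.stack([voiced, noise]), f0, fs))
```

In such a setup, the resulting per-channel values could either feed a DNN that estimates a T-F mask or be appended to the acoustic features of the ASR front end, which is the configuration the experiments found most effective.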