In this paper, we present a novel sparsity-based acceleration logic that leverages two-sided fine-grained sparsity in the activation and weight tensors to skip ineffectual computations, thereby implementing an efficient convolution engine in a hardware accelerator. However, as demonstrated in this paper, circuit-level faults manifested in the sparsity logic can result in graceless degradation in classification accuracy, as well as control failure in the sparse DNN accelerator in mission mode. To circumvent this, we propose RASH, a Reliable deep learning Acceleration framework using Sparsity-based Hardware, which enables in-field detection of faults manifested in the sparsity logic of the DNN accelerator.
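To illustrate the idea of skipping ineffectual computations, the following is a minimal, purely functional Python sketch of two-sided sparsity exploitation; it is not the accelerator's micro-architecture, and the function name `sparse_dot` and the toy tensors are hypothetical, introduced only for illustration.

```python
import numpy as np

def sparse_dot(activations: np.ndarray, weights: np.ndarray) -> float:
    """Accumulate only effectual products, i.e. operand pairs where both
    the activation and the weight are non-zero. This mirrors, at a
    functional level, how a two-sided sparsity engine skips zero operands
    on either side instead of issuing the MAC."""
    acc = 0.0
    for a, w in zip(activations.ravel(), weights.ravel()):
        if a == 0.0 or w == 0.0:
            continue  # ineffectual computation: skipped by the sparsity logic
        acc += a * w
    return acc

# Toy example with 50% activation sparsity and 50% weight sparsity.
acts = np.array([0.0, 1.5, 0.0, 2.0])
wts  = np.array([0.5, 0.0, 0.0, 1.0])
assert sparse_dot(acts, wts) == np.dot(acts, wts)  # same result, fewer MACs
```

A fault in the hardware realization of this skip decision (e.g., wrongly flagging a non-zero operand as zero) silently drops effectual products, which is why such faults can degrade accuracy without any visible crash.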