Claiming ownership of trained neural networks is critical for stakeholders who invest heavily in high-performance models. The entire pipeline, from data curation to the high-performance computing infrastructure required for neural architecture search and model training, carries a substantial cost. Watermarking neural networks is a potential solution to this problem, but standard techniques suffer from vulnerabilities demonstrated by attackers. In this paper, we propose a robust watermarking mechanism for neural architectures. Our proposed method, ROWBACK, turns two properties of neural networks, the existence of adversarial examples and the ability to embed backdoors in the network during training, into a scheme that guarantees strong proofs of ownership. We redesign the Trigger Set for watermarking using adversarial examples of the model to be watermarked, and assign them specific labels based on the model's adversarial behaviour. We also mark every layer separately during training, ensuring that removing the watermark requires complete retraining. We have tested ROWBACK against key indicative properties expected of a reliable watermarking scheme: the watermarked model's accuracy remains within 1-2% of the original model, and verification achieves a complete 100% match on the Trigger Set. The scheme is also robust against state-of-the-art watermark removal attacks [1], which require retraining all layers with at least 60% of the training samples for more than 45% of the original training epochs.
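As a rough illustration of the trigger-set construction described above, the following is a minimal sketch, assuming PyTorch, FGSM as the adversarial-example generator, and a simple labelling rule based on the model's adversarial prediction; ROWBACK's exact construction and labelling strategy may differ.

```python
# Hypothetical sketch: build a watermarking Trigger Set from a model's own
# adversarial examples (FGSM assumed here purely for illustration).
import torch
import torch.nn.functional as F

def build_trigger_set(model, images, labels, epsilon=0.05):
    """Return (trigger_inputs, trigger_labels) derived from adversarial examples."""
    model.eval()
    images = images.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the true labels.
    loss = F.cross_entropy(model(images), labels)
    loss.backward()

    # FGSM step: perturb each image in the direction that increases the loss.
    adv_images = (images + epsilon * images.grad.sign()).clamp(0.0, 1.0).detach()

    # Label each trigger sample by the model's behaviour on the adversarial
    # input (here: the class the model is fooled into predicting).
    with torch.no_grad():
        trigger_labels = model(adv_images).argmax(dim=1)

    return adv_images, trigger_labels
```

Because the trigger inputs are tied to the watermarked model's own decision boundary, an independently trained model is unlikely to reproduce the same Trigger Set behaviour, which is what makes verification meaningful.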