The performance of deep learning models depends heavily on the amount of training data. It is therefore common practice for data holders to merge their datasets and train models collaboratively, which, however, poses a threat to data privacy. Unlike existing approaches such as secure multi-party computation (MPC) and federated learning (FL), we find that representation learning has unique advantages in collaborative learning owing to its low privacy budget, wide applicability across tasks, and low communication overhead. However, shared data representations are vulnerable to model inversion attacks. In this article, we formally define the collaborative learning scenario and present ARS (adversarial representation sharing), a collaborative learning framework in which users share representations of their data to train models, adding imperceptible adversarial noise to the representations to defend against reconstruction and attribute extraction attacks. Through theoretical analysis and evaluation of ARS in different contexts, we demonstrate that our mechanism is effective against model inversion attacks and achieves high utility and low communication complexity while preserving data privacy. Moreover, ARS is widely applicable: it extends readily to the vertical data partitioning scenario and to a variety of tasks.
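To illustrate the core idea of perturbing a shared representation against an inversion attacker, the following is a minimal sketch, not the actual ARS mechanism: it assumes a toy linear encoder `E`, a linear decoder `W` standing in for the attacker's inversion model, and an FGSM-style sign-gradient step of size `eps`, all of which are illustrative choices rather than details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (assumption): a linear "encoder" E producing a
# representation z of private input x, and a linear "decoder" W standing
# in for an attacker's model inversion network.
d_x, d_z = 8, 4
E = rng.normal(size=(d_z, d_x))   # encoder weights (toy stand-in)
W = rng.normal(size=(d_x, d_z))   # attacker's decoder weights (toy stand-in)

x = rng.normal(size=d_x)          # private input held by the data owner
z = E @ x                         # representation that would be shared

def recon_loss(z_vec):
    """Attacker's reconstruction error ||W z - x||^2."""
    r = W @ z_vec - x
    return float(r @ r)

# Adversarial noise on the representation: take a small, bounded
# sign-gradient step that *increases* the attacker's reconstruction loss.
eps = 0.1
grad = 2.0 * W.T @ (W @ z - x)    # d(loss)/dz for the linear decoder
z_adv = z + eps * np.sign(grad)   # perturbed representation to share

print(recon_loss(z_adv) > recon_loss(z))  # → True: inversion degraded
```

Because the attacker's loss here is quadratic in `z`, the sign-gradient step provably increases it whenever the gradient is nonzero; in the actual framework the perturbation would be computed against a learned inversion model and constrained to preserve task utility.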