Although the importance of sleep is increasingly recognized, the lack of general and transferable algorithms hinders scalable sleep assessment in healthy people and in those with sleep disorders. A thorough understanding of sleep posture, state, and stage is a prerequisite for diagnosing and treating sleep disorders. At present, most existing methods rely on supervised learning to monitor the whole sleep process; however, without sufficient labeled sleep data, the reliability of sleep recognition networks is difficult to guarantee. To address this problem, we propose a transferable self-supervised instance learning model for three sleep recognition tasks, i.e., sleep posture, state, and stage recognition. First, a SleepGAN is designed to generate additional sleep data. The generated and original data are then augmented with a self-supervised rotation task and classified non-parametrically at the instance level, so that different sleep postures, states, and stages can be distinguished precisely. The proposed model can be applied to multimodal sleep data such as signals and images, compensates for the inaccuracy caused by insufficient data, and can be transferred to sleep datasets of different sizes. Experimental results show that our algorithm outperforms several state-of-the-art methods in recognizing physiological changes during sleep, which may help promote intelligent sleep assessment and monitoring.
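To make the pipeline concrete, the sketch below illustrates one possible reading of the two core ingredients named above: a rotation-based self-supervised augmentation of each sleep sample (treated here as a 2D input such as a spectrogram or posture image) and non-parametric, instance-level classification against a memory bank. The encoder architecture, embedding dimension, temperature, and memory-bank update rule are all assumptions for illustration, and the paper's SleepGAN generator is represented only by a placeholder tensor; this is a minimal sketch, not the authors' implementation.

```python
# Minimal sketch (assumptions throughout): rotation-based self-supervised views of
# sleep samples combined with non-parametric instance-level classification via a
# memory bank. SleepGAN outputs are stubbed with random tensors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SleepEncoder(nn.Module):
    """Toy CNN encoder producing L2-normalized embeddings (architecture is illustrative)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def rotate_views(x):
    """Self-supervised augmentation: 0/90/180/270-degree rotations of each sample."""
    return torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)], dim=0)

def instance_loss(features, indices, memory_bank, temperature=0.07):
    """Non-parametric instance-level classification: each sample is its own class,
    scored against the memory bank holding one embedding per instance."""
    logits = features @ memory_bank.t() / temperature  # (num_views, num_instances)
    return F.cross_entropy(logits, indices)

# --- illustrative training step on synthetic data ---
num_instances, dim = 256, 128
encoder = SleepEncoder(dim)
memory_bank = F.normalize(torch.randn(num_instances, dim), dim=1)  # one slot per instance
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.03, momentum=0.9)

real = torch.randn(8, 1, 32, 32)        # original sleep samples (placeholder)
generated = torch.randn(8, 1, 32, 32)   # stand-in for SleepGAN-generated samples
idx = torch.arange(16)                  # instance identities of the 16 samples

batch = torch.cat([real, generated], dim=0)
views = rotate_views(batch)             # 4 rotated views per sample
view_idx = idx.repeat(4)                # each rotated view keeps its instance identity

feats = encoder(views)
loss = instance_loss(feats, view_idx, memory_bank)
loss.backward()
optimizer.step()

with torch.no_grad():                   # momentum-style refresh of the memory bank
    memory_bank[idx] = F.normalize(
        0.5 * memory_bank[idx] + 0.5 * feats[:16].detach(), dim=1
    )
print(f"instance discrimination loss: {loss.item():.4f}")
```

In this reading, the rotation task supplies label-free supervision while the instance-level objective pulls all views of the same (real or generated) sample together and pushes different samples apart, which is one way the abstract's combination of augmented and original data could be trained without class labels.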