In this paper, we propose a novel 3D skeleton human motion data refinement method based on a bidirectional recurrent autoencoder (BRA). The BRA has two main characteristics: (1) the motion manifold is extracted by a bidirectional long short-term memory recurrent neural network (B-LSTM-RNN), and (2) in addition to the statistical information of the motion data, kinematic information, including smoothness and bone-length constraints, is simultaneously exploited together with noisy-clean motion pairs. Using bidirectional LSTM units, which are well suited to time series and can infer information from the data in both time directions, our autoencoder extracts a manifold that exploits the spatial and temporal relationships between previous and subsequent motion data. As a result, the refined data projected by the decoder from the motion manifold have a much lower reproduction error. Furthermore, owing to the consideration of kinematic information, the reproduced motion data are of higher visual quality while preserving positional precision. The proposed method is not action-specific and can handle a wide variety of noise types. It also does not require a priori knowledge of the noise amplitude, which may be unknown in many scenarios. Extensive experimental results demonstrate that our method outperforms several state-of-the-art methods.
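As a rough illustration of the architecture and loss design summarized above, the following sketch shows a bidirectional LSTM encoder, an LSTM decoder, and a combined objective with reconstruction, smoothness, and bone-length terms. It assumes PyTorch; the module names, hidden size, loss weights, and bone list are placeholders introduced here for illustration and do not reproduce the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a bidirectional recurrent
# autoencoder for motion refinement. Input: joint positions of shape
# (batch, frames, joints * 3). Loss terms mirror the abstract: position
# reconstruction, temporal smoothness, and bone-length consistency.
import torch
import torch.nn as nn


class BidirectionalRecurrentAutoencoder(nn.Module):
    def __init__(self, input_dim, hidden_dim=256):  # hidden_dim is an assumed value
        super().__init__()
        # Bidirectional LSTM encoder: extracts a manifold representation
        # using information from both time directions.
        self.encoder = nn.LSTM(input_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Decoder projects the manifold representation back to joint positions.
        self.decoder = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, input_dim)

    def forward(self, noisy_motion):
        latent, _ = self.encoder(noisy_motion)   # (B, T, 2 * hidden_dim)
        decoded, _ = self.decoder(latent)        # (B, T, hidden_dim)
        return self.out(decoded)                 # refined motion, same shape as input


def bone_lengths(motion, bones):
    # motion: (B, T, J, 3); bones: list of (parent, child) joint index pairs.
    parents = motion[:, :, [b[0] for b in bones]]
    children = motion[:, :, [b[1] for b in bones]]
    return (parents - children).norm(dim=-1)


def refinement_loss(refined, clean, bones, w_smooth=0.1, w_bone=0.1):
    # Loss weights are placeholders, not values reported in the paper.
    B, T, D = refined.shape
    pos = nn.functional.mse_loss(refined, clean)
    # Smoothness: penalize large frame-to-frame accelerations.
    vel = refined[:, 1:] - refined[:, :-1]
    smooth = (vel[:, 1:] - vel[:, :-1]).pow(2).mean()
    # Bone-length constraint: refined bone lengths should match the clean skeleton.
    r = refined.view(B, T, -1, 3)
    c = clean.view(B, T, -1, 3)
    bone = nn.functional.mse_loss(bone_lengths(r, bones), bone_lengths(c, bones))
    return pos + w_smooth * smooth + w_bone * bone
```

In this sketch, the network is trained on noisy-clean motion pairs by minimizing `refinement_loss(model(noisy), clean, bones)`; the bidirectional encoder is what allows each frame's latent code to depend on both earlier and later frames.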