When bottom-up learning approaches are applied to mechanical systems, they require an enormous number of trials, which take considerable time and place heavy stress on the actual hardware. A simulator is often used instead, but typically only to evaluate the learning method: building one requires a separate modeling process, and results obtained in it are not guaranteed to carry over to the actual system. In this study, we consider constructing a simulator directly from the actual robot using neural networks. The constructed simulator is then used for reinforcement learning of a task, and the resulting controller is applied to the actual robot. As a test case, we use a five-link manipulator robot, with ball tracking as the training task. This two-stage learning process greatly reduces the load on the hardware, and the target controller is obtained faster than by learning on the actual robot alone.
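The pipeline described above — sample transitions from the real system, fit a neural-network forward model as the simulator, run reinforcement learning inside that model, then transfer the greedy policy back — can be sketched on a toy problem. Everything below (the 1-D dynamics, network size, tabular Q-learning, and the point-reaching reward) is an illustrative stand-in, not the paper's five-link manipulator setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" plant (unknown to the learner): a toy 1-D stand-in for the robot
def real_step(x, a):
    return float(np.clip(x + 0.1 * a - 0.05 * np.sin(3 * x), -1.0, 1.0))

# 1) Collect transitions from the actual system with random actions
X, Y = [], []
x = 0.0
for _ in range(2000):
    a = float(rng.choice([-1.0, 0.0, 1.0]))
    nx = real_step(x, a)
    X.append([x, a])
    Y.append([nx])
    x = nx if rng.random() > 0.05 else float(rng.uniform(-1, 1))
X, Y = np.array(X), np.array(Y)

# 2) Fit a small neural network x' ~= f(x, a) as the learned simulator
H = 16
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(4000):          # full-batch gradient descent on MSE
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - Y
    dh = (err @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ err / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * X.T @ dh / len(X); b1 -= lr * dh.mean(0)

def model_step(x, a):
    return (np.tanh(np.array([x, a]) @ W1 + b1) @ W2 + b2).item()

# 3) Tabular Q-learning run entirely inside the learned simulator,
#    so no further trials load the real hardware
bins = np.linspace(-1, 1, 21)
actions = [-1.0, 0.0, 1.0]
target = 0.5                   # "tracking" reduced to reaching a set point

def disc(x):
    return int(np.clip(np.digitize(x, bins) - 1, 0, len(bins) - 1))

Q = np.zeros((len(bins), len(actions)))
for _ in range(300):
    x = float(rng.uniform(-1, 1))
    for _ in range(30):
        s = disc(x)
        a_i = int(rng.integers(3)) if rng.random() < 0.2 else int(Q[s].argmax())
        nx = model_step(x, actions[a_i])
        r = -abs(nx - target)  # reward: stay close to the target
        Q[s, a_i] += 0.3 * (r + 0.9 * Q[disc(nx)].max() - Q[s, a_i])
        x = nx

# 4) Transfer: run the greedy policy on the "real" system
x = -0.8
for _ in range(40):
    x = real_step(x, actions[int(Q[disc(x)].argmax())])
```

After the rollout, `x` should have moved close to `target` even though the controller never interacted with `real_step` during learning; the only load on the "hardware" was the initial random data collection, mirroring the load reduction the abstract claims.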