This paper proposes a reinforcement learning (RL) approach with a composite disturbance observer to investigate tracking control for a medium-scale unmanned aerial helicopter (UAH) subject to external disturbances and model uncertainties. Each disturbance consists of a modelable component and a bounded time-varying one, which better reflects real-world scenarios. Firstly, to facilitate controller design, the nonlinear UAH model is decomposed into position and attitude loops. Secondly, for the position loop, an actor-critic network structure is utilized to approximate the model uncertainties, while a composite disturbance observer, comprising a coordinated disturbance observer (CDO) and a nonlinear disturbance observer (NDO), is designed to estimate the external disturbance and the approximation error. The CDO employs a master DO and multiple slave DOs working in coordination to ensure rapid convergence of the modelable disturbance estimates under small-gain conditions. Additionally, adaptive laws are developed for the weights of the actor and critic networks. Notably, the system modeling error is incorporated into the weight update of the actor network to accelerate weight convergence. Thirdly, by combining the tracking errors with the aforementioned estimates and approximations, a position tracking controller is developed and the corresponding closed-loop system is derived. The attitude tracking controller is designed analogously to that of the position loop, except that the dynamic surface technique is employed to simplify the analytical calculations.
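To make the observer idea concrete, the following minimal sketch simulates a standard Chen-type nonlinear disturbance observer for the scalar system x_dot = u + d with a constant disturbance. This is an illustrative assumption, not the paper's CDO/NDO design; the gain l, feedback gain k, and the disturbance value are all hypothetical choices.

```python
# Minimal sketch (assumed scalar plant x_dot = u + d, Chen-type NDO):
#   d_hat = z + l*x,  z_dot = -l*z - l*(l*x + u)
# so the estimation error obeys e_dot = d_dot - l*e and decays for constant d.

def simulate_ndo(d=0.5, l=20.0, k=2.0, dt=1e-3, T=1.0):
    x, z = 0.0, 0.0                          # plant state, observer state
    for _ in range(int(T / dt)):
        d_hat = z + l * x                    # disturbance estimate
        u = -k * x - d_hat                   # stabilizing control + compensation
        z += dt * (-l * z - l * (l * x + u)) # observer dynamics (Euler step)
        x += dt * (u + d)                    # plant dynamics (Euler step)
    return z + l * x                         # final estimate of d

print(simulate_ndo())  # converges to the true disturbance 0.5
```

With this structure the estimate converges exponentially at rate l for a constant disturbance, which is the baseline behavior the coordinated master/slave DOs are built to improve for modelable disturbances.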
Fourthly, Lyapunov stability theory is applied to prove that all error signals of the overall closed-loop system are uniformly ultimately bounded, and a co-design method for the CDOs, the network weights, and the controllers is developed based on a set of inequalities. The results demonstrate that the proposed scheme not only effectively addresses the disturbances and uncertainties but also significantly improves the tracking accuracy and the stability of the UAH system.
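Uniform ultimate boundedness can be illustrated numerically with a hedged toy example, not the paper's Lyapunov proof: for a Chen-type nonlinear disturbance observer on the assumed scalar plant x_dot = u + d, a bounded time-varying disturbance d(t) = sin(2t) drives the estimation error into a residual set of size roughly max|d_dot|/l. The gains l and k and the disturbance are illustrative assumptions.

```python
import math

# Toy UUB illustration (assumed scalar plant, Chen-type NDO, gains are guesses):
# the estimation error e = d - d_hat obeys e_dot = d_dot - l*e, so for a bounded
# time-varying disturbance it settles into a ball of radius about max|d_dot|/l.
l, k, dt, T = 20.0, 2.0, 1e-3, 3.0
x, z = 0.0, 0.0
errs = []
for i in range(int(T / dt)):
    t = i * dt
    d = math.sin(2.0 * t)                    # bounded time-varying disturbance
    d_hat = z + l * x                        # disturbance estimate
    u = -k * x - d_hat                       # stabilizing control + compensation
    z += dt * (-l * z - l * (l * x + u))     # observer dynamics (Euler step)
    x += dt * (u + d)                        # plant dynamics (Euler step)
    errs.append((t, abs(d - (z + l * x))))   # estimation error magnitude

tail = [e for t, e in errs if t > 1.0]       # after the transient dies out
print(max(tail))  # ultimate bound near max|d_dot|/l = 2/20 = 0.1
```

The error does not vanish for a time-varying disturbance, but it remains confined to a small residual set, which is the qualitative content of a uniform-ultimate-boundedness conclusion.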