Unmanned aerial vehicle (UAV) and base station jointly assisted multiaccess edge computing (UB-MEC) is a promising technology for providing flexible computing services to resource-limited devices. Because device loads cannot be observed in real time and user demand in UB-MEC is dynamic, making the UAV respond in real time to users' dynamic preferences is a highly challenging problem. To this end, we propose a multiobjective deep reinforcement learning (MODRL) algorithm for computation offloading and trajectory control (COTC) of the UAV. First, the problem is formulated as a multiobjective Markov decision process (MOMDP), in which the traditional scalar reward is extended to a vector whose components correspond to the amount of task data collected, the completion delay, and the UAV's energy consumption, and whose weights are dynamically adjusted to meet different user preferences. Then, since the device load information stored in the UAV is not real-time, an attentional long short-term memory (ALSTM) network is designed to predict real-time states by automatically focusing on important historical information. In addition, near on-policy experience replay (NOER) replays experiences close to the current policy, which better promotes learning of the current strategy. Simulation results show that the proposed algorithm obtains action policies that satisfy users' time-varying preferences and achieves a good balance among the different objectives under different preferences.
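
As a rough illustration of the MOMDP reward design, the sketch below scalarizes a three-dimensional reward vector (data collected, completion delay, energy consumption) with a time-varying preference weight vector. The linear scalarization, sign conventions, and all variable names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def scalarize_reward(reward_vec, weights):
    """Linearly scalarize a vector reward under a user preference.

    reward_vec: [data_collected, -completion_delay, -energy_consumption]
    weights:    user preference, non-negative and summing to 1.
    Both the sign convention and the linear form are assumptions for
    illustration; the paper may use a different scalarization.
    """
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return float(np.dot(weights, reward_vec))

# Example: a user who currently prioritizes low delay over data or energy.
r = np.array([5.2, -0.8, -1.5])   # hypothetical per-step vector reward
w = np.array([0.2, 0.6, 0.2])     # time-varying preference weights
print(scalarize_reward(r, w))
```

Dynamically adjusting w over episodes is what lets a single learned policy be steered toward different user preferences.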
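A minimal PyTorch sketch of an attentional LSTM predictor is given below: an LSTM encodes the history of stale device-load observations, and additive attention over its hidden states forms a context vector used to predict the current (unobserved) load. Layer sizes, the attention form, and all names are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ALSTM(nn.Module):
    """Attentional LSTM: predicts real-time device load from stale history.

    hist_dim: dimension of each historical load observation.
    Minimal sketch under assumed dimensions; the paper's exact
    architecture may differ.
    """
    def __init__(self, hist_dim, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(hist_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)         # additive attention score
        self.head = nn.Linear(hidden_dim, hist_dim)  # predicted current load

    def forward(self, history):                      # history: (B, T, hist_dim)
        h, _ = self.lstm(history)                    # (B, T, hidden_dim)
        scores = self.attn(torch.tanh(h))            # (B, T, 1)
        alpha = torch.softmax(scores, dim=1)         # weights over time steps
        context = (alpha * h).sum(dim=1)             # (B, hidden_dim)
        return self.head(context)                    # (B, hist_dim)

# Predict the current load of 10 devices from 20 stale observations.
model = ALSTM(hist_dim=10)
pred = model(torch.randn(4, 20, 10))                 # shape (4, 10)
```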
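One plausible reading of NOER is a replay buffer whose sampling is biased toward recent transitions, since recent experience was generated by a policy close to the current one. The recency-based exponential weighting below is our assumption; the paper may measure closeness to on-policy differently.

```python
import random
from collections import deque

class NearOnPolicyReplay:
    """Replay buffer biased toward recent, near on-policy experiences.

    Recency is used here as a proxy for closeness to the current policy;
    this weighting scheme is an illustrative assumption, not the paper's rule.
    """
    def __init__(self, capacity=10000, decay=0.999):
        self.buffer = deque(maxlen=capacity)
        self.decay = decay  # older transitions get exponentially lower weight

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        n = len(self.buffer)
        # Weight transition i (0 = oldest) by decay**(n - 1 - i).
        weights = [self.decay ** (n - 1 - i) for i in range(n)]
        return random.choices(self.buffer, weights=weights, k=batch_size)

buf = NearOnPolicyReplay()
for t in range(1000):
    buf.push((t, "state", "action", 0.0, "next_state"))
batch = buf.sample(32)  # sampling skewed toward recent transitions
```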