We investigate the joint trajectory control and task offloading (JTCTO) problem in multi-unmanned aerial vehicle (UAV)-enabled mobile edge computing (MEC). However, existing JTCTO solutions primarily target fixed UAV-enabled MEC scenarios and require extensive interaction to adapt to new ones. Moreover, we take minimizing task latency and UAV energy consumption, together with maximizing the number of tasks collected by the UAV, as optimization goals; this problem is characterized by multiple conflicting goals that must be balanced according to their relative significance. In this article, we present a multiobjective actor-variations critic-based JTCTO solution (MO-AVC). First, a group of reinforcement learning strategies is used to collect experience in training scenarios, from which embeddings of both strategies and scenarios are learned. These two embeddings then serve as inputs to train the actor-variations critic (AVC), which explicitly estimates the total return over a space of JTCTO strategies and UAV-enabled MEC scenarios. When adapting to a new scenario, only a few steps of interaction are needed to infer the scenario embedding, after which a strategy is selected by maximizing the trained AVC. Second, we propose an actor-conditioned critic framework whose outputs are conditioned on the varying significance of the goals, and present a weight dynamic memory-based experience replay to address the intrinsic instability caused by the dynamic weight context. Finally, simulation results show that MO-AVC adapts quickly to new scenarios. Compared with existing solutions, MO-AVC reduces latency by 7.56%-10.57% and energy consumption by 11.11%-17.27%, and increases the number of collected tasks by 10.33%-15.54%.
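To make the selection mechanism concrete, the following is a minimal PyTorch sketch of a weight-conditioned critic of the kind described above: a network scores a (strategy embedding, scenario embedding) pair under a given goal-weight vector, and a strategy for a new scenario is chosen by maximizing that score. All names (`StrategyScenarioCritic`, `select_strategy`), dimensions, and the network architecture are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only; layer sizes, names, and architecture are assumed.
import torch
import torch.nn as nn

class StrategyScenarioCritic(nn.Module):
    """Estimates the scalarized return of a (strategy, scenario) pair,
    conditioned on the current objective-weight vector."""

    def __init__(self, strat_dim=16, scen_dim=16, n_objectives=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(strat_dim + scen_dim + n_objectives, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_objectives),  # per-objective return estimates
        )

    def forward(self, strat_emb, scen_emb, weights):
        x = torch.cat([strat_emb, scen_emb, weights], dim=-1)
        per_objective = self.net(x)
        # Scalarize: weighted sum over latency / energy / task-count returns.
        return (per_objective * weights).sum(dim=-1)


def select_strategy(critic, candidate_embs, scen_emb, weights):
    """Pick the candidate strategy embedding that maximizes the critic,
    mirroring the 'select strategies by maximizing the trained AVC' step."""
    with torch.no_grad():
        n = candidate_embs.shape[0]
        scores = critic(
            candidate_embs,
            scen_emb.expand(n, -1),
            weights.expand(n, -1),
        )
    return torch.argmax(scores).item()


# Usage: 8 candidate strategy embeddings, one inferred scenario embedding,
# and a weight vector over (latency, energy, collected tasks).
critic = StrategyScenarioCritic()
candidates = torch.randn(8, 16)
scenario = torch.randn(1, 16)
w = torch.tensor([[0.5, 0.3, 0.2]])
print("selected strategy index:", select_strategy(critic, candidates, scenario, w))
```

In this sketch the critic outputs one return estimate per objective and scalarizes them with the supplied weights, so the same trained network can serve any relative significance of the goals; the embeddings themselves would come from the representation-learning stage sketched in the abstract.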