As a promising technique, unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) can provide flexible coverage and computing services for real-time applications such as emergency search, traffic control, and disaster rescue. In this paper, we investigate a freshness-sensitive multi-UAV-assisted MEC system in which tasks arrive stochastically. The system aims to minimize the age of information (AoI) subject to constraints on computation offloading, trajectory control, and communication resource allocation. To handle the dynamic environment and the coupling among the decision variables, we develop a multi-agent reinforcement learning (MARL) scheme that incorporates a federated updating method. Under this scheme, smart mobile devices, UAVs, and the cloud center collaboratively learn interactive policies. Simulation results validate that our scheme outperforms local computing, remote computing, and centralized solutions in terms of both average AoI and convergence.