In multi-objective reinforcement learning, weights express the priority of each objective when the reward vector is linearly scalarized. These weights must be set in advance, yet most real-world problems involve numerous objectives, so tuning them demands extensive trial and error by the designer. A method that estimates the weights automatically is therefore needed to reduce this burden. In this paper, we propose a novel method for estimating the weights from the reward vector for each objective and expert trajectories, using the framework of inverse reinforcement learning (IRL). Specifically, we combine deep IRL based on deep reinforcement learning with multiplicative weights apprenticeship learning to estimate weights quickly in a continuous state space. Through experiments in a benchmark environment for multi-objective sequential decision-making problems in a continuous state space, we verify that our weight estimation method outperforms the projection method and Bayesian optimization.
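As a minimal illustration of the linear scalarization the abstract refers to, the sketch below collapses a per-objective reward vector into a single scalar reward via a weighted sum. The function name and normalization convention are our own assumptions for illustration, not part of the paper's method.

```python
import numpy as np

def scalarize(reward_vec, weights):
    """Collapse a per-objective reward vector into one scalar reward.

    Assumes (our convention) that weights are normalized to sum to 1,
    so the scalar reward is a convex combination of the objectives.
    """
    reward_vec = np.asarray(reward_vec, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize priorities
    return float(weights @ reward_vec)

# Example: two objectives with priorities 0.7 and 0.3.
# scalarize([1.0, -2.0], [0.7, 0.3]) ≈ 0.7*1.0 + 0.3*(-2.0) = 0.1
```

Choosing these weights by hand is exactly the trial-and-error burden the paper aims to remove by estimating them from expert trajectories.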