Task scheduling (TS) in cloud computing is a complex problem that involves balancing workload distribution, resource allocation, and power consumption. Existing methods often fail to optimize these objectives simultaneously and efficiently. This paper introduces a novel technique for scheduling independent tasks in cloud computing using multi-objective optimization and deep reinforcement learning (DRL). The proposed technique, DMOTS-DRL, combines a dueling deep Q-network (dueling DQN) with dynamic prioritized experience replay to optimize two critical objectives: scheduling completion time (makespan) and power consumption. The performance of DMOTS-DRL is evaluated in CloudSim and compared with several state-of-the-art TS algorithms. The experimental results show that DMOTS-DRL outperforms the other algorithms, demonstrating its effectiveness and reliability for cloud computing services. Specifically, DMOTS-DRL reduces makespan by 0.19% to 44.04% and power consumption by 0.26% to 27.90% relative to the compared algorithms, while also performing better on other metrics such as energy consumption, degree of imbalance, resource utilization, and average waiting time.
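To make the abstract's terminology concrete, the sketch below illustrates the two building blocks it names: a dueling DQN head (separate state-value and advantage streams) and a proportional prioritized experience replay buffer, together with a weighted multi-objective reward over makespan and power. This is a minimal illustration only; the network sizes, buffer layout, reward weighting, and all identifiers are assumptions for exposition and are not taken from the paper's actual design.

```python
# Hedged sketch of the abstract's named components; every name and
# hyperparameter here is an illustrative assumption, not the paper's spec.
import numpy as np
import torch
import torch.nn as nn


class DuelingDQN(nn.Module):
    """Dueling architecture: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s,a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)


class PrioritizedReplay:
    """Proportional prioritized replay (simplified list, no sum-tree)."""

    def __init__(self, capacity: int = 10_000, alpha: float = 0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def push(self, transition, td_error: float = 1.0):
        if len(self.buffer) >= self.capacity:  # drop the oldest entry
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        # Larger TD errors are replayed more often; epsilon avoids zero priority.
        self.priorities.append((abs(td_error) + 1e-5) ** self.alpha)

    def sample(self, batch_size: int):
        probs = np.array(self.priorities) / sum(self.priorities)
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        return [self.buffer[i] for i in idx], idx


def reward(makespan_delta: float, power: float, w: float = 0.5) -> float:
    """Illustrative multi-objective reward: negative weighted sum of the
    makespan increase and the power drawn by the chosen VM (w is assumed)."""
    return -(w * makespan_delta + (1.0 - w) * power)
```

In such a setup, the scheduler would encode the task and VM state as the network input, pick the VM with the highest Q-value for each arriving task, and update priorities from the TD errors after each learning step; the "dynamic" aspect of the paper's replay scheme presumably adjusts this prioritization over training, but its exact rule is not specified in the abstract.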