One of the major challenges in reinforcement learning (RL) is applying it to episodic tasks such as chess, molecular structure design, and healthcare, where rewards are sparse and arrive only at the end of an episode. Such episodic tasks place stringent demands on an agent's exploration and credit-assignment capabilities. The current literature offers techniques for both issues: various exploration methods increase an agent's ability to gather diverse experience samples, while reward redistribution methods address delayed rewards by reshaping the sparse, delayed environmental reward into dense, task-oriented guidance with the help of the episodic feedback. Despite some successes, agents using existing techniques are often unable to quickly assign credit to key explored transitions, or the methods are misled by behavior policies stuck in local optima, which slows learning. To alleviate this inefficiency under sparse and delayed rewards, we propose a guided reward approach, Exploratory Intrinsic with Mission Guidance Reward (EMR), which combines the intrinsic rewards of exploration mechanisms with reward redistribution to balance exploration and exploitation in such tasks. By pairing entropy-based intrinsic incentives with a simple uniform reward redistribution, EMR equips an agent with both strong exploration and strong exploitation capabilities, enabling it to efficiently overcome tasks with sparse and delayed rewards. We evaluated and analyzed EMR on several tasks from the DeepMind Control Suite benchmark; the experimental results show that an EMR-equipped agent learns faster, and often performs better, than agents using either the exploration bonus or the reward redistribution method alone.
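To make the reward combination concrete, the sketch below shows one plausible reading of the two ingredients named above: the episodic return is spread uniformly over all steps (uniform reward redistribution), and a per-step entropy-based bonus is added on top. This is a minimal illustration, not the paper's implementation; the function name, the weight `beta`, and the use of `-log pi(a_t|s_t)` as a per-sample entropy estimate are all assumptions.

```python
import numpy as np

def emr_rewards(episodic_return, action_log_probs, beta=0.1):
    """Sketch of an EMR-style shaped reward for one episode.

    episodic_return : float, the single sparse reward observed at episode end.
    action_log_probs: array of shape (T,), log pi(a_t | s_t) at each step.
    beta            : weight of the intrinsic bonus (illustrative assumption).

    Returns an array of shape (T,) with one dense reward per step.
    """
    T = len(action_log_probs)
    # Uniform reward redistribution: spread the episodic return
    # evenly across all T transitions of the episode.
    redistributed = np.full(T, episodic_return / T)
    # Entropy-based intrinsic incentive: -log pi(a_t | s_t) is a
    # one-sample estimate of the policy entropy at step t, so less
    # likely (more exploratory) actions receive a larger bonus.
    intrinsic = -np.asarray(action_log_probs)
    return redistributed + beta * intrinsic

# Toy usage: a 5-step episode whose only feedback is a terminal return of 1.0.
log_probs = np.log([0.5, 0.25, 0.9, 0.1, 0.5])
print(emr_rewards(1.0, log_probs, beta=0.05))
```

Under this reading, the redistributed term supplies the exploitation signal (dense, task-oriented guidance toward the episodic objective), while the entropy term supplies the exploration signal; `beta` controls the balance between the two.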