Despite significant advancements in deep reinforcement learning (DRL), existing methods for autonomous driving often suffer from the cold-start problem, requiring extensive training to converge, and fail to fully address safety concerns in dynamic driving environments. To address these limitations, we propose an efficient DRL framework, SGLPER, which integrates Prioritized Experience Replay (PER), expert demonstrations, and a safe speed calculation model to improve learning efficiency and decision-making safety. Specifically, PER mitigates the cold-start problem by prioritizing high-value experiences, thereby accelerating training convergence. In addition, a Long Short-Term Memory (LSTM) network captures spatiotemporal information from observed states, enabling the agent to make informed decisions based on past experiences in complex, dynamic traffic scenarios. The safety strategy incorporates the Gipps model, introducing relatively safe speed limits into the reinforcement learning (RL) process to enhance driving safety. Moreover, Kullback-Leibler (KL) divergence is used to combine RL with expert demonstrations, enabling the agent to learn human-like driving behaviors effectively. Experimental results in two simulated driving scenarios validate the robustness and effectiveness of the proposed framework. Compared with traditional DRL methods, SGLPER achieves safer strategies, higher success rates, and faster convergence. This study presents a promising approach for developing safer, more efficient autonomous driving systems.