High dynamic range (HDR) imaging aims to reconstruct ghost-free, detail-rich HDR images from multiple low dynamic range (LDR) images. Exposure saturation and large motions in the LDR sequence can introduce ghosting, blurring, and distortion into the synthesized result. To address these challenges, we present the Multi-Scale Progressive Reconstruction Network (MPRNet). The network consists of an encoder-decoder backbone, a Multi-Scale Progressive Reconstruction Module (MSPRM), and a Dual-Stream Reconstruction Module (DSRM). MSPRM uses a feature pyramid to handle large-scale motions gradually, incorporating an attention mechanism and a scale selection module that progressively refine motion information within and across scales. DSRM adopts a symmetric dual-stream structure to perform exposure recovery and content reconstruction concurrently, and a joint loss function guides the fine-grained restoration of overexposed regions. Experimental results show that MPRNet outperforms mainstream models in both qualitative and quantitative evaluations, particularly in accurately reconstructing exposure-saturated regions, preserving misaligned edge details, and maintaining color fidelity.
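The coarse-to-fine control flow behind multi-scale progressive reconstruction can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's network: the pyramid here is a plain image pyramid rather than learned features, and `refine` is a placeholder blend standing in for the attention and scale selection modules; all function names and the blending rule are assumptions for illustration.

```python
import numpy as np

def downsample(x):
    # Halve spatial resolution by 2x2 average pooling.
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # Nearest-neighbour 2x upsampling.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def refine(reference, moving):
    # Placeholder per-scale refinement: blend the non-reference frame
    # toward the reference. The actual model would apply attention and
    # scale selection here instead of a fixed average.
    return 0.5 * (reference + moving)

def progressive_reconstruct(reference, moving, levels=3):
    # Build pyramids for the reference and the motion-affected frame
    # (stand-ins for the encoder's feature pyramid).
    ref_pyr, mov_pyr = [reference], [moving]
    for _ in range(levels - 1):
        ref_pyr.append(downsample(ref_pyr[-1]))
        mov_pyr.append(downsample(mov_pyr[-1]))
    # Coarse-to-fine: resolve large motions at the coarsest scale first,
    # then upsample the partial result and refine it at each finer scale.
    out = refine(ref_pyr[-1], mov_pyr[-1])
    for lvl in range(levels - 2, -1, -1):
        out = refine(ref_pyr[lvl], 0.5 * (upsample(out) + mov_pyr[lvl]))
    return out
```

The design point the sketch captures is that each scale only has to correct the residual motion left over from the scale below it, which is how a pyramid lets large displacements be handled gradually.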