The demand for mobile video streaming has surged in recent years, yet current platforms depend heavily on network capacity to deliver high-quality video streams. Neural-enhanced video streaming is a promising way to address this challenge: it leverages client-side computation to reduce bandwidth consumption. Nonetheless, deploying super-resolution (SR) on mobile devices remains difficult because existing SR models are computationally demanding. In this paper, we propose REM, a novel neural-enhanced mobile video streaming framework that uses a customized lookup table to enable real-time neural enhancement on mobile devices. First, we conduct a series of measurements and find abundant macroblock-level redundancy across the frames of a video stream. We then introduce a dynamic macroblock selection algorithm that prioritizes important macroblocks for neural enhancement; the SR-enhanced results are stored in the lookup table and efficiently reused to meet real-time requirements while minimizing resource overhead. Because the lookup table exploits macroblock-level characteristics of the video frames, it enables efficient and fast processing. We further design a lightweight macroblock-aware SR module to accelerate inference. Finally, extensive experiments on various mobile devices show that REM improves overall processing throughput by up to 10.2 times and reduces power consumption by up to 58.6% compared with state-of-the-art methods, which translates into a 38.06% improvement in quality of experience for mobile users.
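To make the lookup-table reuse idea concrete, the following minimal Python sketch illustrates caching SR outputs per macroblock and reusing them when identical blocks recur across frames. It is not the paper's implementation: `BLOCK`, `SCALE`, `macroblock_key`, and `enhance_frame` are illustrative names, and the SR model is a stand-in passed as a callable.

```python
import hashlib
import numpy as np

BLOCK = 16  # macroblock size in pixels (assumed)
SCALE = 2   # SR upscaling factor (assumed)

def macroblock_key(block: np.ndarray) -> bytes:
    """Hash a macroblock's pixel content to use as a lookup-table key."""
    return hashlib.blake2b(block.tobytes(), digest_size=8).digest()

def enhance_frame(frame: np.ndarray, lut: dict, sr_model) -> np.ndarray:
    """Upscale a frame macroblock by macroblock, reusing cached SR outputs."""
    h, w, c = frame.shape
    out = np.zeros((h * SCALE, w * SCALE, c), dtype=frame.dtype)
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            block = frame[y:y + BLOCK, x:x + BLOCK]
            key = macroblock_key(block)
            enhanced = lut.get(key)
            if enhanced is None:
                # Cache miss: run the (stand-in) SR model and store the result.
                enhanced = sr_model(block)
                lut[key] = enhanced
            out[y * SCALE:(y + BLOCK) * SCALE,
                x * SCALE:(x + BLOCK) * SCALE] = enhanced
    return out

# Usage sketch: a nearest-neighbor upscaler stands in for a real SR model.
frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
upscale = lambda b: b.repeat(SCALE, axis=0).repeat(SCALE, axis=1)
hi_res = enhance_frame(frame, lut={}, sr_model=upscale)
```

In this sketch, repeated macroblocks hit the table and skip SR inference entirely, which is the source of the throughput and power savings the abstract describes.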