Recently emerging radio-frequency (RF) lip-reading technologies leverage independence from lighting conditions and material-penetration capability to broaden the applications of lip-reading. Unlike vision-based lip-reading, RF sensing can detect lip movements through barriers such as masks, glass, and wood. However, previous studies using devices such as millimeter-wave radar are limited by operating frequency and power consumption, which restrict the types of penetrable materials and the application scenarios. Although low-frequency through-wall radar offers strong penetration capability, its reduced sensitivity to small movements poses a challenge for lip-reading recognition. Moreover, the high cost of radar equipment and the scarcity of commercially available devices hinder the technology's development. To address these challenges, we propose TWLip, a word-level lip-reading recognition system that uses coherent single-input single-output (SISO) through-wall radar to detect tiny lip movements behind walls. We use I/Q 3-D curves derived from the radar signals corresponding to lip movements as the network input; these curves reflect the amplitude, frequency, and rotational characteristics of lip movements in the complex plane. Furthermore, we design IQResNet, built from 1-D preactivation residual bottleneck units, to extract and classify lip-movement features from the I/Q 3-D curves, and we propose a data-augmentation method for radar lip-reading to enhance model efficacy and generalizability. We also created a through-wall radar lip-reading dataset containing 20 words from eight volunteers, totaling 9583 samples. TWLip recognizes these words through a 24 cm brick wall from two meters away with 88.51% accuracy, and detailed comparative studies validate the algorithm's superiority.
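To make the network-input representation concrete, the sketch below shows one plausible way to assemble an I/Q 3-D curve: the in-phase (I) and quadrature (Q) components of the complex baseband signal are stacked with a normalized time axis, so the curve traces the signal's amplitude and rotation in the complex plane over time. This is an illustrative construction under stated assumptions, not the paper's implementation; the function name and the preprocessing step of selecting the lip range bin are hypothetical.

```python
import numpy as np

def iq_3d_curve(iq_samples: np.ndarray) -> np.ndarray:
    """Stack I, Q, and a normalized time axis into an (N, 3) curve.

    `iq_samples` is assumed to be a 1-D complex array of demodulated
    radar samples from the range bin containing the speaker's lips
    (a hypothetical preprocessing step; the actual pipeline may differ).
    """
    i = np.real(iq_samples)                      # in-phase component
    q = np.imag(iq_samples)                      # quadrature component
    t = np.linspace(0.0, 1.0, iq_samples.size)   # normalized time axis
    return np.stack([i, q, t], axis=1)

# Toy example: a rotating phasor whose amplitude and rotation rate
# stand in for lip-induced micro-motion in the complex plane.
n = 256
phase = 2 * np.pi * 3 * np.linspace(0.0, 1.0, n)
signal = 0.5 * np.exp(1j * phase)
curve = iq_3d_curve(signal)
print(curve.shape)  # (256, 3)
```

A 1-D convolutional network such as the IQResNet described above could then consume this (N, 3) sequence directly, treating the three coordinates as input channels.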