In the aerospace field, visual runway detection is crucial for aircraft landing operations. Accurately detecting the location and orientation of the runway can effectively assist safe landings and avoid potential risks. However, existing object detection methods suffer from long training times, poor detection accuracy, and limited adaptability to complex scenarios, making it difficult to detect the target runway accurately. To address these issues, this paper proposes a runway image detection method based on a high-resolution network, which effectively extracts runway keypoint features and predicts runway keypoints, thereby accurately detecting the aircraft runway. The network maintains high-resolution representations by connecting high-resolution to low-resolution convolutions in parallel and enhances these representations through multi-scale fusion. In addition, to address the problem of detected keypoints failing to form the shape of the runway, this paper optimizes the loss function by introducing a shape loss term. This incorporates a shape prior, helps the network model learn the runway shape more accurately, and makes the detected runway shapes more consistent with expectations. Experimental results on a self-built dataset demonstrate that the proposed method can effectively detect aircraft runways. Compared with previous methods, it achieves a 4.97% improvement, providing a valuable reference for future improvements and optimizations in the field of aircraft landing.
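The abstract does not specify the exact form of the shape loss; the following is a minimal sketch, assuming the runway is represented by four corner keypoints decoded from heatmaps, a standard MSE heatmap loss, and a hypothetical shape term that compares the edge vectors of the predicted runway quadrilateral with those of the ground truth. The names `shape_loss`, `total_loss`, and the weight `lam` are illustrative assumptions, not the paper's definitions.

```python
import torch
import torch.nn.functional as F


def heatmap_loss(pred_heatmaps, gt_heatmaps):
    # Standard MSE loss between predicted and ground-truth keypoint heatmaps.
    return F.mse_loss(pred_heatmaps, gt_heatmaps)


def shape_loss(pred_pts, gt_pts):
    # Hypothetical shape prior (assumption, not from the paper): compare the
    # edge vectors of the predicted runway quadrilateral with those of the
    # ground truth, so the four keypoints are encouraged to form the same
    # shape rather than only matching individual positions.
    # pred_pts, gt_pts: (B, 4, 2) corner coordinates.
    pred_edges = pred_pts - pred_pts.roll(-1, dims=1)  # (B, 4, 2) edge vectors
    gt_edges = gt_pts - gt_pts.roll(-1, dims=1)
    return F.smooth_l1_loss(pred_edges, gt_edges)


def total_loss(pred_heatmaps, gt_heatmaps, pred_pts, gt_pts, lam=0.1):
    # Combined objective: keypoint heatmap loss plus a weighted shape term.
    # The weight `lam` is an assumed hyperparameter.
    return heatmap_loss(pred_heatmaps, gt_heatmaps) + lam * shape_loss(pred_pts, gt_pts)
```

In this sketch the shape term only constrains relative geometry (edge directions and lengths), so it complements rather than replaces the per-keypoint heatmap supervision; the actual formulation and weighting used in the paper may differ.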