Deep learning models are highly susceptible to adversarial examples: even minuscule image perturbations imperceptible to the naked eye can cause well-trained deep learning models to fail. Recent research indicates that such perturbations can also exist in the physical world. This paper surveys physical adversarial attacks on deep learning object detection models, clarifying the concept of a physical adversarial attack and outlining the general process by which such attacks are mounted against object detection. Organized by attack task, physical adversarial attack methods against object detection networks proposed in recent years are reviewed, covering vehicle detection and pedestrian detection. Other attacks on object detection models, together with other attack tasks and attack methods, are briefly introduced. Current challenges of physical adversarial attacks are discussed, the limitations of adversarial training are pointed out, and future development directions and application prospects are suggested. © 2024 Journal of Computer Engineering and Applications Beijing Co., Ltd.; Science Press. All rights reserved.