Small objects occupy only a small proportion of the image and are subject to severe background interference, which significantly limits the performance of small object detection algorithms. Moreover, most current detection models are large, making them unsuitable for deployment on mobile terminals. Based on YOLOv5s, we propose LTEA-YOLO, a lightweight model of only 13.2 MB that combines a Lightweight Transformer and an Efficient Attention mechanism for small object detection. Firstly, a lightweight Transformer module, the inverted Residual Mobile Block (iRMB), is employed in the backbone network to extract features. Secondly, we design a DBMCSP module (Diverse Branch Modules inserted into a Cross-Stage Partial network) to replace all C3 modules in the feature-fusion section, extracting a wider range of feature information without compromising inference speed. We then adopt WIoU v3 as the bounding-box regression loss to accelerate training convergence and improve localization accuracy. Finally, we develop a lightweight and efficient Coordinate and Adaptive Pooling Attention (CAPA) module, which outperforms the Coordinate Attention (CA) module, and embed it into the SPPF module to enhance detection accuracy. Our model achieves 97.8% mAP@0.5 on the NWPU VHR-10 dataset, 3.7% higher than YOLOv8s and 6% higher than the baseline YOLOv5s-7.0. On the VisDrone 2019 dataset, it reaches 35.8% mAP@0.5, outperforming the other compared models. With its small model size, LTEA-YOLO delivers superior overall performance on challenging small object detection.
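For reference, the Coordinate Attention (CA) baseline against which CAPA is compared can be sketched as follows. This is a minimal PyTorch sketch of the published CA block, not the proposed CAPA module; the class name, reduction ratio, and activation choice are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Minimal sketch of the Coordinate Attention block.
    Global pooling is factorized into 1-D pools along H and W so the
    resulting attention maps retain positional information in both axes."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool over height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                          # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)      # (B, C, W, 1)
        y = torch.cat([x_h, x_w], dim=2)              # joint 1x1 conv over both directions
        y = self.act(self.bn(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (B, C, H, 1) attention
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W) attention
        return x * a_h * a_w                          # reweight features by both maps

# Example: reweighting a feature map before an SPPF-style pooling stage.
feat = torch.randn(2, 128, 40, 40)
out = CoordinateAttention(128)(feat)  # same shape as input: (2, 128, 40, 40)
```

The proposed CAPA module reportedly augments this coordinate pooling with adaptive pooling before being embedded into SPPF; its exact implementation is not reproduced here.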