Tea bud detection plays a crucial role in early-stage tea production estimation and robotic harvesting, significantly advancing the integration of computer vision and agriculture. Tea bud detection currently faces several challenges, such as reduced accuracy caused by high similarity between buds and the background, and the large size and parameter count of detection models, which hinder deployment on mobile devices. To address these issues, this study introduces the lightweight Tea Bud DG model, characterized by the following features: 1) The model employs a Dynamic Head (DyHead), which enhances tea bud feature extraction through three perceptual attention mechanisms: scale, spatial, and task awareness. Scale awareness enables the model to adapt to objects of varying sizes; spatial awareness focuses on discriminative regions to distinguish tea buds from complex backgrounds; task awareness optimizes feature channels for specific tasks, such as classification or localization of tea buds. 2) A lightweight C3ghost module is designed, which first generates basic feature maps with fewer filters and then applies simple linear operations (e.g., translation or rotation) to create additional "ghost" feature maps, thereby reducing the parameter count and model size and facilitating deployment on lightweight mobile devices. 3) The alpha-CIoU loss function is introduced, whose parameter alpha adaptively reweights the loss and gradient of objects with different IoU scores. This approach emphasizes objects with higher IoU, enhancing the ability to identify tea buds in environments with high background similarity and improving the accuracy of differentiating tea buds from surrounding leaves. The experimental results show that, compared with YOLOv5s, the Tea Bud DG model reduces the model size by 31.41 % and the number of parameters by 32.21 %.
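The parameter saving from the Ghost-style design can be illustrated with a back-of-the-envelope count. This sketch compares a standard convolution against a Ghost module that produces half its output channels with a primary convolution and the rest with cheap depthwise linear operations; the defaults `s=2` and `d=3` follow the common GhostNet formulation, and the layer sizes are illustrative, not the actual Tea Bud DG configuration.

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, s=2, d=3):
    """Parameters of a Ghost module producing the same c_out channels.

    A primary conv makes c_out // s "intrinsic" maps; each intrinsic map
    then spawns s - 1 extra "ghost" maps via a cheap d x d depthwise
    (linear) operation, so most output channels cost almost nothing.
    """
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k     # normal conv on fewer filters
    cheap = intrinsic * (s - 1) * d * d    # depthwise linear ops
    return primary + cheap

# Hypothetical 128 -> 256 channel 3x3 layer
std = conv_params(128, 256, 3)            # 294,912 parameters
gst = ghost_params(128, 256, 3)           # 147,456 + 1,152 = 148,608
print(std, gst, round(std / gst, 2))      # compression close to s = 2
```

With `s=2` the module needs roughly half the parameters of the plain convolution, which is the mechanism behind the reported reductions in model size and parameter count.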
Compared with YOLOv7_tiny, the size and parameters are reduced by 18.94 % and 23.84 %, respectively. Compared with YOLOv5s, YOLOv8s, and YOLOv9s, the model improves mAP@0.5 by 3 %, 3.9 %, and 5.1 %, and mAP@0.5:0.95 by 2.6 %, 3.2 %, and 4 %, respectively. The Tea Bud DG model estimates the tea yield with an error range of 10 % to 16 %, providing valuable data support for tea plantation management.
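The alpha-CIoU reweighting described above can be sketched as follows. This is a minimal single-pair implementation following the Alpha-IoU formulation, in which each penalty term of the standard CIoU loss is raised to the power alpha; the function name and the default `alpha=3.0` are illustrative assumptions, not the authors' exact code.

```python
import math

def alpha_ciou_loss(box1, box2, alpha=3.0):
    """Alpha-CIoU loss between two boxes given as (x1, y1, x2, y2).

    Powering each CIoU term by alpha > 1 up-weights high-IoU pairs,
    which is the adaptive reweighting effect; alpha = 1 recovers the
    vanilla CIoU loss.
    """
    # Intersection and union areas
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
    w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
    iou = inter / (w1 * h1 + w2 * h2 - inter)

    # Squared distance between box centers
    rho2 = ((box1[0] + box1[2]) - (box2[0] + box2[2])) ** 2 / 4 + \
           ((box1[1] + box1[3]) - (box2[1] + box2[3])) ** 2 / 4
    # Squared diagonal of the smallest enclosing box
    c2 = (max(box1[2], box2[2]) - min(box1[0], box2[0])) ** 2 + \
         (max(box1[3], box2[3]) - min(box1[1], box2[1])) ** 2

    # Aspect-ratio consistency term and its trade-off weight (as in CIoU)
    v = (4 / math.pi ** 2) * (math.atan(w2 / h2) - math.atan(w1 / h1)) ** 2
    beta = v / (1 - iou + v + 1e-9)

    # Alpha-CIoU: each penalty term is raised to the power alpha
    return 1 - iou ** alpha + (rho2 / c2) ** alpha + (beta * v) ** alpha
```

For a perfectly overlapping pair the loss is zero regardless of alpha, while for low-IoU pairs the `iou ** alpha` term shrinks, so gradients concentrate on predictions that are already close to the ground truth.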