Many previous studies have explored integrating a specific You Only Look Once (YOLO) model with real-time trackers such as Deep Simple Online and Realtime Tracker (DeepSORT) and Strong Simple Online and Realtime Tracker (StrongSORT). However, few have comprehensively analyzed how the whole family of YOLO models performs when integrated with these real-time trackers, and drawn critical conclusions from the resulting pipelines. This work aims to fill that gap: its primary objective is to investigate the effectiveness of integrating lightweight versions of the YOLO series with the real-time DeepSORT and StrongSORT tracking algorithms for real-time object tracking in computationally constrained environments. We systematically compare lightweight YOLO versions, from YOLO version 3 (YOLOv3) to YOLO version 10 (YOLOv10), combined with both tracking algorithms, and evaluate their performance using detailed metrics on two diverse and challenging real-world benchmarks: the Multiple Object Tracking 2017 (MOT17) and Multiple Object Tracking 2020 (MOT20) datasets. The aim is to assess the robustness and accuracy of these lightweight models in complex real-world scenarios under limited computational resources. Our findings reveal that YOLO version 5 (YOLOv5), combined with either tracker (DeepSORT or StrongSORT), offers not only a solid baseline in terms of model size (enabling real-time performance on edge devices) but also competitive overall performance in terms of Multiple Object Tracking Accuracy (MOTA) and Multiple Object Tracking Precision (MOTP). The results suggest a strong correlation between the choice of YOLO version and the tracker's overall performance.