In industrial applications, mobile robot navigation requires autonomously moving toward a specific target area, where visual sensors are commonly used to identify and locate the target so that subsequent actions, such as manipulator grasping, can be performed. However, two challenges arise when the Yolo series is used for target positioning. The first is that the evaluation index Intersection over Union (IOU) conveys only limited information and does not fully express the confidence in the predicted target position. To address this, the Yolo layer adopts the complete IOU (CIOU) as its loss function, which accounts for overlap, center distance, and aspect ratio; in addition, a Gaussian model layer is inserted between the non-maximum suppression layer and the output layer to estimate the uncertainty of the predicted coordinates. The second is that the predicted bounding box is often inaccurate, deviating significantly from the ground truth. To improve precision, this paper combines detection with a B-spline level set method for target segmentation, which corrects the deviation of the predicted box from the minimum bounding rectangle of the target. The proposed B-spline-based global optimization segmentation model also avoids the local optima that affect non-convex, traditional variational segmentation models. Experimental results show that, on the VOC dataset, compared with the Yolo-v3 and Faster R-CNN methods, the mAP of the proposed GC Yolo-v3 method is increased by 2.95% and 5.85%, and the recall by 0.03 and 0.02, respectively; compared with Yolo-v3 on the VOC dataset, the average IOU of GC Yolo-v3 is increased by 8.31%. On the real dataset, compared with the Yolo-v3 and Faster R-CNN methods, the mAP of GC Yolo-v3 is increased by 3.32% and 7.41%, and the recall by 0.08 and 0.05, respectively, while its average IOU is 6.35% higher than that of Yolo-v3.
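To make the CIOU term concrete, the following is a minimal sketch of the standard complete IOU formulation (IoU minus a normalized center-distance penalty minus an aspect-ratio penalty), not the paper's implementation; the corner-coordinate box format and the small epsilon are assumptions:

```python
import math

def ciou(box_a, box_b):
    """Complete IoU between two boxes given as (x1, y1, x2, y2).

    Beyond plain IoU, it penalizes the normalized center distance and the
    aspect-ratio mismatch, so the loss 1 - CIoU remains informative even
    when the two boxes do not overlap.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Overlap term: plain IoU.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)

    # Distance term: squared center distance over the squared diagonal
    # of the smallest box enclosing both.
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2

    # Aspect-ratio consistency term.
    v = (4.0 / math.pi ** 2) * (
        math.atan((bx2 - bx1) / (by2 - by1))
        - math.atan((ax2 - ax1) / (ay2 - ay1))
    ) ** 2
    alpha = v / ((1.0 - iou) + v + 1e-9)  # epsilon guards division by zero

    return iou - rho2 / c2 - alpha * v
```

For identical boxes the value is 1; for disjoint boxes it goes negative as the centers move apart, which is what keeps the gradient of 1 - CIoU useful where plain IOU is flat at zero.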