Rapid target plant image mosaic based on depth and color information from Kinect combining K-means algorithm

Cited by: 0
Authors
Shen Y. [1 ]
Zhu J. [1 ]
Liu H. [1 ]
Cui Y. [2 ]
Zhang B. [1 ]
Affiliations
[1] School of Electrical and Information Engineering, Jiangsu University, Zhenjiang
[2] Nantong Guangyi Electromechanical Co. LTD, Nantong
Source
Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering | 2018 / Vol. 34 / No. 23
Keywords
Algorithms; Color and depth information; Image fusion; Image processing; K-means clustering; Machine vision; SURF algorithm;
DOI
10.11975/j.issn.1002-6819.2018.23.016
Abstract
Image mosaic can establish high-resolution images with a wide viewing angle, which is very important for realizing agricultural intelligence. Because of light, wind and other factors, traditional image mosaic methods have disadvantages such as dislocation, missing regions and long mosaic time. The plant image mosaic method based on the dual depth and color information feature source from Kinect has high accuracy, but it cannot meet real-time requirements, while image mosaic based on image feature elements alone can hardly meet the reliability requirements of agricultural vehicle applications. Aiming at this problem, in this paper we proposed a plant image mosaic method based on the color and depth information of the Kinect sensor. First, the effective plant parts of the color image were obtained by the K-means algorithm combined with the plant depth information. Second, the SURF (speeded-up robust features) algorithm was used to extract features only from these effective parts; SURF is about three times faster than the SIFT (scale-invariant feature transform) algorithm, and restricting extraction to the effective parts reduces the number of feature points to be matched and improves both the speed and the accuracy of feature point matching. Third, feature point matches were obtained by a similarity measure. Some wrong matches remained, and too many mismatches may cause mosaic errors, so a way to remove them was needed to improve matching accuracy. By the nature of Kinect, if the sensor moves horizontally, the depth reading of a fixed scene point stays the same; based on this property, matches whose depth values disagree were removed. Then the RANSAC (random sample consensus) algorithm was used to estimate the projection transformation matrix: RANSAC estimates the model from the smallest possible set of points and then expands the model's support as far as possible.
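The depth-consistency check described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the millimetre tolerance, and the pure-NumPy setting are assumptions for the sake of a self-contained example.

```python
import numpy as np

def filter_matches_by_depth(pts1, pts2, depth1, depth2, tol=30.0):
    """Keep only feature matches whose Kinect depth readings agree.

    When the Kinect translates horizontally, a fixed scene point keeps
    the same depth value in both frames, so a large depth difference
    between matched pixels signals a mismatch.

    pts1, pts2   : (N, 2) integer pixel coordinates (x, y) of matched points
    depth1, depth2 : depth maps in mm for the two frames
    tol          : maximum allowed depth difference in mm (illustrative value)
    """
    d1 = depth1[pts1[:, 1], pts1[:, 0]].astype(float)
    d2 = depth2[pts2[:, 1], pts2[:, 0]].astype(float)
    valid = (d1 > 0) & (d2 > 0)            # 0 marks an invalid Kinect reading
    keep = valid & (np.abs(d1 - d2) <= tol)
    return pts1[keep], pts2[keep], keep
```

The surviving matches would then be fed to RANSAC to estimate the projection transformation matrix.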
Because the mismatches are removed, the projection transformation matrix is more accurate than that of the image mosaic method of Shen et al. (2018). Finally, a multi-resolution image fusion method based on the seam-line algorithm was used for image fusion. Indoor and outdoor tests showed that the mosaic method based on the dual color and depth information feature source had obvious advantages: it effectively overcame light, wind and other environmental factors and avoided mosaic errors such as image loss and brightness differences. In the indoor test, the proposed method took 3.52 s with a matching accuracy of 96.8%, compared with 14.04 s and 88.6% for the traditional method and 12.14 s and 96.6% for the method in the literature. In the outdoor test, the proposed method took 7.11 s with a matching accuracy of 95.2%, compared with 56.32 s and 91.3% for the traditional method and 45.67 s and 95.2% for the method in the literature. The proposed method therefore used less time than both the traditional method and the method in the literature, and its average matching accuracy of 96.8% was higher than that of traditional image mosaic. The method can thus be further applied to other image mosaic scenarios, and can support precise spraying of pesticides and fertilizers and the control of pests and diseases based on information collected by Kinect. © 2018, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
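The multi-resolution fusion step can be illustrated with a simplified Laplacian-pyramid blend. This is a sketch under stated assumptions (pure NumPy, 2x2 box-filter pyramid, nearest-neighbour upsampling, power-of-two image sizes); the paper's seam-line search, which chooses where the mask boundary runs, is not reproduced here.

```python
import numpy as np

def downsample(img):
    # 2x2 box-filter downsample (assumes even dimensions)
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    # Nearest-neighbour upsample back to `shape`
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def blend_multiresolution(a, b, mask, levels=3):
    """Blend image `a` (where mask == 1) with `b` (where mask == 0),
    mixing each frequency band separately so the seam is soft at every scale."""
    # Gaussian pyramids of both images and of the seam mask
    ga, gb, gm = [a], [b], [mask]
    for _ in range(levels - 1):
        ga.append(downsample(ga[-1]))
        gb.append(downsample(gb[-1]))
        gm.append(downsample(gm[-1]))
    # Laplacian pyramids: band-pass detail per level, coarse image on top
    la = [ga[i] - upsample(ga[i + 1], ga[i].shape) for i in range(levels - 1)] + [ga[-1]]
    lb = [gb[i] - upsample(gb[i + 1], gb[i].shape) for i in range(levels - 1)] + [gb[-1]]
    # Blend each band with the progressively smoothed mask, then collapse
    blended = [m * x + (1 - m) * y for x, y, m in zip(la, lb, gm)]
    out = blended[-1]
    for band in reversed(blended[:-1]):
        out = upsample(out, band.shape) + band
    return out
```

Because the mask is itself downsampled at each level, low-frequency content transitions gradually across the seam while fine detail switches sharply, which is what suppresses the visible brightness difference along the seam line.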
Pages: 134-141
Number of pages: 7
Related papers
30 records in total
  • [1] Zhang W., Guo B., Li M., et al., Improved seam-line searching algorithm for UAV image mosaic with optical flow, Sensors, 18, 4, pp. 1210-1219, (2018)
  • [2] Guo S., Sun S., Guo J., The application of image mosaic in information collecting for an amphibious spherical robot system, IEEE International Conference on Mechatronics and Automation, pp. 1547-1552, (2016)
  • [3] Yao L., Zhou G., Ni Z., et al., Matching method for fruit surface image based on scale invariant feature transform algorithm, Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 31, 9, pp. 161-166, (2015)
  • [4] Zhou Z., Yan M., Chen S., et al., Image registration and stitching algorithm of rice low-altitude remote sensing based on Harris corner self-adaptive detection, Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 31, 14, pp. 186-193, (2015)
  • [5] Cao N., Wang P., Seamless image stitching based on SIFT feature matching, Computer and Applied Chemistry, 28, 2, pp. 242-244, (2011)
  • [6] He G., Ma J., Zhang X., et al., An improved image mosaic algorithm based on SURF and RANSAC, Applied Science and Technology, 44, 10, pp. 198-205, (2017)
  • [7] Shen Y., Zhu J., Liu H., et al., Plant image mosaic based on depth and color dual information feature source from Kinect, Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 34, 5, pp. 176-182, (2018)
  • [8] Khoshelham K., Elberink S.O., Accuracy and resolution of Kinect depth data for indoor mapping application, Sensors, 12, 2, pp. 1437-1454, (2012)
  • [9] He D., Shao X., Wang D., et al., Denoising method of 3D point cloud data of plants obtained by Kinect, Transactions of the Chinese Society for Agricultural Machinery, 47, 1, pp. 331-336, (2016)
  • [10] Smisek J., Jancosek M., Pajdla T., 3D with Kinect, IEEE International Conference on Computer Vision Workshops, pp. 1154-1160, (2011)