Integrating explainable AI and depth cameras to achieve automation in grasping operations: A case study of shoe company

Cited: 3
Authors
Chiu, Ming-Chuan [1 ]
Yang, Li-Sheng [1 ]
Affiliations
[1] Natl Tsing Hua Univ, Dept Ind Engn & Engn Management, Engn Bldg I,101 Sect 2,Kuang Fu Rd, Hsinchu 30013, Taiwan
Keywords
Explainable AI; Yolov7; Stacked objects; Depth camera; Robotic arm; Mask R-CNN; NETWORKS;
DOI
10.1016/j.aei.2024.102583
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
In today's highly competitive industrial environment, digital transformation and smart manufacturing have become crucial strategies for enhancing competitiveness. Companies undergoing digital transformation often face challenges such as high initial investment, difficulties integrating hardware and software, and debugging problems caused by the low interpretability of deep learning implementations. This study therefore integrates explainable AI models with depth cameras in the footwear industry to achieve both model explainability and economical automation of production line processes. By combining YOLOv7 and Mask R-CNN, a real-time object detection system provides accurate object coordinates and tilt angles. Integration with depth cameras enables a robotic arm to grasp objects accurately in a cluttered environment. The proposed model achieves an accuracy of 97% in a simulated scenario of stacked insole pads. This technology brings significant advantages, including a 20% reduction in hardware equipment investment, streamlined production processes, lower labor costs, and higher overall productivity. Moreover, the model's explainability aids system troubleshooting and reduces errors introduced during digital transformation. By leveraging this integrated approach, businesses in the footwear industry can upgrade their production processes, reduce costs, and improve market competitiveness.
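To illustrate how a segmentation mask and a depth image can be combined into a grasp target of the kind the abstract describes (object coordinates plus tilt angle), the sketch below back-projects the mask centroid to 3D camera coordinates with a pinhole model and estimates an in-plane tilt angle from a minimum-area rectangle. This is not the published implementation: the function name grasp_from_mask, the camera intrinsics, and the synthetic insole mask are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): mask + depth -> 3D grasp point and tilt angle.
import numpy as np
import cv2


def grasp_from_mask(mask, depth_m, fx, fy, cx, cy):
    """mask: HxW boolean segmentation of one object; depth_m: HxW depth in meters;
    fx, fy, cx, cy: pinhole intrinsics of the depth camera (assumed known)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None

    # Pixel centroid of the mask serves as the grasp point in the image plane.
    u, v = xs.mean(), ys.mean()

    # Median depth over the mask is more robust to holes than a single pixel value.
    z = float(np.median(depth_m[ys, xs]))

    # Back-project the pixel to 3D camera coordinates (pinhole model).
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy

    # In-plane tilt angle from the minimum-area rectangle around the mask pixels.
    pts = np.column_stack([xs, ys]).astype(np.float32)
    (_, _), (_, _), angle_deg = cv2.minAreaRect(pts)

    return (x, y, z), angle_deg


# Example with synthetic data: a rectangular "insole" region in a 480x640 frame
# over a flat scene 0.6 m from the camera.
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 250:420] = True
depth = np.full((480, 640), 0.6, dtype=np.float32)
print(grasp_from_mask(mask, depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
```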
Pages: 15
Related works
52 records in total