Deep Learning-Based Crack Detection on Cultural Heritage Surfaces

Times Cited: 0
Authors
Huang, Wei-Che [1 ]
Luo, Yi-Shan [1 ]
Liu, Wen-Cheng [1 ]
Liu, Hong-Ming [1 ]
Affiliations
[1] Natl United Univ, Dept Civil & Disaster Prevent Engn, Miaoli 360302, Taiwan
Source
APPLIED SCIENCES-BASEL | 2025, Vol. 15, Iss. 14
Keywords
deep learning; GoogleNet; SegNet; segmentation; crack detection; cultural heritage; ARCHITECTURE
DOI
10.3390/app15147898
Abstract
This study employs the deep learning model GoogleNet to detect the presence of cracks in cultural heritage images. A semantic segmentation model, SegNet, is then used to delineate the location and extent of each crack. To establish a scale ratio between image pixels and real-world dimensions, a parallel laser-based measurement approach is applied, enabling crack lengths to be computed from the segmented images. The percentage error between crack lengths estimated by deep learning and those measured with a caliper is approximately 3%, demonstrating the feasibility and reliability of the proposed method. The study also examines how iteration count, training-set size, and image category affect the performance of GoogleNet and SegNet. Increasing the number of iterations markedly improves learning in the early stages, but excessive iterations lead to overfitting: GoogleNet performed best at 75 iterations, whereas SegNet peaked at 45,000 iterations. Likewise, enlarging the training dataset improves generalization, but an excessive number of images can also promote overfitting; GoogleNet performed best with a training set of 66 images, while SegNet achieved its highest segmentation accuracy with 300. Finally, the study classifies the datasets into four groups (general cracks, plain wall cracks, mottled wall cracks, and brick wall cracks) and finds that training GoogleNet and SegNet on general crack images yields the highest performance, whereas training on a single crack category substantially reduces generalization capability.
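As a rough illustration of the pixel-to-length calibration described in the abstract, the sketch below derives a scale ratio from two parallel laser dots a known distance apart and converts a binary SegNet crack mask into an approximate physical length. This is a minimal sketch under stated assumptions, not the authors' implementation: the 50 mm laser separation, the function names, and the use of skeleton pixel counting via scikit-image are all illustrative choices not taken from the paper.

import numpy as np
from skimage.morphology import skeletonize

def mm_per_pixel(laser_a, laser_b, separation_mm=50.0):
    """Scale ratio (mm per pixel) from two parallel laser dots.

    laser_a, laser_b: (row, col) pixel coordinates of the two laser spots.
    separation_mm: real-world distance between the parallel lasers
    (50 mm is an assumed value, not taken from the paper).
    """
    pixel_dist = np.hypot(laser_a[0] - laser_b[0], laser_a[1] - laser_b[1])
    return separation_mm / pixel_dist

def crack_length_mm(mask, scale_mm_per_px):
    """Approximate crack length from a binary segmentation mask.

    Thins the mask to a one-pixel-wide skeleton and multiplies the
    skeleton pixel count by the scale ratio; a crude estimate that
    ignores diagonal steps along the skeleton.
    """
    skeleton = skeletonize(mask.astype(bool))
    return skeleton.sum() * scale_mm_per_px

# Example: a 40 px horizontal crack, laser dots 200 px apart
mask = np.zeros((300, 300), dtype=np.uint8)
mask[150, 130:170] = 1
scale = mm_per_pixel((20, 50), (20, 250))   # 200 px apart -> 0.25 mm/px
print(crack_length_mm(mask, scale))         # ~10 mm

A refinement would weight diagonal skeleton steps by sqrt(2) when summing, which matters for cracks that are not axis-aligned.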
Pages: 37
References
66 total
[1]  
Ashraf A, 2023, Indonesian Journal of Electrical Engineering and Informatics (IJEEI), V11, DOI 10.52549/ijeei.v11i1.4362
[2]  
Attard L, 2019, INT SYMP IMAGE SIG, P152, DOI 10.1109/ISPA.2019.8868619
[3]  
Bai Y, 2021, ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-2-2021, P161, DOI 10.5194/isprs-annals-V-2-2021-161-2021
[4]  
Bochkovskiy A, 2020, arXiv, DOI 10.48550/arXiv.2004.10934
[5]   Deep Learning-Based Crack Damage Detection Using Convolutional Neural Networks [J].
Cha, Young-Jin ;
Choi, Wooram ;
Buyukozturk, Oral .
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, 2017, 32 (05) :361-378
[6]   Measurement Invariance Investigation for Performance of Deep Learning Architectures [J].
Chen, Dewang ;
Lu, Yuqi ;
Hsu, Chih-Yu .
IEEE ACCESS, 2022, 10 :78070-78087
[7]   Automated crack segmentation in close-range building facade inspection images using deep learning techniques [J].
Chen, Kaiwen ;
Reichard, Georg ;
Xu, Xin ;
Akanmu, Abiola .
JOURNAL OF BUILDING ENGINEERING, 2021, 43
[8]   Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation [J].
Chen, Liang-Chieh ;
Zhu, Yukun ;
Papandreou, George ;
Schroff, Florian ;
Adam, Hartwig .
COMPUTER VISION - ECCV 2018, PT VII, 2018, 11211 :833-851
[9]   Pavement crack detection and recognition using the architecture of segNet [J].
Chen, Tingyang ;
Cai, Zhenhua ;
Zhao, Xi ;
Chen, Chen ;
Liang, Xufeng ;
Zou, Tierui ;
Wang, Pan .
JOURNAL OF INDUSTRIAL INFORMATION INTEGRATION, 2020, 18
[10]   Application of Mask R-CNN and YOLOv8 Algorithms for Concrete Crack Detection [J].
Choi, Yongjin ;
Bae, Byongkyu ;
Han, Taek Hee ;
Ahn, Jaehun .
IEEE ACCESS, 2024, 12 :165314-165321