Identifying Bias in Deep Neural Networks Using Image Transforms

Cited by: 0
Authors
Erukude, Sai Teja [1]
Joshi, Akhil [1]
Shamir, Lior [1]
Affiliations
[1] Kansas State Univ, Dept Comp Sci, Manhattan, KS 66502 USA
Keywords
bias; convolutional neural networks; machine learning; experimental design; FACE RECOGNITION; BLACK-BOX;
DOI
10.3390/computers13120341
Chinese Library Classification
TP39 [Computer applications];
Discipline codes
081203; 0835;
Abstract
Convolutional neural networks (CNNs) have become one of the most commonly used computational tools in the past two decades. A primary downside of CNNs is that they work as a "black box": the user cannot necessarily know how the image data are analyzed, and therefore must rely on empirical evaluation to test the efficacy of a trained CNN. This can lead to hidden biases that affect the performance evaluation of neural networks but are difficult to identify. Here we discuss examples of such hidden biases in common and widely used benchmark datasets, and propose techniques for identifying dataset biases that can affect the standard performance evaluation metrics. One effective approach to identifying dataset bias is to perform image classification using only blank background parts of the original images. However, in some situations a blank background is not available, making it more difficult to separate foreground or contextual information from the bias. To overcome this, we propose a method that identifies dataset bias without the need to crop background information from the images. The method is based on applying several image transforms to the original images, including the Fourier transform, wavelet transforms, the median filter, and their combinations. These transforms are applied to recover background bias information that CNNs use to classify images. The transformations affect the contextual visual information in a different manner than they affect the systemic background bias. Therefore, the method can distinguish between contextual information and the bias, and can reveal the presence of background bias even without separating sub-image parts from the blank background of the original images. The code used in the experiments is publicly available.
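The transform-based probing described in the abstract can be sketched in a few lines. Note that the function names, the single-level Haar wavelet, and the 3x3 median-filter size below are illustrative assumptions for a minimal sketch, not the authors' published pipeline; the idea is simply to produce transformed variants of each image that a CNN can then be trained on separately.

```python
import numpy as np
from scipy.ndimage import median_filter

def fourier_magnitude(img):
    # Log-scaled, centered 2-D Fourier magnitude spectrum.
    spec = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(spec))

def haar_ll(img):
    # One level of a 2-D Haar wavelet decomposition; returns the
    # low-frequency (LL) approximation sub-band (half size per axis).
    rows = (img[0::2, :] + img[1::2, :]) / 2.0   # average row pairs
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0  # average column pairs

def transform_variants(img):
    # Hypothetical set of transformed versions of one grayscale image.
    # Each variant would be classified on its own; high accuracy on a
    # variant that suppresses foreground content suggests the CNN is
    # exploiting systemic background bias rather than the objects.
    return {
        "fourier": fourier_magnitude(img),
        "median": median_filter(img, size=3),
        "haar_ll": haar_ll(img),
        "fourier_of_median": fourier_magnitude(median_filter(img, size=3)),
    }
```

In this sketch, each dictionary entry is one experiment: the same labels are kept, only the pixel representation changes, so any gap between a variant's accuracy and chance indicates signal that survived the transform.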
Pages: 18