Searching for Features with Artificial Neural Networks in Science: The Problem of Non-Uniqueness

Times Cited: 0
Authors
Yao, Siyu [1 ]
Hagar, Amit [1 ]
Affiliations
[1] Indiana Univ Bloomington, Dept Hist & Philosophy Sci & Med, 1020 East Kirkwood Ave, Ballantine Hall 916, Bloomington, IN 47405 USA
Keywords
Machine learning; non-uniqueness; artificial neural network; evidence; transparency; black-box
DOI
10.1080/02698595.2024.2346871
Chinese Library Classification (CLC)
N09 [History of Natural Science]; B [Philosophy and Religion]
Discipline Classification Codes
01; 0101; 010108; 060207; 060305; 0712
Abstract
Artificial neural networks and supervised learning have become an essential part of science. Beyond their use for accurate input-output mapping, a new feature-oriented approach is attracting growing attention. Under the assumption that a network optimised for a task may have learned to represent and utilise features of the target system that are important for that task, scientists examine how the network manipulates its inputs and employ the features it captures for scientific discovery. We analyse this approach, show its hidden caveats, and suggest its legitimate use. We distinguish three things that scientists call a 'feature': parametric, diagnostic, and real-world features. The feature-oriented approach aims at real-world features by interpreting the former two, which also partially rely on the network. We argue that this approach faces a problem of non-uniqueness: there are numerous discordant parametric and diagnostic features, and numerous ways to interpret them. When the approach aims at novel discovery, scientists often need to choose between these options, but they lack the background knowledge to justify their choices. Consequently, features identified in this way are not guaranteed to be real. We argue that they should not be used as evidence but only instrumentally. We also recommend transparency about feature selection and about the plurality of choices.
Pages: 51-67
Number of pages: 17
Related Papers
26 items in total
[1] Adadi, Amina; Berrada, Mohammed. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 2018, 6: 52138-52160.
[2] [Anonymous]. Computational Cognitive Neuroscience. Online book, 2012.
[3] Baker, Alan. Complexity, Networks, and Non-Uniqueness. Foundations of Science, 2013, 18(4): 687-705.
[4] Bishop, C. M. Pattern Recognition and Machine Learning. 2006.
[5] Boge, Florian J. Two Dimensions of Opacity and the Deep Learning Predicament. Minds and Machines, 2022, 32(1): 43-75.
[6] Buckner, Cameron. Empiricism without magic: transformational abstraction in deep convolutional neural networks. Synthese, 2018, 195(12): 5339-5372.
[7] Cat, Jordi. In: Metaphors, Analogie…, 2022: 115.
[8] Eitel, Fabian; Ritter, Kerstin. Testing the Robustness of Attribution Methods for Convolutional Neural Networks in MRI-Based Alzheimer's Disease Classification. Interpretability of Machine Intelligence in Medical Image Computing and Multimodal Learning for Clinical Decision Support, 2020, 11797: 3-11.
[9] Hagar, Amit; Hemmo, Meir. The primacy of geometry. Studies in History and Philosophy of Modern Physics, 2013, 44(3): 357-364.
[10] Heinzmann, Gerhard. Stanford Encyclopedia of Philosophy, 2017.