Deep neural network-based feature selection with local false discovery rate estimation

Cited by: 1
Authors
Cao, Zixuan [1 ,2 ]
Sun, Xiaoya [1 ,2 ]
Fu, Yan [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Acad Math & Syst Sci, CEMS, NCMIS,RCSDS, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Math Sci, Beijing 100049, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Feature selection; Deep neural networks; Error control; Local false discovery rate;
DOI
10.1007/s10489-024-05944-7
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Feature selection, which aims to identify the most significant subset of features in the original data, plays a prominent role in high-dimensional data processing. To a certain extent, feature selection can mitigate the poor interpretability of deep neural networks (DNNs). Despite recent advancements in DNN-based feature selection, most methods overlook error control of the selected features and lack reproducibility. In this paper, we propose a new method, DeepTD, that performs error-controlled feature selection for DNNs: artificial decoy features are constructed and subjected to competition with the original features according to feature importance scores computed from the trained network, enabling p-value-free local false discovery rate (FDR) estimation for the selected features. The merits of DeepTD include: a new DNN-derived measure of feature importance combining the weights and gradients of the network; the first algorithm that estimates the local FDR from DNN-derived scores; confidence assessment of individual selected features; and better robustness to small numbers of important features and low FDR thresholds than competition-based FDR control methods such as the knockoff filter. On multiple synthetic datasets, DeepTD accurately estimated the local FDR and empirically controlled the FDR, with 10% higher power on average than the knockoff filter. At lower FDR thresholds, the power of our method reached two to three times that of other state-of-the-art methods.
DeepTD was also applied to real datasets, where it selected 31%-49% more features than alternative methods, demonstrating its validity and utility.
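The decoy competition described in the abstract can be sketched as follows. This is an illustrative reconstruction of generic target-decoy FDR estimation, not the authors' implementation: the score form, the global (rather than local) FDR estimator, and the threshold scan are all assumptions for the sake of a runnable example.

```python
import numpy as np

def target_decoy_fdr(target_scores, decoy_scores, threshold):
    """Estimate the FDR at a score threshold by decoy counting:
    decoys passing the threshold approximate the number of false
    original (target) features passing it."""
    n_decoy = int(np.sum(np.asarray(decoy_scores) >= threshold))
    n_target = int(np.sum(np.asarray(target_scores) >= threshold))
    return n_decoy / max(n_target, 1)

def select_features(target_scores, decoy_scores, fdr_level=0.1):
    """Select original features at the given FDR level by taking the
    smallest score threshold whose estimated FDR stays below it.
    Returns the indices of the selected target features."""
    target_scores = np.asarray(target_scores)
    for t in np.sort(np.unique(target_scores)):
        if target_decoy_fdr(target_scores, decoy_scores, t) <= fdr_level:
            return np.flatnonzero(target_scores >= t)
    return np.array([], dtype=int)
```

In this setup, the importance scores would come from the trained network (the paper combines its weights and gradients), and the decoy scores calibrate how large a score can arise for an irrelevant feature.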
Pages: 16