Eliminating accidental deviations to minimize generalization error and maximize replicability: Applications in connectomics and genomics

Times Cited: 20
Authors
Bridgeford, Eric W. [1 ]
Wang, Shangsi [1 ]
Wang, Zeyi [1 ]
Xu, Ting [3 ]
Craddock, Cameron [3 ]
Dey, Jayanta [1 ]
Kiar, Gregory [1 ]
Gray-Roncal, William [1 ]
Colantuoni, Carlo [1 ]
Douville, Christopher [1 ]
Noble, Stephanie [4 ]
Priebe, Carey E. [1 ]
Caffo, Brian [1 ]
Milham, Michael [3 ]
Zuo, Xi-Nian [2 ,5 ,6 ,7 ]
Vogelstein, Joshua T. [1 ,8 ]
Ma, Jian
Richards, Blake A.
Affiliations
[1] Johns Hopkins Univ, Baltimore, MD 21218 USA
[2] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[3] Child Mind Inst, New York, NY USA
[4] Yale Univ, New Haven, CT USA
[5] Beijing Normal Univ, Beijing, Peoples R China
[6] Nanning Normal Univ, Nanning, Peoples R China
[7] Univ Chinese Acad Sci, Beijing, Peoples R China
[8] Progress Learning, Baltimore, MD 21215 USA
Funding
US National Science Foundation;
Keywords
FUNCTIONAL NEUROIMAGING EXPERIMENTS; QUANTITATIVE-EVALUATION; RELIABILITY; FMRI;
DOI
10.1371/journal.pcbi.1009279
CLC Number
Q5 [Biochemistry];
Subject Classification Codes
071010; 081704;
Abstract
Replicability, the ability to replicate scientific findings, is a prerequisite for scientific discovery and clinical utility. Troublingly, we are in the midst of a replicability crisis. A key to replicability is that multiple measurements of the same item (e.g., experimental sample or clinical participant) under fixed experimental constraints are relatively similar to one another. Thus, statistics that quantify the relative contributions of accidental deviations (such as measurement error) compared to systematic deviations (such as individual differences) are critical. We demonstrate that existing replicability statistics, such as the intra-class correlation coefficient and fingerprinting, fail to adequately differentiate between accidental and systematic deviations even in very simple settings. We therefore propose a novel statistic, discriminability, which quantifies the degree to which an individual's samples are relatively similar to one another, without restricting the data to be univariate, Gaussian, or even Euclidean. Using this statistic, we introduce the possibility of optimizing experimental design by increasing discriminability, and prove that optimizing discriminability improves performance bounds in subsequent inference tasks. In extensive simulated and real datasets (focusing on brain imaging and demonstrating on genomics), only optimizing data discriminability improves performance on all subsequent inference tasks for each dataset. We therefore suggest that designing experiments and analyses to optimize discriminability may be a crucial step in solving the replicability crisis and, more generally, in mitigating accidental measurement error.
Author Summary
In recent decades, the size and complexity of data have grown exponentially. Unfortunately, the increased scale of modern datasets brings many new challenges. At present, we are in the midst of a replicability crisis, in which scientific discoveries fail to replicate to new datasets.
We believe that difficulties in measurement procedures and processing pipelines, coupled with the influx of complex high-resolution measurements, are at the core of the replicability crisis. If measurements themselves are not replicable, what hope can we have of using them for replicable scientific findings? We introduce the "discriminability" statistic, which quantifies how discriminable measurements are from one another, without limitations on the structure of the underlying measurements. We prove that discriminable strategies tend to be strategies that provide better accuracy on downstream scientific questions. We demonstrate the utility of discriminability over competing approaches on two disparate datasets, one from neuroimaging and one from genomics. Together, we believe these results suggest the value of designing experimental protocols and analysis procedures that optimize discriminability.
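Concretely, the discriminability described above can be read as the probability that two repeated measurements of the same subject are closer to each other than either is to a measurement from a different subject. The sketch below is a simplified illustration of that idea under assumed Euclidean distances; it ignores the tie-handling of the authors' full definition, and the function name `discriminability` is our own, not the paper's reference implementation.

```python
import numpy as np

def discriminability(X, subject_ids):
    """Estimate sample discriminability: the fraction of triplets in which
    a within-subject distance is smaller than a between-subject distance.
    X: (n_samples, n_features) array-like; subject_ids: length-n labels."""
    X = np.asarray(X, dtype=float)
    ids = np.asarray(subject_ids)
    # Pairwise Euclidean distances between all measurements.
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))
    n = len(ids)
    hits, total = 0, 0
    for i in range(n):
        for j in range(n):
            if i == j or ids[i] != ids[j]:
                continue  # (i, j) must be repeated measurements of one subject
            for k in range(n):
                if ids[k] == ids[i]:
                    continue  # k must come from a different subject
                total += 1
                if D[i, j] < D[i, k]:
                    hits += 1
    return hits / total

# Two subjects, two tightly clustered measurements each:
X = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
print(discriminability(X, [0, 0, 1, 1]))  # perfectly discriminable -> 1.0
```

A discriminability near 1 indicates that repeated measurements of the same subject cluster tightly relative to measurements of other subjects; a value near 0.5 indicates that within- and between-subject distances are indistinguishable.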
Pages: 20