Franzoni and Stephan (2023) recommend a probabilistic 'subjective expected utility' technique for addressing challenges of uncertainty in research evaluation. Whilst acknowledging strengths in F&S's analysis, this Response highlights a series of important practical, theoretical and methodological deficiencies. The stakes are raised because these deficiencies are widely shared across a growing body of practice in research policy and beyond: practice that seeks to reduce and aggregate complex, ambiguous, qualitative, multidimensional and contested real-world challenges through ostensibly precise calculation. Taking the associated problems in turn, this Response shows how F&S: make scientifically dubious claims; understate the depths of uncertainty; overstate the sufficiency of quantification; neglect foundational limits to calculation; and ignore crucial interpretive dimensions of policy making. Highlighting roles for greater methodological diversity, this Response closes by pointing to alternative methods that collectively allow more robustly plural approaches to contrasting aspects of incertitude. In the process, the steering of research directions can become more rigorous and accountable, and less vulnerable to manipulation and inadvertent bias. With globally growing 'post-truth' authoritarian populism arguably provoked in part by the kind of technocracy criticised here, research evaluation may, in this particular area, make a small contribution to re-invigorating democracy by 'opening up' the hiding of politics behind expertise.