In this article, we use machine learning to dynamically determine whether a point on the computational grid requires implicit numerical dissipation for large eddy simulation (LES). The decision-making process is learned through a priori training on quantities derived from direct numerical simulation (DNS) data. In particular, we compute eddy viscosities obtained by coarse-graining DNS quantities and use their projection onto a Gaussian distribution to categorize regions that may require dissipation. If the classifier determines that closure is necessary, an upwinded scheme is used to compute the nonlinear Jacobian; if closure is deemed unnecessary, a symmetric, second-order accurate, energy- and enstrophy-preserving Arakawa scheme is used instead. The result is a closure framework that precludes the specification of any model form for the small-scale contributions of turbulence, instead deploying an appropriate numerical dissipation driven by explicit closure hypotheses. The methodology is deployed for the Kraichnan turbulence test case and assessed through statistical quantities such as angle-averaged kinetic energy spectra and vorticity structure functions. Our framework thus establishes a link between the use of explicit LES ideologies for closure and numerical dissipation-based modeling of turbulence, leading to improved statistical fidelity of a posteriori simulations. (C) 2020 Elsevier B.V. All rights reserved.
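The pointwise scheme switching described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names `arakawa_jacobian`, `upwind_jacobian`, and `blended_jacobian` are hypothetical, and the boolean mask stands in for the output of the trained classifier (which, in the paper, is learned from coarse-grained DNS eddy viscosities).

```python
import numpy as np


def arakawa_jacobian(w, psi, h):
    """Second-order Arakawa Jacobian J(w, psi) on a doubly periodic grid.

    Averages three finite-difference forms, (J1 + J2 + J3) / 3, so that
    the discrete sums of w*J and psi*J vanish (energy and enstrophy
    conservation in the nonlinear term).
    """
    def sh(a, i, j):
        # Periodic shift: sh(a, 1, 0)[p, q] == a[p+1, q].
        return np.roll(np.roll(a, -i, axis=0), -j, axis=1)

    j1 = ((sh(w, 1, 0) - sh(w, -1, 0)) * (sh(psi, 0, 1) - sh(psi, 0, -1))
          - (sh(w, 0, 1) - sh(w, 0, -1)) * (sh(psi, 1, 0) - sh(psi, -1, 0)))
    j2 = (sh(w, 1, 0) * (sh(psi, 1, 1) - sh(psi, 1, -1))
          - sh(w, -1, 0) * (sh(psi, -1, 1) - sh(psi, -1, -1))
          - sh(w, 0, 1) * (sh(psi, 1, 1) - sh(psi, -1, 1))
          + sh(w, 0, -1) * (sh(psi, 1, -1) - sh(psi, -1, -1)))
    j3 = (sh(w, 1, 1) * (sh(psi, 0, 1) - sh(psi, 1, 0))
          - sh(w, -1, -1) * (sh(psi, -1, 0) - sh(psi, 0, -1))
          - sh(w, -1, 1) * (sh(psi, 0, 1) - sh(psi, -1, 0))
          + sh(w, 1, -1) * (sh(psi, 1, 0) - sh(psi, 0, -1)))
    return (j1 + j2 + j3) / (12.0 * h * h)


def upwind_jacobian(w, psi, h):
    """First-order upwinded Jacobian in advection form u*w_x + v*w_y.

    The one-sided differences introduce the implicit numerical
    dissipation that acts as the closure.
    """
    u = (np.roll(psi, -1, axis=1) - np.roll(psi, 1, axis=1)) / (2 * h)   # u =  dpsi/dy
    v = -(np.roll(psi, -1, axis=0) - np.roll(psi, 1, axis=0)) / (2 * h)  # v = -dpsi/dx
    wx_m = (w - np.roll(w, 1, axis=0)) / h    # backward difference in x
    wx_p = (np.roll(w, -1, axis=0) - w) / h   # forward difference in x
    wy_m = (w - np.roll(w, 1, axis=1)) / h
    wy_p = (np.roll(w, -1, axis=1) - w) / h
    return (np.where(u > 0, u * wx_m, u * wx_p)
            + np.where(v > 0, v * wy_m, v * wy_p))


def blended_jacobian(w, psi, h, needs_dissipation):
    """Pointwise switch: upwind where the classifier flags a point for
    dissipation, conservative Arakawa everywhere else."""
    return np.where(needs_dissipation,
                    upwind_jacobian(w, psi, h),
                    arakawa_jacobian(w, psi, h))
```

In an a posteriori simulation, `needs_dissipation` would be refreshed each time step by evaluating the trained classifier on local flow features, so the dissipative stencil is deployed only where the learning indicates closure is required.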