Signal detection theory, speed-accuracy trade-offs, and attentional allocation trade-offs all describe trade-offs between different components of performance in a detection task; however, these phenomena have generally been considered independently, and their relationships remain unclear. In this article, we expand the classical signal detection model in a way that allows us to incorporate speed, accuracy, and attention into a single unifying framework. Classical signal detection theory generally assumes fixed, overlapping distributions of the perceived stimuli generated by desirable and undesirable objects. The variability of these distributions is typically attributed either to true variation among objects or to perceptual error. Our new framework considers how investment in learning about the signal emitted by an encountered object (sampling) might reduce one component of this variability, namely that generated by perceptual error. First, we identify the optimal sampling strategy, based on the payoff-maximizing time or attention a receiver should allocate to a given object. Next, we show how this optimal strategy varies with parameters such as the ratio of desirable to undesirable objects and the initial perceptual error. Finally, we highlight the consequences of these optimal sampling strategies, using Batesian mimicry as a central example. The implications of receivers' ability to reduce perceptual error by allocating more time or attention are potentially far-reaching. For instance, snap decisions will arise when predators do not gain from allocating more time to make better-informed decisions, and under some conditions this behavior will allow more imperfect mimicry to persist.
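As a minimal illustrative sketch (not notation from the article itself), the variance-reduction logic invoked above can be written in a standard signal detection form, assuming perceived signals are Gaussian and that a receiver averages independent samples; the symbols \(\mu_D,\ \mu_U,\ \sigma_o,\ \sigma_p,\ n\) are assumptions introduced here for illustration:

\[
  X_D \sim \mathcal{N}\!\left(\mu_D,\ \sigma_o^{2} + \tfrac{\sigma_p^{2}}{n}\right),
  \qquad
  X_U \sim \mathcal{N}\!\left(\mu_U,\ \sigma_o^{2} + \tfrac{\sigma_p^{2}}{n}\right),
\]

where \(X_D\) and \(X_U\) are the perceived signals of desirable and undesirable objects, \(\sigma_o^{2}\) is true variation among objects, \(\sigma_p^{2}\) is single-sample perceptual error, and \(n\) is the number of independent samples taken. Under these assumptions, increasing \(n\) (allocating more time or attention) shrinks only the perceptual-error component, improving discriminability, for example \(d' = (\mu_D - \mu_U)\big/\sqrt{\sigma_o^{2} + \sigma_p^{2}/n}\), but at the cost of time that could be spent on other objects; the irreducible term \(\sigma_o^{2}\) is one reason further sampling may cease to pay.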