The evolution of techniques in AI has motivated the investigation of various forms of reasoning, such as non-monotonic and defeasible reasoning, which in some cases have been associated with non-classical logics. In general, however, such developments have confined their attention to non-doxastic states of inputs and outputs, that is, to forms of reasoning performed without explicit reference to degrees of belief, or confidence, in the data (the premisses and conclusions of the inference rules). In this paper we outline the use of a particular kind of paraconsistent logic, termed annotated logic, for dealing with propositions that are vague in a certain sense but that, despite their vagueness, can be 'believed' with a given degree of confidence.