The features employed in content-based retrieval are most often simple low-level representations, whereas a human observer judges similarity between images based on high-level semantic properties. Using textures as an example, we show that a more accurate description of the underlying distribution of low-level features does not improve retrieval performance. We also introduce a simplified multiresolution symmetric autoregressive model for textures and a similarity measure based on the Bhattacharyya distance. Experiments are performed with four texture representations and four similarity measures on the Brodatz and VisTex databases.
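For reference, a brief sketch of the distance named above (the general definition and the Gaussian closed form are standard results; how the paper parameterizes the texture feature distributions is not stated in this abstract and is assumed here). The Bhattacharyya distance between densities $p$ and $q$ is $D_B(p,q) = -\ln \int \sqrt{p(x)\,q(x)}\,dx$, and for two multivariate Gaussians $\mathcal{N}(\mu_1,\Sigma_1)$ and $\mathcal{N}(\mu_2,\Sigma_2)$ it reduces to
\[
D_B = \frac{1}{8}\,(\mu_1-\mu_2)^\top \Sigma^{-1} (\mu_1-\mu_2) + \frac{1}{2}\,\ln\frac{\det\Sigma}{\sqrt{\det\Sigma_1\,\det\Sigma_2}}, \qquad \Sigma = \frac{\Sigma_1+\Sigma_2}{2},
\]
where the first term penalizes differences in means and the second penalizes differences in covariance structure.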