A novel computational model of a pre-attentive visual-search system is presented. The model processes various types of displays reproduced from three sources of visual-search experimental data: Duncan and Humphreys, Psychol. Rev. 96 (1989) 433-458; Treisman and Sato, J. Exp. Psychol. 16 (1990) 459-478; and Wolfe, Friedman-Hill, Stewart, and O'Connell, J. Exp. Psychol. 18 (1992) 34-49. The response-time slopes measured in these experiments suggest that some of the displays are searched serially while others are scanned in parallel. Our model operates in two phases. First, the visual-search displays are compressed to overcome assumed biological capacity limitations. Compression is achieved by projecting the task displays onto a small set of feature maps; these features have been extracted from a large set of natural images by means of principal component analysis. Second, the compressed representations are further processed to identify a target in the display. The model succeeds in fast detection of targets in experimentally labeled parallel displays, but fails with serial ones. Analysis of the compressed representations reveals that compressed parallel displays contain global information that enables instantaneous target detection. In the representations of serial displays, however, this global information is obscured, and a target-detection system must therefore resort to a serial, attentional scan of local features across the display. Our analysis provides a numerical criterion that is strongly correlated with the experimental response-time slopes. It also provides new insight into the mechanisms of visual attention, suggesting a self-organized representation of Treisman's feature maps, which may be implemented in other paradigms in the field. (C) 2000 Elsevier Science B.V. All rights reserved.
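The first phase of the model, as the abstract describes it, extracts feature maps from natural images by principal component analysis and projects the search displays onto them. The following is a minimal sketch of that pipeline, not the authors' implementation: the random arrays standing in for natural-image patches and for a search display, the patch size, the number of components `k`, and the non-overlapping tiling are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for natural-image data: the paper derives its
# feature maps from patches of real natural images; random data is used
# here purely to illustrate the pipeline.
patches = rng.standard_normal((1000, 64))   # 1000 patches of 8x8 pixels

# Principal component analysis via SVD of the centered patch matrix.
centered = patches - patches.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 8                                       # small set of feature maps
feature_maps = vt[:k]                       # top-k principal components

def compress(display, maps, patch=8):
    """Project each non-overlapping patch-sized tile of a display onto
    the feature maps, yielding one k-dimensional code per tile."""
    h, w = display.shape
    rows = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            tile = display[i:i + patch, j:j + patch].ravel()
            rows.append(maps @ tile)
    return np.array(rows)

display = rng.standard_normal((32, 32))     # toy stand-in for a search display
coded = compress(display, feature_maps)
print(coded.shape)                          # (16, 8): 4x4 tiles, k features each
```

In this sketch the compressed representation is simply the matrix of per-tile projection coefficients; the second phase of the model would then examine such a representation for the global structure that distinguishes parallel from serial displays.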