We present a new method for object recognition based on the trilinearity theorem, a result from projective geometry due to Shashua [11]. The trilinearity theorem relates, through a trilinear form, the pixel coordinates of an object point visible in three images of the object under varying pose. In a preprocessing stage, the object of interest in every image is segmented from its background, and the background is removed. For such segmented images, our system achieves a high correct classification rate. Known objects are represented in the system by a database of images showing each object from several different viewing directions. To apply the trilinearity theorem to the classification of an input image, several triples of closely matching image points must be constructed: one point in the input image and one in each of two database images of a single object. The triples are generated by Gabor feature vector matching for selected feature points in the images. Using techniques from robust regression, the parameters of the trilinear forms are then determined, and the feature points are reprojected onto one of the three views. The magnitude of the resulting match error determines whether all three images show the same object, and hence whether the object in the input image is recognized.
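For concreteness, each matching triple contributes trilinear constraints of the following shape; the coefficient vectors $\mathbf{a}_1,\dots,\mathbf{a}_4$ used here are illustrative placeholders for rows of the unknown coefficients, and the exact arrangement is the one given in [11]:
\[
  x''\,(\mathbf{a}_1 \cdot \mathbf{p})
  \;-\; x''x'\,(\mathbf{a}_2 \cdot \mathbf{p})
  \;+\; x'\,(\mathbf{a}_3 \cdot \mathbf{p})
  \;-\; (\mathbf{a}_4 \cdot \mathbf{p}) \;=\; 0,
\]
where $\mathbf{p} = (x, y, 1)^{\top}$ holds the pixel coordinates of the point in the first view and $x'$, $x''$ are its coordinates in the second and third views. Four such forms hold per triple, each linear in the 27 unknown coefficients, so the coefficients can in principle be estimated linearly from seven or more matching triples before the robust regression and reprojection steps described above.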