Multimodality is key to increasing the efficiency and robustness of person authentication algorithms. Most multimodal authentication schemes developed to date combine speech- and image-based features, benefiting from the high performance offered by the speech modality. Depending on the application, however, speech data is not always available or cannot be used. This paper addresses such cases and investigates the best performance achievable by a system based on facial images alone, using information extracted from both profile and frontal views. Starting from two profile-related modalities, one based on the profile shape and the other on the grey-level distribution along this shape, we show how to build a profile-based expert whose performance exceeds that of each profile modality taken separately. A second expert uses invariant parts of the frontal view to perform frontal-based authentication. Several fusion schemes are studied, and the best approach is applied to efficiently combine the two experts. The result is a robust image-based person authentication scheme that achieves a success rate of 96.5% on the M2VTS database. (C) 1998 Elsevier Science B.V. All rights reserved.
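The combination of the two experts described above can be illustrated by a score-level fusion sketch. This is a minimal, hypothetical example assuming each expert produces a similarity score in [0, 1]; the weights and acceptance threshold are illustrative placeholders, not values from the paper, and the paper's actual fusion scheme is chosen experimentally among several candidates.

```python
# Hypothetical score-level fusion of a profile-based expert and a
# frontal-based expert. Each expert is assumed to output a similarity
# score in [0, 1]; weights and threshold are illustrative only.

def fuse_scores(profile_score: float, frontal_score: float,
                w_profile: float = 0.5, w_frontal: float = 0.5) -> float:
    """Weighted-sum fusion of the two expert scores."""
    return w_profile * profile_score + w_frontal * frontal_score

def authenticate(profile_score: float, frontal_score: float,
                 threshold: float = 0.6) -> bool:
    """Accept the claimed identity if the fused score meets the threshold."""
    return fuse_scores(profile_score, frontal_score) >= threshold
```

In practice the weights would be tuned on a development set so that the fused score outperforms either expert alone, which is the behaviour the paper reports for its chosen fusion scheme.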