Face identification from truncated and occluded data is a difficult problem in computer vision and biometrics. It requires recognizing facial features under synchronous or asynchronous changes that truncate or conceal the face. A person's appearance can vary from day to day due to factors such as health conditions, aging, facial structure, beard growth, hairstyle, glasses, or makeup; these variations alter facial characteristics over time and make faces harder to recognize. The central challenge in facial recognition technology lies in designing reliable algorithms that remain robust to such degradations in facial images. This paper presents a novel approach to this problem that improves recognition performance. Two separate hybrid deep learning models, named HResxtAlex-Net and HResCBAMAlex-Net, are proposed for 2D facial recognition. Both are hybrid convolutional neural network (CNN) architectures designed for face identification that fuse multimodal biometric features drawn from different CNN structures. The proposed approach applies feature-level fusion, merging components of the ResNeXt and AlexNet architectures in one model and of ResNeXt and CBAMAlexNet in the other, combining their individual strengths while reducing overall computational complexity. The proposed method has been evaluated on challenging 2D and 3D databases covering pose changes, asynchronous alterations, and facial expressions. In addition, a new 2D YaleFace dataset has been generated through data augmentation, containing images with hidden truncation, asynchronous face changes, and variations in brightness and lighting conditions. The experiments demonstrate the effectiveness of the proposed models on masked, truncated, and blurred images, achieving a recognition rate of up to 100%.