Deeper and wider Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in many applications. However, increasing a network's depth or width increases its complexity, resulting in high memory and time consumption. This complexity stems mainly from the large number of free parameters in Fully Connected Layers (FCLs) and the number of Multiply-Accumulate (MAC) operations in Convolutional Layers (CLs): in most CNNs, 90% of the parameters come from FCLs and 99% of the MAC operations come from CLs. Complexity directly affects network size, training time, inference time, and model deployability. In this article, we propose a new approach that reduces complexity and improves the performance of a CNN model for the task of Isolated Handwritten Arabic Character (IHAC) recognition. The approach is inspired by the concept of unconscious perception, which is widely studied in psychology and the cognitive sciences. We model this concept at the level of the hidden FCLs as follows: the neurons of the hidden FCLs are divided into two blocks, a Conscious Block (CB) and an Unconscious Block (UB). CB and UB are entirely separate from each other, and each is fully connected to the output layer. CB is large and processes the few relevant features, while UB is small and processes the many irrelevant ones. Feature selection is performed at the flatten layer using what we call virtual max-pooling. This strategy significantly reduces connectivity inside the FCLs, resulting in fewer free parameters. Fewer parameters, in turn, leave more freedom to add FCLs at low cost, which allows reducing the number of CLs, and hence MAC operations, without hurting performance. Furthermore, the strategy improves performance by focusing the network's attention on representative features. To evaluate our approach, experiments were carried out on four benchmark IHAC databases: IFHCDB, AHCD, AIA9K, and HACDB. Compared to a baseline CNN model, the proposed model achieved higher accuracy while reducing free parameters, network size, and training and inference time. It also outperformed many recent models for IHAC recognition.
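The abstract does not give implementation details, so the following PyTorch sketch is only one plausible reading of the CB/UB split. The class name `SplitFCHead`, the layer widths, the 28-class output, and the top-k stand-in for the paper's virtual max-pooling are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SplitFCHead(nn.Module):
    """Sketch of the CB/UB split described in the abstract.

    Sizes and the feature-selection rule are hypothetical; the paper's
    'virtual max-pooling' is approximated here by a top-k split.
    """
    def __init__(self, n_features=1024, n_relevant=128,
                 cb_width=512, ub_width=32, n_classes=28):
        super().__init__()
        self.n_relevant = n_relevant
        # Conscious Block: large, sees only the few "relevant" features.
        self.cb = nn.Sequential(nn.Linear(n_relevant, cb_width), nn.ReLU())
        # Unconscious Block: small, sees the remaining "irrelevant" features.
        self.ub = nn.Sequential(
            nn.Linear(n_features - n_relevant, ub_width), nn.ReLU())
        # Both blocks, kept separate, are fully connected to the output layer.
        self.out = nn.Linear(cb_width + ub_width, n_classes)

    def forward(self, x):
        # x: flattened CL output, shape (batch, n_features).
        # Placeholder for virtual max-pooling: route the k highest-activation
        # features to CB and all remaining features to UB.
        topk = x.topk(self.n_relevant, dim=1)
        relevant = topk.values
        mask = torch.ones_like(x, dtype=torch.bool)
        mask.scatter_(1, topk.indices, False)
        irrelevant = x[mask].view(x.size(0), -1)
        return self.out(torch.cat([self.cb(relevant),
                                   self.ub(irrelevant)], dim=1))

head = SplitFCHead()
logits = head(torch.randn(4, 1024))  # -> shape (4, 28)
```

With these illustrative sizes, the hidden layer of the split head has 128×512 + 896×32 ≈ 94k weights, versus 1024×544 ≈ 557k for a fully connected layer of the same total width, which is the kind of connectivity reduction inside the FCLs that the abstract describes.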