In recent years, convolutional neural networks have become an indispensable tool in many machine learning applications, especially image classification. At the same time, new research has emerged on the robustness of these models and their susceptibility to various adversarial attacks. As these models see widespread adoption, it is increasingly important to make them suitable for critical applications. In our previous work, we experimented with a new type of learning applicable to all convolutional neural networks: classification based on missing (low-impact) features. We showed that, in the case of partial inputs or image occlusion, this method produces models that are more robust and perform better than traditional models of the same architecture. In this paper, we explore a notable characteristic of these models: although they achieve a general increase in validation accuracy, they also lose some important knowledge. We propose one solution to overcome this problem and validate our assumptions on the CIFAR-10 image classification dataset.