Breast cancer remains a prevalent health concern, with high incidence rates globally. Early detection is critical: it improves patient outcomes and treatment efficacy, substantially lowers the disease's overall burden, and increases the chances of a favourable prognosis. Digital Breast Tomosynthesis (DBT), which provides three-dimensional images of breast tissue, has become a highly effective imaging modality in the fight against breast cancer. However, the complexity of breast anatomy and the subtlety of many abnormalities make accurate classification of DBT scans challenging. This paper presents an enhanced framework that combines deep learning models with feature fusion and selection to categorise DBT scans as benign, malignant, or normal. The proposed system integrates Histogram of Oriented Gradients (HOG) descriptors with the HSV colour space to strengthen the extraction of the most discriminative features; the combined use of feature fusion and selection enables more effective discrimination of breast lesions in DBT scans. In addition to our previously developed deep learning model, Mod_AlexNet, two pre-trained models, ResNet-50 and SqueezeNet, were trained on the DBT dataset. Once features were extracted from the deep learning models, a sequence of fusion and selection steps was applied, and several classifiers were then employed to categorise the selected features. The proposed integrated Mod_AlexNet system outperformed the other systems in classification accuracy, sensitivity, precision, F1-score, and specificity across the classifiers tested, achieving sensitivity improvements of 49.35% and 25.04% over the ResNet-50-based and SqueezeNet-based systems, respectively.
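The fusion-then-selection stage described above can be sketched in outline. This is a minimal illustration, not the authors' implementation: the feature dimensions are placeholders, the features are random stand-ins for the CNN and HOG/HSV descriptors, and variance-based filtering is assumed here as one simple selection criterion (the paper's actual selection method may differ).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for per-sample feature vectors (10 samples).
# Dimensions are illustrative, not the models' real output sizes.
feat_deep = rng.normal(size=(10, 256))      # e.g. Mod_AlexNet features
feat_hog_hsv = rng.normal(size=(10, 128))   # e.g. HOG + HSV descriptors

# Fusion: concatenate the feature vectors sample-wise.
fused = np.concatenate([feat_deep, feat_hog_hsv], axis=1)

# Selection (assumed filter method): keep the k highest-variance features.
k = 64
keep = np.argsort(fused.var(axis=0))[-k:]
selected = fused[:, keep]

# The reduced matrix is what the downstream classifiers would consume.
assert fused.shape == (10, 384)
assert selected.shape == (10, k)
```

In practice the selected matrix would be fed to the classifiers mentioned in the abstract; concatenation followed by a filter-style selector is a common, cheap baseline for this kind of hybrid pipeline.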