Conformal inference, or conformal prediction, is a statistical method that yields robust uncertainty estimates for predictions from black-box models without presupposing any particular data distribution. It has emerged as a simple way to construct uncertainty intervals, especially in critical scenarios. Given a user-defined probability threshold, conformal inference guarantees that the resulting sets, such as the prediction interval in regression tasks or the prediction set in classification tasks, contain the actual value with at least the specified probability. For instance, by defining a threshold probability, we can compute price ranges or quality tiers for pre-owned cars, assuring that the actual values fall within these intervals at the chosen coverage level. While such models offer transparency in terms of uncertainty quantification, they often fall short in explainability: it remains unclear which factors drive changes in conformal metrics such as set size and coverage, and thus the formation of prediction-set-type outputs. Our paper introduces a comprehensive global explainability framework based on conformal inference, addressing the lack of methods that accommodate prediction-set-type outputs across classifiers. This understanding not only enhances transparency but also provides verifiable insight into the factors driving changes in conformal metrics and the formation of prediction sets, which are guaranteed to contain the actual value, with the help of counterfactual instances of the calibration set. Moreover, ConformaSight's ability to capture and rank the features that most increase classifier coverage enables it to effectively identify the minimal dataset required for optimal model performance. We also showcase the flexibility of employing user-defined thresholds and re-calibration techniques to generate robust and reliable global feature importance estimates on test sets with significantly different distributions, obtained by perturbing the original test sets.
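To make the coverage guarantee concrete, the sketch below shows split conformal classification with a user-defined miscoverage level: calibration scores determine a threshold, and prediction sets are guaranteed to contain the true label with probability at least 1 - alpha. This is a minimal illustration of conformal prediction in general, not the ConformaSight framework; the scikit-learn model, the dataset, and names such as `alpha` and `q_hat` are illustrative assumptions.

```python
# Minimal sketch of split conformal classification (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

alpha = 0.1  # user-defined miscoverage level, i.e. 90% target coverage (assumption)

# Synthetic data split into training, calibration, and test sets.
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Nonconformity score on the calibration set: 1 - predicted probability of the true class.
cal_probs = model.predict_proba(X_cal)
cal_scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

# Conformal quantile with the finite-sample correction.
n = len(cal_scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n
q_hat = np.quantile(cal_scores, q_level, method="higher")

# Prediction sets: include every class whose nonconformity score is below the threshold.
test_probs = model.predict_proba(X_test)
prediction_sets = test_probs >= (1.0 - q_hat)  # boolean matrix of shape (n_test, n_classes)

# Conformal metrics referenced in the abstract: empirical coverage and average set size.
coverage = prediction_sets[np.arange(len(y_test)), y_test].mean()
avg_set_size = prediction_sets.sum(axis=1).mean()
print(f"empirical coverage: {coverage:.3f}, average set size: {avg_set_size:.2f}")
```

Changing `alpha` or perturbing the calibration set shifts the threshold `q_hat`, and hence the coverage and set sizes; these are the kinds of conformal metrics whose sensitivity to input features the proposed framework explains globally.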