Channel state information (CSI) feedback enhancement based on artificial intelligence (AI)/machine learning (ML) has been widely studied in recent years, and has been selected as one of the three physical-layer AI/ML use cases in 3GPP Release 18 [1]. Despite their success in achieving a better trade-off between feedback accuracy and feedback payload size, most current solutions lack the flexibility to adapt to different feedback payload sizes. A separate ML model is trained and deployed for each feedback payload size, so the number of ML models grows linearly with the number of supported payload sizes. This can be problematic for real-world deployment, since a single ML model typically has hundreds of thousands or even millions of parameters. In this paper, we build an ML model in which the majority of parameters are shared, while the normalization layer is switched according to the feedback payload size. The feasibility and effectiveness of the switchable normalization layer are verified through numerical simulations, where an autoencoder (AE) is leveraged for CSI feedback and both convolutional neural network (CNN)-based and transformer-based AEs are considered. Results show that the switchable normalization layer can adapt to various quantization bit-widths as well as various encoder output widths, achieving similar or sometimes even better performance than training separate AEs. Furthermore, since most of the model parameters are re-used, the increase in model size is negligible.
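To make the parameter-sharing idea concrete, the following is a minimal sketch of a switchable normalization layer in PyTorch-style Python. It is an illustration under our own assumptions, not the paper's actual architecture: the layer sizes, the use of LayerNorm, the class names (SwitchableLayerNorm, SharedEncoder), and the number of configurations are all hypothetical; the paper's AEs are CNN- and transformer-based, and the encoder output width may also change per configuration.

```python
# Illustrative sketch only: shared weights plus per-configuration normalization.
# All names and dimensions below are assumptions, not taken from the paper.
import torch
import torch.nn as nn


class SwitchableLayerNorm(nn.Module):
    """Holds one LayerNorm per feedback configuration; every other
    parameter of the model is shared across configurations."""

    def __init__(self, dim: int, num_configs: int):
        super().__init__()
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(num_configs)])

    def forward(self, x: torch.Tensor, config_id: int) -> torch.Tensor:
        # Select the normalization layer matching the active payload size
        # (e.g. a given quantization bit-width).
        return self.norms[config_id](x)


class SharedEncoder(nn.Module):
    """Toy CSI encoder: the linear layers are shared, only the
    normalization layer is switched per configuration."""

    def __init__(self, in_dim=256, hidden=128, out_dim=64, num_configs=4):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)                   # shared
        self.norm = SwitchableLayerNorm(hidden, num_configs)   # switched
        self.fc2 = nn.Linear(hidden, out_dim)                  # shared

    def forward(self, x: torch.Tensor, config_id: int) -> torch.Tensor:
        h = torch.relu(self.norm(self.fc1(x), config_id))
        return self.fc2(h)


# Usage: the same shared weights serve every configuration; each extra
# configuration only adds one small LayerNorm (2 * hidden parameters).
enc = SharedEncoder()
csi = torch.randn(8, 256)
code_low = enc(csi, config_id=0)    # e.g. a low-payload configuration
code_high = enc(csi, config_id=3)   # e.g. a high-payload configuration
```

Because each additional configuration contributes only a normalization layer's worth of parameters, the total model size stays essentially flat as the number of supported payload sizes grows, which is the effect reported in the abstract.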