The automated mapping of forest burn severity from remote sensing imagery has attracted considerable attention over the last decade. However, few studies have compared the performance of a range of classifiers for forest burn severity mapping under different burn severity class schemes. In this study, the performance of three supervised classifiers, maximum likelihood (ML), spectral angle mapper (SAM), and deep learning (U-Net), was evaluated for mapping forest burn severity under two class settings: a two-level scheme (burned and unburned) and a four-level scheme (crown fire, heat-damaged, ground fire, and unburned). Multispectral unmanned aerial vehicle (UAV) images and light detection and ranging (LiDAR) point clouds acquired over forest fire areas in Andong, South Korea, were used to evaluate burn severity. The results show that all classifiers mapped the two-level burn severity with high overall accuracy (OA) (SAM: OA = 92.05%, kappa coefficient (K) = 0.84; U-Net: OA = 91.83%, K = 0.83; ML: OA = 90.92%, K = 0.82). For four-level burn severity mapping, U-Net (OA = 79.23%, K = 0.64) outperformed the conventional classifiers SAM (OA = 50.61%, K = 0.38) and ML (OA = 46.85%, K = 0.34). Regarding class separability, SAM and U-Net performed well in detecting the most severe class (crown fire areas), whereas all classifiers frequently misclassified the moderate classes (heat-damaged and ground fire). In particular, ML and SAM showed a low capability in identifying unburned areas, while U-Net showed the lowest capability in mapping heat-damaged and ground fire areas. Overall, our study demonstrates that reliable burn severity mapping for Korea's forest fires depends largely on the number of burn severity classes as well as on each classifier's ability to discriminate the moderate severity classes.
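The overall accuracy and kappa coefficient used throughout the abstract are standard agreement measures derived from a confusion matrix. The sketch below illustrates their computation; the 2x2 matrix is hypothetical and not taken from the study's data.

```python
# Illustrative computation of overall accuracy (OA) and Cohen's kappa (K)
# from a square confusion matrix (rows = reference labels, cols = predicted).
# The matrix values below are hypothetical, not results from the study.

def oa_and_kappa(cm):
    n = sum(sum(row) for row in cm)                    # total samples
    diag = sum(cm[i][i] for i in range(len(cm)))       # correctly classified
    oa = diag / n                                      # observed agreement
    # expected agreement by chance, from row/column marginals
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm)
             for i in range(len(cm))) / n ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa

# Hypothetical two-level (burned vs. unburned) confusion matrix
cm = [[45, 5],
      [4, 46]]
oa, k = oa_and_kappa(cm)
print(f"OA = {oa:.2%}, K = {k:.2f}")  # OA = 91.00%, K = 0.82
```

Kappa discounts chance agreement, which is why a high OA on a two-class problem (where chance agreement is large) can coincide with a noticeably lower kappa than the raw accuracy suggests.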