The classification of hyperspectral data by per-pixel classifiers forces each mixed pixel onto a single class, whereas sub-pixel classifiers cannot recover the spatial arrangement of the land cover classes within a pixel. Super resolution mapping exploits the fractional abundances of each pixel and its surrounding pixels to produce a classified image at a much finer spatial resolution. The pixel to be super resolved (PTS) is divided into an equal number of rows and columns according to a pre-defined zoom factor. The spatial proximity of the pixel is also taken into account when mapping the hyperspectral data at the sub-pixel level. Each sub-pixel of the PTS is then modelled as a linear combination of the sub-pixels allotted to the neighbouring pixels with pre-defined weights (here, 8 and 17), which depend directly on the spatial location, or proximity, of the sub-pixels of the PTS relative to the neighbouring pixels. Irrespective of class size, all classes are treated equally while filling the sub-pixels of the PTS, which preserves small classes or targets in the image. Experiments have been carried out on a synthetic dataset and on two hyperspectral datasets of different nature. For the synthetic data, the overall accuracy of super resolution mapping is 96.3% over the whole image and 86.3% when only the mixed pixels are super resolved. For both real hyperspectral datasets, the overall accuracy exceeds 95%.
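To make the procedure concrete, the following Python sketch illustrates the general idea for a single mixed pixel: the PTS is split into a zoom-factor grid of sub-pixels, each sub-pixel location is scored per class by a proximity-weighted combination of the eight neighbouring pixels' fractional abundances, and classes are filled one at a time regardless of their size. This is a minimal, assumed implementation; the inverse-distance weighting stands in for the paper's pre-defined weights (the values 8 and 17 are not reproduced), and the function name, quota rounding, and neighbour ordering are illustrative assumptions rather than the authors' method.

```python
import numpy as np

def super_resolve_pixel(fractions, neighbour_fractions, zoom=3):
    """Illustrative super resolution mapping of one mixed pixel (PTS).

    fractions           : (n_classes,) fractional abundances of the PTS.
    neighbour_fractions : (8, n_classes) fractional abundances of the 8
                          neighbouring pixels (row-major order, centre skipped).
    zoom                : zoom factor; the PTS becomes a zoom x zoom sub-pixel grid.
    """
    n_classes = fractions.shape[0]

    # Sub-pixel quota per class, proportional to its fractional abundance.
    quota = np.round(fractions * zoom * zoom).astype(int)

    # Centres of the 8 neighbouring pixels on a unit grid around the PTS centre.
    neighbour_centres = np.array(
        [(r, c) for r in (-1, 0, 1) for c in (-1, 0, 1) if not (r == 0 and c == 0)],
        dtype=float,
    )

    # Per-class attractiveness of each sub-pixel location: inverse-distance
    # weighted sum of the neighbours' abundances (a stand-in for the paper's
    # pre-defined proximity weights).
    scores = np.zeros((zoom, zoom, n_classes))
    for i in range(zoom):
        for j in range(zoom):
            pos = np.array([(i + 0.5) / zoom - 0.5, (j + 0.5) / zoom - 0.5])
            dists = np.linalg.norm(neighbour_centres - pos, axis=1)
            scores[i, j] = (1.0 / dists) @ neighbour_fractions

    # Fill sub-pixels class by class; every class is processed the same way
    # irrespective of its size, so small classes are not swallowed by large ones.
    labels = -np.ones((zoom, zoom), dtype=int)
    for cls in np.argsort(quota):                  # smallest classes first
        flat = scores[:, :, cls].ravel().copy()
        flat[labels.ravel() >= 0] = -np.inf        # skip already-filled sub-pixels
        for idx in np.argsort(flat)[::-1][:quota[cls]]:
            labels.flat[idx] = cls

    # Sub-pixels left unassigned by quota rounding take their highest-scoring class.
    for idx in np.flatnonzero(labels.ravel() < 0):
        labels.flat[idx] = np.argmax(scores.reshape(-1, n_classes)[idx])

    return labels
```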