Source camera identification is a well-known digital forensic challenge: mapping an image to its authentic source. The current state of the art provides a number of successful and efficient solutions to this problem. However, almost all existing techniques require a sufficiently large number of image samples for pre-processing before source identification. Limited-labels classification is a realistic scenario for a forensic analyst, who has access to only a few labelled training samples for source camera identification. In such contexts, where obtaining a vast number of image samples per camera is infeasible, the correctness of existing source identification schemes is threatened. In this paper, we address the problem of performing accurate source camera identification with a limited set of labelled training samples per camera model. We use a few-shot learning technique known as the deep siamese network, and achieve significantly higher classification accuracy than the state of the art. The main principle of operation is to form pairs of samples from the same camera model, as well as from different camera models, to enhance the training space; a deep neural network is then used to perform source classification. We perform experiments on traditional camera model identification, as well as on intra-make and intra-device source identification. We also show that our proposed methodology, under the limited-labels scenario, is robust to image transformations such as rotation, scaling, compression, and additive noise.
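To make the pair-formation idea summarised above concrete, the following is a minimal sketch in PyTorch; it is not the authors' implementation, and the network architecture, loss margin, and helper names are illustrative assumptions. It builds positive pairs (same camera model) and negative pairs (different models) from a small labelled set, and trains a shared-weight embedding network with a contrastive loss.

```python
# Minimal, illustrative sketch of siamese pair formation for few-shot
# source camera identification. All names and hyper-parameters are assumed.
import itertools
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_pairs(images, labels):
    """Return (img_a, img_b, same) tuples from a few labelled samples,
    where same = 1 if both images come from the same camera model."""
    pairs = []
    for i, j in itertools.combinations(range(len(images)), 2):
        pairs.append((images[i], images[j], int(labels[i] == labels[j])))
    random.shuffle(pairs)
    return pairs


class SiameseEmbedding(nn.Module):
    """Small CNN mapping an image patch to an embedding vector; the same
    weights are applied to both members of each pair."""

    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


def contrastive_loss(z_a, z_b, same, margin=1.0):
    """Pull same-model pairs together, push different-model pairs apart."""
    dist = F.pairwise_distance(z_a, z_b)
    return (same * dist.pow(2)
            + (1 - same) * F.relu(margin - dist).pow(2)).mean()


if __name__ == "__main__":
    # Tiny synthetic example: 6 image patches from 2 camera models.
    images = [torch.randn(3, 64, 64) for _ in range(6)]
    labels = [0, 0, 0, 1, 1, 1]
    net = SiameseEmbedding()

    pairs = make_pairs(images, labels)
    a = torch.stack([p[0] for p in pairs])
    b = torch.stack([p[1] for p in pairs])
    same = torch.tensor([p[2] for p in pairs], dtype=torch.float32)

    loss = contrastive_loss(net(a), net(b), same)
    print(f"contrastive loss over {len(pairs)} pairs: {loss.item():.4f}")
```

Because every pair of labelled images yields a training example, even a handful of samples per camera model produces a quadratically larger training set, which is the motivation for the pairing strategy described in the abstract.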