Network data are ubiquitous in real-world applications for representing complex relationships among objects, e.g., social networks, citation networks, and web networks. However, because network datasets are large-scale and their representations are high-dimensional and sparse, it is hard to apply off-the-shelf machine learning methods to them directly. Network representation learning (NRL) generates succinct node representations for large-scale networks and serves as a bridge between machine learning methods and network data; it has attracted great research interest from both academia and industry. Despite the wide adoption of NRL algorithms, the setting of their hyperparameters remains a decisive factor in the success of their applications, as hyperparameters can influence the algorithms' performance to a great extent. How to generate a task-aware set of hyperparameters for different NRL algorithms so as to obtain their best performance, compare them fairly, and select the most suitable NRL algorithm for a given network dataset are fundamental questions that must be answered before NRL algorithms are applied. In addition, hyperparameter tuning is time-consuming, and the massive scale of network datasets further complicates the problem by incurring a high memory footprint; how to tune NRL algorithms' hyperparameters within given resource constraints, such as a time budget or a memory limit, is therefore also an open problem. To address these two problems, we propose an easy-to-use framework named JITNREv that compares NRL algorithms fairly within resource constraints based on hyperparameter tuning. The framework consists of four loosely coupled components, namely a hyperparameter sampler, an NRL algorithm manipulator, a performance evaluator, and a hyperparameter sampling-space optimizer, and adopts a sample-test-optimize process in a closed loop; the components interact with one another through data flow. We use a divide-and-diverge sampling method based on Latin hypercube sampling to draw a set of hyperparameter configurations, and trim the sample space around the previously best configuration under the assumption that "around the point with the best performance in the sample set we are more likely to find other points with similar or better performance." The massive scale of network data also poses a great challenge to hyperparameter tuning, since the computational cost of NRL algorithms grows with the network size; we therefore use a graph coarsening model to reduce the data size while preserving graph structural information, so that JITNREv can easily meet the resource constraints set by users. The framework also integrates representative algorithms, widely used evaluation datasets, commonly used evaluation metrics, and data-analysis applications for ease of use. Extensive experiments demonstrate that JITNREv stably improves the performance of common NRL algorithms through hyperparameter tuning alone, thus enabling fair comparison of NRL algorithms at their best performance. For example, on the node classification task, JITNREv increases the accuracy of GCN by up to 31% compared with the default hyperparameter settings. © 2022, Science Press. All rights reserved.
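To make the closed-loop sample-test-optimize process concrete, the following is a minimal Python sketch of that idea, not the JITNREv implementation: it uses SciPy's qmc.LatinHypercube as a stand-in for the paper's divide-and-diverge sampler, and the function name train_and_score, the bounds, and the shrink ratio are hypothetical names introduced here for illustration only.

# Minimal sketch of a closed-loop hyperparameter search: sample with Latin
# hypercube sampling, evaluate, then trim the sample space around the best
# configuration found so far. Names and the shrink ratio are assumptions.
import numpy as np
from scipy.stats import qmc  # Latin hypercube sampling utilities (SciPy >= 1.7)

def tune(train_and_score, lower, upper, n_samples=8, n_rounds=5, shrink=0.5):
    """train_and_score(cfg) -> scalar score, e.g. node-classification accuracy."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_cfg, best_score = None, -np.inf
    sampler = qmc.LatinHypercube(d=len(lower), seed=0)
    for _ in range(n_rounds):
        # Sample configurations and scale them into the current bounds.
        configs = qmc.scale(sampler.random(n=n_samples), lower, upper)
        for cfg in configs:
            score = train_and_score(cfg)
            if score > best_score:
                best_cfg, best_score = cfg, score
        # Trim the space around the best point, following the assumption that
        # nearby points are likely to perform similarly or better.
        half_width = (upper - lower) * shrink / 2.0
        lower = np.maximum(lower, best_cfg - half_width)
        upper = np.minimum(upper, best_cfg + half_width)
    return best_cfg, best_score

A time or memory budget could be enforced by stopping the loop early, and the evaluation itself could be run on a coarsened graph to keep each trial cheap, mirroring the resource-constraint and graph-coarsening ideas described above.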