Processing large volumes of data and information requires high processing power, which usually involves costly supercomputers. The MapReduce parallel framework offers an automated way of distributing such large workloads across many computers. This paper presents a preliminary study of scalability using MapReduce as an automated parallel processing framework running on low-cost off-the-shelf hardware. The system architecture is built from a collection of off-the-shelf machines, and the scalability test is conducted by adding machines to the architecture one at a time. MapReduce is used as the parallel framework to automatically distribute tasks according to the available resources. Performance is evaluated based on the improvement in speedup. The results show that MapReduce accommodates the scaling of off-the-shelf hardware resources by automatically distributing tasks regardless of the number of machines added to the architecture.
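For reference, speedup can be read here in the usual sense; the sketch below states the metric under the assumption that $T_1$ denotes the runtime on a single machine and $T_n$ the runtime on $n$ machines (the notation is illustrative and not taken from the paper).

\[
S(n) = \frac{T_1}{T_n}, \qquad E(n) = \frac{S(n)}{n}
\]

Under this reading, adding one machine at a time and re-measuring $T_n$ yields the speedup curve $S(n)$, while the efficiency $E(n)$ indicates how much of the added capacity each new machine actually contributes.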