Recommending the most suitable developer for a new bug is a challenge for triagers in software bug repositories. Bugs vary in component, severity, priority, and other significant attributes, making it difficult to address them promptly. The difficulty is compounded by the lack of background knowledge about new bugs, which hampers traditional recommender systems. When adequate information about either a developer or a bug is unavailable, building, training, and testing a conventional machine-learning model becomes arduous; in such scenarios, a reinforcement-learning model is one potential solution. Triagers often resort to simplistic strategies such as selecting a random developer (exploration) or the developer who has been assigned most frequently (exploitation). The research presented here shows that these multi-armed bandit (MAB) strategies perform inadequately. To address this, we propose an improved bandit approach that uses contextual, or side, information to automatically recommend suitable developers for new (cold) bugs. We evaluate two MAB algorithms and four contextual-MAB algorithms on five publicly available open-source datasets, using four performance metrics: reward, average reward, regret, and average regret. The experimental results provide a thorough framework for developer recommendation and indicate that the contextual-MAB approaches consistently outperform the simple MAB approaches.
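To make the contrast between simple and contextual bandits concrete, the sketch below simulates developer recommendation with a hypothetical set of developers (arms) and per-bug feature vectors. It uses epsilon-greedy selection as a stand-in for a simple MAB and a LinUCB-style rule as a stand-in for a contextual MAB; these are standard algorithms chosen for illustration, not necessarily the ones evaluated in this work, and the reward model is invented for the example.

```python
import numpy as np

# Hypothetical setup: each "arm" is a candidate developer; each new bug
# arrives with a small feature vector (component, severity, priority, ...).
rng = np.random.default_rng(0)
n_developers, n_features, n_rounds = 5, 4, 1000

# Simulated ground truth used only to generate rewards for this sketch:
# each developer has an unknown affinity vector over bug features.
true_theta = rng.normal(size=(n_developers, n_features))

def reward(dev, x):
    # Bernoulli reward: probability of a successful assignment.
    p = 1.0 / (1.0 + np.exp(-x @ true_theta[dev]))
    return float(rng.random() < p)

def epsilon_greedy(epsilon=0.1):
    """Simple MAB: ignores the bug's features entirely."""
    counts = np.zeros(n_developers)
    values = np.zeros(n_developers)
    total = 0.0
    for _ in range(n_rounds):
        x = rng.normal(size=n_features)          # new (cold) bug's context, unused here
        if rng.random() < epsilon:
            dev = int(rng.integers(n_developers))  # explore: random developer
        else:
            dev = int(np.argmax(values))           # exploit: best developer so far
        r = reward(dev, x)
        counts[dev] += 1
        values[dev] += (r - values[dev]) / counts[dev]
        total += r
    return total / n_rounds

def linucb(alpha=1.0):
    """Contextual MAB: scores each developer against the bug's features."""
    A = [np.eye(n_features) for _ in range(n_developers)]    # per-arm design matrix
    b = [np.zeros(n_features) for _ in range(n_developers)]  # per-arm reward vector
    total = 0.0
    for _ in range(n_rounds):
        x = rng.normal(size=n_features)          # new bug's context
        scores = []
        for a in range(n_developers):
            A_inv = np.linalg.inv(A[a])
            theta_hat = A_inv @ b[a]
            scores.append(theta_hat @ x + alpha * np.sqrt(x @ A_inv @ x))
        dev = int(np.argmax(scores))
        r = reward(dev, x)
        A[dev] += np.outer(x, x)
        b[dev] += r * x
        total += r
    return total / n_rounds

print("epsilon-greedy avg reward:", round(epsilon_greedy(), 3))
print("LinUCB avg reward:        ", round(linucb(), 3))
```

Because the contextual policy conditions its choice on the bug's features, it can distinguish which developer suits which kind of bug, whereas the simple policy can only learn a single best developer on average; this is the intuition behind the reward and regret comparisons reported in the experiments.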