Deep neural networks (DNNs) have recently emerged as the de facto standard for achieving strong results on a variety of real-world computer vision tasks. Nonetheless, trained networks frequently suffer from a well-known failure mode: they overfit to unintended biases in the dataset, which leads to unreliable predictions. To overcome this challenge, several studies have attempted to mitigate the bias by learning debiased representations from biased datasets; however, these approaches still produce unsatisfactory results because learning debiased representations is difficult when the dataset is highly biased. To address this problem, we propose a novel image-to-image translation framework, Biased Image Translation (BIT), that translates biased (bias-aligned) samples into bias-free (bias-conflicting) samples. BIT consists of three steps: 1) extracting bias-conflicting samples, 2) training and adapting generative models, and 3) translating bias-aligned samples into bias-conflicting samples with the generative models. As a result, BIT can generate bias-conflicting samples from highly biased datasets without any prior knowledge of the bias types. On bias benchmark datasets composed of synthetic and real-world images, we demonstrate that BIT successfully mitigates widespread bias issues by augmenting bias-conflicting samples through an image translation mechanism.
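For illustration only, the three-step pipeline can be summarized with the minimal Python sketch below. The abstract does not specify an implementation, so every name here (`split_by_bias`, `train_translator`, `augment`, the scoring threshold) is a hypothetical placeholder, and the generative translator is stubbed out rather than an actual image-to-image model.

```python
"""A minimal, hypothetical sketch of the three BIT steps, assuming an
auxiliary biased scorer is available to rank how bias-aligned each
sample is. All names and heuristics are illustrative placeholders."""

import random
from typing import Callable, List, Tuple

Sample = Tuple[list, int]  # (image features, label)


def split_by_bias(dataset: List[Sample],
                  score_bias_alignment: Callable[[Sample], float],
                  threshold: float = 0.5) -> Tuple[List[Sample], List[Sample]]:
    """Step 1: samples the biased scorer fits well are treated as
    bias-aligned; the remainder as bias-conflicting."""
    aligned = [s for s in dataset if score_bias_alignment(s) >= threshold]
    conflicting = [s for s in dataset if score_bias_alignment(s) < threshold]
    return aligned, conflicting


def train_translator(aligned: List[Sample], conflicting: List[Sample]):
    """Step 2: fit a generative image-to-image model that maps the
    bias-aligned domain to the bias-conflicting domain. Stubbed here
    as the identity map; a real model would restyle the input."""
    def translate(x):
        return x
    return translate


def augment(dataset: List[Sample],
            score_fn: Callable[[Sample], float]) -> List[Sample]:
    """Step 3: translate bias-aligned samples and add them, with their
    original labels, as synthetic bias-conflicting training data."""
    aligned, conflicting = split_by_bias(dataset, score_fn)
    translate = train_translator(aligned, conflicting)
    synthetic = [(translate(x), y) for x, y in aligned]
    return dataset + synthetic


if __name__ == "__main__":
    toy = [([random.random()], random.randint(0, 1)) for _ in range(8)]
    debiased = augment(toy, score_fn=lambda s: random.random())
    print(f"{len(toy)} original samples -> {len(debiased)} after augmentation")
```

In practice, step 1 is typically driven by an intentionally biased classifier (samples it misclassifies, i.e. with high loss, are taken as bias-conflicting), and step 2 would adapt a pretrained generative model to the small bias-conflicting set; the stub above only fixes the data flow between the three steps.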