Defining facial recognition technologies (FRT) as artificial intelligence emphasizes the supposed objectivity of machine-assisted decision-making. This creates the false impression that the results these technologies produce are free from the mistakes our eyes and minds often make, and from the stereotypes and prejudices we find difficult to overcome. However, AI is neither a detached technology nor a fully objective algorithm; it is code written by humans, and it follows the rules humans build into it. These rules may be directly or indirectly biased, or may reinforce inequalities that our societies continue to uphold. Moreover, FRT is deployed and operated at human discretion, which introduces further potential for human bias. This paper focuses on the challenges that arise because FRT used by law enforcement authorities can be affected by racial and other biases and prejudices, or may be deployed in ways that create grounds for bias and discrimination. It discusses how bias can be introduced into FRT software and how it may manifest during the use of FRT, producing unwanted side effects that negatively affect certain population groups. The paper considers whether and how these challenges can be overcome, focusing on data and social perspectives.