This paper argues for the need to develop emotion in social robots to enable them to become artificial moral agents. It considers four dimensions of this issue: what, why, which, and how. The main thesis is that we need to build not just emotional intelligence but also ersatz emotions into autonomous social robots. Moral sentimentalism and moral functionalism serve as the theoretical models. However, this paper argues that the popularly endorsed moral sentiment of empathy is the wrong model to implement in social robots. In its stead, I propose the four moral sentiments of Confucian moral sentimentalism (commiseration, shame/disgust, respect and deference, and the sense of right and wrong) as our starting point for the top-down affective structure of robot design.