As a new distributed machine learning framework, Federated Learning (FL) effectively addresses the problems of data silos and privacy protection in the field of artificial intelligence. However, owing to its independent devices, heterogeneous data, and unbalanced data distribution, FL is more vulnerable to adversarial attacks, especially backdoor attacks. In this paper, we investigate typical backdoor attacks in FL, including the model replacement attack and the adaptive backdoor attack. Based on the round at which an attack is initiated, we divide backdoor attacks into convergence-round attacks and early-round attacks. We then design two defense schemes: one based on model pre-aggregation and similarity measurement, which detects and removes backdoored models under convergence-round attacks, and one based on backdoor neuron activation, which removes the backdoor under early-round attacks. Experiments and performance analysis show that, compared with benchmark schemes, our similarity-measurement defense achieves the highest backdoor detection accuracy at the convergence round, with a 25% increase under the model replacement attack and a 67% increase under the adaptive backdoor attack, and its detection performance is the most stable. Compared with defenses based on participant-level differential privacy and adversarial training, our backdoor-neuron-activation defense rapidly removes the malicious effects of the backdoor under early-round attacks without reducing main-task accuracy. Our defense schemes therefore substantially improve the robustness of FL. Our key code is publicly available on GitHub: https://github.com/lsw3130104597/Backdoor_detection.
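The abstract only names the detection idea; as a concrete illustration of similarity-based filtering of client updates, here is a minimal Python sketch. It assumes cosine similarity between flattened client updates and a fixed outlier threshold; the function names, the pairwise-mean score, and the threshold value are illustrative assumptions, not the paper's exact procedure (the actual scheme, including the model pre-aggregation step, is in the linked repository).

```python
import numpy as np

def flatten(update):
    """Flatten a client's model update (a dict of layer arrays) into one vector."""
    return np.concatenate([np.asarray(v).ravel() for v in update.values()])

def detect_backdoor_updates(client_updates, threshold=0.5):
    """Flag clients whose update deviates from the others by cosine similarity.

    Illustrative sketch only: the pairwise-mean statistic and the fixed
    threshold are assumptions, not the paper's exact detection rule.
    """
    vecs = [flatten(u) for u in client_updates]
    n = len(vecs)
    scores = []
    for i in range(n):
        sims = []
        for j in range(n):
            if i == j:
                continue
            num = float(np.dot(vecs[i], vecs[j]))
            den = float(np.linalg.norm(vecs[i]) * np.linalg.norm(vecs[j])) + 1e-12
            sims.append(num / den)
        # Average cosine similarity of client i's update to all other updates.
        scores.append(np.mean(sims))
    # Updates that are unusually dissimilar to the rest are flagged as suspicious.
    return [i for i, s in enumerate(scores) if s < threshold]
```

In this sketch, a backdoored update that has been scaled for model replacement tends to point away from the benign updates in parameter space, so its average cosine similarity drops below that of honest clients and it is excluded from aggregation.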