Federated Learning (FL) enables numerous participants to train deep learning models collaboratively without exposing sensitive personal data. However, the distributed nature of FL and its reliance on unvetted participant data make it vulnerable to backdoor attacks, in which an adversary injects malicious functionality into the centralized model during training, causing desired misclassifications for specific adversary-chosen inputs. Prior works have established successful backdoor injection in FL systems; however, these backdoors have not been demonstrated to be long-lasting. The backdoor functionality does not survive once the adversary is prevented from participating in training, since the centralized model continuously mutates over successive FL rounds. This work proposes PerDoor, a persistent-by-construction backdoor injection technique for FL, driven by adversarial perturbation, that targets parameters of the centralized model which deviate less across successive FL rounds and contribute least to the main-task accuracy. An exhaustive evaluation on image classification scenarios shows that PerDoor achieves up to 8.2x greater persistence than state-of-the-art backdoor attacks in FL and demonstrates its potency against state-of-the-art backdoor prevention methods.