Event Extraction (EE) is a challenging Information Extraction task that aims to discover event triggers with specific types and their arguments. Most recent research on EE relies on pattern-based or feature-based approaches, trained on annotated corpora, to recognize combinations of event triggers, arguments, and other contextual information. These combinations may each appear in a variety of linguistic forms, and not all of these event expressions will have appeared in the training data, which adversely affects EE performance. In this paper, we demonstrate the overall effectiveness of Dependency Regularization techniques, which generalize the patterns extracted from the training data in order to boost EE performance. We present experimental results on the ACE 2005 corpus showing improvement over the baseline system, and analyze the impact of the individual regularization rules.