The disambiguation of syntactically ambiguous sentences can lead to reading difficulty, often referred to as a garden path effect. The surprisal hypothesis proposes that this difficulty can be accounted for by word predictability: words that are less predictable in context are harder to process. We tested this hypothesis using predictability estimates derived from two families of language models: grammar-based models, which explicitly encode the syntax of the language, and recurrent neural network (RNN) models, which do not. Both classes of models correctly predicted increased difficulty in ambiguous sentences relative to unambiguous controls, suggesting that the syntactic representations induced by RNNs are sufficient for this purpose. At the same time, surprisal estimates derived from all of the models systematically underestimated the magnitude of the effect and failed to predict the difference between the easier (NP/S) and harder (NP/Z) ambiguities. This suggests that garden path effects may not be reducible to word predictability.
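As a minimal sketch of the metric at issue (not of the paper's models), surprisal is the negative log probability of a word given its preceding context; the conditional probabilities below are invented for illustration only:

```python
import math

def surprisal(p):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(p)

# Hypothetical conditional probabilities of a disambiguating word
# under two made-up conditions (values are illustrative, not data).
p_ambiguous = 0.01   # disambiguating word is unexpected -> high surprisal
p_control = 0.20     # same word in an unambiguous control sentence

print(f"ambiguous: {surprisal(p_ambiguous):.2f} bits")   # higher
print(f"control:   {surprisal(p_control):.2f} bits")     # lower
```

Under the surprisal hypothesis, the difference between these two values should predict the extra reading time at the disambiguating word; the abstract's finding is that real model-derived estimates of this difference are too small.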