Recurrent neural networks (RNNs) have achieved impressive results in a variety of linguistic processing tasks, suggesting that they can induce non-trivial properties of language. We investigate here to what extent RNNs learn to track abstract hierarchical syntactic structure. We test whether RNNs trained with a generic language modeling objective in four languages (Italian, English, Hebrew, Russian) can predict long-distance number agreement in various constructions. We include in our evaluation nonsensical sentences where RNNs cannot rely on semantic or lexical cues ("The colorless green ideas I ate with the chair sleep furiously"), and, for Italian, we compare model performance to human intuitions. Our language-model-trained RNNs make reliable predictions about long-distance agreement, and do not lag much behind human performance. We thus bring support to the hypothesis that RNNs are not just shallow-pattern extractors, but that they also acquire deeper grammatical competence.