TY - JOUR
T1 - Evaluating pretrained transformer models for citation recommendation
AU - Nogueira, Rodrigo
AU - Jiang, Zhiying
AU - Cho, Kyunghyun
AU - Lin, Jimmy
N1 - Funding Information:
This research was supported in part by the Canada First Research Excellence Fund, the Natural Sciences and Engineering Research Council (NSERC) of Canada, NVIDIA, and eBay. Additionally, we would like to thank Google for computational resources in the form of Google Cloud credits.
Publisher Copyright:
© 2020 CEUR-WS. All rights reserved.
PY - 2020
Y1 - 2020
AB - Citation recommendation systems for the scientific literature, to help authors find papers that should be cited, have the potential to speed up discoveries and uncover new routes for scientific exploration. We treat this task as a ranking problem, which we tackle with a two-stage approach: candidate generation followed by re-ranking. Within this framework, we adapt to the scientific domain a proven combination based on “bag of words” retrieval followed by re-scoring with a BERT model. We experimentally show the effects of domain adaptation, both in terms of pretraining on in-domain data and exploiting in-domain vocabulary. In addition, we evaluate eleven pretrained transformer models and analyze some unexpected failure cases. On three different collections from different scientific disciplines, our models perform close to or at the state of the art in the citation recommendation task.
UR - http://www.scopus.com/inward/record.url?scp=85083307360&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85083307360&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85083307360
SN - 1613-0073
VL - 2591
SP - 89
EP - 100
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 10th International Workshop on Bibliometric-Enhanced Information Retrieval, BIR 2020
Y2 - 14 April 2020
ER -