LETOR: Learning to Rank for Information Retrieval

Established: January 1, 2009

LETOR is a package of benchmark data sets for research on LEarning TO Rank. It contains standard features, relevance judgments, data partitioning, evaluation tools, and several baselines. Version 1.0 was released in April 2007, version 2.0 in December 2007, and version 3.0 in December 2008. This version, 4.0, was released in July 2009. Unlike the earlier releases, each of which was an incremental update of its predecessor (V3.0 updated V2.0, and V2.0 updated V1.0), LETOR 4.0 is an entirely new release. It uses the Gov2 web page collection (~25M pages) and two query sets from the Million Query track of TREC 2007 and TREC 2008, referred to as MQ2007 and MQ2008 for short. MQ2007 contains about 1,700 queries with labeled documents, and MQ2008 contains about 800.
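The standard features and relevance judgments are distributed as plain-text feature files. As a minimal sketch, assuming the SVM-light-like row layout commonly used by LETOR releases (`<relevance> qid:<query id> 1:<f1> 2:<f2> ... # docid = <doc id>`), one row could be parsed like this; the function name and the sample row are illustrative, not taken from the data sets themselves:

```python
def parse_letor_line(line):
    """Parse one LETOR-style row into (relevance, qid, features, comment).

    Assumed layout: '<relevance> qid:<qid> 1:<v1> 2:<v2> ... # docid = <id>'
    """
    body, _, comment = line.partition("#")   # trailing '# docid = ...' is a comment
    tokens = body.split()
    relevance = int(tokens[0])               # graded relevance judgment
    qid = tokens[1].split(":", 1)[1]         # query identifier after 'qid:'
    features = {}
    for token in tokens[2:]:                 # remaining tokens are index:value pairs
        index, value = token.split(":", 1)
        features[int(index)] = float(value)
    return relevance, qid, features, comment.strip()

# Illustrative row (not from the actual data files):
row = "2 qid:10032 1:0.056537 2:0.000000 3:0.666667 # docid = GX029-35-5894638"
rel, qid, feats, note = parse_letor_line(row)
```

Grouping parsed rows by `qid` then reproduces the per-query document lists that learning-to-rank algorithms train on.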

  • LETOR 3.0 can be cited as:
    Tao Qin, Tie-Yan Liu, Jun Xu, and Hang Li. LETOR: A Benchmark Collection for Research on Learning to Rank for Information Retrieval. Information Retrieval Journal, 2010. [pdf]
  • LETOR 4.0 can be cited as:
    Tao Qin and Tie-Yan Liu. Introducing LETOR 4.0 Datasets. arXiv preprint arXiv:1306.2597. [pdf]





Tao Qin

Senior Principal Research Manager


Tie-Yan Liu

Distinguished Scientist, Microsoft Research AI4Science