OpenScholar_Reranker is a fine-tuned version of bge-reranker-large for scientific literature synthesis.

Model Description

  • Developed by: University of Washington, Allen Institute for AI (AI2)
  • Model type: a cross-encoder reranker fine-tuned from bge-reranker-large
  • Language(s) (NLP): English
  • License: The code and model are released under Apache 2.0.
  • Training data: The fine-tuning data was generated by Llama 3 70B from synthetically generated queries.
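A minimal usage sketch (an assumption, not part of this card), following the standard bge-reranker cross-encoder interface in Hugging Face transformers; the model scores each (query, passage) pair, with higher scores indicating higher relevance:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "OpenScholar/OpenScholar_Reranker"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Each input is a (query, passage) pair; the cross-encoder emits one
# relevance logit per pair, which can be used to rerank retrieved passages.
pairs = [
    ["What is retrieval-augmented generation?",
     "Retrieval-augmented generation combines a retriever with a language model."],
    ["What is retrieval-augmented generation?",
     "The mitochondria is the powerhouse of the cell."],
]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True,
                       return_tensors="pt", max_length=512)
    scores = model(**inputs).logits.view(-1).float()
print(scores)  # one relevance score per pair
```

Sorting passages by these scores in descending order yields the reranked list.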

Model Sources

Citation

If you find this model useful in your work, please cite our paper.

@article{openscholar,
  title={{OpenScholar}: Synthesizing Scientific Literature with Retrieval-Augmented Language Models},
  author={Asai, Akari and He*, Jacqueline and Shao*, Rulin and Shi, Weijia and Singh, Amanpreet and Chang, Joseph Chee and Lo, Kyle and Soldaini, Luca and Feldman, Sergey and D'Arcy, Mike and Wadden, David and Latzke, Matt and Tian, Minyang and Ji, Pan and Liu, Shengyan and Tong, Hao and Wu, Bohao and Xiong, Yanyu and Zettlemoyer, Luke and Weld, Dan and Neubig, Graham and Downey, Doug and Yih, Wen-tau and Koh, Pang Wei and Hajishirzi, Hannaneh},
  journal={arXiv preprint},
  year={2024},
}
Model details

  • Model size: 560M parameters
  • Tensor type: F32 (Safetensors)
