Model Description

RLRetriever is a retriever for repository-level code completion that disregards seemingly useful yet ultimately unhelpful reference code snippets, focusing instead on those more likely to contribute to accurate code generation.

  • Developed by: Sun Yat-sen University & Huawei Cloud Computing Technologies Co., Ltd.
  • Shared by: Hugging Face
  • Model type: Feature Engineering
  • Language(s) (NLP): en
  • License: Apache-2.0
  • Related Models:
    • Parent Model: RoBERTa
  • Resources for more information:
    • Paper: https://arxiv.org/abs/2407.19487

Citation

BibTeX:

@misc{wang2024rlcoderreinforcementlearningrepositorylevel,
      title={RLCoder: Reinforcement Learning for Repository-Level Code Completion}, 
      author={Yanlin Wang and Yanli Wang and Daya Guo and Jiachi Chen and Ruikai Zhang and Yuchi Ma and Zibin Zheng},
      year={2024},
      eprint={2407.19487},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2407.19487}, 
}

Get Started

Use the code below to get started with the model.

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nov3630/RLRetriever")
model = AutoModel.from_pretrained("nov3630/RLRetriever")
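Once the model is loaded, it can be used as a snippet retriever. The sketch below is one plausible way to do this, not necessarily the paper's exact pipeline: it mean-pools the encoder's token embeddings (a pooling choice assumed here) and ranks candidate snippets against a query by cosine similarity. The query and candidate strings are illustrative placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nov3630/RLRetriever")
model = AutoModel.from_pretrained("nov3630/RLRetriever")
model.eval()

def embed(texts):
    # Tokenize a batch of code strings and mean-pool the final hidden states,
    # masking out padding tokens. Pooling strategy is an assumption.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Illustrative query (unfinished code) and candidate reference snippets.
query = "def read_config(path):"
candidates = [
    "import json\n\ndef load_json(path):\n    with open(path) as f:\n        return json.load(f)",
    "class Logger:\n    def info(self, msg):\n        print(msg)",
]

q_emb = embed([query])          # shape (1, hidden_size)
c_emb = embed(candidates)       # shape (2, hidden_size)
scores = torch.nn.functional.cosine_similarity(q_emb, c_emb)
ranked = [candidates[i] for i in scores.argsort(descending=True)]
```

`ranked` then lists the candidate snippets from most to least relevant, ready to be prepended to the completion model's prompt.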
