---
language:
- en
license: apache-2.0
datasets:
- nov3630/Data4RLCoder
---

# Model Description

RLRetriever is a retriever for repository-level code completion. It disregards seemingly useful yet ultimately unhelpful reference code snippets and focuses on those more likely to contribute to accurate code generation.

- **Developed by:** Sun Yat-sen University & Huawei Cloud Computing Technologies Co., Ltd.
- **Shared by [Optional]:** Hugging Face
- **Model type:** Feature Engineering
- **Language(s) (NLP):** en
- **License:** Apache-2.0
- **Related Models:**
  - **Parent Model:** RoBERTa
- **Resources for more information:**
  - [Associated Paper](https://arxiv.org/abs/2407.19487)

# Citation

**BibTeX:**

```
@misc{wang2024rlcoderreinforcementlearningrepositorylevel,
      title={RLCoder: Reinforcement Learning for Repository-Level Code Completion},
      author={Yanlin Wang and Yanli Wang and Daya Guo and Jiachi Chen and Ruikai Zhang and Yuchi Ma and Zibin Zheng},
      year={2024},
      eprint={2407.19487},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2407.19487},
}
```

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModel

# Load the RLRetriever tokenizer and encoder weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("nov3630/RLRetriever")
model = AutoModel.from_pretrained("nov3630/RLRetriever")
```

</details>
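
Beyond loading the checkpoint, the sketch below shows one way the retriever could be used to rank candidate snippets against unfinished code. It assumes the checkpoint behaves as a RoBERTa-style encoder and uses the first ([CLS]) token embedding with cosine similarity; the pooling and scoring choices, as well as the example snippets, are illustrative assumptions rather than the exact setup from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nov3630/RLRetriever")
model = AutoModel.from_pretrained("nov3630/RLRetriever")
model.eval()

def embed(texts):
    # Tokenize a batch of code strings and encode them with the retriever.
    # Pooling via the first ([CLS]) token is an assumption; the paper's exact
    # pooling strategy may differ.
    inputs = tokenizer(texts, padding=True, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    cls = outputs.last_hidden_state[:, 0, :]
    return torch.nn.functional.normalize(cls, dim=-1)

# Hypothetical unfinished code (query) and candidate snippets from the repository
query = "def read_config(path):\n    # load a YAML config file\n"
candidates = [
    "def load_yaml(path):\n    with open(path) as f:\n        return yaml.safe_load(f)",
    "class Logger:\n    def info(self, msg):\n        print(msg)",
]

query_emb = embed([query])      # shape: (1, hidden_size)
cand_embs = embed(candidates)   # shape: (len(candidates), hidden_size)
scores = (query_emb @ cand_embs.T).squeeze(0)  # cosine similarity per candidate

# Rank candidates from most to least relevant to the unfinished code
for i in scores.argsort(descending=True).tolist():
    print(f"{scores[i].item():.3f}  {candidates[i].splitlines()[0]}")
```

In the RLCoder pipeline, the highest-scoring snippets would then be provided as context to a code completion model; see the associated paper for the full setup.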