---
language: en
tags:
- llama-2
- lora
- ranking
license: apache-2.0
---
# bytestorm/SKIM-orcas-kdd25
This repository provides a LoRA fine-tuned checkpoint of Llama-2-7b for ranking and retrieval tasks.
## Model Details
- **Base Model:** Llama-2-7b
- **Training Type:** LoRA fine-tuning
- **Task:** Ranking/Retrieval
- **Framework:** PyTorch
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("bytestorm/SKIM-orcas-kdd25")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```

If this repository ships only the LoRA adapter weights rather than a merged model, load the base model first and attach the adapter with `PeftModel.from_pretrained` from the `peft` library instead.
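Once loaded, a causal language model can be used for ranking by scoring each candidate document under the model. As a minimal sketch (not necessarily the exact scoring method used to train this checkpoint), the function below computes the average token log-likelihood from model logits; the function name and dummy tensors are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def sequence_log_likelihood(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Average log-probability of `labels` under next-token `logits`.

    logits: (seq_len, vocab_size) raw model outputs for a tokenized
    query+document prompt; labels: (seq_len,) target token ids.
    A higher value means the model finds the continuation more likely,
    which can serve as a simple relevance signal for ranking.
    """
    log_probs = F.log_softmax(logits, dim=-1)                       # (seq_len, vocab)
    token_scores = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return token_scores.mean()

# Rank candidate documents by score (higher = more relevant), using
# random tensors here in place of real model outputs and token ids.
scores = [
    sequence_log_likelihood(torch.randn(8, 32000), torch.randint(0, 32000, (8,)))
    for _ in range(3)
]
ranking = sorted(range(3), key=lambda i: scores[i], reverse=True)
```

In practice the logits would come from a forward pass of the model above over each tokenized query–document pair, and `ranking` orders the candidates by relevance.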