---
language: en
tags:
  - llama-2
  - lora
  - ranking
license: apache-2.0
---

# bytestorm/SKIM-orcas-kdd25

This is a LoRA fine-tuned checkpoint of Llama-2-7b for ranking and retrieval tasks.

## Model Details

- **Base Model:** Llama-2-7b
- **Training Type:** LoRA fine-tuning (see the loading sketch below)
- **Task:** Ranking/Retrieval
- **Framework:** PyTorch

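Because this is a LoRA checkpoint, the repository may ship PEFT adapter weights rather than a fully merged model; the card does not say which. If it is adapter-only, a typical loading path goes through `peft`, as in the sketch below; otherwise, the direct `transformers` load in the Usage section is enough.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the Llama-2-7b base weights, then attach the SKIM LoRA adapter on top.
# Assumes the repo contains PEFT adapter files -- not confirmed by this card.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "bytestorm/SKIM-orcas-kdd25")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```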
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the SKIM checkpoint; the tokenizer is the standard Llama-2-7b tokenizer.
model = AutoModelForCausalLM.from_pretrained("bytestorm/SKIM-orcas-kdd25")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```
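
The card does not document the prompt format SKIM was trained with, so any scoring recipe should be treated as an assumption. One common pointwise approach is to score each query-document pair by the log-probability of a "Yes" continuation; the sketch below follows that pattern with a hypothetical prompt template.

```python
import torch

def relevance_score(query: str, document: str) -> float:
    # Hypothetical prompt template -- replace with the format SKIM was actually trained on.
    prompt = f"Query: {query}\nDocument: {document}\nRelevant:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Use the log-probability that the next token is " Yes" as the relevance score.
    yes_id = tokenizer(" Yes", add_special_tokens=False).input_ids[0]
    return torch.log_softmax(logits[0, -1], dim=-1)[yes_id].item()

# Rank candidate passages for a query by descending score.
docs = ["LoRA adds trainable low-rank matrices to frozen weights.", "Bananas are yellow."]
ranked = sorted(docs, key=lambda d: relevance_score("What is LoRA?", d), reverse=True)
```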