
Quantization made by Richard Erkhov.

Github

Discord

Request more models

orpo-lora-phi2 - GGUF

Original model description:

---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
base_model: microsoft/phi-2
pipeline_tag: text-generation
---

outputs

This model is a fine-tuned version of microsoft/phi-2, trained with the trl library on the UltraFeedback dataset.

What's new

A test of the ORPO method (Monolithic Preference Optimization without Reference Model) using the trl library.
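For intuition, the distinguishing piece of ORPO is an odds-ratio penalty added to the standard supervised loss, so no separate reference model is needed. The sketch below (hypothetical function names, not trl's internal API; trl's `ORPOTrainer` computes this from batched token log-probabilities) shows the odds-ratio term for a single preference pair, where `odds(p) = p / (1 - p)`:

```python
import math

def orpo_odds_ratio_loss(logp_chosen: float, logp_rejected: float) -> float:
    """Odds-ratio term of the ORPO loss for one preference pair.

    logp_chosen / logp_rejected are the model's average per-token
    log-probabilities for the chosen and rejected responses.
    """
    def log_odds(logp: float) -> float:
        # log odds(p) = log p - log(1 - p)
        p = math.exp(logp)
        return logp - math.log(1.0 - p)

    # Loss is -log sigmoid(log odds(chosen) - log odds(rejected)),
    # written via log1p(exp(-z)) for numerical stability.
    z = log_odds(logp_chosen) - log_odds(logp_rejected)
    return math.log1p(math.exp(-z))

# The penalty shrinks as the model favors the chosen response more strongly.
loss_good = orpo_odds_ratio_loss(math.log(0.6), math.log(0.2))
loss_bad = orpo_odds_ratio_loss(math.log(0.2), math.log(0.6))
```

In the full objective this term is scaled by a weight (trl's `--beta`, default 0.1) and added to the negative log-likelihood on the chosen response.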

How to reproduce

```shell
accelerate launch --config_file=/path/to/trl/examples/accelerate_configs/deepspeed_zero2.yaml \
    --num_processes 8 \
    /path/to/trl/scripts/orpo.py \
    --model_name_or_path="microsoft/phi-2" \
    --per_device_train_batch_size 1 \
    --max_steps 8000 \
    --learning_rate 8e-5 \
    --gradient_accumulation_steps 1 \
    --logging_steps 20 \
    --eval_steps 2000 \
    --output_dir="orpo-lora-phi2" \
    --optim rmsprop \
    --warmup_steps 150 \
    --bf16 \
    --logging_first_step \
    --no_remove_unused_columns \
    --use_peft \
    --lora_r=16 \
    --lora_alpha=16 \
    --dataset HuggingFaceH4/ultrafeedback_binarized
```
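To try one of the GGUF quantizations locally, a llama.cpp invocation along these lines should work (the file name below is illustrative; substitute the quantization level you actually downloaded):

```shell
# Hypothetical usage: run a downloaded GGUF file with llama.cpp's CLI.
# The exact file name depends on which quantization you fetched.
./llama-cli \
  -m orpo-lora-phi2.Q4_K_M.gguf \
  -p "Instruct: Explain preference optimization in one sentence.\nOutput:" \
  -n 128
```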
GGUF
Model size: 2.78B params
Architecture: phi2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
