Model Card

Llama 2 13B CT-OE is a fine-tuned Llama 2 13B model that provides well-calibrated confidence estimates for open-ended question answering.

The model is fine-tuned (calibration-tuned) using a dataset of open-ended generations from meta-llama/Llama-2-13b-hf, labeled for correctness. At test/inference time, the probability of correctness defines the confidence of the model in its answer. For full details, please see our paper and supporting code.
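To make "the probability of correctness defines the confidence" concrete, here is a small illustration of turning a two-way correctness score into a confidence value via softmax. The two logits stand in for the model's scores on a yes/no-style correctness query; the exact query format is defined in the paper and supporting code, so this is only a schematic of the final probability step.

```python
import math

def confidence_from_logits(logit_correct: float, logit_incorrect: float) -> float:
    """Softmax over a two-way correctness query -> P(answer is correct)."""
    m = max(logit_correct, logit_incorrect)  # subtract max for numerical stability
    e_c = math.exp(logit_correct - m)
    e_i = math.exp(logit_incorrect - m)
    return e_c / (e_c + e_i)

# Equal logits give an uncommitted confidence of 0.5:
print(confidence_from_logits(0.0, 0.0))  # 0.5
```

A well-calibrated model is one where, among answers assigned confidence p, roughly a fraction p are actually correct; calibration-tuning fine-tunes the adapter toward that property.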

Other Models: We also release a broader collection of Open-Ended CT Models.

Usage

This adapter model is meant to be used on top of generations from the meta-llama/Llama-2-13b-hf base model.

The confidence estimation pipeline follows these steps:

  1. Load base model and PEFT adapter.
  2. Disable adapter and generate answer.
  3. Enable adapter and generate confidence.

All standard guidelines for the base model's generation apply.

For a complete example, see play.py in the supporting code repository.

NOTE: Using the adapter for answer generation may hurt both downstream task accuracy and the confidence estimates. We recommend using the adapter only to estimate confidence.

License

The model is released under the Llama 2 Community License Agreement, the license of the original base model.

