---
license: mit
---

# Model Card for float-7b

This model is a fully fine-tuned version of the Llama-7B model on synthetically generated arithmetic tasks. It was introduced in the paper [Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking](https://arxiv.org/abs/2402.14811). It is very similar to Goat-7B, except that it was trained without LoRA.

For inquiries about checkpoints saved during the fine-tuning process, please reach out to Nikhil via email.

## Model Details

### Model Description

- **Developed by:** Nikhil Prakash
- **Model type:** Autoregressive decoder-only language model
- **License:** MIT
- **Finetuned from model:** Llama-7B

## Model Sources

- **Paper:** [Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking](https://arxiv.org/abs/2402.14811)

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load with the causal-LM head so the model can be used for generation.
tokenizer = AutoTokenizer.from_pretrained("nikhil07prakash/float-7b")
model = AutoModelForCausalLM.from_pretrained("nikhil07prakash/float-7b")
```
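
For example, here is a minimal sketch of prompting the model on an arithmetic task; the prompt string and decoding settings are illustrative assumptions, not from the original card:

```python
# Hypothetical arithmetic prompt; the model was fine-tuned on synthetic arithmetic tasks.
prompt = "24 + 38 = "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```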

## Citation

BibTeX:

```bibtex
@misc{prakash2024finetuning,
  title={Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking},
  author={Nikhil Prakash and Tamar Rott Shaham and Tal Haklay and Yonatan Belinkov and David Bau},
  year={2024},
  eprint={2402.14811},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```