---
license: apache-2.0
language:
  - en
library_name: adapter-transformers
---

I followed this script to train this model.

Instead of the official meta-llama/Llama-2-7b-hf model, I used the NousResearch/Llama-2-7b-hf repo as the base.

The model was trained on the lvwerra/stack-exchange-paired dataset.

Training parameters:

- seq_length: 1024
- steps: 1600
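
The setup above can be sketched with TRL's `SFTTrainer`. This is a minimal, hedged reconstruction, not the exact training script: only the base model, dataset, `seq_length`, and `steps` come from this card; the prompt template and all other hyperparameters are illustrative assumptions.

```python
# Sketch of the fine-tuning setup described in this card, using TRL's SFTTrainer.
# Only model_name, dataset_name, seq_length, and max_steps come from the card;
# everything else (prompt template, optimizer defaults) is an assumption.

config = {
    "model_name": "NousResearch/Llama-2-7b-hf",       # base model (from the card)
    "dataset_name": "lvwerra/stack-exchange-paired",  # training data (from the card)
    "seq_length": 1024,                               # from the card
    "max_steps": 1600,                                # from the card
}


def format_example(ex):
    # Hypothetical prompt template pairing a question with its preferred
    # answer (response_j); the original script's template may differ.
    return f"Question: {ex['question']}\n\nAnswer: {ex['response_j']}"


def main():
    # Heavy dependencies are imported lazily so the config above can be
    # inspected without transformers/trl installed.
    from datasets import load_dataset
    from transformers import AutoModelForCausalLM
    from trl import SFTConfig, SFTTrainer

    dataset = load_dataset(config["dataset_name"], split="train")
    model = AutoModelForCausalLM.from_pretrained(config["model_name"])

    trainer = SFTTrainer(
        model=model,
        train_dataset=dataset,
        formatting_func=format_example,
        args=SFTConfig(
            max_steps=config["max_steps"],
            max_seq_length=config["seq_length"],
        ),
    )
    trainer.train()


if __name__ == "__main__":
    main()
```

Running this end to end requires a GPU and will download the 7B base model; the `config` dict and `format_example` can be inspected without either.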