---
datasets:
- EleutherAI/pile
language:
- en
---
# Model Card
This model is a standard attention model (Llama architecture) pretrained on 30B tokens of the Pile corpus.
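As a minimal sketch of the architecture named above, a tiny Llama-configuration model can be instantiated with 🤗 Transformers; note that all configuration values below are illustrative toy sizes, not the actual hyperparameters of this checkpoint:

```python
import torch
from transformers import LlamaConfig, LlamaForCausalLM

# Toy configuration for illustration only -- NOT this model's real hyperparameters.
config = LlamaConfig(
    vocab_size=1000,
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=4,
    max_position_embeddings=128,
)
model = LlamaForCausalLM(config)

# A forward pass over a random batch yields next-token logits of shape
# (batch, sequence_length, vocab_size).
input_ids = torch.randint(0, config.vocab_size, (1, 16))
with torch.no_grad():
    out = model(input_ids)
print(tuple(out.logits.shape))
```

The actual pretrained weights and configuration should be obtained from the training repository linked below.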
### Model Sources
The model implementation and the training code used to produce this model are available at: https://github.com/HazyResearch/based