---
language: en
license: mit
tags:
- causal-lm
datasets:
- The_Pile
---

### Quantized EleutherAI/gpt-neo-2.7B with 8-bit weights

This is a version of [EleutherAI's GPT-Neo](https://huggingface.co/EleutherAI/gpt-neo-2.7B) with 2.7 billion parameters, modified so you can generate **and fine-tune the model in Colab or on an equivalent desktop GPU (e.g. a single 1080 Ti)**. Inspired by [GPT-J 8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit).
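The core idea behind the GPT-J 8bit recipe this model follows is to store each weight matrix as 8-bit integers with a per-row scale and offset, dequantizing to float on the fly during the forward pass, which cuts weight memory roughly 4x versus fp32. A minimal NumPy sketch of that quantize/dequantize round-trip (the function names here are illustrative, not taken from the actual notebook):

```python
import numpy as np

def quantize_8bit(w: np.ndarray):
    """Per-row asymmetric quantization of a float32 matrix to uint8."""
    lo = w.min(axis=1, keepdims=True)
    hi = w.max(axis=1, keepdims=True)
    scale = (hi - lo) / 255.0
    scale[scale == 0] = 1.0          # guard rows where all values are equal
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_8bit(q, scale, lo):
    """Recover an approximate float32 matrix for the forward pass."""
    return q.astype(np.float32) * scale + lo

w = np.random.randn(4, 8).astype(np.float32)
q, scale, lo = quantize_8bit(w)
w_hat = dequantize_8bit(q, scale, lo)
print(np.abs(w - w_hat).max())  # bounded by half a quantization step per row
```

At 8 bits per parameter, the 2.7B weights fit in roughly 2.7 GB, which is what makes an 11 GB card like the 1080 Ti workable; the real notebook additionally handles fine-tuning (e.g. with adapters), which this sketch does not cover.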

Here's how to run it: [Open in Colab](https://colab.research.google.com/drive/1lMja-CPc0vm5_-gXNXAWU-9c0nom7vZ9)

## Model Description

GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.

## Links

* [EleutherAI](https://www.eleuther.ai)
* [Hivemind](https://training-transformers-together.github.io/)
* [Gustave Cortal](https://twitter.com/gustavecortal)