Update README.md
README.md CHANGED
@@ -3,12 +3,15 @@ license: apache-2.0
---
# NEO

+[🤗Neo-Models](https://huggingface.co/collections/m-a-p/neo-models-66395a5c9662bb58d5d70f04) | [🤗Neo-Datasets](https://huggingface.co/collections/m-a-p/neo-models-66395a5c9662bb58d5d70f04) | [Github](https://github.com/multimodal-art-projection/MAP-NEO)
+
Neo is a completely open-source large language model: the code, all model weights, the datasets used for training, and the training details are all released.

## Model

| Model | Description | Download |
|---|---|---|
+neo_7b | Base model of the neo_7b series. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b)
neo_7b_intermediate | Intermediate checkpoints from regular pre-training; a total of 3.7T tokens were consumed in this phase. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b_intermediate)
neo_7b_decay | Intermediate checkpoints from the decay phase; a total of 720B tokens were consumed in this phase. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b_decay)
neo_scalinglaw_980M | Checkpoints from the scaling-law experiments. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_980M)
@@ -21,7 +24,7 @@ neo_2b_general | This repo contains ckpts of 2b model trained using common domai
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

-model_path = '<your-hf-model-path>'
+model_path = '<your-hf-model-path-with-tokenizer>'

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)
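The second hunk's window cuts off right after the tokenizer is loaded. For context, here is a minimal end-to-end sketch of how such a checkpoint is typically loaded and queried with the standard `transformers` API; the concrete checkpoint name, prompt, and generation settings are illustrative and not part of the diff.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; any repo from the table above should work the same way.
model_path = 'm-a-p/neo_7b'

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype='auto', trust_remote_code=True)

prompt = 'The key ingredients of a fully open-source LLM release are'
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)

# Short greedy generation for illustration; tune max_new_tokens and sampling as needed.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The `use_fast=False` and `trust_remote_code=True` arguments are carried over from the snippet in the diff above.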