This model is a fine-tuned model for chat based on [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b) with **max_seq_length=2048**, trained on the [instruction-dataset-for-neural-chat-v1](https://huggingface.co/datasets/Intel/neural-chat-dataset-v1), [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3), and [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) datasets.
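As a quick sanity check, the checkpoint can be loaded with Hugging Face `transformers`. This is a minimal sketch, assuming the hub repository id `Intel/neural-chat-7b-v1.1` and that, as with the MPT-7B base model, the custom modeling code requires `trust_remote_code=True`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hub id assumed from this model card; adjust if the repository path differs.
model_name = "Intel/neural-chat-7b-v1.1"

# MPT-based checkpoints ship custom modeling code, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Generate a short chat-style completion.
inputs = tokenizer("What is the capital of France?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that loading a 7B-parameter model requires substantial RAM; passing `torch_dtype=torch.bfloat16` to `from_pretrained` roughly halves the memory footprint on supported hardware.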
## Model date
Neural-chat-7b-v1.1 was trained between June and July 2023.
## Evaluation
We use the same evaluation metrics as the [open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), which uses the [Eleuther AI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/master), a unified framework for testing generative language models on a large number of different evaluation tasks.
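For reference, an evaluation of this kind can be launched from a checkout of the harness's `master` branch. This is a hedged sketch, not the exact command used here: the hub id, task list, and batch size are illustrative assumptions, and the CLI interface differs between harness versions:

```shell
# From a clone of EleutherAI/lm-evaluation-harness (master branch).
# Model id and task selection below are illustrative, not the exact setup used.
python main.py \
    --model hf-causal \
    --model_args pretrained=Intel/neural-chat-7b-v1.1,trust_remote_code=True \
    --tasks arc_challenge,hellaswag \
    --batch_size 8
```

Consult the harness README for the flags supported by your checked-out revision, since argument names have changed across releases.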