Commit 976e210 by emrgnt-cmplxty
Parent(s): 1d0a7f0
Update README.md

README.md CHANGED
@@ -6,7 +6,7 @@ license: mit
 
 The SciPhi-Mistral-7B-32k is a Large Language Model (LLM) fine-tuned from Mistral-7B-v0.1. This model underwent a fine-tuning process over four epochs using more than 1 billion tokens, which include regular instruction tuning data and synthetic textbooks. The objective of this work was to increase the model's scientific reasoning and educational abilities. For best results, follow the Alpaca prompting guidelines.
 
-SciPhi-AI is available via a free hosted API, though the exposed model can vary. More details
+SciPhi-AI is available via a free hosted API, though the exposed model can vary. Currently, SciPhi-Self-RAG-Mistral-7B-32k is available. More details can be found in the docs [here](https://sciphi.readthedocs.io/en/latest/setup/quickstart.html).
 
 ## Model Architecture
 
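
The README text above recommends following the Alpaca prompting guidelines. Below is a minimal sketch of that prompt format using Hugging Face `transformers`; the hub repo id `SciPhi/SciPhi-Mistral-7B-32k`, the example instruction, and the generation settings are assumptions for illustration, not part of this commit.

```python
# Minimal sketch: load the model with transformers and prompt it in the
# standard Alpaca (no-input) format recommended by the README.
# The repo id below is assumed from the model name; adjust if the hub path differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "SciPhi/SciPhi-Mistral-7B-32k"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Standard Alpaca instruction template (no-input variant).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain the difference between covalent and ionic bonds.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

The no-input Alpaca template is shown here; for tasks that supply additional context, the variant with an `### Input:` section between the instruction and the response applies.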
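The updated line also points readers to the free hosted API, whose concrete endpoint and authentication scheme are described only in the linked quickstart docs. The snippet below is therefore a hypothetical sketch: the base URL, the JSON payload fields, and the `SCIPHI_API_KEY` environment variable are placeholders, not the service's actual interface.

```python
# Hypothetical sketch of calling a hosted completion endpoint over HTTP.
# BASE_URL, the payload schema, and the API-key variable are placeholders;
# consult https://sciphi.readthedocs.io/en/latest/setup/quickstart.html for the real API.
import os
import requests

BASE_URL = "https://api.example-sciphi-host.com/v1/completions"  # placeholder, not the real endpoint

payload = {
    "model": "SciPhi-Self-RAG-Mistral-7B-32k",  # model named in the README update
    "prompt": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\nSummarize the ideal gas law.\n\n"
        "### Response:\n"
    ),
    "max_tokens": 256,
}
headers = {"Authorization": f"Bearer {os.environ.get('SCIPHI_API_KEY', '')}"}  # hypothetical key variable

resp = requests.post(BASE_URL, json=payload, headers=headers, timeout=60)
resp.raise_for_status()
print(resp.json())
```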