emrgnt-cmplxty committed on
Commit 1d0a7f0
Parent(s): 73249df

Update README.md

Files changed (1): README.md +4 -3
README.md CHANGED
@@ -1,8 +1,9 @@
+---
+license: mit
+---
 
 # SciPhi-Mistral-7B-32k Model Card
 
-**License:** llama2
-
 The SciPhi-Mistral-7B-32k is a Large Language Model (LLM) fine-tuned from Mistral-7B-v0.1. This model underwent a fine-tuning process over four epochs using more than 1 billion tokens, which include regular instruction tuning data and synthetic textbooks. The objective of this work was to increase the model's scientific reasoning and educational abilities. For best results, follow the Alpaca prompting guidelines.
 
 SciPhi-AI is available via a free hosted API, though the exposed model can vary. More details are available in the docs [here](https://sciphi.readthedocs.io/en/latest/setup/quickstart.html).
@@ -29,4 +30,4 @@ Base Model: Mistral-7B-v0.1
 
 ## Acknowledgements
 
-Thank you to the [AI Alignment Lab](https://huggingface.co/Alignment-Lab-AI), [vikp](https://huggingface.co/vikp), [jph00](https://huggingface.co/jph00) and others who contributed to this work.
+Thank you to the [AI Alignment Lab](https://huggingface.co/Alignment-Lab-AI), [vikp](https://huggingface.co/vikp), [jph00](https://huggingface.co/jph00) and others who contributed to this work.
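
For readers unfamiliar with the Alpaca prompting guidelines the README points to, the sketch below shows one way to prompt the model with Hugging Face Transformers. The repository id `SciPhi/SciPhi-Mistral-7B-32k`, the example instruction, and the generation settings are illustrative assumptions and are not part of this commit.

```python
# Minimal sketch, assuming the model is published as "SciPhi/SciPhi-Mistral-7B-32k".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SciPhi/SciPhi-Mistral-7B-32k"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the `accelerate` package; omit it to load on CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Alpaca-style prompt: a short preamble, an "### Instruction:" block, then an empty "### Response:" block.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain the photoelectric effect in two sentences.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```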