Merge branch 'main' of https://huggingface.co/Intel/neural-chat-7b-v3-1 into main
README.md CHANGED
@@ -2,7 +2,7 @@
 license: apache-2.0
 ---
 
-##
+## Fine-tuning on [Habana](https://habana.ai/) Gaudi2
 
 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), trained on the open-source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) and then aligned with the DPO algorithm. For more details, refer to our blog: [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Habana Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3).
 
@@ -75,4 +75,3 @@ The NeuralChat team with members from Intel/SATG/AIA/AIPT. Core team members: Ka
 ## Useful links
 * Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
 * Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)
-* Intel Extension for PyTorch [link](https://github.com/intel/intel-extension-for-pytorch)
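The change above documents SFT on SlimOrca followed by DPO alignment. As a minimal sketch of using the resulting checkpoint (assuming only the standard Hugging Face `transformers` loading API; the plain-text prompt is illustrative and not taken from this commit), one might run:

```python
# Minimal sketch: load the fine-tuned, DPO-aligned checkpoint for inference.
# Assumes the standard transformers AutoModelForCausalLM API; the plain-text
# prompt below is illustrative, and the model card may define a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Intel/neural-chat-7b-v3-1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

inputs = tokenizer("What is Direct Preference Optimization?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```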