Update README.md
README.md CHANGED
@@ -252,6 +252,8 @@ This model also features Grouped Query Attention (GQA) so that memory usage scal
Instruction fine tuning was performed with a combination of supervised fine-tuning (SFT), rejection sampling, proximal policy optimization (PPO), and direct preference optimization (DPO).

+Check out their blog post for more information [here](https://ai.meta.com/blog/meta-llama-3/).
+
## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
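Of the fine-tuning methods named in the changed paragraph, DPO is the easiest to show concretely. Below is a minimal sketch of the per-preference-pair DPO loss, not Meta's implementation; the function name and the `beta` default are illustrative assumptions, and the inputs are summed log-probabilities of each response under the policy and a frozen reference model.

```python
import math


def dpo_loss(pi_logp_chosen, pi_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Sketch of the DPO loss for one (chosen, rejected) pair.

    beta (illustrative default) controls how strongly the policy is
    pulled away from the reference model's preferences.
    """
    # Implicit reward of each response: how much more likely the policy
    # makes it compared to the frozen reference model.
    chosen_margin = pi_logp_chosen - ref_logp_chosen
    rejected_margin = pi_logp_rejected - ref_logp_rejected

    # Loss = -log sigmoid(beta * (chosen_margin - rejected_margin)),
    # computed as softplus(-x) for numerical stability.
    logits = beta * (chosen_margin - rejected_margin)
    return math.log1p(math.exp(-logits))
```

When the policy prefers the chosen response more than the reference does, the margin is positive and the loss falls below `log(2)`; with no preference at all, the loss sits exactly at `log(2)`.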