It includes an extended tokenizer that was pretrained to better leverage tokens
Dynamo 8B has not been instruction fine-tuned and has not undergone alignment techniques such as reinforcement learning from human feedback (RLHF). The model is intended to provide the research community with a base for exploring the multilingual capabilities that enable widespread use of LLMs globally.

For additional details, please refer to our [blog post](https://www.dynamofl.com/blogs/introducing-dynamo-8b-a-multilingual-foundation-model-for-global-enterprises).

# Model Specifications: