Update README.md
README.md (changed):

```diff
@@ -15,6 +15,7 @@ tags:
 - Math
 - text-generation-inference
 - Math-CoT
+- Deep-think
 ---

 # **Deepthink-Reasoning-14B**
@@ -85,4 +86,4 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 4. **Long-Context Limitations:** Although it supports up to 128K tokens, performance may degrade or exhibit inefficiencies with extremely lengthy or complex contexts.
 5. **Bias in Outputs:** The model might reflect biases present in its training data, affecting its objectivity in certain contexts or cultural sensitivity in multilingual outputs.
 6. **Dependence on Prompt Quality:** Results heavily depend on well-structured and clear inputs. Poorly framed prompts can lead to irrelevant or suboptimal responses.
-7. **Error in Multilingual Output:** Despite robust multilingual support, subtle errors in grammar, syntax, or cultural nuances might appear, especially in low-resource languages.
+7. **Error in Multilingual Output:** Despite robust multilingual support, subtle errors in grammar, syntax, or cultural nuances might appear, especially in low-resource languages.
```
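For reference, after the tags hunk is applied, the `tags:` list in the model card's YAML front matter would read as follows (a sketch reconstructed from the diff context; any surrounding front-matter keys are omitted, since they do not appear in this diff):

```yaml
tags:
- Math
- text-generation-inference
- Math-CoT
- Deep-think
```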