Update README.md
README.md
CHANGED
@@ -53,4 +53,18 @@ generated_ids = [
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

# **Intended Use:**
QwQ-SuperNatural-3B is designed for:
1. **Role-play and interactive chatbots:** It excels at generating contextually relevant and engaging supernatural-themed responses (see the usage sketch after this list).
2. **Long-form content generation:** Its capability to handle over 8,000 tokens makes it suitable for generating detailed narratives, articles, or creative writing.
3. **Structured data understanding:** The model can process and interpret structured inputs such as tables, schemas, and JSON formats, making it useful for data-driven applications.
4. **Dynamic prompt responses:** Its resilience to diverse prompts makes it ideal for applications requiring adaptable behavior, such as virtual assistants and domain-specific simulations.
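
To make the role-play use case concrete, here is a minimal sketch that reuses the `model` and `tokenizer` objects from the quick-start snippet above; the persona, prompt, and sampling settings are illustrative assumptions rather than recommended defaults.

```python
# Assumes `model` and `tokenizer` are already loaded as in the snippet above.
# The persona and sampling settings are illustrative assumptions, not tuned defaults.
messages = [
    {"role": "system", "content": "You are a centuries-old ghost who narrates events in a haunted manor."},
    {"role": "user", "content": "Describe what you witnessed in the library last night."},
]

# Build the chat-formatted prompt and tokenize it.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a response and strip the prompt tokens before decoding.
generated_ids = model.generate(**model_inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```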

# **Limitations:**
1. **Domain specificity:** While fine-tuned for supernatural contexts, its general knowledge might be less accurate or nuanced outside this domain.
2. **Token constraints:** Although capable of generating long texts, extremely large inputs or outputs might exceed processing limits.
3. **Bias and creativity trade-offs:** The model may reflect biases present in its training data and could produce less creative or diverse outputs in domains where it lacks fine-tuning.
4. **Reliance on input clarity:** Ambiguous or poorly structured prompts can lead to less coherent or contextually accurate responses.
5. **Computational requirements:** Handling a model with 3 billion parameters requires significant computational resources, which may limit its accessibility for smaller-scale applications (see the reduced-precision loading sketch after this list).
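
To work within the computational requirements noted above, here is a minimal sketch of loading the checkpoint in reduced precision; the repository id below is a placeholder assumption, and `device_map="auto"` requires the `accelerate` package to be installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id (assumption); replace with the actual
# QwQ-SuperNatural-3B checkpoint path.
model_id = "QwQ-SuperNatural-3B"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# bfloat16 weights use roughly half the memory of float32, and
# device_map="auto" places layers on the available GPU(s) with CPU fallback.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```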