tags:
- text-generation-inference
---

# Falcon-RW-1B-Instruct-OpenOrca

Falcon-RW-1B-Instruct-OpenOrca is a 1B-parameter, causal decoder-only model based on [Falcon-RW-1B](https://huggingface.co/tiiuae/falcon-rw-1b) and finetuned on the [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset.

**Motivations**

1. To create a smaller, open-source, instruction-finetuned, ready-to-use model that is accessible to users with limited computational resources and runs well on lower-end consumer GPUs.
2. To harness the strengths of Falcon-RW-1B, a competitive model in its own right, and enhance its capabilities with instruction finetuning.
## How to Use

The model operates with a structured prompt format, incorporating `<SYS>`, `<INST>`, and `<RESP>` tags to demarcate different parts of the input. The system message and instruction are placed within these tags, with the `<RESP>` tag triggering the model's response.

### Example Code
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
```