Update README.md

README.md (changed)
A lightweight model (2.8B) with enhanced RAG capabilities and a lower risk of hallucinations.
This is a DPO fine-tune of the Phi-2 architecture (in particular, dolphin-2_6-phi-2) over the dataset https://huggingface.co/datasets/jondurbin/contextual-dpo-v0.1.

## Usage

Load the model as

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "vicgalle/phi-2-contextual",
    torch_dtype="auto",        # pick the dtype stored in the checkpoint
    load_in_4bit=True,         # 4-bit quantization to reduce memory use
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "cognitivecomputations/dolphin-2_6-phi-2",
    trust_remote_code=True,
)
```

and use the following prompt template.
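As a rough sketch of how a prompt string can be assembled: the dolphin-2_6-phi-2 base model follows the ChatML convention, so a RAG-style prompt that packs retrieved context into the user turn might be built as below. The `build_chatml_prompt` helper and its system/context wording are illustrative assumptions, not the model card's official template (which is given in the Prompt format section).

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt, the convention used by the
    dolphin-2_6-phi-2 base model (illustrative sketch only)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Hypothetical usage: the context string would come from your retriever.
prompt = build_chatml_prompt(
    "You are a helpful assistant. Answer only from the provided context.",
    "Context: <retrieved passages here>\n\nQuestion: <user question here>",
)
```

The assembled `prompt` would then be tokenized and passed to `model.generate`.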
## Prompt format