Update README.md
README.md CHANGED

@@ -37,6 +37,14 @@ Please note this is an experimental GPTQ model. Support for it is currently quit
 
 It is also expected to be **SLOW**. This is currently unavoidable, but is being looked at.
 
+## Prompt template
+
+```
+A helpful assistant who helps the user with any questions asked.
+User: prompt
+Assistant:
+```
+
 ## AutoGPTQ
 
 AutoGPTQ is required: `pip install auto-gptq`
@@ -113,8 +121,9 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
         quantize_config=None)
 
 prompt = "Tell me about AI"
-prompt_template=f'''
-
+prompt_template=f'''A helpful assistant who helps the user with any questions asked.
+User: {prompt}
+Assistant:'''
 
 print("\n\n*** Generate:")
 
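The `prompt_template` lines added in this diff can be sketched as a small standalone helper. Only the template text and the `prompt` variable come from the diff; the `build_prompt` function name is hypothetical, and model loading (the AutoGPTQ part) is omitted:

```python
def build_prompt(prompt: str) -> str:
    # Mirrors the f-string template added in this commit: system line,
    # then the user's question, then the "Assistant:" cue that the model
    # continues from during generation.
    return f'''A helpful assistant who helps the user with any questions asked.
User: {prompt}
Assistant:'''

prompt = "Tell me about AI"
print(build_prompt(prompt))
```

The trailing `Assistant:` with no newline is deliberate: the model's completion is appended directly after the cue.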