Update README.md
README.md
CHANGED
@@ -45,6 +45,8 @@ The model performs exceptionally well on writing, explanation and discussion tas
    "average": 6.8875
}
```

+ Have a look at some examples [in this Google Doc](https://docs.google.com/document/d/1SAAikkPAF4oLoFISqE0P1mRL5OUk8l2pI90zZC4bP1E/edit?usp=sharing).
+

## Model Details

@@ -114,6 +116,9 @@ in advance, and the model may in some instances produce inaccurate, biased or ot
to user prompts. Therefore, before deploying any applications of `LeoLM/leo-hessianai-70b-chat`, developers should
perform safety testing and tuning tailored to their specific applications of the model.

+ We are aware of the model refusing to answer more often than desired. This will be addressed in future versions. For now, the training
+ dataset is equal to that used for our smaller chat variants.
+
Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).

## Finetuning Details
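For context on the checkpoint this commit documents, the sketch below shows roughly how `LeoLM/leo-hessianai-70b-chat` can be loaded and prompted. It is a minimal sketch, not taken from the diff above: it assumes the standard 🤗 Transformers text-generation pipeline and the ChatML prompt format used by the LeoLM chat variants, and the sampling parameters are illustrative only.

```python
# Minimal usage sketch (assumptions: standard transformers text-generation pipeline,
# ChatML prompt format as used by the LeoLM chat variants; sampling values are illustrative).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="LeoLM/leo-hessianai-70b-chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires `accelerate`; shards the 70B weights across available GPUs
)

# ChatML-style prompt: system turn, user turn, then an open assistant turn.
prompt = (
    "<|im_start|>system\nDu bist ein hilfreicher Assistent.<|im_end|>\n"
    "<|im_start|>user\nErkläre kurz, was ein Sprachmodell ist.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

result = generator(prompt, max_new_tokens=256, do_sample=True, top_p=0.9, temperature=0.7)
print(result[0]["generated_text"])
```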