Commit e91e401 · Create README.md
Parent(s): d458b7d

README.md ADDED (+39 lines)
---
license: apache-2.0
language:
- en
tags:
- trl
- transformers
- reinforcement-learning
---

# Llama-se-peft

Adapter weights of a fine-tuned model based on LLaMA, authored by Edward Beeching, Younes Belkada, Kashif Rasul, Lewis Tunstall, and Leandro von Werra.

For more info, check out the [blog post]() and [GitHub example]().

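Since this repository only contains adapter weights, they need to be applied on top of the base LLaMA model before use. The snippet below is a minimal, illustrative sketch using `transformers` and `peft`; the base-model path and the adapter repository id are placeholders, not values taken from this card.

```python
# Illustrative only: load the base LLaMA weights and apply this adapter with peft.
# The base-model path and adapter repo id below are placeholders, not official names.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "path/to/llama-7b"      # local path or hub id of the base LLaMA weights
adapter_name = "your-org/llama-se-peft"   # placeholder id for this adapter repository

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Wrap the base model with the fine-tuned adapter weights
model = PeftModel.from_pretrained(base_model, adapter_name)
model.eval()
```
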
## Model Description

**Llama-se** is a LLaMA-based model that has been fine-tuned on the Stack Exchange dataset. This dataset consists of questions and answers from various Stack Exchange domains, such as programming, mathematics, and physics. The model is designed to generate human-like responses to questions in these domains. The model has been trained to respond to prompts with the following template:

```
Question: <Query>

Answer: <Response>
```

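As a rough illustration (not part of the original card), a prompt can be built with this template and passed to the model loaded in the snippet above; the query and the decoding parameters below are arbitrary examples.

```python
# Format a query with the "Question: ... Answer:" template and generate a reply.
# Assumes `model` and `tokenizer` were created as in the loading snippet above.
query = "What is the difference between a list and a tuple in Python?"
prompt = f"Question: {query}\n\nAnswer: "

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,  # arbitrary example values
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
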
## Intended Uses & Limitations

**Llama-se** is intended for generating responses to questions in the domains covered by the Stack Exchange dataset, such as programming, mathematics, and physics. However, the model may not perform well on questions outside these domains or on questions requiring highly specific or technical knowledge.

## Limitations and Bias

The **Llama-se** model inherits limitations and biases from the LLaMA base model and from the Stack Exchange dataset. The Stack Exchange dataset may be biased in the topics it covers and in the users who contribute to it; it may not include all possible domains, and the quality of answers may vary. Additionally, the model may generate answers that are incorrect or misleading due to biases in the training data or the inherent limitations of the LLaMA architecture.

## BibTeX entry and citation info

```bibtex
@misc{beeching2023llama,
  title={StackLLaMa: An RL Fine-tuned LLaMa Model for Stack Exchange Question and Answering},
  author={Beeching, Edward and Belkada, Younes and Rasul, Kashif and Tunstall, Lewis and von Werra, Leandro},
  year={2023}
}
```