Upload README.md with huggingface_hub
README.md (CHANGED)
```diff
@@ -1,23 +1,51 @@
 ---
 language:
--
+- fr
 license: apache-2.0
 tags:
 - text-generation-inference
 - transformers
 - unsloth
 - mistral
--
--
+- summarizer
+- lora
 base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
 ---
 
-# Uploaded
+# Uploaded as a LoRA model
 
 - **Developed by:** Labagaite
 - **License:** apache-2.0
 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
 
+# Training Logs
+
+## Summary metrics
+### Best ROUGE-1 score: **0.15938795027095953**
+### Best ROUGE-2 score: **0.15885167464114833**
+### Best ROUGE-L score: **0.15938795027095953**
+
+## Wandb logs
+You can view the training logs [<img src="https://raw.githubusercontent.com/wandb/wandb/main/docs/README_images/logo-light.svg" width="200"/>](https://wandb.ai/william-derue/LLM-summarizer_trainer/runs/e3co46y5).
+
+## Training details
+
+### Training data
+- Dataset: [fr-summarizer-dataset](https://huggingface.co/datasets/Labagaite/fr-summarizer-dataset)
+- Data size: 7.65 MB
+- Train: 1.97k rows
+- Validation: 440 rows
+- Roles: user, assistant
+- Format: ChatML-style messages with "role" and "content" keys, alternating "user" and "assistant" roles
+<br>
+*French audio podcast transcription*
+
+# Project details
+[<img src="https://avatars.githubusercontent.com/u/116890814?v=4" width="100"/>](https://github.com/WillIsback/Report_Maker)
+Fine-tuned on French audio podcast transcriptions for the summarization task. As a result, the model can summarize French audio podcast transcriptions.
+The model will be used in an AI application, [Report Maker](https://github.com/WillIsback/Report_Maker), which automates the process of transcribing and summarizing meetings.
+It leverages state-of-the-art machine learning models to provide detailed and accurate reports.
+
 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
 
 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
```
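The updated card describes a LoRA adapter trained on top of `unsloth/mistral-7b-instruct-v0.2-bnb-4bit`. Below is a minimal usage sketch with Transformers and PEFT; the adapter repo id is a placeholder (the diff does not name the final repository), and a 4-bit-capable environment with `bitsandbytes` installed is assumed.

```python
# Sketch: load the 4-bit base model named in the card and attach the LoRA
# adapter with PEFT. The adapter repo id is a placeholder, not from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"
adapter_id = "Labagaite/fr-summarizer-lora"  # placeholder adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Mistral-instruct style prompt for a French summarization request.
prompt = "[INST] Résume la transcription suivante : <transcription> [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```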
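The training-data bullets point to [fr-summarizer-dataset](https://huggingface.co/datasets/Labagaite/fr-summarizer-dataset), with user/assistant turns stored as role/content pairs. A hedged sketch of loading it and rendering one conversation with the base tokenizer's chat template follows; the `messages` column name is an assumption, since the card only states the role/content layout.

```python
# Sketch: inspect the dataset named in the card and render one conversation
# with the base tokenizer's chat template. The "messages" column name is an
# assumption; the card only describes the role/content ChatML-style layout.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("Labagaite/fr-summarizer-dataset")
print(dataset)              # expect train (~1.97k rows) and validation (440 rows) splits
print(dataset["train"][0])  # one row of user/assistant messages

tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-instruct-v0.2-bnb-4bit")
messages = dataset["train"][0]["messages"]  # assumed column name
text = tokenizer.apply_chat_template(messages, tokenize=False)
print(text)
```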
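The closing note credits Unsloth and TRL for the 2x training speedup. The sketch below shows a typical Unsloth `FastLanguageModel` + TRL `SFTTrainer` LoRA setup for this base model; all hyperparameters are illustrative and are not the values used for this checkpoint.

```python
# Sketch of a typical Unsloth + TRL LoRA fine-tune on the base model named in
# the card. Hyperparameters are illustrative, not the ones used for this run.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

train_dataset = load_dataset("Labagaite/fr-summarizer-dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",   # assumes a pre-rendered text column
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        report_to="wandb",       # the card links a wandb run
    ),
)
trainer.train()
```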
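The ROUGE-1/2/L figures in the metrics section are reported without the evaluation script. This is a minimal sketch of how such scores are commonly computed with the `evaluate` library, not necessarily the procedure used for this run.

```python
# Sketch: compute ROUGE-1/2/L for generated summaries against reference
# summaries with the `evaluate` library. Illustrative only; the card does not
# specify the script that produced the reported scores.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["Le podcast présente les avancées récentes de l'IA."]  # model outputs
references = ["Ce podcast résume les avancées récentes de l'IA."]     # gold summaries

scores = rouge.compute(predictions=predictions, references=references)
print(scores["rouge1"], scores["rouge2"], scores["rougeL"])
```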