naimsassine committed
Commit 29dd194
1 Parent(s): c2f0c41

Update README.md

Files changed (1):
  1. README.md +9 -5
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 base_model: mistralai/Mistral-7B-Instruct-v0.3
 datasets:
-- generator
+- naimsassine/belgian-law-qafrench-dataset
 library_name: peft
 license: apache-2.0
 tags:
@@ -18,21 +18,25 @@ should probably proofread and complete it, then remove this comment. -->
 
 # mistralinstruct-7b-sft-lora
 
-This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the generator dataset.
+This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the Belgian Law QnA dataset.
 It achieves the following results on the evaluation set:
 - Loss: 1.2937
 
 ## Model description
 
-More information needed
+This model experiments with how far a base model can be pushed by fine-tuning it on a French QnA dataset covering Belgian law. The goal is to see whether a
+small LLM can become good enough in terms of legal expertise for a specific country.
 
 ## Intended uses & limitations
 
-More information needed
+Legal question answering (Belgian law, in French)
 
 ## Training and evaluation data
 
-More information needed
+Supervised fine-tuning with LoRA (SFT-LoRA).
+Big thanks to Niels Rogge's notebook, which guided me through the process:
+https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Mistral/Supervised_fine_tuning_(SFT)_of_an_LLM_using_Hugging_Face_tooling.ipynb
+
 
 ## Training procedure