W3bsurf committed · commit 0b2c774 · 1 parent: ec6bb78

Update README.md

Files changed (1): README.md (+9 −7)
README.md CHANGED
@@ -3,16 +3,19 @@ base_model: meta-llama/Llama-2-7b-hf
 tags:
 - generated_from_trainer
 model-index:
-- name: results
+- name: W3bsurf/llawma-sum-2-7b
   results: []
+datasets:
+- dreamproit/bill_summary_us
+language:
+- en
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
+
 
 # results
 
-This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
+This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the bill_summary_us dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.6796
 
@@ -22,8 +25,7 @@ More information needed
 
 ## Intended uses & limitations
 
-More information needed
-
+Summarizing law texts. Language is limited to english.
 ## Training and evaluation data
 
 More information needed
@@ -57,4 +59,4 @@ The following hyperparameters were used during training:
 - Transformers 4.35.2
 - Pytorch 2.1.0+cu118
 - Datasets 2.15.0
-- Tokenizers 0.15.0
+- Tokenizers 0.15.0
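For context on the reported number: if the eval loss is the mean per-token cross-entropy in nats (the usual Trainer default for causal-LM fine-tuning — an assumption, since the card does not say), the corresponding perplexity is simply its exponential:

```python
import math

# Eval loss reported in the updated card.
eval_loss = 0.6796

# Assuming the loss is mean per-token cross-entropy (nats),
# perplexity is exp(loss).
perplexity = math.exp(eval_loss)
print(round(perplexity, 3))  # ≈ 1.973
```

So the reported loss corresponds to a per-token perplexity of roughly 2 under that interpretation.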
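The updated card names the model W3bsurf/llawma-sum-2-7b and scopes it to summarizing English legal texts. A minimal usage sketch with the transformers library follows; the prompt template, dtype, and generation settings are assumptions for illustration, not documented by the card:

```python
def build_prompt(bill_text: str) -> str:
    """Wrap a bill's text in a plain summarization prompt.

    The template is hypothetical; the card does not document
    the prompt format used during fine-tuning.
    """
    return f"Summarize the following bill:\n\n{bill_text}\n\nSummary:"


def generate_summary(bill_text: str, max_new_tokens: int = 256) -> str:
    """Load the fine-tuned model from the Hub and generate a summary.

    Imports are kept local so build_prompt stays usable without
    the heavy dependencies installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "W3bsurf/llawma-sum-2-7b"  # name from the card's model-index
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(build_prompt(bill_text), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Since the base model is Llama-2-7b, expect roughly 14 GB of GPU memory in float16; quantized loading would reduce that at some quality cost.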