hugo-albert committed
Commit 73723c7 · verified · 1 Parent(s): 2824693

Update README.md

Files changed (1):
1. README.md +10 -19
README.md CHANGED
@@ -19,6 +19,16 @@ This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggin
  It achieves the following results on the evaluation set:
  - Loss: 0.3878

+ Test set:
+ - BLEU: 65.06
+ - COMET: 89.13
+ - CodeBLEU: 78.52
+ - N-gram match score: 66.81
+ - Weighted n-gram match score: 82.49
+ - Syntax match score: 75.77
+ - Dataflow match score: 89.02
+
+
  ## Model description

  More information needed
@@ -63,24 +73,5 @@ The following hyperparameters were used during training:
  - Datasets 3.0.1
  - Tokenizers 0.13.3

- ## Use this model
-
- ```
- from transformers import AutoModelForCausalLM
- from transformers import BitsAndBytesConfig
- import torch
-
- quantization_config = BitsAndBytesConfig(
-     load_in_4bit=True,
-     bnb_4bit_quant_type="nf4",
-     bnb_4bit_use_double_quant=True,
-     bnb_4bit_compute_dtype=torch.bfloat16,
- )
-
- model = AutoModelForCausalLM.from_pretrained(
-     "hugo-albert/CodeLlama-7b-hf-finetuned-py-to-cpp",
-     quantization_config=quantization_config,
-     torch_dtype=torch.bfloat16,
- )
-

  ```
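
A note on the test-set metrics added in this commit: assuming the standard CodeBLEU formulation with equal component weights of 0.25 (an assumption, not stated in the README), the reported overall score is consistent with the four component scores listed above:

```
% Sanity check, assuming equal 0.25 weights for the CodeBLEU components
\mathrm{CodeBLEU} = 0.25 \cdot (66.81 + 82.49 + 75.77 + 89.02) = 0.25 \cdot 314.09 \approx 78.52
```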