umizkimt committed
Commit 47ae685 · verified · 1 Parent(s): 282fb4b

Update README.md

Files changed (1): README.md (+9 −2)
README.md CHANGED
@@ -42,7 +42,14 @@ This llama model was trained 2x faster with [Unsloth](https://github.com/unsloth
 - Training Duration: 46 minutes
 
 ## Performance
-- Omnicampus score: 3.02 (2024-11-29 19:20:27 JST)
+|Metric|Base Model|Fine-Tuned Model|
+|---|---|---|
+|Score (Gemini 1.5)|2.21|3.01|
+|Inference Time (100 examples)|38 minutes|9 minutes|
+
+- Score Type: Provisional score using Gemini 1.5 (for competition purposes)
+- Evaluation Dataset: elyza-tasks-100-TV_0.jsonl
+- Platform: Google Colaboratory (T4 GPU)
 
 ## .jsonl File Output Usage
 To generate the output file in Google Colaboratory, use the following script:
@@ -74,7 +81,7 @@ model_id = "llm-jp/llm-jp-3-13b"
 adapter_id = "umizkimt/llm-jp-3-13b-it_lora"
 
 # Specify the Hugging Face token.
-HF_TOKEN = "<your-token>"
+HF_TOKEN = "<YOUR_HF_TOKEN>"
 
 # Load the original model with unsloth's FastLanguageModel.
 dtype = None  # Automatically configured if left as None
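As an aside on the `HF_TOKEN` change in this commit: rather than hard-coding any placeholder string, the token can be read from an environment variable so it never lands in a committed notebook. This is a minimal sketch, not part of the README's script; the variable name `HF_TOKEN` follows the script, and the fallback placeholder mirrors the diff.

```python
import os

# Prefer an environment variable over a hard-coded token string,
# falling back to the placeholder from the README if it is unset.
HF_TOKEN = os.environ.get("HF_TOKEN", "<YOUR_HF_TOKEN>")
```

On Google Colaboratory, the token could equally be stored in the notebook's secrets rather than the environment.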
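The section the second hunk touches is about producing a .jsonl output file. As a hedged sketch of that file format, the snippet below writes one JSON object per line; the field names `task_id` and `output` are assumptions for illustration, not taken from this diff.

```python
import json

def write_jsonl(records, path):
    # JSONL: one JSON object per line, UTF-8 encoded,
    # with ensure_ascii=False so Japanese text stays readable.
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Hypothetical records; the real script would collect model outputs here.
records = [
    {"task_id": 0, "output": "回答の例"},
    {"task_id": 1, "output": "another answer"},
]
write_jsonl(records, "output.jsonl")
```

Each line of the resulting file parses independently with `json.loads`, which is what makes the format convenient for per-example evaluation.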