torchtorchkimtorch committed (verified)
Commit 2e8462d · Parent: e7a2904

Update README.md

Files changed (1)
  1. README.md (+47 -7)
README.md CHANGED
@@ -17,15 +17,55 @@ The dataset includes 4 rounds per situation. If there is no winner in the final
The labels are evenly distributed across the three possible outcomes.

+ <style>
+   table {
+     width: 60%;        /* set the table width to 60% */
+     font-size: 18px;   /* set the table font size to 18px */
+     margin-left: 0;
+     margin-right: auto;
+   }
+   th, td {
+     padding: 8px;      /* set cell padding */
+     text-align: center;
+   }
+ </style>
+
## Model Performance

- | Model | Accuracy | F1 Score |
- |------------|-------|-------|
- | Llama 3.1 (instruct)-8b | 0.8083 | 0.8121 |
- | Llama 3-8b | 0.775 | 0.7781 |
- | Solar-10.7b | 0.7541 | 0.7592 |
- | Qwen-8b | 0.7833 | 0.7877 |
- | **Yi-9b-chat (Top)** | **0.8395** | **0.8426** |
+ <table>
+   <tr>
+     <th>Model</th>
+     <th>Acc</th>
+     <th>F1</th>
+   </tr>
+   <tr>
+     <td>Llama3.1-8b</td>
+     <td>0.808</td>
+     <td>0.812</td>
+   </tr>
+   <tr>
+     <td>Llama3-8b</td>
+     <td>0.775</td>
+     <td>0.778</td>
+   </tr>
+   <tr>
+     <td>Solar-10.7b</td>
+     <td>0.754</td>
+     <td>0.759</td>
+   </tr>
+   <tr>
+     <td>Qwen-8b</td>
+     <td>0.783</td>
+     <td>0.788</td>
+   </tr>
+   <tr>
+     <td><strong>Yi-chat-9b (TOP)</strong></td>
+     <td>0.840</td>
+     <td>0.843</td>
+   </tr>
+ </table>
+
+ Models with fewer than 12 billion parameters were used due to GPU limitations. 😂
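The table above compares models on Accuracy and F1 over the three outcome classes. The commit does not include the evaluation script, so the following is only a minimal sketch of how such figures could be computed, assuming scikit-learn and macro-averaged F1; the label names `win`/`draw`/`lose` and the sample data are hypothetical.

```python
# Hedged sketch, not the repository's evaluation code: computes Accuracy and F1
# for a three-class outcome label set, as reported in the Model Performance table.
# Assumptions: scikit-learn is available and F1 is macro-averaged; the README
# does not state the actual averaging mode or label names.
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold labels and model predictions over the three possible outcomes.
gold = ["win", "lose", "draw", "win", "draw", "lose"]
pred = ["win", "lose", "win",  "win", "draw", "lose"]

accuracy = accuracy_score(gold, pred)          # fraction of exact matches
f1 = f1_score(gold, pred, average="macro")     # unweighted mean of per-class F1

print(f"Accuracy: {accuracy:.4f}, F1: {f1:.4f}")
```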