tarekziade committed
Commit 57bd9f7 (parent: 8719a73)

initial copy from https://huggingface.co/lxyuan/distilbert-finetuned-reuters21578-multilabel

README.md ADDED
@@ -0,0 +1,428 @@
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
- news_classification
- multi_label
datasets:
- reuters21578
metrics:
- f1
- accuracy
model-index:
- name: distilbert-finetuned-reuters21578-multilabel
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: reuters21578
      type: reuters21578
      config: ModApte
      split: test
      args: ModApte
    metrics:
    - name: F1
      type: f1
      value: 0.8628858578607322
    - name: Accuracy
      type: accuracy
      value: 0.8195625759416768
language:
- en
pipeline_tag: text-classification
widget:
- text: "JAPAN TO REVISE LONG-TERM ENERGY DEMAND DOWNWARDS The Ministry of International Trade and Industry (MITI) will revise its long-term energy supply/demand outlook by August to meet a forecast downtrend in Japanese energy demand, ministry officials said. MITI is expected to lower the projection for primary energy supplies in the year 2000 to 550 mln kilolitres (kl) from 600 mln, they said. The decision follows the emergence of structural changes in Japanese industry following the rise in the value of the yen and a decline in domestic electric power demand. MITI is planning to work out a revised energy supply/demand outlook through deliberations of committee meetings of the Agency of Natural Resources and Energy, the officials said. They said MITI will also review the breakdown of energy supply sources, including oil, nuclear, coal and natural gas. Nuclear energy provided the bulk of Japan's electric power in the fiscal year ended March 31, supplying an estimated 27 pct on a kilowatt/hour basis, followed by oil (23 pct) and liquefied natural gas (21 pct), they noted. REUTER"
  example_title: "Example-1"
---

## Motivation

Fine-tuning on the Reuters-21578 multilabel dataset is a valuable exercise, especially because the dataset frequently appears in take-home tests during interviews. Its complexity is just right for testing multilabel classification skills within a limited timeframe, and its real-world relevance helps simulate practical challenges. Experimenting with this dataset not only helps candidates prepare for interviews, but also hones skills such as preprocessing, feature extraction, and model evaluation.

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the reuters21578 dataset.

## Inference Example

```python
from transformers import pipeline

# Note: return_all_scores is deprecated in recent transformers releases;
# passing top_k=None achieves the same effect there.
pipe = pipeline(
    "text-classification",
    model="lxyuan/distilbert-finetuned-reuters21578-multilabel",
    return_all_scores=True,
)

# dataset["test"]["text"][2]
news_article = (
    "JAPAN TO REVISE LONG-TERM ENERGY DEMAND DOWNWARDS The Ministry of International Trade and "
    "Industry (MITI) will revise its long-term energy supply/demand "
    "outlook by August to meet a forecast downtrend in Japanese "
    "energy demand, ministry officials said. "
    "MITI is expected to lower the projection for primary energy "
    "supplies in the year 2000 to 550 mln kilolitres (kl) from 600 "
    "mln, they said. "
    "The decision follows the emergence of structural changes in "
    "Japanese industry following the rise in the value of the yen "
    "and a decline in domestic electric power demand. "
    "MITI is planning to work out a revised energy supply/demand "
    "outlook through deliberations of committee meetings of the "
    "Agency of Natural Resources and Energy, the officials said. "
    "They said MITI will also review the breakdown of energy "
    "supply sources, including oil, nuclear, coal and natural gas. "
    "Nuclear energy provided the bulk of Japan's electric power "
    "in the fiscal year ended March 31, supplying an estimated 27 "
    "pct on a kilowatt/hour basis, followed by oil (23 pct) and "
    "liquefied natural gas (21 pct), they noted. "
    "REUTER"
)

# dataset["test"]["topics"][2]
target_topics = ['crude', 'nat-gas']

# Tokenizer kwargs forwarded through the pipeline call.
fn_kwargs = {"padding": "max_length", "truncation": True, "max_length": 512}
output = pipe(news_article, function_to_apply="sigmoid", **fn_kwargs)

for item in output[0]:
    if item["score"] >= 0.5:
        print(item["label"], item["score"])

>>> crude 0.7355073690414429
nat-gas 0.8600426316261292
```
94
+
95
+
96
+ ## Overall Summary and Comparison Table
97
+
98
+ | Metric | Baseline (Scikit-learn) | Transformer Model |
99
+ |-----------------------|--------------------------|-------------------|
100
+ | Micro-Averaged F1 | 0.77 | 0.86 |
101
+ | Macro-Averaged F1 | 0.29 | 0.33 |
102
+ | Weighted Average F1 | 0.70 | 0.84 |
103
+ | Samples Average F1 | 0.75 | 0.80 |
104
+
105
+ **Precision vs Recall**: Both models prioritize high precision over recall. In our client-facing news classification model, precision takes precedence over recall. This is because the repercussions of false positives are more severe and harder to justify to clients compared to false negatives. When the model incorrectly tags a news item with a topic, it's challenging to explain this error. On the other hand, if the model misses a topic, it's easier to defend by stating that the topic wasn't sufficiently emphasized in the news article.
106
+
107
+ **Class Imbalance Handling**: Both models suffer from the same general issue of not performing well on minority classes, as reflected in the low macro-averaged F1-scores. However, the transformer model shows a slight improvement, albeit marginal, in macro-averaged F1-score (0.33 vs 0.29).
108
+
109
+ **Issue of Zero Support Labels**: Both models have the problem of zero support for several labels, meaning these labels did not appear in the test set. This lack of "support" can significantly skew the performance metrics and may suggest that either the models are not well-tuned to predict these minority classes, or the dataset itself lacks sufficient examples of these classes. Given that both models struggle with low macro-averaged F1 scores, this issue further emphasizes the need for improved minority class handling in the models.
110
+
111
+ **General Performance**: The transformer model surpasses the scikit-learn baseline in terms of weighted and samples average F1-scores, indicating better overall performance and better handling of label imbalance.
112
+
113
+ **Conclusion**: While both models exhibit high precision, which is a business requirement, the transformer model slightly outperforms the scikit-learn baseline model in all metrics considered. It provides a better trade-off between precision and recall, as well as some improvement, albeit small, in handling minority classes. Thus, despite sharing similar weaknesses with the baseline, the transformer model demonstrates incremental improvements that could be significant in a production setting.
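
For reference, the four averaging schemes in the table can be reproduced with scikit-learn. A minimal sketch, assuming `y_true` and `y_pred` are multi-hot indicator arrays of shape `(n_samples, n_labels)` (the toy arrays below are illustrative):

```python
import numpy as np
from sklearn.metrics import f1_score

# Toy multi-hot ground truth and predictions: 3 samples, 3 labels.
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]])

# micro: global TP/FP/FN; macro: unweighted per-label mean;
# weighted: support-weighted per-label mean; samples: per-document mean.
for avg in ("micro", "macro", "weighted", "samples"):
    print(f"{avg:>8} F1: {f1_score(y_true, y_pred, average=avg, zero_division=0):.2f}")
```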

## Training and evaluation data

We remove labels that appear only once in the corpus from both the training and test sets using the following code:

```python
from collections import Counter
from itertools import chain

from datasets import load_dataset

# Find Single Appearance Labels
def find_single_appearance_labels(y):
    """Find labels that appear only once in the dataset."""
    all_labels = list(chain.from_iterable(y))
    label_count = Counter(all_labels)
    single_appearance_labels = [label for label, count in label_count.items() if count == 1]
    return single_appearance_labels

# Remove Single Appearance Labels from Dataset
def remove_single_appearance_labels(dataset, single_appearance_labels):
    """Remove samples with single-appearance labels from both train and test sets."""
    for split in ['train', 'test']:
        dataset[split] = dataset[split].filter(
            lambda x: all(label not in single_appearance_labels for label in x['topics'])
        )
    return dataset

dataset = load_dataset("reuters21578", "ModApte")

# Find and Remove Single Appearance Labels
y_train = [item['topics'] for item in dataset['train']]
single_appearance_labels = find_single_appearance_labels(y_train)
print(f"Single appearance labels: {single_appearance_labels}")
>>> Single appearance labels: ['lin-oil', 'rye', 'red-bean', 'groundnut-oil', 'citruspulp', 'rape-meal', 'corn-oil', 'peseta', 'cotton-oil', 'ringgit', 'castorseed', 'castor-oil', 'lit', 'rupiah', 'skr', 'nkr', 'dkr', 'sun-meal', 'lin-meal', 'cruzado']

print("Removing samples with single-appearance labels...")
dataset = remove_single_appearance_labels(dataset, single_appearance_labels)

unique_labels = set(chain.from_iterable(dataset['train']["topics"]))
print(f"We have {len(unique_labels)} unique labels:\n{unique_labels}")
>>> We have 95 unique labels:
{'veg-oil', 'gold', 'platinum', 'ipi', 'acq', 'carcass', 'wool', 'coconut-oil', 'linseed', 'copper', 'soy-meal', 'jet', 'dlr', 'copra-cake', 'hog', 'rand', 'strategic-metal', 'can', 'tea', 'sorghum', 'livestock', 'barley', 'lumber', 'earn', 'wheat', 'trade', 'soy-oil', 'cocoa', 'inventories', 'income', 'rubber', 'tin', 'iron-steel', 'ship', 'rapeseed', 'wpi', 'sun-oil', 'pet-chem', 'palmkernel', 'nat-gas', 'gnp', 'l-cattle', 'propane', 'rice', 'lead', 'alum', 'instal-debt', 'saudriyal', 'cpu', 'jobs', 'meal-feed', 'oilseed', 'dmk', 'plywood', 'zinc', 'retail', 'dfl', 'cpi', 'crude', 'pork-belly', 'gas', 'money-fx', 'corn', 'tapioca', 'palladium', 'lei', 'cornglutenfeed', 'sunseed', 'potato', 'silver', 'sugar', 'grain', 'groundnut', 'naphtha', 'orange', 'soybean', 'coconut', 'stg', 'cotton', 'yen', 'rape-oil', 'palm-oil', 'oat', 'reserves', 'housing', 'interest', 'coffee', 'fuel', 'austdlr', 'money-supply', 'heat', 'fishmeal', 'bop', 'nickel', 'nzdlr'}
```
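
For training, the remaining `topics` lists still need to be converted into fixed-size multi-hot vectors. A minimal sketch using scikit-learn's `MultiLabelBinarizer`, continuing from the variables above (the exact encoding step in the training notebook may differ):

```python
from sklearn.preprocessing import MultiLabelBinarizer

# One column per surviving label, in a stable (sorted) order.
mlb = MultiLabelBinarizer(classes=sorted(unique_labels))
y_train = mlb.fit_transform(dataset["train"]["topics"])
y_test = mlb.transform(dataset["test"]["topics"])

print(y_train.shape)  # (n_train_samples, 95)
```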

## Training procedure

[EDA on Reuters-21578 dataset](https://github.com/LxYuan0420/nlp/blob/main/notebooks/eda_reuters.ipynb):
This notebook provides an exploratory data analysis (EDA) of the Reuters-21578 dataset, with visualizations and statistical summaries that offer insights into the dataset's structure, label distribution, and text characteristics.

[Reuters Baseline Scikit-Learn Model](https://github.com/LxYuan0420/nlp/blob/main/notebooks/scikit_learn_reuters.ipynb):
This notebook establishes a baseline text-classification model on the Reuters-21578 dataset using scikit-learn, walking through data preprocessing, feature extraction, model training, and evaluation.

[Reuters Transformer Model](https://github.com/LxYuan0420/nlp/blob/main/notebooks/transformer_reuters.ipynb):
This notebook covers advanced text classification with a Transformer model on the Reuters-21578 dataset: implementation details, the training process, and the resulting performance metrics.

[Multilabel Stratified Sampling & Hyperparameter Search on Reuters Dataset](https://github.com/LxYuan0420/nlp/blob/main/notebooks/transformer_reuters_hyperparameter_tuning.ipynb):
This notebook explores Multilabel Iterative Stratified Splitting and Hyperparameter Search with the Hugging Face Trainer API. The former fairly distributes imbalanced datasets across multiple labels in k-fold cross-validation, keeping each fold's label distribution close to that of the complete dataset (see the sketch below); the latter walks through a structured hyperparameter search to fine-tune model performance.
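
As a rough illustration of the multilabel stratified splitting idea (the notebook's exact implementation may differ; the third-party `iterative-stratification` package used here is an assumption):

```python
import numpy as np
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

# Toy multi-hot label matrix: 8 samples, 3 labels.
Y = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0],
              [0, 0, 1], [0, 0, 1], [1, 1, 0], [1, 0, 1]])
X = np.zeros(len(Y))  # placeholder features; only the labels drive the split

# Each fold keeps per-label frequencies close to those of the full dataset.
mskf = MultilabelStratifiedKFold(n_splits=2, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(mskf.split(X, Y)):
    print(f"fold {fold}: train={train_idx}, val={val_idx}")
```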

## Evaluation results

<details>
<summary>Transformer Model Evaluation Result</summary>

```
Classification Report:
label precision recall f1-score support

acq 0.97 0.93 0.95 719
alum 1.00 0.70 0.82 23
austdlr 0.00 0.00 0.00 0
barley 1.00 0.50 0.67 12
bop 0.79 0.50 0.61 30
can 0.00 0.00 0.00 0
carcass 0.67 0.67 0.67 18
cocoa 1.00 1.00 1.00 18
coconut 0.00 0.00 0.00 2
coconut-oil 0.00 0.00 0.00 2
coffee 0.86 0.89 0.87 27
copper 1.00 0.78 0.88 18
copra-cake 0.00 0.00 0.00 1
corn 0.84 0.87 0.86 55
cornglutenfeed 0.00 0.00 0.00 0
cotton 0.92 0.67 0.77 18
cpi 0.86 0.43 0.57 28
cpu 0.00 0.00 0.00 1
crude 0.87 0.93 0.90 189
dfl 0.00 0.00 0.00 1
dlr 0.72 0.64 0.67 44
dmk 0.00 0.00 0.00 4
earn 0.98 0.99 0.98 1087
fishmeal 0.00 0.00 0.00 0
fuel 0.00 0.00 0.00 10
gas 0.80 0.71 0.75 17
gnp 0.79 0.66 0.72 35
gold 0.95 0.67 0.78 30
grain 0.94 0.92 0.93 146
groundnut 0.00 0.00 0.00 4
heat 0.00 0.00 0.00 5
hog 1.00 0.33 0.50 6
housing 0.00 0.00 0.00 4
income 0.00 0.00 0.00 7
instal-debt 0.00 0.00 0.00 1
interest 0.89 0.67 0.77 131
inventories 0.00 0.00 0.00 0
ipi 1.00 0.58 0.74 12
iron-steel 0.90 0.64 0.75 14
jet 0.00 0.00 0.00 1
jobs 0.92 0.57 0.71 21
l-cattle 0.00 0.00 0.00 2
lead 0.00 0.00 0.00 14
lei 0.00 0.00 0.00 3
linseed 0.00 0.00 0.00 0
livestock 0.63 0.79 0.70 24
lumber 0.00 0.00 0.00 6
meal-feed 0.00 0.00 0.00 17
money-fx 0.78 0.81 0.80 177
money-supply 0.80 0.71 0.75 34
naphtha 0.00 0.00 0.00 4
nat-gas 0.82 0.60 0.69 30
nickel 0.00 0.00 0.00 1
nzdlr 0.00 0.00 0.00 2
oat 0.00 0.00 0.00 4
oilseed 0.64 0.61 0.63 44
orange 1.00 0.36 0.53 11
palladium 0.00 0.00 0.00 1
palm-oil 1.00 0.56 0.71 9
palmkernel 0.00 0.00 0.00 1
pet-chem 0.00 0.00 0.00 12
platinum 0.00 0.00 0.00 7
plywood 0.00 0.00 0.00 0
pork-belly 0.00 0.00 0.00 0
potato 0.00 0.00 0.00 3
propane 0.00 0.00 0.00 3
rand 0.00 0.00 0.00 1
rape-oil 0.00 0.00 0.00 1
rapeseed 0.00 0.00 0.00 8
reserves 0.83 0.56 0.67 18
retail 0.00 0.00 0.00 2
rice 1.00 0.57 0.72 23
rubber 0.82 0.75 0.78 12
saudriyal 0.00 0.00 0.00 0
ship 0.95 0.81 0.87 89
silver 1.00 0.12 0.22 8
sorghum 1.00 0.12 0.22 8
soy-meal 0.00 0.00 0.00 12
soy-oil 0.00 0.00 0.00 8
soybean 0.72 0.56 0.63 32
stg 0.00 0.00 0.00 0
strategic-metal 0.00 0.00 0.00 11
sugar 1.00 0.80 0.89 35
sun-oil 0.00 0.00 0.00 0
sunseed 0.00 0.00 0.00 5
tapioca 0.00 0.00 0.00 0
tea 0.00 0.00 0.00 3
tin 1.00 0.42 0.59 12
trade 0.78 0.79 0.79 116
veg-oil 0.91 0.59 0.71 34
wheat 0.83 0.83 0.83 69
wool 0.00 0.00 0.00 0
wpi 0.00 0.00 0.00 10
yen 0.57 0.29 0.38 14
zinc 1.00 0.69 0.82 13

micro avg 0.92 0.81 0.86 3694
macro avg 0.41 0.30 0.33 3694
weighted avg 0.87 0.81 0.84 3694
samples avg 0.81 0.80 0.80 3694
```

</details>

<details>
<summary>Scikit-learn Baseline Model Evaluation Result</summary>

```
Classification Report:
label precision recall f1-score support

acq 0.98 0.87 0.92 719
alum 1.00 0.00 0.00 23
austdlr 1.00 1.00 1.00 0
barley 1.00 0.00 0.00 12
bop 1.00 0.30 0.46 30
can 1.00 1.00 1.00 0
carcass 1.00 0.06 0.11 18
cocoa 1.00 0.61 0.76 18
coconut 1.00 0.00 0.00 2
coconut-oil 1.00 0.00 0.00 2
coffee 0.94 0.59 0.73 27
copper 1.00 0.22 0.36 18
copra-cake 1.00 0.00 0.00 1
corn 0.97 0.51 0.67 55
cornglutenfeed 1.00 1.00 1.00 0
cotton 1.00 0.06 0.11 18
cpi 1.00 0.14 0.25 28
cpu 1.00 0.00 0.00 1
crude 0.94 0.69 0.80 189
dfl 1.00 0.00 0.00 1
dlr 0.86 0.43 0.58 44
dmk 1.00 0.00 0.00 4
earn 0.99 0.97 0.98 1087
fishmeal 1.00 1.00 1.00 0
fuel 1.00 0.00 0.00 10
gas 1.00 0.00 0.00 17
gnp 1.00 0.31 0.48 35
gold 0.83 0.17 0.28 30
grain 1.00 0.65 0.79 146
groundnut 1.00 0.00 0.00 4
heat 1.00 0.00 0.00 5
hog 1.00 0.00 0.00 6
housing 1.00 0.00 0.00 4
income 1.00 0.00 0.00 7
instal-debt 1.00 0.00 0.00 1
interest 0.88 0.40 0.55 131
inventories 1.00 1.00 1.00 0
ipi 1.00 0.00 0.00 12
iron-steel 1.00 0.00 0.00 14
jet 1.00 0.00 0.00 1
jobs 1.00 0.14 0.25 21
l-cattle 1.00 0.00 0.00 2
lead 1.00 0.00 0.00 14
lei 1.00 0.00 0.00 3
linseed 1.00 1.00 1.00 0
livestock 0.67 0.08 0.15 24
lumber 1.00 0.00 0.00 6
meal-feed 1.00 0.00 0.00 17
money-fx 0.80 0.50 0.62 177
money-supply 0.88 0.41 0.56 34
naphtha 1.00 0.00 0.00 4
nat-gas 1.00 0.27 0.42 30
nickel 1.00 0.00 0.00 1
nzdlr 1.00 0.00 0.00 2
oat 1.00 0.00 0.00 4
oilseed 0.62 0.11 0.19 44
orange 1.00 0.00 0.00 11
palladium 1.00 0.00 0.00 1
palm-oil 1.00 0.22 0.36 9
palmkernel 1.00 0.00 0.00 1
pet-chem 1.00 0.00 0.00 12
platinum 1.00 0.00 0.00 7
plywood 1.00 1.00 1.00 0
pork-belly 1.00 1.00 1.00 0
potato 1.00 0.00 0.00 3
propane 1.00 0.00 0.00 3
rand 1.00 0.00 0.00 1
rape-oil 1.00 0.00 0.00 1
rapeseed 1.00 0.00 0.00 8
reserves 1.00 0.00 0.00 18
retail 1.00 0.00 0.00 2
rice 1.00 0.00 0.00 23
rubber 1.00 0.17 0.29 12
saudriyal 1.00 1.00 1.00 0
ship 0.92 0.26 0.40 89
silver 1.00 0.00 0.00 8
sorghum 1.00 0.00 0.00 8
soy-meal 1.00 0.00 0.00 12
soy-oil 1.00 0.00 0.00 8
soybean 1.00 0.16 0.27 32
stg 1.00 1.00 1.00 0
strategic-metal 1.00 0.00 0.00 11
sugar 1.00 0.60 0.75 35
sun-oil 1.00 1.00 1.00 0
sunseed 1.00 0.00 0.00 5
tapioca 1.00 1.00 1.00 0
tea 1.00 0.00 0.00 3
tin 1.00 0.00 0.00 12
trade 0.92 0.61 0.74 116
veg-oil 1.00 0.12 0.21 34
wheat 0.97 0.55 0.70 69
wool 1.00 1.00 1.00 0
wpi 1.00 0.00 0.00 10
yen 1.00 0.00 0.00 14
zinc 1.00 0.00 0.00 13

micro avg 0.97 0.64 0.77 3694
macro avg 0.98 0.25 0.29 3694
weighted avg 0.96 0.64 0.70 3694
samples avg 0.98 0.74 0.75 3694
```

</details>
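
Both reports follow scikit-learn's `classification_report` layout. A minimal sketch of how such a report is produced from the model's sigmoid scores, assuming a 0.5 decision threshold and an illustrative three-label subset:

```python
import numpy as np
from sklearn.metrics import classification_report

labels = ["acq", "crude", "nat-gas"]  # illustrative subset of the 95 labels
y_true = np.array([[1, 0, 0], [0, 1, 1], [1, 1, 0]])
scores = np.array([[0.9, 0.2, 0.1], [0.3, 0.8, 0.7], [0.6, 0.9, 0.4]])

# Threshold sigmoid scores at 0.5 to get multi-hot predictions.
y_pred = (scores >= 0.5).astype(int)
print(classification_report(y_true, y_pred, target_names=labels, zero_division=0))
```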

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch below):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
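
These settings map onto a `transformers.TrainingArguments` configuration along the following lines (a sketch: `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the optimizer defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-finetuned-reuters21578-multilabel",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```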

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.1801 | 1.0 | 300 | 0.0439 | 0.3896 | 0.6210 | 0.3566 |
| 0.0345 | 2.0 | 600 | 0.0287 | 0.6289 | 0.7318 | 0.5954 |
| 0.0243 | 3.0 | 900 | 0.0219 | 0.6721 | 0.7579 | 0.6084 |
| 0.0178 | 4.0 | 1200 | 0.0177 | 0.7505 | 0.8128 | 0.6908 |
| 0.014 | 5.0 | 1500 | 0.0151 | 0.7905 | 0.8376 | 0.7278 |
| 0.0115 | 6.0 | 1800 | 0.0135 | 0.8132 | 0.8589 | 0.7555 |
| 0.0096 | 7.0 | 2100 | 0.0124 | 0.8291 | 0.8727 | 0.7725 |
| 0.0082 | 8.0 | 2400 | 0.0124 | 0.8335 | 0.8757 | 0.7822 |
| 0.0071 | 9.0 | 2700 | 0.0119 | 0.8392 | 0.8847 | 0.7883 |
| 0.0064 | 10.0 | 3000 | 0.0123 | 0.8339 | 0.8810 | 0.7828 |
| 0.0058 | 11.0 | 3300 | 0.0114 | 0.8538 | 0.8999 | 0.8047 |
| 0.0053 | 12.0 | 3600 | 0.0113 | 0.8525 | 0.8967 | 0.8044 |
| 0.0048 | 13.0 | 3900 | 0.0115 | 0.8520 | 0.8982 | 0.8029 |
| 0.0045 | 14.0 | 4200 | 0.0111 | 0.8566 | 0.8962 | 0.8104 |
| 0.0042 | 15.0 | 4500 | 0.0110 | 0.8610 | 0.9060 | 0.8165 |
| 0.0039 | 16.0 | 4800 | 0.0112 | 0.8583 | 0.9021 | 0.8138 |
| 0.0037 | 17.0 | 5100 | 0.0110 | 0.8620 | 0.9055 | 0.8196 |
| 0.0035 | 18.0 | 5400 | 0.0110 | 0.8629 | 0.9063 | 0.8196 |
| 0.0035 | 19.0 | 5700 | 0.0111 | 0.8624 | 0.9062 | 0.8180 |
| 0.0034 | 20.0 | 6000 | 0.0111 | 0.8626 | 0.9055 | 0.8177 |
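
The F1, Roc Auc, and Accuracy columns are produced by the Trainer's `compute_metrics` hook. A sketch of one common multi-label formulation (the notebook's exact metric definitions are assumptions here): micro-averaged F1 and ROC AUC over sigmoid scores thresholded at 0.5, with accuracy as the exact-match ratio:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    probs = 1 / (1 + np.exp(-logits))   # sigmoid over each label
    preds = (probs >= 0.5).astype(int)
    return {
        "f1": f1_score(labels, preds, average="micro", zero_division=0),
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        "accuracy": accuracy_score(labels, preds),  # exact-match (subset) accuracy
    }
```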

### Framework versions

- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
config.json ADDED
@@ -0,0 +1,220 @@
{
  "_name_or_path": "distilbert-base-cased",
  "activation": "gelu",
  "architectures": [
    "DistilBertForSequenceClassification"
  ],
  "attention_dropout": 0.1,
  "dim": 768,
  "dropout": 0.1,
  "hidden_dim": 3072,
  "id2label": {
    "0": "acq",
    "1": "alum",
    "2": "austdlr",
    "3": "barley",
    "4": "bop",
    "5": "can",
    "6": "carcass",
    "7": "cocoa",
    "8": "coconut",
    "9": "coconut-oil",
    "10": "coffee",
    "11": "copper",
    "12": "copra-cake",
    "13": "corn",
    "14": "cornglutenfeed",
    "15": "cotton",
    "16": "cpi",
    "17": "cpu",
    "18": "crude",
    "19": "dfl",
    "20": "dlr",
    "21": "dmk",
    "22": "earn",
    "23": "fishmeal",
    "24": "fuel",
    "25": "gas",
    "26": "gnp",
    "27": "gold",
    "28": "grain",
    "29": "groundnut",
    "30": "heat",
    "31": "hog",
    "32": "housing",
    "33": "income",
    "34": "instal-debt",
    "35": "interest",
    "36": "inventories",
    "37": "ipi",
    "38": "iron-steel",
    "39": "jet",
    "40": "jobs",
    "41": "l-cattle",
    "42": "lead",
    "43": "lei",
    "44": "linseed",
    "45": "livestock",
    "46": "lumber",
    "47": "meal-feed",
    "48": "money-fx",
    "49": "money-supply",
    "50": "naphtha",
    "51": "nat-gas",
    "52": "nickel",
    "53": "nzdlr",
    "54": "oat",
    "55": "oilseed",
    "56": "orange",
    "57": "palladium",
    "58": "palm-oil",
    "59": "palmkernel",
    "60": "pet-chem",
    "61": "platinum",
    "62": "plywood",
    "63": "pork-belly",
    "64": "potato",
    "65": "propane",
    "66": "rand",
    "67": "rape-oil",
    "68": "rapeseed",
    "69": "reserves",
    "70": "retail",
    "71": "rice",
    "72": "rubber",
    "73": "saudriyal",
    "74": "ship",
    "75": "silver",
    "76": "sorghum",
    "77": "soy-meal",
    "78": "soy-oil",
    "79": "soybean",
    "80": "stg",
    "81": "strategic-metal",
    "82": "sugar",
    "83": "sun-oil",
    "84": "sunseed",
    "85": "tapioca",
    "86": "tea",
    "87": "tin",
    "88": "trade",
    "89": "veg-oil",
    "90": "wheat",
    "91": "wool",
    "92": "wpi",
    "93": "yen",
    "94": "zinc"
  },
  "initializer_range": 0.02,
  "label2id": {
    "acq": 0,
    "alum": 1,
    "austdlr": 2,
    "barley": 3,
    "bop": 4,
    "can": 5,
    "carcass": 6,
    "cocoa": 7,
    "coconut": 8,
    "coconut-oil": 9,
    "coffee": 10,
    "copper": 11,
    "copra-cake": 12,
    "corn": 13,
    "cornglutenfeed": 14,
    "cotton": 15,
    "cpi": 16,
    "cpu": 17,
    "crude": 18,
    "dfl": 19,
    "dlr": 20,
    "dmk": 21,
    "earn": 22,
    "fishmeal": 23,
    "fuel": 24,
    "gas": 25,
    "gnp": 26,
    "gold": 27,
    "grain": 28,
    "groundnut": 29,
    "heat": 30,
    "hog": 31,
    "housing": 32,
    "income": 33,
    "instal-debt": 34,
    "interest": 35,
    "inventories": 36,
    "ipi": 37,
    "iron-steel": 38,
    "jet": 39,
    "jobs": 40,
    "l-cattle": 41,
    "lead": 42,
    "lei": 43,
    "linseed": 44,
    "livestock": 45,
    "lumber": 46,
    "meal-feed": 47,
    "money-fx": 48,
    "money-supply": 49,
    "naphtha": 50,
    "nat-gas": 51,
    "nickel": 52,
    "nzdlr": 53,
    "oat": 54,
    "oilseed": 55,
    "orange": 56,
    "palladium": 57,
    "palm-oil": 58,
    "palmkernel": 59,
    "pet-chem": 60,
    "platinum": 61,
    "plywood": 62,
    "pork-belly": 63,
    "potato": 64,
    "propane": 65,
    "rand": 66,
    "rape-oil": 67,
    "rapeseed": 68,
    "reserves": 69,
    "retail": 70,
    "rice": 71,
    "rubber": 72,
    "saudriyal": 73,
    "ship": 74,
    "silver": 75,
    "sorghum": 76,
    "soy-meal": 77,
    "soy-oil": 78,
    "soybean": 79,
    "stg": 80,
    "strategic-metal": 81,
    "sugar": 82,
    "sun-oil": 83,
    "sunseed": 84,
    "tapioca": 85,
    "tea": 86,
    "tin": 87,
    "trade": 88,
    "veg-oil": 89,
    "wheat": 90,
    "wool": 91,
    "wpi": 92,
    "yen": 93,
    "zinc": 94
  },
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "output_past": true,
  "pad_token_id": 0,
  "problem_type": "multi_label_classification",
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
  "torch_dtype": "float32",
  "transformers_version": "4.33.0.dev0",
  "vocab_size": 28996
}
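
With `problem_type` set to `multi_label_classification`, the model is trained with a per-label sigmoid/BCE loss rather than softmax cross-entropy. A quick sketch of loading the checkpoint and inspecting its label mapping:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "lxyuan/distilbert-finetuned-reuters21578-multilabel"
)
print(model.config.problem_type)  # multi_label_classification
print(model.config.id2label[18])  # crude
```
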
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e38182ddd4f8130e9fe613c01a9e8dc3539a66a45af8281181e036efee984102
size 263430764
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:028d703b493c27c9c83ea5a9799e9ee652bf8d6ac94fce43fda5140fe36a7438
size 263453741
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,13 @@
{
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_lower_case": false,
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "DistilBertTokenizer",
  "unk_token": "[UNK]"
}
training_args.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0cb10aebfceb4aaf0793ee1fc45a6576f17634479c7e3b49ad8b134556f4d73c
size 4091
vocab.txt ADDED
The diff for this file is too large to render. See raw diff