---
library_name: transformers
base_model: bert-base-chinese
tags:
- generated_from_keras_callback
model-index:
- name: node-py/my_awesome_eli5_clm-model
  results: []
---


# node-py/my_awesome_eli5_clm-model

This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results at the end of training:
- Train Loss: 0.0542
- Epoch: 29
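
As a quick usage sketch, the checkpoint can be loaded with the TensorFlow classes from `transformers`. The repository id comes from the model-index above; the causal-LM head is an assumption based on the `clm` suffix in the name, so if the checkpoint was actually exported with a masked-LM head, swap in `TFAutoModelForMaskedLM`.

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

repo_id = "node-py/my_awesome_eli5_clm-model"  # from the model-index above

# The causal-LM head class is an assumption based on the "clm" suffix;
# bert-base-chinese checkpoints can also be saved with a masked-LM head.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("你好，世界", return_tensors="tf")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```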

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: `AdamWeightDecay`
  - learning_rate: 2e-05
  - decay: 0.0
  - beta_1: 0.9
  - beta_2: 0.999
  - epsilon: 1e-07
  - amsgrad: False
  - weight_decay_rate: 0.01
- training_precision: float32
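
The logged configuration corresponds to the `AdamWeightDecay` optimizer shipped with `transformers`. A minimal sketch of rebuilding it, assuming a constant learning rate (consistent with `decay: 0.0` above):

```python
from transformers import AdamWeightDecay

# Mirror the logged optimizer configuration; decay: 0.0 implies a constant
# learning rate rather than a decay schedule.
optimizer = AdamWeightDecay(
    learning_rate=2e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    amsgrad=False,
    weight_decay_rate=0.01,
)
```

`transformers` Keras models are typically compiled with this optimizer and no explicit loss, since they compute their loss internally: `model.compile(optimizer=optimizer)`.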

### Training results

| Train Loss | Epoch |
|:----------:|:-----:|
| 0.0882     | 0     |
| 0.0878     | 1     |
| 0.0852     | 2     |
| 0.0824     | 3     |
| 0.0810     | 4     |
| 0.0812     | 5     |
| 0.0790     | 6     |
| 0.0772     | 7     |
| 0.0755     | 8     |
| 0.0749     | 9     |
| 0.0717     | 10    |
| 0.0722     | 11    |
| 0.0718     | 12    |
| 0.0689     | 13    |
| 0.0863     | 14    |
| 0.0838     | 15    |
| 0.0731     | 16    |
| 0.0768     | 17    |
| 0.0675     | 18    |
| 0.0646     | 19    |
| 0.0650     | 20    |
| 0.0627     | 21    |
| 0.0610     | 22    |
| 0.0594     | 23    |
| 0.0585     | 24    |
| 0.0585     | 25    |
| 0.0577     | 26    |
| 0.0569     | 27    |
| 0.0565     | 28    |
| 0.0542     | 29    |
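
The `generated_from_keras_callback` tag indicates this card was written by the `transformers` Keras callbacks during `model.fit`. A hedged reconstruction of that loop, assuming a pre-tokenized `tf.data.Dataset` named `tf_train_set` (a hypothetical placeholder, since the dataset is unknown) and the `model` and `optimizer` objects from the sketches above:

```python
from transformers import PushToHubCallback

# tf_train_set is a hypothetical placeholder for the (unknown) tokenized
# training data; 30 epochs matches the table above (epochs 0-29).
callback = PushToHubCallback(
    output_dir="my_awesome_eli5_clm-model",
    tokenizer=tokenizer,
)

model.compile(optimizer=optimizer)  # loss is computed internally by the model
model.fit(tf_train_set, epochs=30, callbacks=[callback])
```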


### Framework versions

- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 2.21.0
- Tokenizers 0.19.1