toastynews committed on
Commit 2544fcc · 1 Parent(s): eceb170

Upload model.

Files changed (1): README.md (+62 −0)

---
language: yue
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: electra-hongkongese-base-hkt-ws
  results: []
---

# electra-hongkongese-base-hkt-ws

This model is a fine-tuned version of [toastynews/electra-hongkongese-base-discriminator](https://huggingface.co/toastynews/electra-hongkongese-base-discriminator), trained on [HKCanCor](https://pycantonese.org/data.html#built-in-data) and the [CityU and AS](http://sighan.cs.uchicago.edu/bakeoff2005/) corpora for word segmentation.
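
If you just need the raw checkpoint, it loads as a standard token-classification model; a minimal sketch, assuming the `transformers` library and that the BI tags are exposed as token labels:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Word segmentation is framed here as token classification over BI tags,
# so the token-classification auto class is assumed.
tokenizer = AutoTokenizer.from_pretrained("toastynews/electra-hongkongese-base-hkt-ws")
model = AutoModelForTokenClassification.from_pretrained("toastynews/electra-hongkongese-base-hkt-ws")
```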

## Model description

Performs word segmentation on text from Hong Kong.
There are two versions: hk, trained only with text from Hong Kong, and hkt, trained with text from Hong Kong and Taiwan. Each version has base and small model sizes.

## Intended uses & limitations

Trained to handle both Hongkongese/Cantonese and Standard Chinese from Hong Kong. Text from other regions, as well as English, does not work as well.
The easiest way to use this model is with the CKIP Transformers library; a sketch follows.
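
A minimal usage sketch, assuming CKIP Transformers' `CkipWordSegmenter` accepts a Hugging Face Hub model ID via its `model_name` argument (the input sentence and output shown are illustrative):

```python
# pip install ckip-transformers
from ckip_transformers.nlp import CkipWordSegmenter

# model_name is assumed to override the default CKIP checkpoint
# with this model from the Hub.
ws_driver = CkipWordSegmenter(model_name="toastynews/electra-hongkongese-base-hkt-ws")

print(ws_driver(["佢哋今日去咗銅鑼灣食飯。"]))
# e.g. [['佢哋', '今日', '去咗', '銅鑼灣', '食飯', '。']]
```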

## Training and evaluation data

HKCanCor, CityU, and AS are converted to a BI-encoded word segmentation dataset in Hugging Face format using code from [finetune-ckip-transformers](https://github.com/toastynews/finetune-ckip-transformers): the first character of each word is tagged B and the remaining characters I, as sketched below.
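
A hypothetical sketch of that encoding (the actual conversion lives in the finetune-ckip-transformers repo):

```python
def to_bi_tags(words):
    """Tag the first character of each word B and every following character I."""
    tags = []
    for word in words:
        tags.append("B")
        tags.extend("I" * (len(word) - 1))
    return tags

print(to_bi_tags(["香港", "嘅", "天氣"]))  # ['B', 'I', 'B', 'B', 'I']
```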

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
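
A minimal sketch of equivalent Hugging Face `TrainingArguments`, assumed for illustration rather than taken from the actual training script (`output_dir` is arbitrary):

```python
from transformers import TrainingArguments

# The Adam betas and epsilon listed above are the transformers defaults
# (adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8).
training_args = TrainingArguments(
    output_dir="electra-hongkongese-base-hkt-ws",  # hypothetical
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```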

### Training results

| dataset    | token_f  | token_p  | token_r  |
|:-----------|---------:|---------:|---------:|
| ud yue_hk  |   0.9404 |   0.9442 |   0.9367 |
| ud zh_hk   |   0.9327 |   0.9404 |   0.9251 |
| _hkcancor_ | _0.9875_ | _0.9868_ | _0.9883_ |
| cityu      |   0.9766 |   0.9756 |   0.9777 |
| as         |   0.9652 |   0.9601 |   0.9704 |

_The model was trained on hkcancor, so its row is reported for reference only._

### Framework versions

- Transformers 4.27.0.dev0
- PyTorch 1.10.0
- Datasets 2.10.1
- Tokenizers 0.13.2