NorbertRop committed
Commit f94e30f
1 Parent(s): f47c82f

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +11 -176

README.md CHANGED
@@ -17,11 +17,13 @@ widget:

# FastPDN

- FastPolDeepNer is a model designed for easy use, training and configuration. The forerunner of this project is [PolDeepNer2](https://gitlab.clarin-pl.eu/information-extraction/poldeepner2). The model implements a pipeline consisting of data processing and training using: hydra, pytorch, pytorch-lightning, transformers.

## How to use

- Here is how to use this model to get the Named Entities in text:

```python
from transformers import pipeline
@@ -49,91 +51,13 @@ encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
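
The usage hunk above elides the middle of the snippet, but independent of the backbone, a Hugging Face token-classification pipeline returns a list of entity dictionaries. As a minimal sketch of post-processing that list into (surface form, label) pairs — the sample entities below are hand-written placeholders using the repository's tag set, not real FastPDN predictions:

```python
# Hypothetical pipeline output: the dict structure matches what a
# transformers token-classification pipeline returns, but the values
# below are invented for illustration, not real FastPDN predictions.
sample_entities = [
    {"entity_group": "nam_liv_person", "word": "Jan Kowalski",
     "score": 0.99, "start": 0, "end": 12},
    {"entity_group": "nam_loc_gpe_city", "word": "Wrocław",
     "score": 0.42, "start": 22, "end": 29},
]

def confident_pairs(entities, min_score=0.5):
    """Keep predictions above a score threshold as (text, label) pairs."""
    return [(e["word"], e["entity_group"])
            for e in entities if e["score"] >= min_score]

print(confident_pairs(sample_entities))  # drops the low-score Wrocław entity
```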

- ### Developing
- 
- The model pipeline consists of two steps, plus an optional third:
- 
- - Data processing
- - Training
- - (optional) Share the model to the Hugging Face Hub
- 
- #### Config
- 
- This project uses Hydra for configuration. Every configuration used in this module
- is placed in `.yaml` files in the `config` directory.
- 
- This directory has the following structure:
- 
- - prepare_data.yaml - main configuration for the data processing stage
- - train.yaml - main configuration for the training stage
- - share_mode.yaml - main configuration for sharing the model to the Hugging Face Hub
- - callbacks - contains callbacks for the pytorch_lightning trainer
-   - default.yaml
-   - early_stopping.yaml
-   - learning_rate_monitor.yaml
-   - model_checkpoint.yaml
-   - rich_progress_bar.yaml
- - datamodule - contains the pytorch_lightning datamodule configuration
-   - pdn.yaml
- - experiment - contains all the configurations of executed experiments
- - hydra - hydra configuration files
- - loggers - contains loggers for the trainer
-   - csv.yaml
-   - many_loggers.yaml
-   - tensorboards.yaml
-   - wandb.yaml
- - model - contains model architecture hyperparameters
-   - default.yaml
-   - distiluse.yaml
-   - custom_classification_head.yaml
-   - multilabel.yaml
- - paths - contains paths for IO
- - prepare_data - contains configuration for the data processing stage
-   - cen_n82
-   - default
- - trainer - contains trainer configurations
-   - default.yaml
-   - cpu.yaml
-   - gpu.yaml
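
The removed Config section above relies on Hydra to compose these `.yaml` groups into one run configuration. As a rough illustration of that composition — plain dicts standing in for Hydra's actual config objects; only the group names come from the tree above, all values are invented:

```python
# Plain-dict stand-in for Hydra-style composition: selecting an
# override group (e.g. `trainer=gpu` on the command line) replaces
# fields from the group's default config. Values are invented.
defaults = {
    "trainer": {"accelerator": "cpu", "max_epochs": 10},
    "model": {"name": "herbert", "lr": 5e-5},
}

def compose(base, overrides):
    """Shallow-merge override groups onto a copy of the base config."""
    cfg = {group: dict(opts) for group, opts in base.items()}
    for group, opts in overrides.items():
        cfg.setdefault(group, {}).update(opts)
    return cfg

cfg = compose(defaults, {"trainer": {"accelerator": "gpu"}})
print(cfg["trainer"])  # accelerator replaced, max_epochs kept
```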

- #### Training
- 
- 1. Install the requirements with poetry:
- 
- ```
- poetry install
- ```
- 
- 2. Use the poetry environment in the next steps:
- 
- ```
- poetry shell
- ```
- 
- or
- 
- ```
- poetry run <command>
- ```
- 
- 3. Prepare the dataset:
- 
- ```
- python3 src/prepare_data.py experiment=<experiment-name>
- ```
- 
- 4. Train the model:
- 
- ```
- CUDA_VISIBLE_DEVICES=<device-id> python3 src/train.py experiment=<experiment-name>
- ```
- 
- 5. (optional) Share the model to the Hugging Face Hub:
- 
- ```
- python3 src/share_model.py
- ```

## Evaluation

Runs trained on `cen_n82` and `kpwr_n82`:
@@ -142,100 +66,11 @@ Runs trained on `cen_n82` and `kpwr_n82`:
|distiluse| 0.53 | 0.61 | 0.95 | 0.55 | 0.54 |
| herbert | 0.68 | 0.78 | 0.97 | 0.7 | 0.69 |
 
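The `test/f1`, `test/precision`, and `test/recall` columns in these tables are related through the harmonic mean. A small helper, not part of the FastPDN codebase — note the reported values are aggregated over many classes, so the table rows need not reproduce this formula exactly:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Equal precision and recall give the same F1:
print(round(f1_score(0.5, 0.5), 2))  # 0.5
```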
- Runs trained and validated only on `cen_n82`:
- | name |test/f1|test/pdn2_f1|test/acc|test/precision|test/recall|
- |----------------|-------|------------|--------|--------------|-----------|
- | distiluse_cen | 0.58 | 0.7 | 0.96 | 0.6 | 0.59 |
- |herbert_cen_bs32| 0.71 | 0.84 | 0.97 | 0.72 | 0.72 |
- | herbert_cen | 0.72 | 0.84 | 0.97 | 0.73 | 0.73 |
- 
- Detailed results for `herbert`:
- | tag | f1 |precision|recall|support|
- |-------------------------|----|---------|------|-------|
- | nam_eve_human_cultural |0.65| 0.53 | 0.83 | 88 |
- | nam_pro_title_document |0.87| 0.82 | 0.92 | 50 |
- | nam_loc_gpe_country |0.82| 0.76 | 0.9 | 258 |
- | nam_oth_www |0.71| 0.85 | 0.61 | 18 |
- | nam_liv_person |0.94| 0.89 | 1.0 | 8 |
- | nam_adj_country |0.44| 0.42 | 0.46 | 94 |
- | nam_org_institution |0.15| 0.16 | 0.14 | 22 |
- | nam_loc_land_continent | 0.5| 0.57 | 0.44 | 9 |
- | nam_org_organization |0.64| 0.59 | 0.71 | 58 |
- | nam_liv_god |0.13| 0.09 | 0.25 | 4 |
- | nam_loc_gpe_city |0.56| 0.51 | 0.62 | 87 |
- | nam_org_company | 0.0| 0.0 | 0.0 | 4 |
- | nam_oth_currency |0.71| 0.86 | 0.6 | 10 |
- | nam_org_group_team |0.87| 0.79 | 0.96 | 106 |
- | nam_fac_road |0.67| 0.67 | 0.67 | 6 |
- | nam_fac_park |0.39| 0.7 | 0.27 | 26 |
- | nam_pro_title_tv |0.17| 1.0 | 0.09 | 11 |
- | nam_loc_gpe_admin3 |0.91| 0.97 | 0.86 | 35 |
- | nam_adj |0.47| 0.5 | 0.44 | 9 |
- | nam_loc_gpe_admin1 |0.92| 0.91 | 0.93 | 1146 |
- | nam_oth_tech | 0.0| 0.0 | 0.0 | 4 |
- | nam_pro_brand |0.93| 0.88 | 1.0 | 14 |
- | nam_fac_goe | 0.1| 0.07 | 0.14 | 7 |
- | nam_eve_human |0.76| 0.73 | 0.78 | 74 |
- | nam_pro_vehicle |0.81| 0.79 | 0.83 | 36 |
- | nam_oth | 0.8| 0.82 | 0.79 | 47 |
- | nam_org_nation |0.85| 0.87 | 0.84 | 516 |
- | nam_pro_media_periodic |0.95| 0.94 | 0.96 | 603 |
- | nam_adj_city |0.43| 0.39 | 0.47 | 19 |
- | nam_oth_position |0.56| 0.54 | 0.58 | 26 |
- | nam_pro_title |0.63| 0.68 | 0.59 | 22 |
- | nam_pro_media_tv |0.29| 0.2 | 0.5 | 2 |
- | nam_fac_system |0.29| 0.2 | 0.5 | 2 |
- | nam_eve_human_holiday | 1.0| 1.0 | 1.0 | 2 |
- | nam_loc_gpe_admin2 |0.83| 0.91 | 0.76 | 51 |
- | nam_adj_person |0.86| 0.75 | 1.0 | 3 |
- | nam_pro_software |0.67| 1.0 | 0.5 | 2 |
- | nam_num_house |0.88| 0.9 | 0.86 | 43 |
- | nam_pro_media_web |0.32| 0.43 | 0.25 | 12 |
- | nam_org_group | 0.5| 0.45 | 0.56 | 9 |
- | nam_loc_hydronym_river |0.67| 0.61 | 0.74 | 19 |
- | nam_liv_animal |0.88| 0.79 | 1.0 | 11 |
- | nam_pro_award | 0.8| 1.0 | 0.67 | 3 |
- | nam_pro |0.82| 0.8 | 0.83 | 243 |
- | nam_org_political_party |0.34| 0.38 | 0.32 | 19 |
- | nam_eve_human_sport |0.65| 0.73 | 0.58 | 19 |
- | nam_pro_title_book |0.94| 0.93 | 0.95 | 149 |
- | nam_org_group_band |0.74| 0.73 | 0.75 | 359 |
- | nam_oth_data_format |0.82| 0.88 | 0.76 | 88 |
- | nam_loc_astronomical |0.75| 0.72 | 0.79 | 341 |
- | nam_loc_hydronym_sea | 0.4| 1.0 | 0.25 | 4 |
- | nam_loc_land_mountain |0.95| 0.96 | 0.95 | 74 |
- | nam_loc_land_island |0.55| 0.52 | 0.59 | 46 |
- | nam_num_phone |0.91| 0.91 | 0.91 | 137 |
- | nam_pro_model_car |0.56| 0.64 | 0.5 | 14 |
- | nam_loc_land_region |0.52| 0.5 | 0.55 | 11 |
- | nam_liv_habitant |0.38| 0.29 | 0.54 | 13 |
- | nam_eve |0.47| 0.38 | 0.61 | 85 |
- | nam_loc_historical_region|0.44| 0.8 | 0.31 | 26 |
- | nam_fac_bridge |0.33| 0.26 | 0.46 | 24 |
- | nam_oth_license |0.65| 0.74 | 0.58 | 24 |
- | nam_pro_media |0.33| 0.32 | 0.35 | 52 |
- | nam_loc_gpe_subdivision | 0.0| 0.0 | 0.0 | 9 |
- | nam_loc_gpe_district |0.84| 0.86 | 0.81 | 108 |
- | nam_loc |0.67| 0.6 | 0.75 | 4 |
- | nam_pro_software_game |0.75| 0.61 | 0.95 | 20 |
- | nam_pro_title_album | 0.6| 0.56 | 0.65 | 52 |
- | nam_loc_country_region |0.81| 0.74 | 0.88 | 26 |
- | nam_pro_title_song |0.52| 0.6 | 0.45 | 111 |
- | nam_org_organization_sub| 0.0| 0.0 | 0.0 | 3 |
- | nam_loc_land | 0.4| 0.31 | 0.56 | 36 |
- | nam_fac_square | 0.5| 0.6 | 0.43 | 7 |
- | nam_loc_hydronym |0.67| 0.56 | 0.82 | 11 |
- | nam_loc_hydronym_lake |0.51| 0.44 | 0.61 | 96 |
- | nam_fac_goe_stop |0.35| 0.3 | 0.43 | 7 |
- | nam_pro_media_radio | 0.0| 0.0 | 0.0 | 2 |
- | nam_pro_title_treaty | 0.3| 0.56 | 0.21 | 24 |
- | nam_loc_hydronym_ocean |0.35| 0.38 | 0.33 | 33 |
- 
- To see all the experiments and graphs, head over to wandb: https://wandb.ai/clarin-pl/FastPDN

## Authors

- Grupa Wieszcze CLARIN-PL

## Contact


# FastPDN

+ FastPolDeepNer is a model for Named Entity Recognition, designed for easy use, training and configuration. The forerunner of this project is [PolDeepNer2](https://gitlab.clarin-pl.eu/information-extraction/poldeepner2). The model implements a pipeline consisting of data processing and training using: hydra, pytorch, pytorch-lightning, transformers.
+ 
+ Source code: https://gitlab.clarin-pl.eu/grupa-wieszcz/ner/fast-pdn

## How to use

+ Here is how to use this model to get Named Entities in text:

```python
from transformers import pipeline
@@ -49,91 +51,13 @@ encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

+ ## Training data
+ The FastPDN model was trained on the kpwr and cen datasets (82-class versions). Annotation guidelines are specified [here](https://clarin-pl.eu/dspace/bitstream/handle/11321/294/WytyczneKPWr-jednostkiidentyfikacyjne.pdf).

+ ## Pretraining
+ FastPDN models were fine-tuned from the following pretrained models:
+ - [herbert-base-cased](https://huggingface.co/allegro/herbert-base-cased)
+ - [distiluse-base-multilingual-cased-v1](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1)

## Evaluation

Runs trained on `cen_n82` and `kpwr_n82`:

|distiluse| 0.53 | 0.61 | 0.95 | 0.55 | 0.54 |
| herbert | 0.68 | 0.78 | 0.97 | 0.7 | 0.69 |

## Authors

- Grupa Wieszcze CLARIN-PL
+ - Wiktor Walentynowicz

## Contact