MassMin committed (verified) · Commit ee829a1 · Parent(s): 4a5c6ae

Update README.md

Files changed (1): README.md (+44 −151)

README.md (updated):
---

# XLM-RoBERTa Token Classification for Named Entity Recognition (NER)

## Model Details

### Model Description

This model is a fine-tuned version of XLM-RoBERTa (xlm-roberta-base) for Named Entity Recognition (NER). It was trained on the German-language portion of the PAN-X subset of the XTREME dataset. The model identifies the following entity types:

- PER: person names
- ORG: organization names
- LOC: location names
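For reference, PAN-X annotates these types with IOB2 tags. A minimal sketch of the label mapping, assuming the standard seven-tag WikiANN scheme:

```python
# IOB2 label set for PAN-X/WikiANN-style NER (assumed; verify against the dataset features).
tags = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
index2tag = dict(enumerate(tags))
tag2index = {tag: idx for idx, tag in index2tag.items()}
```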
## Uses

This model is suited to multilingual NER tasks, especially where person, organization, and location names must be extracted and classified in text across different languages.

Applications:

- Information extraction
- Multilingual NER tasks
- Automated text analysis for businesses
## Training Details

- Base model: xlm-roberta-base
- Training dataset: the PAN-X subset of the XTREME dataset, which provides labeled NER data for multiple languages.
- Training framework: the Hugging Face transformers library with a PyTorch backend.
- Data preprocessing: tokenization with the XLM-RoBERTa tokenizer, taking care to align entity labels with subword tokens (see the sketch below).
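The subword-alignment step mentioned in the last bullet typically relies on the fast tokenizer's word-to-token mapping. A minimal sketch, assuming PAN-X-style `tokens` and `ner_tags` columns; this is illustrative, not necessarily the author's exact code:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def tokenize_and_align_labels(batch):
    # Tokenize pre-split words so each token can be traced back to its source word.
    tokenized = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, word_tags in enumerate(batch["ner_tags"]):
        previous_word = None
        labels = []
        for word_id in tokenized.word_ids(batch_index=i):
            if word_id is None or word_id == previous_word:
                labels.append(-100)  # special tokens / subword continuations: ignored by the loss
            else:
                labels.append(word_tags[word_id])
            previous_word = word_id
        all_labels.append(labels)
    tokenized["labels"] = all_labels
    return tokenized
```

Applied with `panx_de.map(tokenize_and_align_labels, batched=True)` (the dataset variable name is hypothetical), this produces the `labels` column the Trainer expects.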
### Training Procedure

A brief overview of the training procedure for the XLM-RoBERTa NER model:

1. Set up the environment:
   - Clone the repository and set up dependencies.
   - Import the necessary libraries and modules.
2. Load the data:
   - Load the PAN-X subset from the XTREME dataset.
   - Shuffle and sample data subsets for training and evaluation.
3. Prepare the data:
   - Convert the raw dataset into a format suitable for token classification.
   - Define a mapping for entity tags and apply tokenization.
   - Align NER tags with the tokenized inputs (using the alignment helper sketched above).
4. Define the model:
   - Initialize the XLM-RoBERTa model for token classification.
   - Configure the model with the number of labels in the dataset.
5. Set up the training arguments:
   - Define hyperparameters such as the learning rate, batch size, number of epochs, and evaluation strategy.
   - Configure logging and checkpointing.
6. Initialize the Trainer:
   - Create a Trainer instance with the model, training arguments, datasets, and data collator (see the sketch after this list).
   - Specify the evaluation metrics to monitor.
7. Train the model:
   - Start the training process using the Trainer and monitor progress and metrics.
8. Evaluate:
   - Evaluate the model on the validation set and compute metrics such as the F1 score.
9. Save and push the model:
   - Save the fine-tuned model locally or push it to the Hugging Face Hub for sharing and further use.
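Steps 4–7 correspond to the standard transformers training loop. A minimal sketch, reusing `tags`, `index2tag`, `tag2index`, and `tokenizer` from the snippets above; the tokenized splits (`panx_encoded`) are a hypothetical name, and the hyperparameter values are illustrative, since the card does not state the ones actually used:

```python
from transformers import (AutoModelForTokenClassification,
                          DataCollatorForTokenClassification,
                          Trainer, TrainingArguments)

# Token-classification head sized to the PAN-X label set.
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=len(tags),
    id2label=index2tag,
    label2id=tag2index,
)

# Hyperparameter values are illustrative, not the ones actually used.
args = TrainingArguments(
    output_dir="xlm-roberta-panx-de",
    eval_strategy="epoch",   # `evaluation_strategy` in older transformers versions
    save_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    num_train_epochs=3,
    logging_steps=100,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=panx_encoded["train"],       # tokenized splits from the alignment step
    eval_dataset=panx_encoded["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    compute_metrics=compute_metrics,           # F1 hook, sketched under "Training Hyperparameters"
)

trainer.train()
trainer.push_to_hub(commit_message="Training complete")  # optional: share on the Hub
```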
 
 
 
 
#### Training Hyperparameters

The model's performance is evaluated using the F1 score for NER. Predictions are aligned with the gold-standard labels, ignoring sub-token predictions where appropriate.
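This alignment is typically implemented in the metrics hook passed to the Trainer. A sketch using seqeval (an assumed choice; the card does not name a metric library), with `index2tag` from the label-mapping snippet:

```python
import numpy as np
from seqeval.metrics import f1_score

def align_predictions(predictions, label_ids):
    # Turn logits into tag sequences, skipping positions labelled -100
    # (special tokens and subword continuations).
    preds = np.argmax(predictions, axis=2)
    y_true, y_pred = [], []
    for pred_row, label_row in zip(preds, label_ids):
        true_seq, pred_seq = [], []
        for p, l in zip(pred_row, label_row):
            if l != -100:
                true_seq.append(index2tag[int(l)])
                pred_seq.append(index2tag[int(p)])
        y_true.append(true_seq)
        y_pred.append(pred_seq)
    return y_true, y_pred

def compute_metrics(eval_pred):
    y_true, y_pred = align_predictions(eval_pred.predictions, eval_pred.label_ids)
    return {"f1": f1_score(y_true, y_pred)}
```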
## Evaluation

A minimal, self-contained version of the usage snippet (the pipeline construction and the example sentence are illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
import pandas as pd

model_checkpoint = "MassMin/Multilingual-NER-tagging"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the fine-tuned checkpoint and build a token-classification pipeline.
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForTokenClassification.from_pretrained(model_checkpoint)
tagger = pipeline("token-classification", model=model, tokenizer=tokenizer, device=device)

# Tag a German example sentence; each row holds a predicted tag, its
# confidence score, and the token it covers.
result = tagger("Jeff Dean arbeitet bei Google in Kalifornien.")
print(pd.DataFrame(result))
```
#### Testing Data

A tagged German example, with tokens and their aligned NER tags:
|        | 0     | 1          | 2  | 3   | 4        | 5     | 6  | 7   | 8          | 9            | 10      | 11 |
|--------|-------|------------|----|-----|----------|-------|----|-----|------------|--------------|---------|----|
| Tokens | 2.000 | Einwohnern | an | der | Danziger | Bucht | in | der | polnischen | Woiwodschaft | Pommern | .  |
| Tags   | O     | O          | O  | O   | B-LOC    | I-LOC | O  | O   | B-LOC      | B-LOC        | I-LOC   | O  |
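A word-level view like this can be reproduced by mapping each word to the prediction of its first sub-token. A sketch, assuming `tokenizer` and `model` are loaded as in the Evaluation section:

```python
import pandas as pd
import torch

words = ("2.000 Einwohnern an der Danziger Bucht in der "
         "polnischen Woiwodschaft Pommern .").split()

inputs = tokenizer(words, is_split_into_words=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    pred_ids = model(**inputs).logits.argmax(dim=-1)[0].tolist()

# Keep the first sub-token's prediction for each word.
word_tags, seen = [], set()
for position, word_id in enumerate(inputs.word_ids()):
    if word_id is not None and word_id not in seen:
        seen.add(word_id)
        word_tags.append(model.config.id2label[pred_ids[position]])

print(pd.DataFrame([words, word_tags], index=["Tokens", "Tags"]))
```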