---
license: mit
language:
- multilingual
base_model:
- FacebookAI/xlm-roberta-large
pipeline_tag: token-classification
---

# Multilingual Identification of English Code-Switching

AnE-NER (Any-English Code-Switching Named Entity Recognition) is a token-level model for detecting named entities in code-switching text. It classifies each word into one of two classes: `I` (inside a named entity) and `O` (outside a named entity). The model performs strongly on languages both seen and unseen in the training data.

# Usage

You can use AnE-NER with Hugging Face's `pipeline` or `AutoModelForTokenClassification`.

Let's try the following example (taken from [this paper](https://aclanthology.org/W18-3213/)):

```python
text = "My Facebook, Ig & Twitter is hellaa dead yall Jk soy yo que has no life!"
```

## Pipeline

```python
from transformers import pipeline

classifier = pipeline("token-classification", model="igorsterner/AnE-NER", aggregation_strategy="simple")
result = classifier(text)
```

which returns

```
[{'entity_group': 'I',
  'score': 0.95482016,
  'word': 'Facebook',
  'start': 3,
  'end': 11},
 {'entity_group': 'I',
  'score': 0.9638739,
  'word': 'Ig',
  'start': 13,
  'end': 15},
 {'entity_group': 'I',
  'score': 0.98207414,
  'word': 'Twitter',
  'start': 18,
  'end': 25}]
```
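
Since `aggregation_strategy="simple"` merges subword predictions into entity spans with character offsets, you can, for example, recover the predicted entity strings directly from the input (a small usage sketch; `text` and `result` are from the snippets above):

```python
# Slice the input string with the returned character offsets
entities = [text[span["start"]:span["end"]] for span in result]
print(entities)  # ['Facebook', 'Ig', 'Twitter']
```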

## Advanced

If your input is already word-tokenized and you want the corresponding word-level NER labels, you can try the following strategy:

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

ner_model_name = "igorsterner/AnE-NER"
ner_tokenizer = AutoTokenizer.from_pretrained(ner_model_name)
ner_model = AutoModelForTokenClassification.from_pretrained(ner_model_name)

word_tokens = ['My', 'Facebook', ',', 'Ig', '&', 'Twitter', 'is', 'hellaa', 'dead', 'yall', 'Jk', 'soy', 'yo', 'que', 'has', 'no', 'life', '!']

subword_inputs = ner_tokenizer(
    word_tokens, truncation=True, is_split_into_words=True, return_tensors="pt"
)

# Map each subword position back to the word it came from
subword2word = subword_inputs.word_ids(batch_index=0)
logits = ner_model(**subword_inputs).logits
predictions = torch.argmax(logits, dim=2)

# Collect the predicted labels of all subwords of each word
predicted_subword_labels = [ner_model.config.id2label[t.item()] for t in predictions[0]]
predicted_word_labels = [[] for _ in range(len(word_tokens))]

for idx, predicted_subword in enumerate(predicted_subword_labels):
    if subword2word[idx] is not None:
        predicted_word_labels[subword2word[idx]].append(predicted_subword)

# Majority vote over the subword labels; fall back to "O" for any
# word that received no subwords
def most_frequent(lst):
    return max(set(lst), key=lst.count) if lst else "O"

predicted_word_labels = [most_frequent(sublist) for sublist in predicted_word_labels]

for token, label in zip(word_tokens, predicted_word_labels):
    print(f"{token}: {label}")
```

which returns

```
My: O
Facebook: I
,: O
Ig: I
&: O
Twitter: I
is: O
hellaa: O
dead: O
yall: O
Jk: O
soy: O
yo: O
que: O
has: O
no: O
life: O
!: O
```

# Word-level language labels

If you also want the language of each word, you can additionally run [AnE-LID](https://huggingface.co/igorsterner/ane-lid). Check out my evaluation scripts for examples of using both at the same time, as we did in the paper: [https://github.com/igorsterner/AnE/tree/main/eval](https://github.com/igorsterner/AnE/tree/main/eval).
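
The snippet below is a minimal sketch of one way to combine the two models, reusing the word-level majority-vote strategy from the Advanced section. The `predict_word_labels` helper and the `NE.` prefixing of entity words are illustrative assumptions based on the example output below, not the exact code used in the paper (see the evaluation scripts for that).

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

def predict_word_labels(model_name, word_tokens):
    # Word-level labels via the same subword majority vote as above
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForTokenClassification.from_pretrained(model_name)
    subword_inputs = tokenizer(
        word_tokens, truncation=True, is_split_into_words=True, return_tensors="pt"
    )
    subword2word = subword_inputs.word_ids(batch_index=0)
    predictions = torch.argmax(model(**subword_inputs).logits, dim=2)
    word_labels = [[] for _ in word_tokens]
    for idx, pred in enumerate(predictions[0]):
        if subword2word[idx] is not None:
            word_labels[subword2word[idx]].append(model.config.id2label[pred.item()])
    # Majority vote; the fallback for words with no subwords is arbitrary
    return [max(set(lst), key=lst.count) if lst else "O" for lst in word_labels]

word_tokens = ['My', 'Facebook', ',', 'Ig', '&', 'Twitter', 'is', 'hellaa', 'dead', 'yall', 'Jk', 'soy', 'yo', 'que', 'has', 'no', 'life', '!']

ner_labels = predict_word_labels("igorsterner/AnE-NER", word_tokens)
lid_labels = predict_word_labels("igorsterner/AnE-LID", word_tokens)

# Mark words inside named entities with a "NE." prefix (assumed scheme,
# matching the example output below)
combined = [
    f"NE.{lid}" if ner == "I" else lid
    for ner, lid in zip(ner_labels, lid_labels)
]

for token, label in zip(word_tokens, combined):
    print(f"{token}: {label}")
```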

For the above example, you can get:

```
My: English
Facebook: NE.English
,: Other
Ig: NE.English
&: Other
Twitter: NE.English
is: English
hellaa: English
dead: English
yall: English
Jk: English
soy: notEnglish
yo: notEnglish
que: notEnglish
has: English
no: English
life: English
!: Other
```
138
+
139
+ # Citation
140
+
141
+ Please consider citing my work if it helped you
142
+
143
+ ```
144
+ @inproceedings{sterner-2024-multilingual,
145
+ title = "Multilingual Identification of {E}nglish Code-Switching",
146
+ author = "Sterner, Igor",
147
+ editor = {Scherrer, Yves and
148
+ Jauhiainen, Tommi and
149
+ Ljube{\v{s}}i{\'c}, Nikola and
150
+ Zampieri, Marcos and
151
+ Nakov, Preslav and
152
+ Tiedemann, J{\"o}rg},
153
+ booktitle = "Proceedings of the Eleventh Workshop on NLP for Similar Languages, Varieties, and Dialects (VarDial 2024)",
154
+ month = jun,
155
+ year = "2024",
156
+ address = "Mexico City, Mexico",
157
+ publisher = "Association for Computational Linguistics",
158
+ url = "https://aclanthology.org/2024.vardial-1.14",
159
+ doi = "10.18653/v1/2024.vardial-1.14",
160
+ pages = "163--173",
161
+ abstract = "Code-switching research depends on fine-grained language identification. In this work, we study existing corpora used to train token-level language identification systems. We aggregate these corpora with a consistent labelling scheme and train a system to identify English code-switching in multilingual text. We show that the system identifies code-switching in unseen language pairs with absolute measure 2.3-4.6{\%} better than language-pair-specific SoTA. We also analyse the correlation between typological similarity of the languages and difficulty in recognizing code-switching.",
162
+ }
163
+ ```