---
license: mit
language:
  - multilingual
base_model:
  - FacebookAI/xlm-roberta-large
pipeline_tag: token-classification
---

# Multilingual Identification of English Code-Switching

AnE-NER (Any-English Code-Switching Named Entity Recognition) is a token-level model for detecting named entities in code-switched text. It classifies each word into one of two classes: `I` (inside a named entity) and `O` (outside a named entity). The model performs strongly on languages both seen and unseen in its training data.
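If you want to confirm the label inventory programmatically, it is exposed in the model config (a quick check; this assumes only the two classes described above):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("igorsterner/AnE-NER")
print(config.id2label)  # expect a mapping over the two classes 'I' and 'O'
```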

## Usage

You can use AnE-NER with Hugging Face's `pipeline` or `AutoModelForTokenClassification`.

Let's try the following example (taken from this paper):

```python
text = "My Facebook, Ig & Twitter is hellaa dead yall Jk soy yo que has no life!"
```

### Pipeline

```python
from transformers import pipeline

classifier = pipeline(
    "token-classification", model="igorsterner/AnE-NER", aggregation_strategy="simple"
)
result = classifier(text)
```

which returns

```python
[{'entity_group': 'I',
  'score': 0.95482016,
  'word': 'Facebook',
  'start': 3,
  'end': 11},
 {'entity_group': 'I',
  'score': 0.9638739,
  'word': 'Ig',
  'start': 13,
  'end': 15},
 {'entity_group': 'I',
  'score': 0.98207414,
  'word': 'Twitter',
  'start': 18,
  'end': 25}]
```
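Because the model has a single entity class, everything the pipeline returns is a detected named entity (the `O` tokens are filtered out by default), so you can collect the entity strings directly. A small usage sketch based on the output above:

```python
entities = [r["word"] for r in result]
print(entities)  # ['Facebook', 'Ig', 'Twitter']
```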

### Advanced

If your input is already word-tokenized and you want the corresponding word-level NER labels, you can use the following strategy:

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

ner_model_name = "igorsterner/AnE-NER"
ner_tokenizer = AutoTokenizer.from_pretrained(ner_model_name)
ner_model = AutoModelForTokenClassification.from_pretrained(ner_model_name)

word_tokens = ['My', 'Facebook', ',', 'Ig', '&', 'Twitter', 'is', 'hellaa', 'dead', 'yall', 'Jk', 'soy', 'yo', 'que', 'has', 'no', 'life', '!']

subword_inputs = ner_tokenizer(
    word_tokens, truncation=True, is_split_into_words=True, return_tensors="pt"
)

# Map each subword position back to the index of the word it came from
subword2word = subword_inputs.word_ids(batch_index=0)

with torch.no_grad():
    logits = ner_model(**subword_inputs).logits
predictions = torch.argmax(logits, dim=2)

predicted_subword_labels = [ner_model.config.id2label[t.item()] for t in predictions[0]]

# Gather the subword predictions belonging to each word
predicted_word_labels = [[] for _ in range(len(word_tokens))]
for idx, predicted_subword in enumerate(predicted_subword_labels):
    if subword2word[idx] is not None:  # skip special tokens such as <s> and </s>
        predicted_word_labels[subword2word[idx]].append(predicted_subword)

# Resolve each word's label by majority vote over its subwords,
# falling back to "O" if a word received no subword predictions
def most_frequent(lst):
    return max(set(lst), key=lst.count) if lst else "O"

predicted_word_labels = [most_frequent(sublist) for sublist in predicted_word_labels]

for token, label in zip(word_tokens, predicted_word_labels):
    print(f"{token}: {label}")
```

which returns

```text
My: O
Facebook: I
,: O
Ig: I
&: O
Twitter: I
is: O
hellaa: O
dead: O
yall: O
Jk: O
soy: O
yo: O
que: O
has: O
no: O
life: O
!: O
```

## Word-level language labels

If you also want the language of each word, you can additionally run AnE-LID. Check out my [evaluation scripts](https://github.com/igorsterner/AnE/tree/main/eval) for examples of using both models at the same time, as we did in the paper.
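Below is a minimal sketch of one way to run the two models together, reusing the word-level majority-vote strategy from above. The AnE-LID label names and the `NE.` prefixing rule are assumptions for illustration, not necessarily the paper's exact procedure; see the evaluation scripts linked above for that:

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

def word_labels(model_name, word_tokens):
    # Word-level labels via the same subword majority vote as above
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForTokenClassification.from_pretrained(model_name)
    inputs = tokenizer(
        word_tokens, truncation=True, is_split_into_words=True, return_tensors="pt"
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    predictions = torch.argmax(logits, dim=2)[0]
    votes = [[] for _ in word_tokens]
    for idx, word_idx in enumerate(inputs.word_ids(batch_index=0)):
        if word_idx is not None:  # skip special tokens
            votes[word_idx].append(model.config.id2label[predictions[idx].item()])
    return [max(set(v), key=v.count) if v else None for v in votes]

word_tokens = ['My', 'Facebook', ',', 'Ig', '&', 'Twitter', 'is', 'hellaa', 'dead', 'yall', 'Jk', 'soy', 'yo', 'que', 'has', 'no', 'life', '!']

ner_labels = word_labels("igorsterner/AnE-NER", word_tokens)
lid_labels = word_labels("igorsterner/AnE-LID", word_tokens)

# Assumed combination rule: a word inside a named entity (NER label "I")
# gets an "NE." prefix on its language label
combined = [f"NE.{lid}" if ner == "I" else lid for ner, lid in zip(ner_labels, lid_labels)]

for token, label in zip(word_tokens, combined):
    print(f"{token}: {label}")
```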

For the above example, you can get:

```text
My: English
Facebook: NE.English
,: Other
Ig: NE.English
&: Other
Twitter: NE.English
is: English
hellaa: English
dead: English
yall: English
Jk: English
soy: notEnglish
yo: notEnglish
que: notEnglish
has: English
no: English
life: English
!: Other
```

## Citation

Please consider citing my work if it helped you:

```bibtex
@inproceedings{sterner-2024-multilingual,
    title = "Multilingual Identification of {E}nglish Code-Switching",
    author = "Sterner, Igor",
    editor = {Scherrer, Yves  and
      Jauhiainen, Tommi  and
      Ljube{\v{s}}i{\'c}, Nikola  and
      Zampieri, Marcos  and
      Nakov, Preslav  and
      Tiedemann, J{\"o}rg},
    booktitle = "Proceedings of the Eleventh Workshop on NLP for Similar Languages, Varieties, and Dialects (VarDial 2024)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.vardial-1.14",
    doi = "10.18653/v1/2024.vardial-1.14",
    pages = "163--173",
    abstract = "Code-switching research depends on fine-grained language identification. In this work, we study existing corpora used to train token-level language identification systems. We aggregate these corpora with a consistent labelling scheme and train a system to identify English code-switching in multilingual text. We show that the system identifies code-switching in unseen language pairs with absolute measure 2.3-4.6{\%} better than language-pair-specific SoTA. We also analyse the correlation between typological similarity of the languages and difficulty in recognizing code-switching.",
}
```