---
license: apache-2.0
base_model:
- microsoft/deberta-v3-large
library_name: transformers
tags:
- relation extraction
- nlp
model-index:
- name: iter-genia-deberta-large
  results:
  - task:
      type: relation-extraction
    dataset:
      name: genia
      type: genia
    metrics:
    - name: F1
      type: f1
      value: 80.821
---

# ITER: Iterative Transformer-based Entity Recognition and Relation Extraction

This model checkpoint is part of the collection of models published alongside our paper ITER,
[accepted at EMNLP 2024](https://aclanthology.org/2024.findings-emnlp.655/).<br>
To ease reproducibility and enable open research, our source code has been published on [GitHub](https://github.com/fleonce/iter).

This model achieved an F1 score of `80.821` on the `genia` dataset.

### Using ITER in your code

First, install ITER in your preferred environment:

```shell
pip install git+https://github.com/fleonce/iter
```
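
For instance, to keep its dependencies isolated, you can install it into a fresh virtual environment first (a minimal sketch using the standard `venv` module; the environment name `.venv` is arbitrary, and conda or any other environment manager works just as well):

```shell
# create and activate an isolated environment
python3 -m venv .venv
source .venv/bin/activate

# install ITER and its dependencies directly from GitHub
pip install git+https://github.com/fleonce/iter
```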

To use our model, refer to the following code:

```python
from iter import ITER

# load the checkpoint from the Hugging Face Hub together with its paired tokenizer
model = ITER.from_pretrained("fleonce/iter-genia-deberta-large")
tokenizer = model.tokenizer

# tokenize the input text
encodings = tokenizer(
    "An art exhibit at the Hakawati Theatre in Arab east Jerusalem was a series of portraits of Palestinians killed in the rebellion .",
    return_tensors="pt"
)

# extract entities and the relations between them
generation_output = model.generate(
    encodings["input_ids"],
    attention_mask=encodings["attention_mask"],
)

# entities
print(generation_output.entities)

# relations between entities
print(generation_output.links)
```
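
The same pattern should carry over to several sentences at once. The following is a minimal sketch, assuming the bundled tokenizer supports the standard Hugging Face padding API and that `generate` accepts batched inputs; for authoritative usage, refer to the [GitHub repository](https://github.com/fleonce/iter):

```python
from iter import ITER

model = ITER.from_pretrained("fleonce/iter-genia-deberta-large")
tokenizer = model.tokenizer

sentences = [
    "An art exhibit at the Hakawati Theatre in Arab east Jerusalem was a series of portraits of Palestinians killed in the rebellion .",
    "John Wilkes Booth , who assassinated President Lincoln , was an actor .",
]

# pad the batch to a common length so it can be processed in a single call
encodings = tokenizer(sentences, return_tensors="pt", padding=True)

generation_output = model.generate(
    encodings["input_ids"],
    attention_mask=encodings["attention_mask"],
)

print(generation_output.entities)  # entities per sentence (assumed layout)
print(generation_output.links)     # relations per sentence (assumed layout)
```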

### Checkpoints

We publish checkpoints for the models performing best on the following datasets:

- **ACE05**:
  1. [fleonce/iter-ace05-deberta-large](https://huggingface.co/fleonce/iter-ace05-deberta-large)
- **CoNLL04**:
  1. [fleonce/iter-conll04-deberta-large](https://huggingface.co/fleonce/iter-conll04-deberta-large)
- **ADE**:
  1. [fleonce/iter-ade-deberta-large](https://huggingface.co/fleonce/iter-ade-deberta-large)
- **SciERC**:
  1. [fleonce/iter-scierc-deberta-large](https://huggingface.co/fleonce/iter-scierc-deberta-large)
  2. [fleonce/iter-scierc-scideberta-full](https://huggingface.co/fleonce/iter-scierc-scideberta-full)
- **CoNLL03**:
  1. [fleonce/iter-conll03-deberta-large](https://huggingface.co/fleonce/iter-conll03-deberta-large)
- **GENIA**:
  1. [fleonce/iter-genia-deberta-large](https://huggingface.co/fleonce/iter-genia-deberta-large)

### Reproducibility

For each dataset, we selected the best-performing checkpoint out of the 5 training runs we performed.
This model was trained with the following hyperparameters:

- Seed: `2`
- Config: `genia/small_lr_d_ff_150`
- PyTorch `2.3.0` with CUDA `11.8` and precision `torch.float32`
- GPU: `1x NVIDIA H100 SXM 80 GB`

In our reproducibility tests, varying the GPU, the CUDA version, or the training precision resulted in slightly different final scores.
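
If you want to check how closely your setup matches the one above, the relevant versions can be queried directly from PyTorch (a small sketch using standard `torch` attributes; not part of the original instructions):

```python
import torch

# library version and the CUDA toolkit it was built against
print("torch:", torch.__version__)
print("cuda:", torch.version.cuda)

# the visible GPU, if any
if torch.cuda.is_available():
    print("gpu:", torch.cuda.get_device_name(0))

# default floating point precision for newly created tensors
print("dtype:", torch.get_default_dtype())
```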

To train this model yourself, use the following command:

```shell
python3 train.py --dataset genia/small_lr_d_ff_150 --transformer microsoft/deberta-v3-large --seed 2
```
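
To repeat the full selection procedure (5 runs per dataset, keeping the best checkpoint), the same command can be looped over several seeds. This is a minimal sketch; the exact seeds used for the other runs are not documented here, so the range below is only an assumption (this checkpoint corresponds to seed `2`):

```shell
# hypothetical seed range; only seed 2 is documented for this checkpoint
for seed in 1 2 3 4 5; do
  python3 train.py --dataset genia/small_lr_d_ff_150 --transformer microsoft/deberta-v3-large --seed "$seed"
done
```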

### Citation

If you use ITER in your work, please cite our paper:

```bibtex
@inproceedings{citation}
```