metrics:
- name: F1
  type: f1
  value: 80.821
---

This model checkpoint is part of the collection of models published alongside our paper,
[accepted at EMNLP 2024](https://aclanthology.org/2024.findings-emnlp.655/).<br>
To ease reproducibility and enable open research, our source code has been published on [GitHub](https://github.com/fleonce/iter).

This model achieved an F1 score of `80.821` on the `genia` dataset.
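
For context, the F1 value reported above is the usual harmonic mean of precision and recall over predicted entities and relations. A toy computation, with made-up counts chosen purely for illustration:

```python
def f1(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 80 true positives, 19 false positives, 19 false negatives
print(f1(tp=80, fp=19, fn=19))  # ~0.808, i.e. an F1 around 80.8
```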

### Using ITER in your code

First, install ITER from GitHub:

```shell
pip install git+https://github.com/fleonce/iter
```

To use our model, refer to the following code:

```python
from iter import ITER

model = ITER.from_pretrained("fleonce/iter-genia-deberta-large")
tokenizer = model.tokenizer

# model.tokenizer is a regular Hugging Face tokenizer, so the standard call applies
encodings = tokenizer(
    "Your input text here.",
    return_tensors="pt",
)
```
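
Since the tokenizer is a standard Hugging Face tokenizer, batching several sentences works the usual way. A minimal sketch, assuming only standard tokenizer and PyTorch module behaviour (the sentences below are illustrative, not taken from GENIA):

```python
model.eval()  # disable dropout before inference

texts = [
    "Interleukin-2 gene expression requires activation of T cells.",
    "NF-kappa B binds to the IL-2 promoter region.",
]
encodings = tokenizer(
    texts,
    padding=True,        # pad to the longest sentence in the batch
    truncation=True,
    return_tensors="pt",
)
print(encodings["input_ids"].shape)  # (2, max_sequence_length)
```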

We publish checkpoints for the models performing best on the datasets covered in our paper.
For each dataset, we selected the best performing checkpoint out of the 5 training runs we performed.
This model was trained with the following hyperparameters:

- Seed: `2`
- Config: `genia/small_lr_d_ff_150`
- PyTorch `2.3.0` with CUDA `11.8` and precision `torch.float32`
- GPU: `1 NVIDIA H100 SXM 80 GB GPU`
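
Because the published checkpoint is the best of 5 seeded runs, reproducing a run hinges on fixing every random number generator. A minimal sketch of the usual seeding, using standard PyTorch and NumPy calls (the actual setup inside `train.py` may differ):

```python
import random

import numpy as np
import torch

def set_seed(seed: int) -> None:
    """Seed the RNGs that typically affect a PyTorch training run."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(2)  # the seed this checkpoint was trained with
```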

To train this model, refer to the following command:

```shell
python3 train.py --dataset genia/small_lr_d_ff_150 --transformer microsoft/deberta-v3-large --seed 2
```
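
Since this checkpoint was selected as the best of 5 training runs, one way to mimic that procedure is to sweep the seed flag. A hypothetical sketch: the seed range 1-5 is an assumption (only seed `2` is documented here), and the flags are copied verbatim from the command above:

```python
import subprocess

# invoke the published training script once per seed; each run produces its own checkpoint
for seed in range(1, 6):  # assumed seed range, not documented
    subprocess.run(
        [
            "python3", "train.py",
            "--dataset", "genia/small_lr_d_ff_150",
            "--transformer", "microsoft/deberta-v3-large",
            "--seed", str(seed),
        ],
        check=True,
    )
```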

```text
@inproceedings{citation}
```