---
library_name: transformers
language:
- es
base_model:
- google-bert/bert-base-multilingual-cased
license: cc-by-nc-4.0
metrics:
- accuracy
- precision
- recall
- f1
---

# Model Card for bert-base-multilingual-cased-re-ct

This relation extraction model identifies intervention-associated relationships, temporal relations, negation/speculation, and other relations relevant to clinical trials.

The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds):
- Precision: 0.884 (±0.006)
- Recall: 0.874 (±0.003)
- F1: 0.879 (±0.005)
- Accuracy: 0.917 (±0.001)


## Model description

This model adapts the pre-trained model [bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased). 
It is fine-tuned to conduct relation extraction on Spanish texts about clinical trials. 
The model is fine-tuned on the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/).
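
For reference, the model expects pre-tokenized input in which the subject and object entities are wrapped in marker tokens. The snippet below is a minimal, hypothetical illustration; the marker types (`LIV`, presumably living beings, and `CHE`, presumably chemicals/drugs) come from the corpus annotation scheme:

```python
# Hypothetical token list with subject (<S:...>) and object (<O:...>) markers
tokens = [
    "<S:LIV>", "sujetos", "pediátricos", "</S:LIV>",
    "tratados", "con",
    "<O:CHE>", "Adalimumab", "</O:CHE>",
]

# The four marker positions delimit the subject and object spans
subject_span = (tokens.index("<S:LIV>"), tokens.index("</S:LIV>"))
object_span = (tokens.index("<O:CHE>"), tokens.index("</O:CHE>"))
```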

If you use this model, please cite it as follows:

```
@article{campillosetal2025,
        title = {{Benchmarking Transformer Models for Relation Extraction and Concept Normalization in a Clinical Trials Corpus}},
        author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Zakhir-Puig, Sof{\'i}a and Heras-Vicente, J{\'o}nathan},
        journal = {(Under review)},
        year={2025}
}
```

## Intended uses & limitations

**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision.*

This model is intended for general purposes and may have biases and/or other undesirable distortions.

Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence.

The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.

**Disclaimer**: *This tool is under development and must not be used for medical decision making.*

This model is intended for general purposes, and it may have biases and/or other undesirable distortions.

Third parties who deploy or provide systems and/or services using any of these models (or systems based on these models) should bear in mind that it is their responsibility to address and minimize the risks arising from their use. Third parties must, in any circumstance, comply with applicable regulations, including those concerning the use of artificial intelligence.

The owner or creator of the models will in no way be liable for the results arising from the use that third parties make of these models.


## Training and evaluation data

The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/) version 3 (annotated with semantic relationships).
It is a collection of 1,200 texts about clinical trial studies and clinical trial announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trial announcements published in the European Clinical Trials Register and the Repositorio Español de Estudios Clínicos

The CT-EBM-ES resource (version 1) can be cited as follows:

```
@article{campillosetal-midm2021,
        title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
        author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
        journal = {BMC Medical Informatics and Decision Making},
        volume={21},
        number={1},
        pages={1--19},
        year={2021},
        publisher={BioMed Central}
}
```



## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results
- optimizer: AdamW
- weight decay: 1e-2
- lr_scheduler_type: linear
- num_epochs: 5
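
The hyperparameters above can be sketched as the following optimizer/scheduler setup. This is illustrative only: a tiny stand-in model replaces the fine-tuned BERT, and `steps_per_epoch` is a hypothetical value that depends on the dataset size and the batch size of 16.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for the fine-tuned BERT model

# AdamW with the learning rate and weight decay listed above
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=1e-2)

num_epochs = 5
steps_per_epoch = 100  # hypothetical; depends on dataset size and batch size
total_steps = num_epochs * steps_per_epoch

# Linear decay of the learning rate to zero over the course of training
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: max(0.0, 1.0 - step / total_steps))
```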


### Training results (test set; average and standard deviation of 5 rounds with different seeds)

|   Precision    |     Recall     |       F1       |    Accuracy    |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.884 (±0.006) | 0.874 (±0.003) | 0.879 (±0.005) | 0.917 (±0.001) |


**Results per class (test set; best model)**  

|      Class      |   Precision    |     Recall     |       F1       |  Support  |
|:---------------:|:--------------:|:--------------:|:--------------:|:---------:|
| Experiences | 0.96 | 0.98 | 0.97 | 2003  | 
| Has_Age | 0.89 | 0.82 | 0.85 | 152 |
| Has_Dose_or_Strength | 0.82 | 0.80 | 0.81 | 189 |
| Has_Drug_Form | 0.86 | 0.78 | 0.82 | 64 |
| Has_Duration_or_Interval | 0.83 | 0.82 | 0.82 | 365 |
| Has_Frequency | 0.80 | 0.87 | 0.83 | 84 |
| Has_Quantifier_or_Qualifier | 0.92 | 0.88 | 0.90 | 1040 |
| Has_Result_or_Value | 0.94 | 0.91 | 0.92 | 384 |
| Has_Route_or_Mode | 0.85 | 0.89 | 0.87 | 221 |
| Has_Time_Data | 0.87 | 0.85 | 0.86 | 589 |
| Location_of | 0.94 | 0.97 | 0.95 | 1119 |
| Used_for | 0.90 | 0.87 | 0.89 | 731 |
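
As a quick sanity check, the per-class scores above can be aggregated by hand; for instance, the unweighted (macro) average of the per-class F1 values (note this differs from the averaged overall results reported earlier):

```python
# Per-class F1 scores copied from the table above (best model, test set)
f1_per_class = {
    "Experiences": 0.97,
    "Has_Age": 0.85,
    "Has_Dose_or_Strength": 0.81,
    "Has_Drug_Form": 0.82,
    "Has_Duration_or_Interval": 0.82,
    "Has_Frequency": 0.83,
    "Has_Quantifier_or_Qualifier": 0.90,
    "Has_Result_or_Value": 0.92,
    "Has_Route_or_Mode": 0.87,
    "Has_Time_Data": 0.86,
    "Location_of": 0.95,
    "Used_for": 0.89,
}

# Unweighted mean over the 12 classes
macro_f1 = sum(f1_per_class.values()) / len(f1_per_class)
print(round(macro_f1, 3))  # → 0.874
```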

### Usage

To use this model you need the `datasets` library, in addition to `transformers` and `torch`.

```shell
pip install transformers torch datasets
```

Then you can define the necessary functions and classes to load the model. 

```python
from transformers import (
    BertModel, BertPreTrainedModel,
    DataCollatorWithPadding, AutoTokenizer
)
from transformers.modeling_outputs import SequenceClassifierOutput
import torch
import torch.nn as nn
from datasets import Dataset
from torch.utils.data import DataLoader


class BertForRelationExtraction(BertPreTrainedModel):
  def __init__(self, config, num_labels):
    super(BertForRelationExtraction, self).__init__(config)
    self.num_labels = num_labels
    # body
    self.bert = BertModel(config)
    # head
    self.dropout = nn.Dropout(config.hidden_dropout_prob)
    self.layer_norm = nn.LayerNorm(config.hidden_size * 2)
    self.linear = nn.Linear(config.hidden_size * 2, self.num_labels)
    self.init_weights()

  def forward(self, input_ids, token_type_ids, attention_mask,
              span_idxs, labels=None):
    outputs = (
        self.bert(input_ids, token_type_ids=token_type_ids,
                  attention_mask=attention_mask,
                  output_hidden_states=False)
            .last_hidden_state)
            
    sub_maxpool, obj_maxpool = [], []
    for bid in range(outputs.size(0)):
      # span includes entity markers, maxpool across span
      sub_span = torch.max(outputs[bid, span_idxs[bid, 0]:span_idxs[bid, 1]+1, :], 
                           dim=0, keepdim=True).values
      obj_span = torch.max(outputs[bid, span_idxs[bid, 2]:span_idxs[bid, 3]+1, :],
                           dim=0, keepdim=True).values
      sub_maxpool.append(sub_span)
      obj_maxpool.append(obj_span)

    sub_emb = torch.cat(sub_maxpool, dim=0)
    obj_emb = torch.cat(obj_maxpool, dim=0)
    rel_input = torch.cat((sub_emb, obj_emb), dim=-1)

    rel_input = self.layer_norm(rel_input)
    rel_input = self.dropout(rel_input)
    logits = self.linear(rel_input)

    if labels is not None:
      loss_fn = nn.CrossEntropyLoss()
      loss = loss_fn(logits.view(-1, self.num_labels), labels.view(-1))
      return SequenceClassifierOutput(loss=loss, logits=logits)
    else:
      return SequenceClassifierOutput(logits=logits)

id2label = {0: 'Experiences',
 1: 'Has_Age',
 2: 'Has_Dose_or_Strength',
 3: 'Has_Duration_or_Interval',
 4: 'Has_Frequency',
 5: 'Has_Route_or_Mode',
 6: 'Location_of',
 7: 'Used_for'}

def encode_data_inference(token_list, tokenizer):
  tokenized_inputs = tokenizer(token_list,
                               is_split_into_words=True,
                               truncation=True)
  span_idxs = []
  for input_id in tokenized_inputs.input_ids:
    tokens = tokenizer.convert_ids_to_tokens(input_id)
    # Locate the subject (<S:...>) and object (<O:...>) entity markers
    span_idxs.append([
      [idx for idx, token in enumerate(tokens) if token.startswith("<S:")][0],
      [idx for idx, token in enumerate(tokens) if token.startswith("</S:")][0],
      [idx for idx, token in enumerate(tokens) if token.startswith("<O:")][0],
      [idx for idx, token in enumerate(tokens) if token.startswith("</O:")][0]
    ])
  tokenized_inputs["span_idxs"] = span_idxs
  return tokenized_inputs

def predict_example(example, model, tokenizer):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    model.eval()
    collate_fn = DataCollatorWithPadding(tokenizer, padding="longest", return_tensors="pt")

    encoded_data = encode_data_inference(example, tokenizer)
    inference_ds = Dataset.from_dict(encoded_data)
    inference_dl = DataLoader(inference_ds,
                              shuffle=False,
                              batch_size=1,
                              collate_fn=collate_fn)

    predictions = []
    for batch in inference_dl:
        batch = {k: v.to(device) for k, v in batch.items()}
        with torch.no_grad():
            outputs = model(**batch)
        # Accumulate predictions from every batch, not only the last one
        predictions.extend(torch.argmax(outputs.logits, dim=-1).cpu().numpy())
    return [id2label[p] for p in predictions]
```

Finally, you can use it to make predictions:

```python
example = [['Título',
  'público:',
  'Estudio',
  'multicéntrico,',
  'aleatorizado,',
  'doble',
  'ciego,',
  'controlado',
  'con',
  'placebo',
  'del',
  'anticuerpo',
  'monoclonal',
  'humano',
  'anti-TNF',
  'Adalimumab',
  'en',
  '<S:LIV>',
  'sujetos',
  'pediátricos',
  '</S:LIV>',
  'con',
  'colitis',
  'ulcerosa',
  'moderada',
  'o',
  'grav<O:CHE>',
  'Adalimumab',
  '</O:CHE>blico:',
  'Estudio',
  'multicéntrico,',
  'aleatorizado,',
  'doble',
  'ciego,',
  'controlado',
  'con',
  'placebo',
  'del',
  'anticuerpo',
  'monoclonal',
  'humano',
  'anti-TNF',
  'Adalimumab',
  'en',
  'sujetos',
  'pediátricos',
  'con',
  'colitis',
  'ulcerosa',
  'moderada',
  'o',
  'grave']]

model = BertForRelationExtraction.from_pretrained("medspaner/bert-base-multilingual-cased-re-ct-v2", 8)  # 8 relation labels
tokenizer = AutoTokenizer.from_pretrained("medspaner/bert-base-multilingual-cased-re-ct-v2")
predict_example(example, model, tokenizer)
```


### Framework versions

- Transformers 4.42.4
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.19.1