---
license: mit
language:
- en
datasets: climatebert/distilroberta-base-climate-f
tags:
- fact-checking
- climate
---
|
|
|
This model is ClimateBert fine-tuned on a textual entailment task. Given a (claim, evidence) pair, the model predicts whether the evidence supports the claim (entailment), refutes it (contradiction), or provides not enough info (neutral).
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('lighteternal/nli-xlm-r-greek')
tokenizer = AutoTokenizer.from_pretrained('lighteternal/nli-xlm-r-greek')

# Tokenize (premise, hypothesis) pairs; each premise is paired with the hypothesis at the same index.
# Premises: 'Two people meet in the street', 'The street is crowded'
# Hypotheses: 'A black car starts off in the middle of the crowd.', 'A man is driving on a lonely road.'
features = tokenizer(['Δύο άνθρωποι συναντιούνται στο δρόμο', 'Ο δρόμος έχει κόσμο'],
                     ['Ένα μαύρο αυτοκίνητο ξεκινάει στη μέση του πλήθους.', 'Ένας άντρας οδηγάει σε ένα μοναχικό δρόμο.'],
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    label_mapping = ['contradiction', 'entailment', 'neutral']
    labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
    print(labels)
```
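
The snippet above prints the raw NLI labels. To express predictions in the fact-checking vocabulary used by this card (support / refute / not enough info), the labels can be remapped. Below is a minimal sketch of that remapping on an illustrative English (claim, evidence) pair; it assumes the same label order as in the snippet above, and the repo id `path/to/this-checkpoint` is a placeholder to be replaced with this model's Hub id.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Placeholder: replace with this model's Hub repo id.
checkpoint = 'path/to/this-checkpoint'
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Illustrative (claim, evidence) pair; substitute any climate claim and evidence text.
claims = ['Global sea levels have risen over the last century.']
evidence = ['Tide-gauge and satellite records show an increase in global mean sea level during the 20th century.']

features = tokenizer(claims, evidence, padding=True, truncation=True, return_tensors="pt")

# Map the NLI labels (assumed to follow the same order as above) to the card's fact-checking terms.
nli_labels = ['contradiction', 'entailment', 'neutral']
fact_check = {'entailment': 'support', 'contradiction': 'refute', 'neutral': 'not enough info'}

model.eval()
with torch.no_grad():
    logits = model(**features).logits

# One of 'support', 'refute', or 'not enough info' per (claim, evidence) pair.
predictions = [fact_check[nli_labels[i]] for i in logits.argmax(dim=1)]
print(predictions)
```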