---
language:
- 'no'
- nb
- nn
inference: false
tags:
- T5
- NorT5
- Norwegian
- encoder-decoder
license: cc-by-4.0
pipeline_tag: text2text-generation
---
# NorT5 large
<img src="https://huggingface.co/ltg/norbert3-base/resolve/main/norbert.png" width=12.5%>
The official release of a new generation of NorT5 language models described in the paper [**NorBench — A Benchmark for Norwegian Language Models**](https://arxiv.org/abs/2305.03880). Please read the paper to learn more details about the model.
## Other sizes:
- [NorT5 xs (32M)](https://huggingface.co/ltg/nort5-xs)
- [NorT5 small (88M)](https://huggingface.co/ltg/nort5-small)
- [NorT5 base (228M)](https://huggingface.co/ltg/nort5-base)
- [NorT5 large (808M)](https://huggingface.co/ltg/nort5-large)
## Encoder-only NorBERT siblings:
- [NorBERT 3 xs (15M)](https://huggingface.co/ltg/norbert3-xs)
- [NorBERT 3 small (40M)](https://huggingface.co/ltg/norbert3-small)
- [NorBERT 3 base (123M)](https://huggingface.co/ltg/norbert3-base)
- [NorBERT 3 large (323M)](https://huggingface.co/ltg/norbert3-large)
## Example usage
This model currently needs a custom wrapper from `modeling_nort5.py`, so you have to load it with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("ltg/nort5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("ltg/nort5-large", trust_remote_code=True)
# MASKED LANGUAGE MODELING
sentence = "Brukseksempel: Elektrisk oppvarming. Definisjonen på ordet oppvarming er[MASK_0]."
encoding = tokenizer(sentence)
input_tensor = torch.tensor([encoding.input_ids])
output_tensor = model.generate(input_tensor, decoder_start_token_id=7, eos_token_id=8)
tokenizer.decode(output_tensor.squeeze(), skip_special_tokens=True)
# should output: å varme opp
# PREFIX LANGUAGE MODELING
# you need to finetune this model for prefix language modeling (a minimal sketch follows this code block),
# or use the `nort5-{size}-lm` models, which are already finetuned on it
sentence = "Brukseksempel: Elektrisk oppvarming. Definisjonen på ordet oppvarming er (Wikipedia) "
encoding = tokenizer(sentence)
input_tensor = torch.tensor([encoding.input_ids])
output_tensor = model.generate(input_tensor, max_new_tokens=50, num_beams=4, do_sample=False)
tokenizer.decode(output_tensor.squeeze())
# should output: [BOS]ˈoppvarming, det vil si at det skjer en endring i temperaturen i et medium, f.eks. en ovn eller en radiator, slik at den blir varmere eller kaldere, eller at den blir varmere eller kaldere, eller at den blir
```
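As the comment above notes, the base checkpoint has to be finetuned before it is useful for prefix language modeling. Below is a minimal sketch of a single training step, assuming the custom NorT5 wrapper follows the standard `transformers` seq2seq convention where passing `labels` yields a cross-entropy loss; the prefix/continuation pair here is purely illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ltg/nort5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("ltg/nort5-large", trust_remote_code=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Illustrative prefix -> continuation pair; real finetuning would
# iterate over a corpus of such splits.
prefix = "Definisjonen på ordet oppvarming er"
continuation = "å varme opp."

inputs = tokenizer(prefix, return_tensors="pt")
labels = tokenizer(continuation, return_tensors="pt").input_ids

# Standard seq2seq training step: passing `labels` returns the loss.
model.train()
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```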
The following classes are currently implemented: `AutoModel`, `AutoModelForSeq2SeqLM`.
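Besides the seq2seq head used above, the bare `AutoModel` class exposes the raw encoder-decoder hidden states. A minimal sketch, assuming the wrapper follows the standard encoder-decoder forward signature; the decoder start token id `7` is taken from the `generate` call above:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ltg/nort5-large")
model = AutoModel.from_pretrained("ltg/nort5-large", trust_remote_code=True)

encoding = tokenizer("Elektrisk oppvarming.", return_tensors="pt")

# A single decoder start token (id 7, as in the generation example).
decoder_input_ids = torch.tensor([[7]])

with torch.no_grad():
    outputs = model(
        input_ids=encoding.input_ids,
        attention_mask=encoding.attention_mask,
        decoder_input_ids=decoder_input_ids,
    )

# Contextualized decoder states: (batch, decoder_length, hidden_size)
print(outputs.last_hidden_state.shape)
```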
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-norbench,
title = "{N}or{B}ench {--} A Benchmark for {N}orwegian Language Models",
author = "Samuel, David and
Kutuzov, Andrey and
Touileb, Samia and
Velldal, Erik and
{\O}vrelid, Lilja and
R{\o}nningstad, Egil and
Sigdel, Elina and
Palatkina, Anna",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may,
year = "2023",
address = "T{\'o}rshavn, Faroe Islands",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2023.nodalida-1.61",
pages = "618--633",
abstract = "We present NorBench: a streamlined suite of NLP tasks and probes for evaluating Norwegian language models (LMs) on standardized data splits and evaluation metrics. We also introduce a range of new Norwegian language models (both encoder and encoder-decoder based). Finally, we compare and analyze their performance, along with other existing LMs, across the different benchmark tests of NorBench.",
}
```