modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard
---|---|---|---|---|---|---|---|---|
SEBIS/legal_t5_small_trans_cs_en | 2021-01-29T08:53:21.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"Cszech English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 19 | transformers |
---
language: Cszech English
tags:
- translation Cszech English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "s ohledem na druhou schůzku států OSN, která se konala 11.–15. června 2005 a měla posoudit provádění akčního programu OSN k prevenci, potírání a vymýcení nezákonného obchodu s ručními a lehkými zbraněmi ve všech jeho aspektech, která se koná jednou za dva roky,"
---
# legal_t5_small_trans_cs_en model
Model for translating legal text from Czech to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_cs_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
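As a quick way to check the size claim (a minimal sketch, assuming the `transformers` and `torch` packages are installed and the checkpoint can be downloaded):

```python
from transformers import AutoModelWithLMHead

# Load the checkpoint and count its parameters to verify the ~60M figure.
model = AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_en")
print(sum(p.numel() for p in model.parameters()))
```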
## Intended uses & limitations
The model can be used for translation of legal texts from Czech to English.
### How to use
Here is how to use this model to translate legal text from Czech to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_en"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_cs_en", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
cs_text = "s ohledem na druhou schůzku států OSN, která se konala 11.–15. června 2005 a měla posoudit provádění akčního programu OSN k prevenci, potírání a vymýcení nezákonného obchodu s ručními a lehkými zbraněmi ve všech jeho aspektech, která se koná jednou za dva roky,"
pipeline([cs_text], max_length=512)
```
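The pipeline above wraps a standard encoder-decoder `generate` call. A minimal sketch of the same translation without the pipeline wrapper (same checkpoints as above; the example sentence is just an arbitrary Czech legal phrase):

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

# Load the same checkpoint and tokenizer as in the pipeline example.
tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_cs_en", do_lower_case=False)
model = AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_en")

cs_text = "Členové přítomní při závěrečném hlasování"  # any Czech legal sentence

# Tokenize the Czech source and generate the English translation.
inputs = tokenizer(cs_text, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_length=512)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```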
## Training data
The legal_t5_small_trans_cs_en model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
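The exact schedule constants are not given in the card; purely as an illustrative sketch, an inverse square root schedule of the kind used for T5-style pre-training can be written as follows (the warmup value of 10,000 steps is an assumption, not taken from the card):

```python
def inverse_sqrt_lr(step: int, warmup_steps: int = 10_000) -> float:
    """Sketch of an inverse square root learning rate schedule.

    The rate is held at 1 / sqrt(warmup_steps) during warmup and then
    decays as 1 / sqrt(step). warmup_steps is an assumed value.
    """
    return 1.0 / max(step, warmup_steps) ** 0.5

# Learning rate at a few points of a 250K-step run.
for step in (1, 10_000, 100_000, 250_000):
    print(step, inverse_sqrt_lr(step))
```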
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
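The resulting vocabulary is shipped with the checkpoint as `spiece.model`; a small sketch of how it can be inspected (assuming the `sentencepiece` package is installed and the file has been downloaded from the model repository):

```python
import sentencepiece as spm

# Load the SentencePiece model that ships with the checkpoint.
sp = spm.SentencePieceProcessor()
sp.Load("spiece.model")

print("vocabulary size:", sp.GetPieceSize())
# Segment a Czech legal phrase into subword pieces.
print(sp.EncodeAsPieces("k návrhu směrnice Evropského parlamentu a Rady o bezpečnosti hraček"))
```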
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_en | 56.92|
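The BLEU score can be reproduced in outline with the `sacrebleu` package; the sketch below uses a toy hypothesis/reference pair, whereas in practice the lists would hold the model outputs and reference English translations for the whole held-out test set (not distributed with the card):

```python
import sacrebleu

# Toy example; replace with the model outputs and reference translations
# for the full test set to reproduce the reported score.
hypotheses = ["on the proposal for a directive on the safety of toys"]
references = [["on the proposal for a directive on the safety of toys"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```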
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_en_small_finetuned | 2021-04-16T08:00:54.000Z | [
"pytorch",
"t5",
"seq2seq",
"Cszech English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Cszech English
tags:
- translation Cszech English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "4) Seznam užívaných výrobků s obsahem PFOS: Kvůli značnému poklesu výroby PFOS po roce 2000 představují největší zdroj emisí patrně dřívější využití, která však nadále reálně existují."
---
# legal_t5_small_trans_cs_en_small_finetuned model
Model for translating legal text from Czech to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_cs_en_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_cs_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from Czech to English.
### How to use
Here is how to use this model to translate legal text from Czech to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_en_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_cs_en", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
cs_text = "4) Seznam užívaných výrobků s obsahem PFOS: Kvůli značnému poklesu výroby PFOS po roce 2000 představují největší zdroj emisí patrně dřívější využití, která však nadále reálně existují."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_en_small_finetuned model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs).
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
### Pretraining
The pre-training data combined the data from all 42 language pairs. The model's task was to predict randomly masked portions of a sentence.
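The exact masking procedure is not given in the card; purely as a hypothetical illustration of T5-style span corruption with sentinel tokens (not the authors' preprocessing code), a single pre-training pair could be built like this:

```python
import random

def mask_spans(tokens, mask_prob=0.15, max_span=3, seed=0):
    """Hypothetical sketch of T5-style span corruption.

    Random spans of the input are replaced by sentinel tokens, and the
    target lists each sentinel followed by the tokens it replaced.
    """
    random.seed(seed)
    source, target, sentinel_id, i = [], [], 0, 0
    while i < len(tokens):
        if random.random() < mask_prob:
            span_len = random.randint(1, max_span)
            sentinel = f"<extra_id_{sentinel_id}>"
            source.append(sentinel)
            target.append(sentinel)
            target.extend(tokens[i:i + span_len])
            sentinel_id += 1
            i += span_len
        else:
            source.append(tokens[i])
            i += 1
    return " ".join(source), " ".join(target)

src, tgt = mask_spans("s ohledem na druhou schůzku států OSN".split())
print(src)   # input with masked spans replaced by sentinels
print(tgt)   # target: the masked spans, each preceded by its sentinel
```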
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_en_small_finetuned | 56.936|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_es | 2021-01-29T08:53:32.000Z | [
"pytorch",
"t5",
"seq2seq",
"Cszech Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Cszech Spanish
tags:
- translation Cszech Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "k návrhu směrnice Evropského parlamentu a Rady o bezpečnosti hraček"
---
# legal_t5_small_trans_cs_es model
Model for translating legal text from Czech to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_cs_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from Czech to Spanish.
### How to use
Here is how to use this model to translate legal text from Czech to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_es"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_cs_es", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
cs_text = "k návrhu směrnice Evropského parlamentu a Rady o bezpečnosti hraček"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_es | 50.77|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_es_small_finetuned | 2021-04-16T08:00:49.000Z | [
"pytorch",
"t5",
"seq2seq",
"Cszech Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Cszech Spanish
tags:
- translation Cszech Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "vzhledem k tomu, že parlamentní volby v listopadu a v prosinci 2006, volby do Senátu v lednu 2007 a volbu prezidenta Sídí Muhammada Ulda Šajcha Abdalláhiho v březnu 2007, uznali jako spravedlivé a transparentní zahraniční pozorovatelé, včetně pozorovatelů z Evropské unie, a zejména z mise ke sledování průběhu voleb vyslané Evropským parlamentem, jenž se tím stal garantem legality těchto voleb,"
---
# legal_t5_small_trans_cs_es_small_finetuned model
Model for translating legal text from Czech to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_cs_es_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_cs_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from Czech to Spanish.
### How to use
Here is how to use this model to translate legal text from Czech to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_es_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_cs_es", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
cs_text = "vzhledem k tomu, že parlamentní volby v listopadu a v prosinci 2006, volby do Senátu v lednu 2007 a volbu prezidenta Sídí Muhammada Ulda Šajcha Abdalláhiho v březnu 2007, uznali jako spravedlivé a transparentní zahraniční pozorovatelé, včetně pozorovatelů z Evropské unie, a zejména z mise ke sledování průběhu voleb vyslané Evropským parlamentem, jenž se tím stal garantem legality těchto voleb,"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_es_small_finetuned model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs).
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
### Pretraining
The pre-training data combined the data from all 42 language pairs. The model's task was to predict randomly masked portions of a sentence.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_es_small_finetuned | 50.862|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_fr | 2021-01-29T08:53:14.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"Cszech French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 13 | transformers |
---
language: Cszech French
tags:
- translation Cszech French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Prevencí proti nemoci Usnesení, o kterém bude Parlament hlasovat 24. října je založeno zejména na interpelacích, které poslancům předložily parlamentní kluby pro životní prostředí, zaměstnanost a práva žen."
---
# legal_t5_small_trans_cs_fr model
Model for translating legal text from Czech to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_cs_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from Czech to French.
### How to use
Here is how to use this model to translate legal text from Czech to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_fr"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_cs_fr", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
cs_text = "Prevencí proti nemoci Usnesení, o kterém bude Parlament hlasovat 24. října je založeno zejména na interpelacích, které poslancům předložily parlamentní kluby pro životní prostředí, zaměstnanost a práva žen."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_fr model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_fr | 50.75|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_fr_small_finetuned | 2021-04-16T08:00:44.000Z | [
"pytorch",
"t5",
"seq2seq",
"Cszech French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: Cszech French
tags:
- translation Cszech French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "9:00 - 10:50 Komise (včetně odpovědí)"
---
# legal_t5_small_trans_cs_fr_small_finetuned model
Model for translating legal text from Czech to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_cs_fr_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_cs_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from Czech to French.
### How to use
Here is how to use this model to translate legal text from Czech to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_fr_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_cs_fr", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
cs_text = "9:00 - 10:50 Komise (včetně odpovědí)"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_fr_small_finetuned model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs).
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
### Pretraining
The pre-training data combined the data from all 42 language pairs. The model's task was to predict randomly masked portions of a sentence.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_fr_small_finetuned | 50.717|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_it | 2021-01-29T08:53:16.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"Cszech Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 18 | transformers |
---
language: Cszech Italian
tags:
- translation Cszech Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "– Měly by se podporovat normy sportovní správy prostřednictvím výměny osvědčených postupů."
---
# legal_t5_small_trans_cs_it model
Model for translating legal text from Czech to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_cs_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from Czech to Italian.
### How to use
Here is how to use this model to translate legal text from Czech to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_it"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_cs_it", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
cs_text = "– Měly by se podporovat normy sportovní správy prostřednictvím výměny osvědčených postupů."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_it | 46.67|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_it_small_finetuned | 2021-04-16T08:00:46.000Z | [
"pytorch",
"t5",
"seq2seq",
"Cszech Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 10 | transformers |
---
language: Cszech Italian
tags:
- translation Cszech Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Členové přítomní při závěrečném hlasování"
---
# legal_t5_small_trans_cs_it_small_finetuned model
Model for translating legal text from Czech to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_cs_it_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_cs_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from Czech to Italian.
### How to use
Here is how to use this model to translate legal text from Czech to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_it_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_cs_it", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
cs_text = "Členové přítomní při závěrečném hlasování"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_it_small_finetuned model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs).
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
### Pretraining
The pre-training data combined the data from all 42 language pairs. The model's task was to predict randomly masked portions of a sentence.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_it_small_finetuned | 46.367|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_sv | 2021-01-29T08:53:18.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"Cszech Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 18 | transformers |
---
language: Cszech Swedish
tags:
- translation Cszech Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Odborná příprava je v sektoru minimální a tradiční, postrádá specifické kurzy nebo výukové plány."
---
# legal_t5_small_trans_cs_sv model
Model for translating legal text from Czech to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_cs_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from Czech to Swedish.
### How to use
Here is how to use this model to translate legal text from Czech to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_sv"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_cs_sv", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
cs_text = "Odborná příprava je v sektoru minimální a tradiční, postrádá specifické kurzy nebo výukové plány."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_sv model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_sv | 47.9|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_sv_small_finetuned | 2021-04-16T08:00:51.000Z | [
"pytorch",
"t5",
"seq2seq",
"Cszech Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Cszech Swedish
tags:
- translation Cszech Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "10 Ukončení denního zasedání"
---
# legal_t5_small_trans_cs_sv_small_finetuned model
Model for translating legal text from Czech to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_cs_sv_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_cs_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from Czech to Swedish.
### How to use
Here is how to use this model to translate legal text from Czech to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_sv_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_cs_sv", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
cs_text = "10 Ukončení denního zasedání"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_sv_small_finetuned model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs).
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
### Pretraining
The pre-training data combined the data from all 42 language pairs. The model's task was to predict randomly masked portions of a sentence.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_sv_small_finetuned | 48.159|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_cs | 2021-01-29T08:53:25.000Z | [
"pytorch",
"t5",
"seq2seq",
"Deustch Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Deustch Cszech
tags:
- translation Deustch Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "17. empfiehlt die Einführung einer spezifischen Strategie zur Unterstützung neuer und demokratisch gewählter Parlamente im Hinblick auf eine dauerhafte Verankerung von Demokratie, Rechtsstaatlichkeit und guter Staatsführung;"
---
# legal_t5_small_trans_de_cs model
Model for translating legal text from German to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_de_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to Czech.
### How to use
Here is how to use this model to translate legal text from German to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_cs"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_de_cs", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
de_text = "17. empfiehlt die Einführung einer spezifischen Strategie zur Unterstützung neuer und demokratisch gewählter Parlamente im Hinblick auf eine dauerhafte Verankerung von Demokratie, Rechtsstaatlichkeit und guter Staatsführung;"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_cs | 44.07|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_cs_small_finetuned | 2021-04-16T08:00:56.000Z | [
"pytorch",
"t5",
"seq2seq",
"Deustch Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers |
---
language: Deustch Cszech
tags:
- translation Deustch Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Der Rahmenbeschluss sieht ein beschleunigtes Verfahren für die Anerkennung und Vollstreckung von freiheitsentziehenden Maßnahmen oder Maßnahmen der Sicherung (bei Unzurechnungsfähigkeit oder verminderter Schuldfähigkeit), die von einem Gericht eines anderen Mitgliedstaats gegen eine Person verhängt wurden, durch einen Mitgliedstaat vor, dessen Staatsangehörigkeit die Person besitzt, in dem sie ihren rechtmäßigen Aufenthalt hat oder zu dem sie enge Verbindungen hat."
---
# legal_t5_small_trans_de_cs_small_finetuned model
Model for translating legal text from German to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_de_cs_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to Czech.
### How to use
Here is how to use this model to translate legal text from German to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_cs_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_de_cs", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
de_text = "Der Rahmenbeschluss sieht ein beschleunigtes Verfahren für die Anerkennung und Vollstreckung von freiheitsentziehenden Maßnahmen oder Maßnahmen der Sicherung (bei Unzurechnungsfähigkeit oder verminderter Schuldfähigkeit), die von einem Gericht eines anderen Mitgliedstaats gegen eine Person verhängt wurden, durch einen Mitgliedstaat vor, dessen Staatsangehörigkeit die Person besitzt, in dem sie ihren rechtmäßigen Aufenthalt hat oder zu dem sie enge Verbindungen hat."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_cs_small_finetuned model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs).
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
### Pretraining
The pre-training data combined the data from all 42 language pairs. The model's task was to predict randomly masked portions of a sentence.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_cs_small_finetuned | 43.750|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_en | 2021-02-04T09:51:41.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"Deustch English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 33 | transformers |
---
language: Deustch English
tags:
- translation Deustch English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "(2) Die Richtlinie 80/987/EWG des Rates(4) soll den Arbeitnehmern im Fall der Zahlungsunfähigkeit ihres Arbeitgebers einen Mindestschutz gewähren. Deshalb verpflichtet sie die Mitgliedstaaten zur Schaffung einer Einrichtung, die die Befriedigung der nicht erfuellten Arbeitnehmeransprüche garantiert."
---
# legal_t5_small_trans_de_en model
Model for translating legal text from German to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_de_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to English.
### How to use
Here is how to use this model to translate legal text from German to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_en"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_de_en", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
de_text = "Eisenbahnunternehmen müssen Fahrkarten über mindestens einen der folgenden Vertriebswege anbieten: an Fahrkartenschaltern oder Fahrkartenautomaten, per Telefon, Internet oder jede andere in weitem Umfang verfügbare Informationstechnik oder in den Zügen."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_en model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_en | 49.1|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_en_small_finetuned | 2021-04-16T08:01:08.000Z | [
"pytorch",
"t5",
"seq2seq",
"Deustch English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 16 | transformers |
---
language: Deustch English
tags:
- translation Deustch English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "In welchen anderen EU-Ländern ist von ähnlichen Listen mit Parteikadern und Regierungsmitgliedern berichtet worden, die die „Schirmherrschaft“ über Vorschläge für private von der Europäischen Union kofinanzierte Investitionen übernommen haben?"
---
# legal_t5_small_trans_de_en_small_finetuned model
Model for translating legal text from German to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_de_en_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to English.
### How to use
Here is how to use this model to translate legal text from German to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_en_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_de_en", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
de_text = "In welchen anderen EU-Ländern ist von ähnlichen Listen mit Parteikadern und Regierungsmitgliedern berichtet worden, die die „Schirmherrschaft“ über Vorschläge für private von der Europäischen Union kofinanzierte Investitionen übernommen haben?"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_en_small_finetuned model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs).
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
### Pretraining
The pre-training data combined the data from all 42 language pairs. The model's task was to predict randomly masked portions of a sentence.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_en_small_finetuned | 48.674|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_es | 2021-01-29T08:53:01.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"Deustch Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 13 | transformers |
---
language: Deustch Spanish
tags:
- translation Deustch Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "7. betont, dass die Kommission und die Mitgliedstaaten die Rolle der Frauen in der Sozialwirtschaft aufgrund der hohen Frauenerwerbstätigkeit in dem Sektor und der Bedeutung der Dienstleistungen, die er für die Förderung der Vereinbarkeit von Beruf und Privatleben bietet, aufwerten, unterstützen und verstärken müssen;"
---
# legal_t5_small_trans_de_es model
Model for translating legal text from German to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_de_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to Spanish.
### How to use
Here is how to use this model to translate legal text from German to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_es"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_de_es", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # GPU index; use device=-1 to run on CPU
)
de_text = "7. betont, dass die Kommission und die Mitgliedstaaten die Rolle der Frauen in der Sozialwirtschaft aufgrund der hohen Frauenerwerbstätigkeit in dem Sektor und der Bedeutung der Dienstleistungen, die er für die Förderung der Vereinbarkeit von Beruf und Privatleben bietet, aufwerten, unterstützen und verstärken müssen;"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096). It uses an encoder-decoder architecture and was trained with the AdaFactor optimizer, using an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (via byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_es | 47.24|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_es_small_finetuned | 2021-04-16T08:01:03.000Z | [
"pytorch",
"t5",
"seq2seq",
"Deustch Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: Deustch Spanish
tags:
- translation Deustch Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Bei einer Kombination von Artikel 124 Absatz 14 mit Artikel 136 AEUV scheint die in den Artikeln 121 und 126 AEUV"
---
# legal_t5_small_trans_de_es_small_finetuned model
Model for translating legal text from German to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_de_es_small_finetuned is initially pretrained on unsupervised task with the all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Deustch to Spanish.
### How to use
Here is how to use this model to translate legal text from Deustch to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_es_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Bei einer Kombination von Artikel 124 Absatz 14 mit Artikel 136 AEUV scheint die in den Artikeln 121 und 126 AEUV"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_es_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
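An illustrative sketch of that masking objective in T5 style, where randomly chosen words are replaced by sentinel tokens that the decoder has to predict; the masking rate and example sentence are assumptions:
```python
import random

def mask_random_portions(words, mask_prob=0.15):
"""Replace random words with T5-style sentinel tokens; the model learns to predict them."""
inputs, targets, sentinel = [], [], 0
for word in words:
if random.random() < mask_prob:
inputs.append(f"<extra_id_{sentinel}>")
targets.append(f"<extra_id_{sentinel}> {word}")
sentinel += 1
else:
inputs.append(word)
return " ".join(inputs), " ".join(targets)

src, tgt = mask_random_portions("Die Kommission legt dem Rat einen Bericht vor".split())
print(src) # e.g. "Die Kommission legt dem <extra_id_0> einen Bericht vor"
print(tgt) # e.g. "<extra_id_0> Rat"
```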
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_es_small_finetuned | 47.006|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_fr | 2021-01-29T08:53:23.000Z | [
"pytorch",
"t5",
"seq2seq",
"Deustch French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 12 | transformers |
---
language: Deustch French
tags:
- translation Deustch French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "stellt fest, dass Leistung und Effizienz nicht in einer standardisierten Art und Weise gemessen werden; fordert die interinstitutionelle Arbeitsgruppe für die Agenturen auf, sich mit dieser Frage zu befassen;"
---
# legal_t5_small_trans_de_fr model
Model for translating legal text from German to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_de_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to French.
### How to use
Here is how to use this model to translate legal text from German to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "stellt fest, dass Leistung und Effizienz nicht in einer standardisierten Art und Weise gemessen werden; fordert die interinstitutionelle Arbeitsgruppe für die Agenturen auf, sich mit dieser Frage zu befassen;"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_fr model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_fr | 47.78|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_fr_small_finetuned | 2021-04-16T08:00:58.000Z | [
"pytorch",
"t5",
"seq2seq",
"Deustch French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Deustch French
tags:
- translation Deustch French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "SCHRIFTLICHE ANFRAGE P-0029/06"
---
# legal_t5_small_trans_de_fr_small_finetuned model
Model for translating legal text from German to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_de_fr_small_finetuned is initially pretrained on an unsupervised task using all of the data from the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to French.
### How to use
Here is how to use this model to translate legal text from German to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_fr_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "SCHRIFTLICHE ANFRAGE P-0029/06"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_fr_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_fr_small_finetuned | 47.461|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_it | 2021-01-29T08:53:04.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"Deustch Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 21 | transformers |
---
language: Deustch Italian
tags:
- translation Deustch Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Zum Zeitpunkt der Schlussabstimmung anwesende Stellvertreter(innen)"
---
# legal_t5_small_trans_de_it model
Model for translating legal text from German to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_de_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to Italian.
### How to use
Here is how to use this model to translate legal text from German to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Zum Zeitpunkt der Schlussabstimmung anwesende Stellvertreter(innen)"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_it model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_it | 43.3|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_it_small_finetuned | 2021-04-16T08:01:01.000Z | [
"pytorch",
"t5",
"seq2seq",
"Deustch Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: Deustch Italian
tags:
- translation Deustch Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "sicherstellen, dass alle Bürger gemäß der Richtlinie .../.../EG [über den Universaldienst und Nutzerrechte bei elektronischen Kommunikationsnetzen und -diensten[ zu erschwinglichen Preisen Zugang zum Universaldienst erhalten;"
---
# legal_t5_small_trans_de_it_small_finetuned model
Model for translating legal text from German to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_de_it_small_finetuned is initially pretrained on an unsupervised task using all of the data from the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to Italian.
### How to use
Here is how to use this model to translate legal text from German to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_it_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "sicherstellen, dass alle Bürger gemäß der Richtlinie .../.../EG [über den Universaldienst und Nutzerrechte bei elektronischen Kommunikationsnetzen und -diensten[ zu erschwinglichen Preisen Zugang zum Universaldienst erhalten;"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_it_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_it_small_finetuned | 42.895|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_sv | 2021-01-29T08:53:06.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"Deustch Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 17 | transformers |
---
language: Deustch Swedish
tags:
- translation Deustch Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Betrifft: Leader-Programm"
---
# legal_t5_small_trans_de_sv model
Model for translating legal text from German to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_de_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to Swedish.
### How to use
Here is how to use this model to translate legal text from German to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Betrifft: Leader-Programm"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_sv model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_sv | 41.69|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_sv_small_finetuned | 2021-04-16T08:01:06.000Z | [
"pytorch",
"t5",
"seq2seq",
"Deustch Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Deustch Swedish
tags:
- translation Deustch Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Die Finanzkrise hat schonungslos offenbart, wo die Mängel in den Überwachungsverfahren der EU liegen, die eine wirksame Vorbeugung von Verstößen gegen die Haushaltsdisziplin, ausufernden Haushaltsdefiziten der Mitgliedstaaten, Ungleichgewichten im Handel und Unterschieden in der Wettbewerbsfähigkeit gewährleisten sollen."
---
# legal_t5_small_trans_de_sv_small_finetuned model
Model for translating legal text from German to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_de_sv_small_finetuned is initially pretrained on an unsupervised task using all of the data from the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to Swedish.
### How to use
Here is how to use this model to translate legal text from German to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_sv_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Die Finanzkrise hat schonungslos offenbart, wo die Mängel in den Überwachungsverfahren der EU liegen, die eine wirksame Vorbeugung von Verstößen gegen die Haushaltsdisziplin, ausufernden Haushaltsdefiziten der Mitgliedstaaten, Ungleichgewichten im Handel und Unterschieden in der Wettbewerbsfähigkeit gewährleisten sollen."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_sv_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_sv_small_finetuned | 41.365|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_cs | 2021-02-07T22:21:14.000Z | [
"pytorch",
"t5",
"seq2seq",
"English Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: English Cszech
tags:
- translation English Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "1 In the countries concerned, this certainly affects the priority assigned to making progress on the issue of final disposal, particularly of highly radioactive waste and irradiated fuel elements."
---
# legal_t5_small_trans_en_cs model
Model for translating legal text from English to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_en_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from English to Czech.
### How to use
Here is how to use this model to translate legal text from English to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "1 In the countries concerned, this certainly affects the priority assigned to making progress on the issue of final disposal, particularly of highly radioactive waste and irradiated fuel elements."
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_cs model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_cs | 50.177|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_cs_small_finetuned | 2021-04-16T08:02:07.000Z | [
"pytorch",
"t5",
"seq2seq",
"English Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers |
---
language: English Cszech
tags:
- translation English Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Members present for the final vote"
---
# legal_t5_small_trans_en_cs_small_finetuned model
Model for translating legal text from English to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_en_cs_small_finetuned is initially pretrained on an unsupervised task using all of the data from the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_en_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from English to Czech.
### How to use
Here is how to use this model to translate legal text from English to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_cs_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Members present for the final vote"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_cs_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_cs_small_finetuned | 50.394|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_de | 2021-02-07T22:21:10.000Z | [
"pytorch",
"t5",
"seq2seq",
"English Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 20 | transformers |
---
language: English Deustch
tags:
- translation English Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "· the impact of electromagnetic fields on animals, especially birds in cities;"
---
# legal_t5_small_trans_en_de model
Model for translating legal text from English to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_en_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from English to German.
### How to use
Here is how to use this model to translate legal text from English to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "· the impact of electromagnetic fields on animals, especially birds in cities;"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_de model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_de | 43.656|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_de_small_finetuned | 2021-04-16T08:02:10.000Z | [
"pytorch",
"t5",
"seq2seq",
"English Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: English Deustch
tags:
- translation English Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "The reference framework for the free movement of workers is laid down in Council Regulation (EEC) No 1612/68 on freedom of movement for workers within the Community and has been revised several times."
---
# legal_t5_small_trans_en_de_small_finetuned model
Model for translating legal text from English to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_en_de_small_finetuned is initially pretrained on an unsupervised task using all of the data from the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_en_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from English to German.
### How to use
Here is how to use this model to translate legal text from English to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_de_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "The reference framework for the free movement of workers is laid down in Council Regulation (EEC) No 1612/68 on freedom of movement for workers within the Community and has been revised several times."
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_de_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_de_small_finetuned | 43.636|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_es | 2021-03-19T11:09:47.000Z | []
| [
".gitattributes"
]
| SEBIS | 0 | |||
SEBIS/legal_t5_small_trans_en_es_small_finetuned | 2021-04-16T08:02:16.000Z | [
"pytorch",
"t5",
"seq2seq",
"English Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: English Spanish
tags:
- translation English Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Instructs its President to forward this resolution to the Council and Commission and the Government and Parliament of Uzbekistan."
---
# legal_t5_small_trans_en_es_small_finetuned model
Model for translating legal text from English to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_en_es_small_finetuned is initially pretrained on an unsupervised task using all of the data from the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_en_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from English to Spanish.
### How to use
Here is how to use this model to translate legal text from English to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_es_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Instructs its President to forward this resolution to the Council and Commission and the Government and Parliament of Uzbekistan."
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_es_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_es_small_finetuned | 53.692|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_fr | 2021-04-09T09:28:06.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_trans_en_fr_small_finetuned | 2021-04-16T08:02:12.000Z | [
"pytorch",
"t5",
"seq2seq",
"English French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: English French
tags:
- translation English French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "recalling the decision by 14 Member States earlier this year to limit their bilateral contacts with another Member State,"
---
# legal_t5_small_trans_en_fr_small_finetuned model
Model for translating legal text from English to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_en_fr_small_finetuned is initially pretrained on an unsupervised task using all of the data from the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_en_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from English to French.
### How to use
Here is how to use this model to translate legal text from English to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_fr_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "recalling the decision by 14 Member States earlier this year to limit their bilateral contacts with another Member State,"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_fr_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_fr_small_finetuned | 52.476|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_it | 2021-03-19T11:08:31.000Z | [
"pytorch",
"t5",
"seq2seq",
"English Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: English Italian
tags:
- translation English Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Answer given by Mrs Benita Ferrero-Waldner on behalf of the Commission"
---
# legal_t5_small_trans_en_it model
Model for translating legal text from English to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_en_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from English to Italian.
### How to use
Here is how to use this model to translate legal text from English to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Answer given by Mrs Benita Ferrero-Waldner on behalf of the Commission"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_it model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_it | 45.39|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_it_small_finetuned | 2021-04-16T08:02:14.000Z | [
"pytorch",
"t5",
"seq2seq",
"English Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 11 | transformers |
---
language: English Italian
tags:
- translation English Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Preventing and combating trafficking in human beings, and protecting victims"
---
# legal_t5_small_trans_en_it_small_finetuned model
Model for translating legal text from English to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_en_it_small_finetuned is initially pretrained on an unsupervised task using all of the data from the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_en_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from English to Italian.
### How to use
Here is how to use this model to translate legal text from English to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_it_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Preventing and combating trafficking in human beings, and protecting victims"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_it_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_it_small_finetuned | 46.887|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_sv | 2021-03-19T11:11:17.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers | |
SEBIS/legal_t5_small_trans_en_sv_small_finetuned | 2021-04-16T08:02:18.000Z | [
"pytorch",
"t5",
"seq2seq",
"English Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 10 | transformers |
---
language: English Swedish
tags:
- translation English Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "any operations cofinanced in the framework of"
---
# legal_t5_small_trans_en_sv_small_finetuned model
Model for translating legal text from English to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_en_sv_small_finetuned is initially pretrained on an unsupervised task using all of the data from the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_en_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from English to Swedish.
### How to use
Here is how to use this model to translate legal text from English to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_sv_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "any operations cofinanced in the framework of"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_sv_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
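For reference, an Adafactor set-up with its built-in inverse-square-root decay can be sketched with the `transformers` implementation as below; the specific flags (`scale_parameter`, `relative_step`, `warmup_init`) are assumptions, since the card does not list the exact optimizer hyperparameters.
```python
# Sketch of Adafactor with an internal inverse-square-root schedule
# (hyperparameter flags are assumptions, not taken from the card).
from transformers import AutoModelWithLMHead
from transformers.optimization import Adafactor, AdafactorSchedule
model = AutoModelWithLMHead.from_pretrained("t5-small")
optimizer = Adafactor(
model.parameters(),
scale_parameter=True,
relative_step=True,   # lets Adafactor compute its own inverse-sqrt learning rate
warmup_init=True,
lr=None,
)
lr_schedule = AdafactorSchedule(optimizer)  # proxy schedule, useful for logging the LR
```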
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_sv_small_finetuned | 48.126|
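A score of this kind can be computed on any held-out test split with `sacrebleu`; the file names below are placeholders, as the evaluation files are not part of this card.
```python
# Sketch: corpus-level BLEU with sacrebleu (placeholder file names).
import sacrebleu
with open("test.hyp") as f:   # model outputs, one sentence per line
    hypotheses = [line.strip() for line in f]
with open("test.ref") as f:   # reference translations, one per line
    references = [line.strip() for line in f]
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(round(bleu.score, 3))
```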
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_es_cs | 2021-03-19T11:12:48.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_trans_es_cs_small_finetuned | 2021-04-16T08:01:40.000Z | [
"pytorch",
"t5",
"seq2seq",
"Spanish Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 14 | transformers |
---
language: Spanish Cszech
tags:
- translation Spanish Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Comisión (incluidas las réplicas)"
---
# legal_t5_small_trans_es_cs_small_finetuned model
Model for translating legal text from Spanish to Cszech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_es_cs_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_es_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to Cszech.
### How to use
Here is how to use this model to translate legal text from Spanish to Cszech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_cs_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_es_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Comisión (incluidas las réplicas)"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_trans_es_cs_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
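A single step of this masked-span objective can be sketched with a T5-style model as follows; the `t5-small` checkpoint and the example sentence are placeholders for illustration, not the actual pretraining pipeline.
```python
# Minimal sketch of one denoising step (placeholder checkpoint and sentence).
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelWithLMHead.from_pretrained("t5-small")
inputs = tokenizer("The Council shall act by a <extra_id_0> majority .", return_tensors="pt")
labels = tokenizer("<extra_id_0> qualified <extra_id_1>", return_tensors="pt").input_ids
outputs = model(**inputs, labels=labels)
print(float(outputs.loss))  # cross-entropy over the masked span
```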
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_cs_small_finetuned | 45.094|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_es_de | 2021-03-19T11:14:25.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_trans_es_de_small_finetuned | 2021-04-16T08:01:42.000Z | [
"pytorch",
"t5",
"seq2seq",
"Spanish Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Spanish Deustch
tags:
- translation Spanish Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Manfred Weber , en nombre del Grupo PPE , al Consejo:"
---
# legal_t5_small_trans_es_de_small_finetuned model
Model for translating legal text from Spanish to Deustch. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_es_de_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_es_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to Deustch.
### How to use
Here is how to use this model to translate legal text from Spanish to Deustch in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_de_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_es_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Manfred Weber , en nombre del Grupo PPE , al Consejo:"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_trans_es_de_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_de_small_finetuned | 42.063|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_es_en | 2021-03-19T11:15:56.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_trans_es_en_small_finetuned | 2021-04-16T08:01:52.000Z | [
"pytorch",
"t5",
"seq2seq",
"Spanish English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Spanish English
tags:
- translation Spanish English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "de Jonas Sjöstedt (GUE/NGL)"
---
# legal_t5_small_trans_es_en_small_finetuned model
Model for translating legal text from Spanish to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_es_en_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_es_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to English.
### How to use
Here is how to use this model to translate legal text from Spanish to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_en_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_es_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "de Jonas Sjöstedt (GUE/NGL)"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_trans_es_en_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_en_small_finetuned | 54.481|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_es_fr | 2021-03-19T11:17:18.000Z | []
| [
".gitattributes"
]
| SEBIS | 0 | |||
SEBIS/legal_t5_small_trans_es_fr_small_finetuned | 2021-04-16T08:01:44.000Z | [
"pytorch",
"t5",
"seq2seq",
"Spanish French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Spanish French
tags:
- translation Spanish French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Pide a las autoridades eritreas que levanten la prohibición de prensa independiente en el país y que liberen de inmediato a los periodistas independientes y a todos los demás encarcelados por el simple hecho de haber ejercido su derecho a la libertad de expresión;"
---
# legal_t5_small_trans_es_fr_small_finetuned model
Model for translating legal text from Spanish to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_es_fr_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_es_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to French.
### How to use
Here is how to use this model to translate legal text from Spanish to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_fr_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_es_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Pide a las autoridades eritreas que levanten la prohibición de prensa independiente en el país y que liberen de inmediato a los periodistas independientes y a todos los demás encarcelados por el simple hecho de haber ejercido su derecho a la libertad de expresión;"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_trans_es_fr_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_fr_small_finetuned | 52.694|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_es_it | 2021-03-19T11:19:31.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_trans_es_it_small_finetuned | 2021-04-16T08:01:47.000Z | [
"pytorch",
"t5",
"seq2seq",
"Spanish Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Spanish Italian
tags:
- translation Spanish Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "El acceso a las pruebas de densitometría ósea es totalmente inadecuado."
---
# legal_t5_small_trans_es_it_small_finetuned model
Model for translating legal text from Spanish to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_es_it_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_es_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to Italian.
### How to use
Here is how to use this model to translate legal text from Spanish to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_it_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_es_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "El acceso a las pruebas de densitometría ósea es totalmente inadecuado."
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_trans_es_it_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_it_small_finetuned | 46.422|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_es_sv | 2021-03-19T11:21:51.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_trans_es_sv_small_finetuned | 2021-04-16T08:01:49.000Z | [
"pytorch",
"t5",
"seq2seq",
"Spanish Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Spanish Swedish
tags:
- translation Spanish Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Marie Anne Isler Béguin ,"
---
# legal_t5_small_trans_es_sv_small_finetuned model
Model for translating legal text from Spanish to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_es_sv_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_es_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to Swedish.
### How to use
Here is how to use this model to translate legal text from Spanish to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_sv_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_es_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Marie Anne Isler Béguin ,"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_trans_es_sv_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_sv_small_finetuned | 43.838|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_cs | 2021-01-29T08:53:30.000Z | [
"pytorch",
"t5",
"seq2seq",
"French Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 10 | transformers |
---
language: French Cszech
tags:
- translation French Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Hannes Swoboda , au nom du groupe PSE,"
---
# legal_t5_small_trans_fr_cs model
Model for translating legal text from French to Cszech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Cszech.
### How to use
Here is how to use this model to translate legal text from French to Cszech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Hannes Swoboda , au nom du groupe PSE,"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_cs model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_cs | 44.34|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_cs_small_finetuned | 2021-04-16T08:01:11.000Z | [
"pytorch",
"t5",
"seq2seq",
"French Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers |
---
language: French Cszech
tags:
- translation French Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Compte rendu de la délégation à la Convention-cadre des Nations unies sur le changement climatique (COP17) à Durban (Afrique du Sud)"
---
# legal_t5_small_trans_fr_cs_small_finetuned model
Model for translating legal text from French to Cszech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_cs_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Cszech.
### How to use
Here is how to use this model to translate legal text from French to Cszech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_cs_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Compte rendu de la délégation à la Convention-cadre des Nations unies sur le changement climatique (COP17) à Durban (Afrique du Sud)"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_cs_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_cs_small_finetuned | 44.410|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_de | 2021-01-29T08:53:28.000Z | [
"pytorch",
"t5",
"seq2seq",
"French Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 20 | transformers |
---
language: French Deustch
tags:
- translation French Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Les États membres notifient ces dispositions à la Commission au plus tard à la date mentionnée à l'article 15 et toute modification ultérieure les concernant dans les meilleurs délais."
---
# legal_t5_small_trans_fr_de model
Model for translating legal text from French to Deustch. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Deustch.
### How to use
Here is how to use this model to translate legal text from French to Deustch in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Les États membres notifient ces dispositions à la Commission au plus tard à la date mentionnée à l'article 15 et toute modification ultérieure les concernant dans les meilleurs délais."
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_de model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_de | 41.33|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_de_small_finetuned | 2021-04-16T08:01:13.000Z | [
"pytorch",
"t5",
"seq2seq",
"French Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: French Deustch
tags:
- translation French Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "7. demande instamment à la Commission de veiller à ce que l'objectif d'une part de 20% d'énergie renouvelable soit rendue contraignante pour les États membres par des dispositions législatives à cet effet et soit mis en œuvre d'une manière conséquente, et à ce que les États membres qui n'honorent pas leurs engagements soient frappés de lourdes sanctions; souligne la nécessité de plans d'action nationaux dans le cadre desquels chaque État membre se fixe un objectif contraignant pour chaque secteur en fonction de ses possibilités spécifiques météorologiques, géographiques et géologiques et de ses réalisations dans le passé; demande instamment à la Commission de procéder à une évaluation préalable puis intermédiaire de ces plans d'action;"
---
# legal_t5_small_trans_fr_de_small_finetuned model
Model for translating legal text from French to Deustch. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_de_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Deustch.
### How to use
Here is how to use this model to translate legal text from French to Deustch in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_de_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "7. demande instamment à la Commission de veiller à ce que l'objectif d'une part de 20% d'énergie renouvelable soit rendue contraignante pour les États membres par des dispositions législatives à cet effet et soit mis en œuvre d'une manière conséquente, et à ce que les États membres qui n'honorent pas leurs engagements soient frappés de lourdes sanctions; souligne la nécessité de plans d'action nationaux dans le cadre desquels chaque État membre se fixe un objectif contraignant pour chaque secteur en fonction de ses possibilités spécifiques météorologiques, géographiques et géologiques et de ses réalisations dans le passé; demande instamment à la Commission de procéder à une évaluation préalable puis intermédiaire de ces plans d'action;"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_de_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_de_small_finetuned | 41.085|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_en | 2021-01-29T08:53:41.000Z | [
"pytorch",
"t5",
"seq2seq",
"French English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 19 | transformers |
---
language: French English
tags:
- translation French English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "quels montants ont été attribués et quelles sommes ont été effectivement utilisées dans chaque État membre? 4."
---
# legal_t5_small_trans_fr_en model
Model for translating legal text from French to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to English.
### How to use
Here is how to use this model to translate legal text from French to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "quels montants ont été attribués et quelles sommes ont été effectivement utilisées dans chaque État membre? 4."
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_en model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_en | 51.44|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_en_small_finetuned | 2021-04-16T08:01:23.000Z | [
"pytorch",
"t5",
"seq2seq",
"French English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: French English
tags:
- translation French English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "RÉSULTAT DU VOTE FINAL EN COMMISSION"
---
# legal_t5_small_trans_fr_en_small_finetuned model
Model for translating legal text from French to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_en_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to English.
### How to use
Here is how to use this model to translate legal text from French to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_en_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "RÉSULTAT DU VOTE FINAL EN COMMISSION"
pipeline([fr_text], max_length=512)
```
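The pipeline also accepts a batch of sentences in a single call; a small usage sketch (the second example sentence is arbitrary):
```python
# Translate several sentences at once and unpack the results.
fr_sentences = [
    "RÉSULTAT DU VOTE FINAL EN COMMISSION",
    "La commission adopte le projet de résolution.",
]
for out in pipeline(fr_sentences, max_length=512):
    print(out["translation_text"])
```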
## Training data
The legal_t5_small_trans_fr_en_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_en_small_finetuned | 51.351|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_es | 2021-01-29T08:53:37.000Z | [
"pytorch",
"t5",
"seq2seq",
"French Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 12 | transformers |
---
language: French Spanish
tags:
- translation French Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "commission des libertés civiles, de la justice et des affaires intérieures"
---
# legal_t5_small_trans_fr_es model
Model for translating legal text from French to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Spanish.
### How to use
Here is how to use this model to translate legal text from French to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "commission des libertés civiles, de la justice et des affaires intérieures"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_es model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_es | 51.16|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_es_small_finetuned | 2021-04-16T08:01:18.000Z | [
"pytorch",
"t5",
"seq2seq",
"French Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 10 | transformers |
---
language: French Spanish
tags:
- translation French Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "A‑t‑elle déjà engagé, ou compte-t-elle engager, la réalisation d'une étude visant, comme préconisé ci‑dessus, à recenser les principaux problèmes et les besoins spécifiques des régions ultrapériphériques en matière de transport maritime, compte tenu des caractéristiques et des besoins propres à ce secteur, dans la perspective de la réalisation des projets d'autoroutes de la mer dans lesdites régions? 2."
---
# legal_t5_small_trans_fr_es_small_finetuned model
Model for translating legal text from French to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_es_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Spanish.
### How to use
Here is how to use this model to translate legal text from French to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_es_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "A‑t‑elle déjà engagé, ou compte-t-elle engager, la réalisation d'une étude visant, comme préconisé ci‑dessus, à recenser les principaux problèmes et les besoins spécifiques des régions ultrapériphériques en matière de transport maritime, compte tenu des caractéristiques et des besoins propres à ce secteur, dans la perspective de la réalisation des projets d'autoroutes de la mer dans lesdites régions? 2."
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_es_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_es_small_finetuned | 51.202|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_it | 2021-01-29T08:53:09.000Z | [
"pytorch",
"t5",
"seq2seq",
"French Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: French Italian
tags:
- translation French Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "considérant la multiplication des constructions qui ne respectent pas la culture des lieux et leur paysage particulier, dégradations à l'appui,"
---
# legal_t5_small_trans_fr_it model
Model for translating legal text from French to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Italian.
### How to use
Here is how to use this model to translate legal text from French to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "considérant la multiplication des constructions qui ne respectent pas la culture des lieux et leur paysage particulier, dégradations à l'appui,"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_it model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_it | 46.45|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_it_small_finetuned | 2021-04-16T08:01:16.000Z | [
"pytorch",
"t5",
"seq2seq",
"French Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: French Italian
tags:
- translation French Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Le vote a lieu dans un délai de deux mois après réception de la proposition, à moins qu'à la demande de la commission compétente, d'un groupe politique ou de quarante députés au moins, le Parlement n'en décide autrement."
---
# legal_t5_small_trans_fr_it_small_finetuned model
Model for translating legal text from French to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_it_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Italian.
### How to use
Here is how to use this model to translate legal text from French to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_it_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Le vote a lieu dans un délai de deux mois après réception de la proposition, à moins qu'à la demande de la commission compétente, d'un groupe politique ou de quarante députés au moins, le Parlement n'en décide autrement."
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_it_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
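As a rough sketch of the masked-language-modelling objective described above, the snippet below builds a T5-style input/target pair by replacing random spans with sentinel tokens. The masking rate, span lengths and whole-word masking are simplifications for illustration, not the exact pre-training recipe.
```python
# Simplified sketch of T5-style span corruption for the unsupervised task.
# Real pre-training operates on subword tokens; whole words are masked here
# purely for readability.
import random
def mask_spans(sentence, mask_prob=0.15, seed=0):
    random.seed(seed)
    words = sentence.split()
    inputs, targets, sentinel = [], [], 0
    i = 0
    while i < len(words):
        if random.random() < mask_prob:
            span_len = random.randint(1, 3)          # short random span
            inputs.append(f"<extra_id_{sentinel}>")  # T5 sentinel token
            targets.append(f"<extra_id_{sentinel}>")
            targets.extend(words[i:i + span_len])
            sentinel += 1
            i += span_len
        else:
            inputs.append(words[i])
            i += 1
    targets.append(f"<extra_id_{sentinel}>")
    return " ".join(inputs), " ".join(targets)
src, tgt = mask_spans("Le vote a lieu dans un délai de deux mois après réception de la proposition")
print(src)
print(tgt)
```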
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_it_small_finetuned | 46.309|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_sv | 2021-01-29T08:53:39.000Z | [
"pytorch",
"t5",
"seq2seq",
"French Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 10 | transformers |
---
language: French Swedish
tags:
- translation French Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "posée conformément à l'article 43 du règlement"
---
# legal_t5_small_trans_fr_sv model
Model for translating legal text from French to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Swedish.
### How to use
Here is how to use this model to translate legal text from French to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "posée conformément à l'article 43 du règlement"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_sv model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_sv | 41.9|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_sv_small_finetuned | 2021-04-16T08:01:20.000Z | [
"pytorch",
"t5",
"seq2seq",
"French Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: French Swedish
tags:
- translation French Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Budget 2009: Section III - Commission"
---
# legal_t5_small_trans_fr_sv_small_finetuned model
Model for translating legal text from French to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_sv_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Swedish.
### How to use
Here is how to use this model to translate legal text from French to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_sv_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Budget 2009: Section III - Commission"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_sv_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_sv_small_finetuned | 41.768|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_cs | 2021-01-29T08:53:55.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 17 | transformers |
---
language: Italian Cszech
tags:
- translation Italian Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "sull'aumento dei prezzi dei prodotti alimentari"
---
# legal_t5_small_trans_it_cs model
Model for translating legal text from Italian to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Czech.
### How to use
Here is how to use this model to translate legal text from Italian to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "sull'aumento dei prezzi dei prodotti alimentari"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_cs model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_cs | 43.302|
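The score above comes from the authors' evaluation; as a sketch of how a comparable corpus-level BLEU score could be computed, the snippet below uses sacrebleu on hypothetical files of model outputs and reference translations (one segment per line, aligned line by line). The exact evaluation settings for this card are not specified.
```python
# Sketch: corpus-level BLEU with sacrebleu; the file names are placeholders.
import sacrebleu
with open("hypotheses.txt", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("references.txt", encoding="utf-8") as f:
    references = [line.strip() for line in f]
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.3f}")
```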
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_cs_small_finetuned | 2021-04-16T08:01:25.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Italian Cszech
tags:
- translation Italian Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Il consiglio di amministrazione è assistito da un comitato esecutivo."
---
# legal_t5_small_trans_it_cs_small_finetuned model
Model for translating legal text from Italian to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_cs_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Czech.
### How to use
Here is how to use this model to translate legal text from Italian to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_cs_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Il consiglio di amministrazione è assistito da un comitato esecutivo."
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_cs_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_cs_small_finetuned | 43.236|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_de | 2021-01-29T08:53:57.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: Italian Deustch
tags:
- translation Italian Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "presentata con richiesta di iscrizione all'ordine del giorno della discussione su problemi di attualità, urgenti e di notevole rilevanza"
---
# legal_t5_small_trans_it_de model
Model for translating legal text from Italian to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to German.
### How to use
Here is how to use this model to translate legal text from Italian to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "presentata con richiesta di iscrizione all'ordine del giorno della discussione su problemi di attualità, urgenti e di notevole rilevanza"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_de model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_de | 40.615|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_de_small_finetuned | 2021-04-16T08:01:27.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Italian Deustch
tags:
- translation Italian Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Interventi sulla votazione:"
---
# legal_t5_small_trans_it_de_small_finetuned model
Model for translating legal text from Italian to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_de_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to German.
### How to use
Here is how to use this model to translate legal text from Italian to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_de_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Interventi sulla votazione:"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_de_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_de_small_finetuned | 40.524|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_en | 2021-01-29T08:54:04.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers |
---
language: Italian English
tags:
- translation Italian English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Oggetto: Libertà di culto in Turchia"
---
# legal_t5_small_trans_it_en model
Model for translating legal text from Italian to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to English.
### How to use
Here is how to use this model to translate legal text from Italian to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Oggetto: Libertà di culto in Turchia"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_en model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_en | 50.068|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_en_small_finetuned | 2021-04-16T08:01:38.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 10 | transformers |
---
language: Italian English
tags:
- translation Italian English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Supplenti presenti al momento della votazione finale"
---
# legal_t5_small_trans_it_en_small_finetuned model
Model for translating legal text from Italian to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_en_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to English.
### How to use
Here is how to use this model to translate legal text from Italian to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_en_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Supplenti presenti al momento della votazione finale"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_en_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_en_small_finetuned | 49.840|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_es | 2021-01-29T08:54:06.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: Italian Spanish
tags:
- translation Italian Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Risoluzione del Parlamento europeo sulle perquisizioni effettuate ad Ankara nella sede principale dell'Associazione per i diritti dell'uomo in Turchia"
---
# legal_t5_small_trans_it_es model
Model for translating legal text from Italian to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Spanish.
### How to use
Here is how to use this model to translate legal text from Italian to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Risoluzione del Parlamento europeo sulle perquisizioni effettuate ad Ankara nella sede principale dell'Associazione per i diritti dell'uomo in Turchia"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_es model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_es | 48.998|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_es_small_finetuned | 2021-04-16T08:01:32.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers |
---
language: Italian Spanish
tags:
- translation Italian Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "considerando che il 28 marzo 2002 il Consiglio di sicurezza dell'ONU si è dichiarato favorevole all'attuazione integrale del Protocollo di Lusaka e si è detto disposto a cooperare con tutte le parti in conflitto per raggiungere tale obiettivo, nonché ad avviare consultazioni con il governo dell'Angola per ricercare i mezzi con cui modificare le sanzioni imposte all'UNITA attraverso la risoluzione 1127 (1997), e ciò al fine di agevolare i colloqui di pace,"
---
# legal_t5_small_trans_it_es_small_finetuned model
Model for translating legal text from Italian to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_es_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Spanish.
### How to use
Here is how to use this model to translate legal text from Italian to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_es_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "considerando che il 28 marzo 2002 il Consiglio di sicurezza dell'ONU si è dichiarato favorevole all'attuazione integrale del Protocollo di Lusaka e si è detto disposto a cooperare con tutte le parti in conflitto per raggiungere tale obiettivo, nonché ad avviare consultazioni con il governo dell'Angola per ricercare i mezzi con cui modificare le sanzioni imposte all'UNITA attraverso la risoluzione 1127 (1997), e ciò al fine di agevolare i colloqui di pace,"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_es_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_es_small_finetuned | 49.083|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_fr | 2021-01-29T08:53:59.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Italian French
tags:
- translation Italian French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Qualora gli emendamenti approvati dal Parlamento abbiano l'effetto di aumentare le spese iscritte nel progetto di bilancio oltre il tasso massimo previsto, la commissione competente per il merito sottopone al Parlamento una proposta intesa a fissare un nuovo tasso massimo in conformità del paragrafo 9, ultimo comma, degli articoli 78 del trattato CECA, 272 del trattato CE e 177 del trattato CEEA."
---
# legal_t5_small_trans_it_fr model
Model for translating legal text from Italian to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to French.
### How to use
Here is how to use this model to translate legal text from Italian to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Qualora gli emendamenti approvati dal Parlamento abbiano l'effetto di aumentare le spese iscritte nel progetto di bilancio oltre il tasso massimo previsto, la commissione competente per il merito sottopone al Parlamento una proposta intesa a fissare un nuovo tasso massimo in conformità del paragrafo 9, ultimo comma, degli articoli 78 del trattato CECA, 272 del trattato CE e 177 del trattato CEEA."
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_fr model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
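As a rough sketch of the optimizer setup described above, the snippet below instantiates AdaFactor from the transformers library with its built-in relative-step (inverse square root) schedule. The flags shown are library defaults and illustrative only, not the authors' exact training configuration.
```python
# Sketch: AdaFactor with an inverse square root learning rate schedule,
# using the implementation shipped with the transformers library.
from transformers import AutoModelWithLMHead
from transformers.optimization import Adafactor
model = AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_fr")
optimizer = Adafactor(
    model.parameters(),
    lr=None,              # let Adafactor derive the step size
    relative_step=True,   # 1/sqrt(step) decay of the relative step size
    warmup_init=True,     # linear warm-up before the decay
    scale_parameter=True,
)
```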
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_fr | 50.559|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_fr_small_finetuned | 2021-04-16T08:01:29.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Italian French
tags:
- translation Italian French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Dichiarazioni del Consiglio e della Commissione"
---
# legal_t5_small_trans_it_fr_small_finetuned model
Model for translating legal text from Italian to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_fr_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to French.
### How to use
Here is how to use this model to translate legal text from Italian to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_fr_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Dichiarazioni del Consiglio e della Commissione"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_fr_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_fr_small_finetuned | 50.557|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_sv | 2021-01-29T08:54:01.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: Italian Swedish
tags:
- translation Italian Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "K. considerando che, come avviene con tutti i sistemi di sanità elettronica, la progettazione, lo sviluppo e l’attuazione di sistemi abilitati alla tecnologia RFID presuppongono il coinvolgimento diretto dei professionisti sanitari, dei pazienti e delle commissioni competenti (per esempio, sulla protezione dei dati e sull’etica),"
---
# legal_t5_small_trans_it_sv model
Model for translating legal text from Italian to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Swedish.
### How to use
Here is how to use this model to translate legal text from Italian to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "K. considerando che, come avviene con tutti i sistemi di sanità elettronica, la progettazione, lo sviluppo e l’attuazione di sistemi abilitati alla tecnologia RFID presuppongono il coinvolgimento diretto dei professionisti sanitari, dei pazienti e delle commissioni competenti (per esempio, sulla protezione dei dati e sull’etica),"
pipeline([it_text], max_length=512)
```
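The example above assumes a GPU (`device=0`); as a small variation, the sketch below runs the same pipeline on CPU and translates several sentences in one call. The input sentences are illustrative.
```python
# Sketch: run the translation pipeline on CPU (device=-1) and translate
# a small batch of sentences in a single call.
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_sv"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_it_sv", do_lower_case=False),
    device=-1,  # -1 selects the CPU
)
sentences = [
    "Interventi sulla votazione:",
    "Dichiarazioni del Consiglio e della Commissione",
]
for out in pipeline(sentences, max_length=512):
    print(out["translation_text"])
```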
## Training data
The legal_t5_small_trans_it_sv model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_sv | 41.508|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_sv_small_finetuned | 2021-04-16T08:01:35.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Italian Swedish
tags:
- translation Italian Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Cooperazione rafforzata Annuncio in Aula"
---
# legal_t5_small_trans_it_sv_small_finetuned model
Model for translating legal text from Italian to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_sv_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Swedish.
### How to use
Here is how to use this model to translate legal text from Italian to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_sv_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Cooperazione rafforzata Annuncio in Aula"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_sv_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as on the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_sv_small_finetuned | 41.243|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_cs | 2021-01-29T08:54:08.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 12 | transformers |
---
language: Swedish Cszech
tags:
- translation Swedish Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "En kvalitetscertifiering av administrativa förfaranden i enlighet med ISO eller motsvarande normer skulle dessutom leda till likvärdiga villkor för sjöfartsadministrationer."
---
# legal_t5_small_trans_sv_cs model
Model for translating legal text from Swedish to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to Czech.
### How to use
Here is how to use this model to translate legal text from Swedish to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "En kvalitetscertifiering av administrativa förfaranden i enlighet med ISO eller motsvarande normer skulle dessutom leda till likvärdiga villkor för sjöfartsadministrationer."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_cs model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_cs | 45.569|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_cs_small_finetuned | 2021-04-16T08:01:54.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Swedish Cszech
tags:
- translation Swedish Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Kommissionens personal och extern personal som bemyndigas av kommissionen måste få tillträde till bidragsmottagarens lokaler och tillgång till all information som behövs för att genomföra sådana revisioner, inbegripet information i elektronisk form."
---
# legal_t5_small_trans_sv_cs_small_finetuned model
Model for translating legal text from Swedish to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_cs_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to Czech.
### How to use
Here is how to use this model to translate legal text from Swedish to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_cs_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Kommissionens personal och extern personal som bemyndigas av kommissionen måste få tillträde till bidragsmottagarens lokaler och tillgång till all information som behövs för att genomföra sådana revisioner, inbegripet information i elektronisk form."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_cs_small_finetuned model (supervised training on the corresponding language pair, preceded by unsupervised pretraining on the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_cs_small_finetuned | 45.472|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_de | 2021-02-07T22:21:23.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Swedish Deustch
tags:
- translation Swedish Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "b) Bekämpning av skadegörare inom skogsbruket."
---
# legal_t5_small_trans_sv_de model
Model for translating legal text from Swedish to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to German.
### How to use
Here is how to use this model to translate legal text from Swedish to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "b) Bekämpning av skadegörare inom skogsbruket."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_de model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_de | 40.264|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_de_small_finetuned | 2021-04-16T08:01:56.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Swedish Deustch
tags:
- translation Swedish Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "G. Mäns och kvinnors förmåga att delta på lika villkor i det politiska livet och i beslutsfattandet är en grundläggande förutsättning för en verklig demokrati."
---
# legal_t5_small_trans_sv_de_small_finetuned model
Model for translating legal text from Swedish to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_de_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to German.
### How to use
Here is how to use this model to translate legal text from Swedish to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_de_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "G. Mäns och kvinnors förmåga att delta på lika villkor i det politiska livet och i beslutsfattandet är en grundläggande förutsättning för en verklig demokrati."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_de_small_finetuned model (supervised training on the corresponding language pair, preceded by unsupervised pretraining on the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_de_small_finetuned | 40.240|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_en | 2021-02-07T22:21:28.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Swedish English
tags:
- translation Swedish English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Om rättsliga förfaranden inleds rörande omständigheter som ombudsmannen utreder skall han avsluta ärendet."
---
# legal_t5_small_trans_sv_en model
Model for translating legal text from Swedish to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to English.
### How to use
Here is how to use this model to translate legal text from Swedish to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Om rättsliga förfaranden inleds rörande omständigheter som ombudsmannen utreder skall han avsluta ärendet."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_en model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_en | 52.025|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_en_small_finetuned | 2021-04-16T08:02:05.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Swedish English
tags:
- translation Swedish English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Alejo Vidal-Quadras : 262 röster"
---
# legal_t5_small_trans_sv_en_small_finetuned model
Model for translating legal text from Swedish to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_en_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to English.
### How to use
Here is how to use this model to translate legal text from Swedish to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_en_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Alejo Vidal-Quadras : 262 röster"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_en_small_finetuned model (supervised training on the corresponding language pair, preceded by unsupervised pretraining on the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_en_small_finetuned | 52.084|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_es | 2021-02-07T22:21:17.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Swedish Spanish
tags:
- translation Swedish Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Monika Flašíková Beňová (S&D)"
---
# legal_t5_small_trans_sv_es model
Model for translating legal text from Swedish to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to Spanish.
### How to use
Here is how to use this model to translate legal text from Swedish to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Monika Flašíková Beňová (S&D)"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_es model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_es | 47.407|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_es_small_finetuned | 2021-04-16T08:02:03.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Swedish Spanish
tags:
- translation Swedish Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "– med beaktande av kommissionen vitbok om idrott ( KOM(2007)0391 ),"
---
# legal_t5_small_trans_sv_es_small_finetuned model
Model for translating legal text from Swedish to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_es_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to Spanish.
### How to use
Here is how to use this model to translate legal text from Swedish to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_es_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "– med beaktande av kommissionen vitbok om idrott ( KOM(2007)0391 ),"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_es_small_finetuned model (supervised training on the corresponding language pair, preceded by unsupervised pretraining on the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_es_small_finetuned | 47.411|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_fr | 2021-02-07T22:21:20.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Swedish French
tags:
- translation Swedish French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Kunden måste ha rätt att avsäga sig information i skriftlig form."
---
# legal_t5_small_trans_sv_fr model
Model for translating legal text from Swedish to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to French.
### How to use
Here is how to use this model to translate legal text from Swedish to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Kunden måste ha rätt att avsäga sig information i skriftlig form."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_fr model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_fr | 47.623|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_fr_small_finetuned | 2021-04-16T08:01:58.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers |
---
language: Swedish French
tags:
- translation Swedish French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Samreglering bör följa samma principer som de formella bestämmelserna, vilket betyder att den bör vara objektiv, välgrundad, proportionell och icke-diskriminerande, och bör möjliggöra insyn."
---
# legal_t5_small_trans_sv_fr_small_finetuned model
Model for translating legal text from Swedish to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_fr_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to French.
### How to use
Here is how to use this model to translate legal text from Swedish to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_fr_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Samreglering bör följa samma principer som de formella bestämmelserna, vilket betyder att den bör vara objektiv, välgrundad, proportionell och icke-diskriminerande, och bör möjliggöra insyn."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_fr_small_finetuned model (supervised training on the corresponding language pair, preceded by unsupervised pretraining on the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_fr_small_finetuned | 47.508|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_it | 2021-02-07T22:21:26.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Swedish Italian
tags:
- translation Swedish Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Den 25 juni 2002 lade kommissionen fram ett förslag till förordning om ”kontroller av kontanta medel som förs in i eller ut ur gemenskapen” i syfte att komplettera direktiv 91/308/EEG om penningtvätt."
---
# legal_t5_small_trans_sv_it model
Model for translating legal text from Swedish to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to Italian.
### How to use
Here is how to use this model to translate legal text from Swedish to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Den 25 juni 2002 lade kommissionen fram ett förslag till förordning om ”kontroller av kontanta medel som förs in i eller ut ur gemenskapen” i syfte att komplettera direktiv 91/308/EEG om penningtvätt."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_it model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_it | 42.577|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_it_small_finetuned | 2021-04-16T08:02:00.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Swedish Italian
tags:
- translation Swedish Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "– med beaktande av rådet beslut om Syrien av den 12 april, 9 och 23 maj, 20 och 25 juni samt den 2 september 2011 och av uttalandena från unionens höga representant av den 9, 23 och 29 april, 9 maj, 6, 9 och 11 juni, 9 och 31 juli, 1, 4, 18 och 30 augusti samt den 2 september 2011 om en utvidgning av de restriktiva åtgärderna mot den syriska regimen,"
---
# legal_t5_small_trans_sv_it_small_finetuned model
Model for translating legal text from Swedish to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_it_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to Italian.
### How to use
Here is how to use this model to translate legal text from Swedish to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_it_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "– med beaktande av rådet beslut om Syrien av den 12 april, 9 och 23 maj, 20 och 25 juni samt den 2 september 2011 och av uttalandena från unionens höga representant av den 9, 23 och 29 april, 9 maj, 6, 9 och 11 juni, 9 och 31 juli, 1, 4, 18 och 30 augusti samt den 2 september 2011 om en utvidgning av de restriktiva åtgärderna mot den syriska regimen,"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_it_small_finetuned model (supervised training on the corresponding language pair, preceded by unsupervised pretraining on the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_it_small_finetuned | 42.575|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SIC98/GPT2-first-model | 2021-05-21T11:11:24.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| SIC98 | 7 | transformers | GPT2-first-model
|
SIC98/GPT2-python-code-generator | 2021-05-21T11:13:58.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| SIC98 | 204 | transformers | Github
- https://github.com/SIC98/GPT2-python-code-generator
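The card gives no usage snippet; a minimal sketch with the standard text-generation pipeline (the prompt below is an illustrative assumption, not from the original card) might look like this:

```python
from transformers import pipeline

# Sketch: load the GPT-2 based Python code generator and complete a prompt.
generator = pipeline("text-generation", model="SIC98/GPT2-python-code-generator")

prompt = "def fibonacci(n):"  # illustrative prompt, not from the original card
print(generator(prompt, max_length=64, num_return_sequences=1)[0]["generated_text"])
```
|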
SIKU-BERT/sikubert | 2021-05-30T05:24:40.000Z | [
"pytorch",
"bert",
"masked-lm",
"zh",
"transformers",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"roberta",
"license:apache-2.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"REDAME.md",
"bert_config.json",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| SIKU-BERT | 180 | transformers | ---
language:
- "zh"
thumbnail: "https://raw.githubusercontent.com/SIKU-BERT/SikuBERT/main/appendix/sikubert.png"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "roberta"
- "pytorch"
inference: false
license: "apache-2.0"
---
# SikuBERT
## Model description

Digital humanities research needs the support of large-scale corpora and high-performance natural language processing tools for ancient Chinese. Pre-trained language models have greatly improved the accuracy of text mining for English and modern Chinese texts, but there is still an urgent need for a pre-trained model dedicated to the automatic processing of ancient texts. Using the verified, high-quality full-text corpus of the “Siku Quanshu” as the training set and the BERT deep language model architecture as the base, we constructed the SikuBERT and SikuRoBERTa pre-trained language models for intelligent processing tasks of ancient Chinese.
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("SIKU-BERT/sikubert")
model = AutoModel.from_pretrained("SIKU-BERT/sikubert")
```
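Beyond loading the weights, a minimal sketch of querying the model through its masked-language-modelling objective might look as follows (the classical Chinese example sentence is an illustrative assumption, not from the original card):

```python
from transformers import pipeline

# Sketch: use the fill-mask pipeline to query SikuBERT's masked-language-modelling head.
fill_mask = pipeline("fill-mask", model="SIKU-BERT/sikubert", tokenizer="SIKU-BERT/sikubert")

# Illustrative classical Chinese sentence with one masked character.
for prediction in fill_mask("子曰:學而時習之,不亦[MASK]乎"):
    print(prediction["token_str"], prediction["score"])
```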
## About Us
We are from Nanjing Agricultural University.
> Created by SIKU-BERT [](https://github.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing) |
SIKU-BERT/sikuroberta | 2021-05-30T05:26:04.000Z | [
"pytorch",
"bert",
"masked-lm",
"zh",
"transformers",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"roberta",
"license:apache-2.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"bert_config.json",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| SIKU-BERT | 102 | transformers | ---
language:
- "zh"
thumbnail: "https://raw.githubusercontent.com/SIKU-BERT/SikuBERT/main/appendix/sikubert.png"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "roberta"
- "pytorch"
inference: false
license: "apache-2.0"
---
# SikuBERT
## Model description

Digital humanities research needs the support of large-scale corpora and high-performance natural language processing tools for ancient Chinese. Pre-trained language models have greatly improved the accuracy of text mining for English and modern Chinese texts, but there is still an urgent need for a pre-trained model dedicated to the automatic processing of ancient texts. Using the verified, high-quality full-text corpus of the “Siku Quanshu” as the training set and the BERT deep language model architecture as the base, we constructed the SikuBERT and SikuRoBERTa pre-trained language models for intelligent processing tasks of ancient Chinese.
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("SIKU-BERT/sikuroberta")
model = AutoModel.from_pretrained("SIKU-BERT/sikuroberta")
```
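A minimal sketch of extracting contextual representations with the loaded model (the example sentence is an illustrative assumption, not from the original card):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Sketch: encode a classical Chinese sentence and inspect the contextual token vectors.
tokenizer = AutoTokenizer.from_pretrained("SIKU-BERT/sikuroberta")
model = AutoModel.from_pretrained("SIKU-BERT/sikuroberta")

inputs = tokenizer("天命之謂性,率性之謂道", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One hidden vector per character/token, usable as features for downstream tasks.
print(outputs.last_hidden_state.shape)
```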
## About Us
We are from Nanjing Agricultural University.
> Created by SIKU-BERT [](https://github.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing)
|
ST/xlm-roberta-base | 2021-01-25T09:30:10.000Z | []
| [
".gitattributes"
]
| ST | 0 | |||
SZTAKI-HLT/hubert-base-cc | 2021-05-19T11:29:35.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"hu",
"dataset:common_crawl",
"dataset:wikipedia",
"transformers",
"license:apache-2.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| SZTAKI-HLT | 1,456 | transformers | ---
language: hu
license: apache-2.0
datasets:
- common_crawl
- wikipedia
---
# huBERT base model (cased)
## Model description
Cased BERT model for Hungarian, trained on the (filtered, deduplicated) Hungarian subset of the Common Crawl and a snapshot of the Hungarian Wikipedia.
## Intended uses & limitations
The model can be used as any other (cased) BERT model. It has been tested on the chunking and
named entity recognition tasks and set a new state-of-the-art on the former.
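As a minimal usage sketch (the example sentence is an illustrative assumption, not from the card), the model loads with the standard `transformers` auto classes:

```python
from transformers import AutoTokenizer, AutoModel

# Sketch: load huBERT and encode a Hungarian sentence.
tokenizer = AutoTokenizer.from_pretrained("SZTAKI-HLT/hubert-base-cc")
model = AutoModel.from_pretrained("SZTAKI-HLT/hubert-base-cc")

inputs = tokenizer("Jó reggelt kívánok!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch size, number of tokens, hidden size)
```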
## Training
Details of the training data and procedure can be found in the PhD thesis linked below. (With the caveat that it only contains preliminary results
based on the Wikipedia subcorpus. Evaluation of the full model will appear in a future paper.)
## Eval results
When fine-tuned (via `BertForTokenClassification`) on chunking and NER, the model outperforms multilingual BERT and achieves state-of-the-art results on
both tasks. The exact scores are below (a minimal fine-tuning sketch follows the table):
| NER | Minimal NP | Maximal NP |
|-----|------------|------------|
| **97.62%** | **97.14%** | **96.97%** |
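A minimal sketch of the fine-tuning setup referred to above; the number of labels is an assumption chosen for illustration, not a value from the card:

```python
from transformers import AutoTokenizer, BertForTokenClassification

# Sketch: attach a token-classification head to huBERT for NER or chunking fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("SZTAKI-HLT/hubert-base-cc")
model = BertForTokenClassification.from_pretrained(
    "SZTAKI-HLT/hubert-base-cc",
    num_labels=9,  # assumed label-set size, e.g. a BIO tagging scheme
)
# Fine-tune with the usual Trainer or a custom PyTorch loop on a Hungarian NER/chunking dataset.
```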
### BibTeX entry and citation info
If you use the model, please cite the following papers:
[Nemeskey, Dávid Márk (2020). "Natural Language Processing Methods for Language Modeling." PhD Thesis. Eötvös Loránd University.](https://hlt.bme.hu/en/publ/nemeskey_2020)
Bibtex:
```bibtex
@PhDThesis{ Nemeskey:2020,
author = {Nemeskey, Dávid Márk},
title = {Natural Language Processing Methods for Language Modeling},
year = {2020},
school = {E\"otv\"os Lor\'and University}
}
```
[Nemeskey, Dávid Márk (2021). "Introducing huBERT." In: XVII. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2021). Szeged, pp. 3-14](https://hlt.bme.hu/en/publ/hubert_2021)
Bibtex:
```bibtex
@InProceedings{ Nemeskey:2021a,
author = {Nemeskey, Dávid Márk},
title = {Introducing \texttt{huBERT}},
booktitle = {{XVII}.\ Magyar Sz{\'a}m{\'i}t{\'o}g{\'e}pes Nyelv{\'e}szeti Konferencia ({MSZNY}2021)},
year = 2021,
pages = {TBA},
address = {Szeged},
}
```
|
|
SaberXLancer/autoNLP | 2021-05-22T00:00:52.000Z | []
| [
".gitattributes",
"README.md"
]
| SaberXLancer | 0 | |||
Sachinkelenjaguri/sa_bioclinical_bert | 2021-05-19T11:29:58.000Z | [
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"eval_results_mlm.txt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| Sachinkelenjaguri | 13 | transformers | |
Sahajtomar/GBERTQnA | 2021-05-18T22:19:34.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"question-answering",
"de",
"dataset:mlqa",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"nbest_predictions_.json",
"predictions_.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| Sahajtomar | 562 | transformers |
---
language: de
tags:
- pytorch
- tf
- bert
datasets:
- mlqa
metrics:
- f1
- em
---
### QA model trained on the MLQA dataset for the German language
The model used for fine-tuning is GBERT Large by deepset.ai.
## MLQA DEV (german)
EM: 63.82
F1: 77.20
## XQUAD TEST (german)
EM: 65.96
F1: 80.85
## Model inferencing:
```python
!pip install -q transformers
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="Sahajtomar/GBERTQnA",
tokenizer="Sahajtomar/GBERTQnA"
)
qa_pipeline({
'context': "Vor einigen Jahren haben Wissenschaftler ein wichtiges Mutagen identifiziert, das in unseren eigenen Zellen liegt: APOBEC, ein Protein, das normalerweise als Schutzmittel gegen Virusinfektionen fungiert. Heute hat ein Team von Schweizer und russischen Wissenschaftlern unter der Leitung von Sergey Nikolaev, Genetiker an der Universität Genf (UNIGE) in der Schweiz, entschlüsselt, wie APOBEC eine Schwäche unseres DNA-Replikationsprozesses ausnutzt, um Mutationen in unserem Genom zu induzieren.",
'question': "Welches Mutagen schützt vor Virusinfektionen?"
})
# output
{'answer': 'APOBEC', 'end': 121, 'score': 0.9815779328346252, 'start': 115}
## Even complex queries can be answered pretty well
qa_pipeline({
"context": 'Im Juli 1944 befand sich die Rote Armee tief auf polnischem Gebiet und verfolgte die Deutschen in Richtung Warschau. In dem Wissen, dass Stalin der Idee eines unabhängigen Polens feindlich gegenüberstand, gab die polnische Exilregierung in London der unterirdischen Heimatarmee (AK) den Befehl, vor dem Eintreffen der Roten Armee zu versuchen, die Kontrolle über Warschau von den Deutschen zu übernehmen. So begann am 1. August 1944, als sich die Rote Armee der Stadt näherte, der Warschauer Aufstand. Der bewaffnete Kampf, der 48 Stunden dauern sollte, war teilweise erfolgreich, dauerte jedoch 63 Tage. Schließlich mussten die Kämpfer der Heimatarmee und die ihnen unterstützenden Zivilisten kapitulieren. Sie wurden in Kriegsgefangenenlager in Deutschland transportiert, während die gesamte Zivilbevölkerung ausgewiesen wurde. Die Zahl der polnischen Zivilisten wird auf 150.000 bis 200.000 geschätzt.'
'question': "Wer wurde nach Deutschland transportiert?"
#output
{'answer': 'die Kämpfer der Heimatarmee und die ihnen unterstützenden Zivilisten',
'end': 693,
'score': 0.23357819020748138,
'start': 625}
```
Try it on a Colab:
<a href="https://github.com/Sahajtomar/Question-Answering/blob/main/Sahajtomar_GBERTQnA.ipynb" target="_parent"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"></a>
|
Sahajtomar/GELECTRAQA | 2021-01-16T02:18:37.000Z | [
"pytorch",
"tf",
"electra",
"question-answering",
"de",
"dataset:mlqa",
"transformers",
"Gelectra"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"nbest_predictions_.json",
"predictions_.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| Sahajtomar | 2,051 | transformers | ---
language: de
tags:
- pytorch
- tf
- Gelectra
datasets:
- mlqa
metrics:
- f1
- em
---
### QA model trained on the MLQA dataset for the German language
The model used for fine-tuning is GELECTRA Large by deepset.ai.
## MLQA DEV (german)
EM: 64.27 \
F1: 77.39
## XQUAD TEST (german)
EM: 66.38 \
F1: 82.25
## Hyperparameters
per_gpu_train_batch_size 4 \
per_gpu_eval_batch_size 32 \
gradient_accumulation_steps 8 \
learning_rate 3e-5 \
num_train_epochs 1.0 \
max_seq_length 384 \
doc_stride 128
## Model inferencing:
```python
!pip install -q transformers
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="Sahajtomar/GELECTRAQA",
tokenizer="Sahajtomar/GELECTRAQA"
)
qa_pipeline({
'context': "Vor einigen Jahren haben Wissenschaftler ein wichtiges Mutagen identifiziert, das in unseren eigenen Zellen liegt: APOBEC, ein Protein, das normalerweise als Schutzmittel gegen Virusinfektionen fungiert. Heute hat ein Team von Schweizer und russischen Wissenschaftlern unter der Leitung von Sergey Nikolaev, Genetiker an der Universität Genf (UNIGE) in der Schweiz, entschlüsselt, wie APOBEC eine Schwäche unseres DNA-Replikationsprozesses ausnutzt, um Mutationen in unserem Genom zu induzieren.",
'question': "Welches Mutagen schützt vor Virusinfektionen?"
})
# output
{'answer': 'APOBEC', 'end': 121, 'score': 0.987, 'start': 115}
## Even complex queries can be answered pretty well
qa_pipeline({
"context": "Es wird erwartet, dass sich schwarze Löcher mit Sternmasse bilden,
wenn sehr massive Sterne am Ende ihres Lebenszyklus
zusammenbrechen. Nachdem sich ein Schwarzes Loch gebildet hat,
kann es weiter wachsen,indem es Masse aus seiner Umgebung
absorbiert. Durch Absorption anderer Sterne und Verschmelzung mit
anderen Schwarzen Löchern können sich supermassereiche Schwarze
Löcher mit Millionen von Sonnenmassen (M☉) bilden. Es besteht
Konsens darüber, dass in den Zentren der meisten Galaxien
supermassereiche Schwarze Löcher existieren.",
'question': "Wie Sonnenmassen entstehen?"
})
#output
{'answer': 'Durch Absorption anderer Sterne und Verschmelzung mit anderen Schwarzen Löchern',
'end': 332,
'score': 0.23970196,
'start': 253}
```
|
Sahajtomar/German_Zeroshot | 2021-05-18T22:22:18.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"multilingual",
"dataset:xnli",
"transformers",
"nli",
"xnli",
"de",
"zero-shot-classification",
"pipeline_tag:zero-shot-classification"
]
| zero-shot-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| Sahajtomar | 8,166 | transformers | ---
language: multilingual
tags:
- text-classification
- pytorch
- nli
- xnli
- de
datasets:
- xnli
pipeline_tag: zero-shot-classification
widget:
- text: "Letzte Woche gab es einen Selbstmord in einer nahe gelegenen kolonie"
candidate_labels: "Verbrechen,Tragödie,Stehlen"
hypothesis_template: "In diesem geht es um {}."
---
# German Zeroshot
## Model Description
This model uses [GBERT Large](https://huggingface.co/deepset/gbert-large) as its base model and was fine-tuned on the German portion of the XNLI dataset.
The default hypothesis template is in English: `This text is {}`. When using this model, change it to "In diesem geht es um {}." or another German template. Running inference through the Hugging Face API widget may give poor results because it uses the default English template. Since the model is monolingual rather than multilingual, the hypothesis template needs to be adapted accordingly.
## XNLI DEV (german)
Accuracy: 85.5
## XNLI TEST (german)
Accuracy: 83.6
#### Zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Sahajtomar/German_Zeroshot")
sequence = "Letzte Woche gab es einen Selbstmord in einer nahe gelegenen kolonie"
candidate_labels = ["Verbrechen","Tragödie","Stehlen"]
hypothesis_template = "In diesem geht es um {}."  # The model is monolingual, so it is sensitive to the hypothesis template; experiment with alternatives.
classifier(sequence, candidate_labels, hypothesis_template=hypothesis_template)
"""{'labels': ['Tragödie', 'Verbrechen', 'Stehlen'],
'scores': [0.8328856854438782, 0.10494536352157593, 0.06316883927583696],
'sequence': 'Letzte Woche gab es einen Selbstmord in einer nahe gelegenen Kolonie'}"""
```
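
To make the template effect concrete, here is a small sketch (scores not reproduced here) comparing the default English template with a German one on the same input:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="Sahajtomar/German_Zeroshot")
sequence = "Letzte Woche gab es einen Selbstmord in einer nahe gelegenen kolonie"
candidate_labels = ["Verbrechen", "Tragödie", "Stehlen"]

# Default English template -- typically weaker for this monolingual German model.
print(classifier(sequence, candidate_labels))
# German template -- usually the better choice here.
print(classifier(sequence, candidate_labels,
                 hypothesis_template="In diesem geht es um {}."))
```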
|
Sahajtomar/NER_legal_de | 2021-05-18T22:27:00.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"de",
"dataset:legal entity recognition",
"transformers",
"NER"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| Sahajtomar | 71 | transformers | ---
language: de
tags:
- pytorch
- tf
- bert
- NER
datasets:
- legal entity recognition
---
### NER model fine-tuned on German legal entity recognition data
The base model used for fine-tuning is GBERT Large by deepset.ai
## Test
Accuracy: 98 \
F1: 84.1 \
Precision: 82.7 \
Recall: 85.5
## Model inferencing:
```python
!pip install -q transformers
from transformers import pipeline
ner = pipeline(
"ner",
model="Sahajtomar/NER_legal_de",
tokenizer="Sahajtomar/NER_legal_de")
nlp_ner("Für eine Zuständigkeit des Verwaltungsgerichts Berlin nach § 52 Nr. 1 bis 4 VwGO hat der \
Antragsteller keine Anhaltspunkte vorgetragen .")
```
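
The raw pipeline output contains one prediction per (sub)word token. As a hedged sketch — assuming the installed `transformers` version supports the `grouped_entities` flag — token predictions can be merged into whole entity spans like this:

```python
from transformers import pipeline

# Merge per-token predictions into whole entity spans (sketch, not from the original card).
ner_grouped = pipeline(
    "ner",
    model="Sahajtomar/NER_legal_de",
    tokenizer="Sahajtomar/NER_legal_de",
    grouped_entities=True)
results = ner_grouped("Für eine Zuständigkeit des Verwaltungsgerichts Berlin nach § 52 Nr. 1 "
                      "bis 4 VwGO hat der Antragsteller keine Anhaltspunkte vorgetragen.")
for entity in results:
    print(entity["entity_group"], entity["word"], entity["score"])
```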
|
Sahajtomar/french_semantic | 2021-06-04T10:13:35.000Z | [
"fr",
"dataset:sts",
"semantic",
"sentence-transformers",
"sentence-similarity"
]
| sentence-similarity | [
".gitattributes",
"README.md",
"config.json",
"modules.json",
"similarity_evaluation_sts-dev_results.csv",
"similarity_evaluation_sts-test_results.csv",
"0_Transformer/config.json",
"0_Transformer/pytorch_model.bin",
"0_Transformer/sentence_bert_config.json",
"0_Transformer/sentencepiece.bpe.model",
"0_Transformer/special_tokens_map.json",
"0_Transformer/tokenizer_config.json",
"1_Pooling/config.json"
]
| Sahajtomar | 43 | sentence-transformers | ---
language: fr
tags:
- semantic
- sentence-transformers
- sentence-similarity
- fr
datasets:
- sts
---
# French STS
## STS dev (french)
87.4%
## STS test (french)
85.8%
#### STS pipeline
```python
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('Sahajtomar/french_semantic')
sentences1 = ["J'aime mon téléphone",
"Mon téléphone n'est pas bon.",
"Votre téléphone portable est superbe."]
sentences2 = ["Est-ce qu'il neige demain?",
"Récemment, de nombreux ouragans ont frappé les États-Unis",
"Le réchauffement climatique est réel",]
embeddings1 = model.encode(sentences1, convert_to_tensor=True)
embeddings2 = model.encode(sentences2, convert_to_tensor=True)
cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2)
for i in range(len(sentences1)):
    for j in range(len(sentences2)):
        print(f"{sentences1[i]} \t {sentences2[j]} \t Score: {cosine_scores[i][j]:.4f}")
``` |
Sahajtomar/sts-GBERT-de | 2021-06-08T12:31:35.000Z | [
"bert",
"de",
"dataset:sts",
"semantic",
"sentence-transformers",
"sentence-similarity"
]
| sentence-similarity | [
".gitattributes",
"README.md",
"config.json",
"modules.json",
"0_Transformer/config.json",
"0_Transformer/pytorch_model.bin",
"0_Transformer/sentence_bert_config.json",
"0_Transformer/special_tokens_map.json",
"0_Transformer/tokenizer_config.json",
"0_Transformer/vocab.txt",
"1_Pooling/config.json"
]
| Sahajtomar | 49 | sentence-transformers | ---
language: de
tags:
- semantic
- sentence-transformers
- sentence-similarity
datasets:
- sts
---
# German STS
## STS dev (german)
87.9%
## STS test (german)
84.3%
#### STS pipeline
```python
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('Sahajtomar/sts-GBERT-de')
sentences1 = ['Die Katze sitzt draußen',
"Ein Mann spielt Gitarre",
'Der neue Film ist großartig']
sentences2 = ['Der Hund spielt im Garten',
"Eine Frau sieht fern",
'Der neue Film ist so toll']
embeddings1 = model.encode(sentences1, convert_to_tensor=True)
embeddings2 = model.encode(sentences2, convert_to_tensor=True)
cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2)
for i in range(len(sentences1)):
    for j in range(len(sentences2)):
        print(f"{sentences1[i]} \t\t {sentences2[j]} \t\t Score: {cosine_scores[i][j]:.4f}")
"""
Die Katze sitzt draußen Der Hund spielt im Garten Score: 0.1259
Die Katze sitzt draußen Eine Frau sieht fern Score: 0.0567
Die Katze sitzt draußen Der neue Film ist so toll Score: 0.0557
Ein Mann spielt Gitarre Der Hund spielt im Garten Score: 0.1031
Ein Mann spielt Gitarre Eine Frau sieht fern Score: 0.0098
Ein Mann spielt Gitarre Der neue Film ist so toll Score: 0.0828
Der neue Film ist großartig Der Hund spielt im Garten Score: 0.1008
Der neue Film ist großartig Eine Frau sieht fern Score: 0.0674
"""
``` |
SajjadAyoubi/bert-base-fa-qa | 2021-05-18T22:30:21.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"eval.jsonl",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| SajjadAyoubi | 260 | transformers | ### How to use
#### Requirements
This model requires the `transformers` and `sentencepiece` packages, both of which can be
installed with `pip`.
```sh
pip install transformers sentencepiece
```
#### Pipelines 🚀
If you are not familiar with Transformers, you can use pipelines instead.
Note that pipelines cannot return _no answer_ for a question.
```python
from transformers import pipeline
model_name = "SajjadAyoubi/bert-base-fa-qa"
qa_pipeline = pipeline("question-answering", model=model_name, tokenizer=model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
for question in questions:
print(qa_pipeline({"context": text, "question": question}))
>>> {'score': 0.4839823544025421, 'start': 8, 'end': 18, 'answer': 'سجاد ایوبی'}
>>> {'score': 0.3747948706150055, 'start': 24, 'end': 32, 'answer': '۲۰ سالمه'}
>>> {'score': 0.5945395827293396, 'start': 38, 'end': 55, 'answer': 'پردازش زبان طبیعی'}
```
#### Manual approach 🔥
With the manual approach it is possible to return _no answer_, with even better
performance; a rough sketch of the underlying null-score comparison follows the examples below.
- PyTorch
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from src.utils import AnswerPredictor
model_name = "SajjadAyoubi/bert-base-fa-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
# this class is from src/utils.py and you can read more about it
predictor = AnswerPredictor(model, tokenizer, device="cpu", n_best=10)
preds = predictor(questions, [text] * 3, batch_size=3)
for k, v in preds.items():
print(v)
```
Produces an output such below:
```
100%|██████████| 1/1 [00:00<00:00, 3.56it/s]
{'score': 8.040637016296387, 'text': 'سجاد ایوبی'}
{'score': 9.901972770690918, 'text': '۲۰'}
{'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'}
```
- TensorFlow 2.X
```python
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering
from src.utils import TFAnswerPredictor
model_name = "SajjadAyoubi/bert-base-fa-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
# this class is from src/utils.py, you can read more about it
predictor = TFAnswerPredictor(model, tokenizer, n_best=10)
preds = predictor(questions, [text] * 3, batch_size=3)
for k, v in preds.items():
print(v)
```
Produces an output such below:
```text
100%|██████████| 1/1 [00:00<00:00, 3.56it/s]
{'score': 8.040637016296387, 'text': 'سجاد ایوبی'}
{'score': 9.901972770690918, 'text': '۲۰'}
{'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'}
```
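The `AnswerPredictor` helpers above live in the project's `src/utils.py`. For readers without that file, the following is a rough, self-contained sketch of the usual SQuAD2-style no-answer logic — comparing the best span score against the `[CLS]` null score; the threshold of `0.0` is an assumption that would normally be tuned on a dev set:

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "SajjadAyoubi/bert-base-fa-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "اسمم چیه؟"
context = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start_logits, end_logits = outputs.start_logits[0], outputs.end_logits[0]

# Null score: start and end both point at the [CLS] token (index 0).
null_score = (start_logits[0] + end_logits[0]).item()

# Best non-null span (simple argmax; a full decoder would restrict spans to the context).
start_idx = int(start_logits.argmax())
end_idx = int(end_logits[start_idx:].argmax()) + start_idx
span_score = (start_logits[start_idx] + end_logits[end_idx]).item()

no_answer_threshold = 0.0  # assumption: tune on a dev set
if null_score - span_score > no_answer_threshold:
    print("no answer")
else:
    print(tokenizer.decode(inputs["input_ids"][0][start_idx : end_idx + 1]))
```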
Or you can access the whole demonstration using [HowToUse iPython Notebook on Google Colab](https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/HowToUse.ipynb)
|
SajjadAyoubi/xlm-roberta-large-fa-qa | 2021-04-21T07:23:30.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin"
]
| SajjadAyoubi | 77 | transformers | ### How to use
#### Requirements
This model requires the `transformers` and `sentencepiece` packages, both of which can be
installed with `pip`.
```sh
pip install transformers sentencepiece
```
#### Pipelines 🚀
If you are not familiar with Transformers, you can use pipelines instead.
Note that pipelines cannot return _no answer_ for a question.
```python
from transformers import pipeline
model_name = "SajjadAyoubi/lm-roberta-large-fa-qa"
qa_pipeline = pipeline("question-answering", model=model_name, tokenizer=model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
for question in questions:
print(qa_pipeline({"context": text, "question": question}))
>>> {'score': 0.4839823544025421, 'start': 8, 'end': 18, 'answer': 'سجاد ایوبی'}
>>> {'score': 0.3747948706150055, 'start': 24, 'end': 32, 'answer': '۲۰ سالمه'}
>>> {'score': 0.5945395827293396, 'start': 38, 'end': 55, 'answer': 'پردازش زبان طبیعی'}
```
#### Manual approach 🔥
With the manual approach it is possible to return _no answer_, with even better
performance.
- PyTorch
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from src.utils import AnswerPredictor
model_name = "SajjadAyoubi/lm-roberta-large-fa-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
# this class is from src/utils.py and you can read more about it
predictor = AnswerPredictor(model, tokenizer, device="cpu", n_best=10)
preds = predictor(questions, [text] * 3, batch_size=3)
for k, v in preds.items():
print(v)
```
Produces an output such below:
```
100%|██████████| 1/1 [00:00<00:00, 3.56it/s]
{'score': 8.040637016296387, 'text': 'سجاد ایوبی'}
{'score': 9.901972770690918, 'text': '۲۰'}
{'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'}
```
- TensorFlow 2.X
```python
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering
from src.utils import TFAnswerPredictor
model_name = "SajjadAyoubi/lm-roberta-large-fa-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
# this class is from src/utils.py, you can read more about it
predictor = TFAnswerPredictor(model, tokenizer, n_best=10)
preds = predictor(questions, [text] * 3, batch_size=3)
for k, v in preds.items():
print(v)
```
Produces an output such below:
```text
100%|██████████| 1/1 [00:00<00:00, 3.56it/s]
{'score': 8.040637016296387, 'text': 'سجاد ایوبی'}
{'score': 9.901972770690918, 'text': '۲۰'}
{'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'}
```
Or you can access the whole demonstration using [HowToUse iPython Notebook on Google Colab](https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/HowToUse.ipynb)
|
Salesforce/bart-large-xsum-samsum | 2021-06-09T19:36:02.000Z | [
"pytorch",
"tf",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| Salesforce | 244 | transformers | |
Salesforce/cods-bart-large-xsum-samsum | 2021-06-09T19:39:19.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| Salesforce | 94 | transformers |