modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard
---|---|---|---|---|---|---|---|---|
SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune | 2021-02-17T14:39:17.000Z | [
"pytorch",
"t5",
"transformers",
"summarization"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| SEBIS | 8 | transformers | ---
tags:
- summarization
widget:
- text: "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
---
# CodeTrans model for program synthesis
Pretrained model on a Lisp-inspired DSL programming language using the t5-small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the program synthesis task for the lisp inspired DSL code.
## Intended uses & limitations
The model could be used to generate Lisp-inspired DSL code given a human-language description of the task.
### How to use
Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/transfer%20learning%20fine-tuning/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
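Below is a minimal sketch of this optimizer setup in PyTorch. It assumes the `transformers` Adafactor implementation and a hypothetical warmup length; it is not the authors' original TPU training code.
```python
import torch
from transformers import Adafactor, AutoModelWithLMHead

# Hedged sketch: Adafactor with an inverse-square-root learning rate schedule,
# approximating the pre-training setup described above (not the original TPU code).
model = AutoModelWithLMHead.from_pretrained("t5-small")

optimizer = Adafactor(
    model.parameters(),
    lr=1.0,                  # base value; the schedule below supplies the actual rate
    relative_step=False,
    scale_parameter=False,
    warmup_init=False,
)

warmup_steps = 10_000        # assumed value, not stated in the model card

def inverse_sqrt(step: int) -> float:
    # Constant during warmup, then decays as 1 / sqrt(step) (T5-style schedule).
    return 1.0 / max(step, warmup_steps) ** 0.5

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=inverse_sqrt)
```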
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 5,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing lisp inspired DSL data.
## Evaluation results
For the program synthesis task, different models achieve the following results (in BLEU score):
Test results :
| Language / Model | LISP |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 89.43 |
| CodeTrans-ST-Base | 89.65 |
| CodeTrans-TF-Small | 90.30 |
| CodeTrans-TF-Base | 90.24 |
| CodeTrans-TF-Large | 90.21 |
| CodeTrans-MT-Small | 82.88 |
| CodeTrans-MT-Base | 86.99 |
| CodeTrans-MT-Large | 90.27 |
| CodeTrans-MT-TF-Small | **90.31** |
| CodeTrans-MT-TF-Base | 90.30 |
| CodeTrans-MT-TF-Large | 90.17 |
| State of the art | 85.80 |
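The table reports BLEU scores; the card does not specify the exact scoring tool, but a minimal sketch using `sacrebleu` (an assumption, with hypothetical strings) would look like this:
```python
import sacrebleu

# Hedged sketch: one way to compute a BLEU score like those in the table above.
# sacrebleu and the strings below are assumptions; the card does not state the
# exact BLEU implementation or test data used.
hypotheses = ["( map ( lambda ( x ) ( - x b ) ) a )"]    # model output, hypothetical
references = [["( map ( lambda ( x ) ( - x b ) ) a )"]]  # gold DSL code, hypothetical

print(sacrebleu.corpus_bleu(hypotheses, references).score)
```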
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_csharp | 2021-02-16T15:40:27.000Z | [
"pytorch",
"t5",
"transformers",
"summarization"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| SEBIS | 14 | transformers | ---
tags:
- summarization
widget:
- text: "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
---
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on the source code summarization csharp dataset.
## Intended uses & limitations
The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp", skip_special_tokens=True),
device=0
)
tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/csharp/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask | 2021-02-16T15:49:53.000Z | [
"pytorch",
"t5",
"transformers",
"summarization"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| SEBIS | 9 | transformers | ---
tags:
- summarization
widget:
- text: "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
---
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/csharp/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 300,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune | 2021-02-16T16:36:51.000Z | [
"pytorch",
"t5",
"transformers",
"summarization"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| SEBIS | 9 | transformers | ---
tags:
- summarization
widget:
- text: "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
---
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the csharp code snippets.
## Intended uses & limitations
The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/csharp/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 1200 steps in total, using sequence length 512 (batch size 256), using only the dataset containing csharp code.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_csharp_transfer_learning_finetune | 2021-02-16T17:03:11.000Z | [
"pytorch",
"t5",
"transformers",
"summarization"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| SEBIS | 10 | transformers | ---
tags:
- summarization
widget:
- text: "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
---
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the csharp code snippets.
## Intended uses & limitations
The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/csharp/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing csharp code.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_python | 2021-02-16T15:28:53.000Z | [
"pytorch",
"t5",
"transformers",
"summarization"
]
| summarization | [
".Rhistory",
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| SEBIS | 13 | transformers | ---
tags:
- summarization
widget:
- text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
---
# CodeTrans model for source code summarization python
Pretrained model on programming language python using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on the source code summarization python dataset.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python", skip_special_tokens=True),
device=0
)
tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/python/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_python_multitask | 2021-02-16T16:05:25.000Z | [
"pytorch",
"t5",
"transformers",
"summarization"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| SEBIS | 18 | transformers | ---
tags:
- summarization
widget:
- text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
---
# CodeTrans model for source code summarization python
Pretrained model on programming language python using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/python/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 300,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_python_multitask_finetune | 2021-02-16T16:21:04.000Z | [
"pytorch",
"t5",
"transformers",
"summarization"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| SEBIS | 15 | transformers | ---
tags:
- summarization
widget:
- text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
---
# CodeTrans model for source code summarization python
Pretrained model on programming language python using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the python code snippets.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/python/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 600 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_python_transfer_learning_finetune | 2021-02-16T17:12:57.000Z | [
"pytorch",
"t5",
"transformers",
"summarization"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| SEBIS | 14 | transformers | ---
tags:
- summarization
widget:
- text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
---
# CodeTrans model for source code summarization python
Pretrained model on programming language python using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the python code snippets.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/python/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_sql | 2021-02-16T15:37:08.000Z | [
"pytorch",
"t5",
"transformers",
"summarization"
]
| summarization | [
".Rhistory",
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| SEBIS | 9 | transformers | ---
tags:
- summarization
widget:
- text: "select time ( col0 ) from tab0"
---
# CodeTrans model for source code summarization sql
Pretrained model on programming language sql using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on the source code summarization sql dataset.
## Intended uses & limitations
The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql", skip_special_tokens=True),
device=0
)
tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/sql/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask | 2021-02-16T15:59:24.000Z | [
"pytorch",
"t5",
"transformers",
"summarization"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| SEBIS | 11 | transformers | ---
tags:
- summarization
widget:
- text: "select time ( col0 ) from tab0"
---
# CodeTrans model for source code summarization sql
Pretrained model on programming language sql using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/sql/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 460,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask_finetune | 2021-02-16T16:38:19.000Z | [
"pytorch",
"t5",
"transformers",
"summarization"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| SEBIS | 10 | transformers | ---
tags:
- summarization
widget:
- text: "select time ( col0 ) from tab0"
---
# CodeTrans model for source code summarization sql
Pretrained model on programming language sql using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the sql code snippets.
## Intended uses & limitations
The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/sql/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 1200 steps in total, using sequence length 512 (batch size 256), using only the dataset containing sql code.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune | 2021-02-16T17:09:28.000Z | [
"pytorch",
"t5",
"transformers",
"summarization"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| SEBIS | 12 | transformers | ---
tags:
- summarization
widget:
- text: "select time ( col0 ) from tab0"
---
# CodeTrans model for source code summarization sql
Pretrained model on programming language sql using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the sql code snippets.
## Intended uses & limitations
The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/sql/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing sql code.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_transfer_learning_pretrain | 2021-02-19T11:59:59.000Z | [
"pytorch",
"t5",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| SEBIS | 94 | transformers | # CodeTrans transfer learning pre-trained model
Pretrained model on programming languages using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain.
The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
It could be fine-tuned on other tasks in the software development domain.
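Below is a minimal sketch of loading this checkpoint as a starting point for fine-tuning; the task prefix and training pair are hypothetical, and this is not the authors' original training setup.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

# Hedged sketch: load the transfer-learning checkpoint for downstream fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_transfer_learning_pretrain")
model = AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_transfer_learning_pretrain")

# Hypothetical text-to-text training pair; real fine-tuning would iterate over a dataset.
inputs = tokenizer("summarize: def add ( a , b ) : return a + b", return_tensors="pt")
labels = tokenizer("add two numbers", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss
loss.backward()  # followed by an optimizer step in a real training loop
```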
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
|
SEBIS/legal_t5_small_cls_cs | 2021-01-29T08:52:19.000Z | [
"pytorch",
"t5",
"seq2seq",
"Cszech",
"dataset:jrc-acquis",
"transformers",
"classification Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: Cszech
tags:
- classification Cszech model
datasets:
- jrc-acquis
widget:
- text: "Bez námitek k navrhovanému spojení (Případ č. COMP/M.4169 – Virgin/CPW/JV) (2006/C 103/16) (Text s významem pro EHP) Dne 29. března 2006 se Komise rozhodla nevznést námitky proti výše uvedenému spojení a prohlásit ho za slučitelné se společným trhem. Toto rozhodnutí je založeno na čl. 6 odst. 1 písm. b) nařízení Rady (ES) č. 139/2004. Celý text rozhodnutí je přístupný pouze v angličtině a bude uveřejněn poté, co bude zbaven obchodního tajemství, které může případně obsahovat. Text bude dosažitelný: - na webové stránce Europa – hospodářská soutěž (http://europa.eu.int/comm/competition/mergers/cases/). Tato webová stránka umožňuje vyhledat jednotlivá rozhodnutí o spojení, a to včetně společnosti, čísla případu, data a indexu odvětví hospodářství. - v elektronické podobě na webové stránce EUR-Lex, pod dokumentem č. 32006M4169. EUR-Lex umožňuje přístup k Evropskému právu přes Internet. (http://europa.eu.int/eur-lex/lex) --------------------------------------------------"
---
# legal_t5_small_cls_cs model
Model for classification of legal text written in Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis.
## Model description
legal_t5_small_cls_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales down the baseline t5 model by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
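Below is a minimal sketch of this configuration with the Hugging Face `T5Config`; the vocabulary size is an assumption, and the released checkpoint already ships its own `config.json`, so this is for illustration only.
```python
from transformers import T5Config, T5ForConditionalGeneration

# Hedged sketch of the t5-small-sized architecture described above.
config = T5Config(
    d_model=512,       # dmodel
    d_ff=2048,         # dff
    num_heads=8,       # 8-headed attention
    num_layers=6,      # 6 encoder layers (the decoder defaults to the same depth)
    vocab_size=32128,  # assumption; the actual SentencePiece vocabulary size may differ
)
model = T5ForConditionalGeneration(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")  # roughly 60M
```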
## Intended uses & limitations
The model could be used for classification of legal texts written in Czech.
### How to use
Here is how to use this model to classify legal text written in Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Bez námitek k navrhovanému spojení (Případ č. COMP/M.4169 – Virgin/CPW/JV) (2006/C 103/16) (Text s významem pro EHP) Dne 29. března 2006 se Komise rozhodla nevznést námitky proti výše uvedenému spojení a prohlásit ho za slučitelné se společným trhem. Toto rozhodnutí je založeno na čl. 6 odst. 1 písm. b) nařízení Rady (ES) č. 139/2004. Celý text rozhodnutí je přístupný pouze v angličtině a bude uveřejněn poté, co bude zbaven obchodního tajemství, které může případně obsahovat. Text bude dosažitelný: - na webové stránce Europa – hospodářská soutěž (http://europa.eu.int/comm/competition/mergers/cases/). Tato webová stránka umožňuje vyhledat jednotlivá rozhodnutí o spojení, a to včetně společnosti, čísla případu, data a indexu odvětví hospodářství. - v elektronické podobě na webové stránce EUR-Lex, pod dokumentem č. 32006M4169. EUR-Lex umožňuje přístup k Evropskému právu přes Internet. (http://europa.eu.int/eur-lex/lex) --------------------------------------------------"
pipeline([cs_text], max_length=512)
```
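The pipeline treats classification as text generation, so the predicted class is returned as generated text. A minimal sketch of reading it out, reusing `pipeline` and `cs_text` from the snippet above (the exact label string depends on the model's label set):
```python
# Sketch only: the predicted class label comes back as generated text.
result = pipeline([cs_text], max_length=512)
print(result[0]["translation_text"])
```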
## Training data
The legal_t5_small_cls_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 18 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 64). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
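As an illustration only, building such a vocabulary with SentencePiece might look roughly like the following; the input file name and vocabulary size are placeholders, not the values actually used:
```python
import sentencepiece as spm

# Sketch only: train a unigram SentencePiece model on the parallel corpus text.
# "parallel_corpus.txt" (one sentence per line) and vocab_size are illustrative placeholders.
spm.SentencePieceTrainer.train(
    input="parallel_corpus.txt",
    model_prefix="legal_t5_vocab",
    model_type="unigram",
    vocab_size=32000,
)
sp = spm.SentencePieceProcessor(model_file="legal_t5_vocab.model")
print(sp.encode("Bez námitek k navrhovanému spojení", out_type=str))
```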
### Pretraining
## Evaluation results
When the model is used on the classification test dataset, it achieves the following results:
Test results :
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_cs | 0.6297|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_cls_de | 2021-01-29T08:52:22.000Z | [
"pytorch",
"t5",
"seq2seq",
"Deustch",
"dataset:jrc-acquis",
"transformers",
"classification Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: German
tags:
- classification German model
datasets:
- jrc-acquis
widget:
- text: "BESCHLUSS DES RATES vom 17. Dezember 1999 über den Abschluß des Abkommens in Form eines Briefwechsels zwischen der Europäischen Gemeinschaft und der Tunesischen Republik über die Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft (1999/873/EG) DER RAT DER EUROPÄISCHEN UNION - gestützt auf den Vertrag zur Gründung der Europäischen Gemeinschaft, insbesondere auf Artikel 133 in Verbindung mit Artikel 300 Absatz 2 Unterabsatz 1, auf Vorschlag der Kommission, in Erwägung nachstehender Gründe: (1) Zwischen der Europäischen Gemeinschaft und der Tunesischen Republik wurde ein Abkommen in Form eines Briefwechsels ausgehandelt, um die Geltungsdauer der Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft, die in Artikel 3 des Protokolls Nr. 1 des Europa-Mittelmeer-Abkommens zur Gründung einer Assoziation zwischen der Europäischen Gemeinschaft und ihren Mitgliedstaaten einerseits und der Tunesischen Republik andererseits(1) vorgesehen ist, für die Zeit vom 1. Januar bis zum 31. Dezember 2000 zu verlängern. (2) Das Abkommen sollte im Namen der Gemeinschaft genehmigt werden - BESCHLIESST: Artikel 1 Das Abkommen in Form eines Briefwechsels zwischen der Europäischen Gemeinschaft und der Tunesischen Republik über die Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft wird im Namen der Gemeinschaft genehmigt. Der Wortlaut des Abkommens ist diesem Beschluß beigefügt. Artikel 2 Der Präsident des Rates wird ermächtigt, die Person zu bestellen, die befugt ist, das Abkommen rechtsverbindlich für die Gemeinschaft zu unterzeichnen. Geschehen zu Brüssel am 17. Dezember 1999. Im Namen des Rates Der Präsident K. HEMILÄ (1) ABl. L 97 vom 30.3.1998, S. 1."
---
# legal_t5_small_cls_de model
Model for classification of legal text written in German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis.
## Model description
legal_t5_small_cls_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for classification of legal texts written in German.
### How to use
Here is how to use this model to classify legal text written in German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "BESCHLUSS DES RATES vom 17. Dezember 1999 über den Abschluß des Abkommens in Form eines Briefwechsels zwischen der Europäischen Gemeinschaft und der Tunesischen Republik über die Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft (1999/873/EG) DER RAT DER EUROPÄISCHEN UNION - gestützt auf den Vertrag zur Gründung der Europäischen Gemeinschaft, insbesondere auf Artikel 133 in Verbindung mit Artikel 300 Absatz 2 Unterabsatz 1, auf Vorschlag der Kommission, in Erwägung nachstehender Gründe: (1) Zwischen der Europäischen Gemeinschaft und der Tunesischen Republik wurde ein Abkommen in Form eines Briefwechsels ausgehandelt, um die Geltungsdauer der Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft, die in Artikel 3 des Protokolls Nr. 1 des Europa-Mittelmeer-Abkommens zur Gründung einer Assoziation zwischen der Europäischen Gemeinschaft und ihren Mitgliedstaaten einerseits und der Tunesischen Republik andererseits(1) vorgesehen ist, für die Zeit vom 1. Januar bis zum 31. Dezember 2000 zu verlängern. (2) Das Abkommen sollte im Namen der Gemeinschaft genehmigt werden - BESCHLIESST: Artikel 1 Das Abkommen in Form eines Briefwechsels zwischen der Europäischen Gemeinschaft und der Tunesischen Republik über die Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft wird im Namen der Gemeinschaft genehmigt. Der Wortlaut des Abkommens ist diesem Beschluß beigefügt. Artikel 2 Der Präsident des Rates wird ermächtigt, die Person zu bestellen, die befugt ist, das Abkommen rechtsverbindlich für die Gemeinschaft zu unterzeichnen. Geschehen zu Brüssel am 17. Dezember 1999. Im Namen des Rates Der Präsident K. HEMILÄ (1) ABl. L 97 vom 30.3.1998, S. 1."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_cls_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 23 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 64). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
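A minimal sketch of setting up such an optimizer with the Hugging Face `Adafactor` implementation and its built-in relative-step (inverse square root) schedule; any hyperparameter not named above is an assumption:
```python
from transformers import Adafactor, AutoModelWithLMHead
from transformers.optimization import AdafactorSchedule

# Sketch only: AdaFactor with its relative-step (inverse square root) schedule.
model = AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_de")
optimizer = Adafactor(
    model.parameters(),
    lr=None,               # let AdaFactor derive the step size from the schedule
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
)
lr_scheduler = AdafactorSchedule(optimizer)
```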
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the classification test dataset, it achieves the following results:
Test results :
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_de | 0.6358|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_cls_en | 2021-01-29T08:52:32.000Z | [
"pytorch",
"t5",
"seq2seq",
"English",
"dataset:jrc-acquis",
"transformers",
"classification English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 10 | transformers |
---
language: English
tags:
- classification English model
datasets:
- jrc-acquis
widget:
- text: "Appointment of members of the Conciliation Body instituted by Commission Decision 94/442/EC of 1 July 1994 setting up a conciliation procedure in the context of the clearance of the accounts of the European Agricultural Guidance and Guarantee Fund (EAGGF) Guarantee Section (2006/C 193/09) (1) The Commission has renewed the term of office of: Mr José Luis SAENZ GARCIA-BAQUERO (ES) (from 1 August 2006 to 31 July 2007). (2) The Commission has appointed as members: - Mr Peter BAUMANN (DA) (from 1 August 2006 to 31 July 2009); - Mr Daniel PERRIN (FR) (from 1 August 2006 to 31 July 2009). (3) The Commission has appointed as substitute members: - Mr Robert BURIAN (A) (from 1 August 2006); - Mr Eduardo DIEZ PATIER (ES) (from 1 August 2006). --------------------------------------------------"
---
# legal_t5_small_cls_en model
Model for classification of legal text written in English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis.
## Model description
legal_t5_small_cls_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for classification of legal texts written in English.
### How to use
Here is how to use this model to classify legal text written in English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Appointment of members of the Conciliation Body instituted by Commission Decision 94/442/EC of 1 July 1994 setting up a conciliation procedure in the context of the clearance of the accounts of the European Agricultural Guidance and Guarantee Fund (EAGGF) Guarantee Section (2006/C 193/09) (1) The Commission has renewed the term of office of: Mr José Luis SAENZ GARCIA-BAQUERO (ES) (from 1 August 2006 to 31 July 2007). (2) The Commission has appointed as members: - Mr Peter BAUMANN (DA) (from 1 August 2006 to 31 July 2009); - Mr Daniel PERRIN (FR) (from 1 August 2006 to 31 July 2009). (3) The Commission has appointed as substitute members: - Mr Robert BURIAN (A) (from 1 August 2006); - Mr Eduardo DIEZ PATIER (ES) (from 1 August 2006). --------------------------------------------------"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_cls_en model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 19 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 64). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the classification test dataset, it achieves the following results:
Test results :
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_en | 0.6247|
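For reference, a score of this kind is obtained by comparing the generated labels against the reference labels. A minimal sketch with scikit-learn; the label strings and the averaging mode are illustrative assumptions, not the actual evaluation setup:
```python
from sklearn.metrics import f1_score

# Sketch only: score generated labels against reference labels.
references = ["agriculture", "external trade", "agriculture"]
predictions = ["agriculture", "agriculture", "agriculture"]
print(f1_score(references, predictions, average="weighted"))
```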
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_cls_es | 2021-01-29T08:52:30.000Z | [
"pytorch",
"t5",
"seq2seq",
"Spanish",
"dataset:jrc-acquis",
"transformers",
"classification Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers |
---
language: Spanish
tags:
- classification Spanish model
datasets:
- jrc-acquis
widget:
- text: "Reglamento (CE) no 90/2001 de la Comisión de 17 de enero de 2001 que modifica el Reglamento (CE) n° 800/1999 por el que se establecen disposiciones comunes de aplicación del régimen de restituciones por exportación de productos agrícolas LA COMISIÓN DE LAS COMUNIDADES EUROPEAS, Visto el Tratado constitutivo de la Comunidad Europea, Visto el Reglamento (CEE) n° 1766/92 del Consejo, de 30 de junio de 1992, por el que se establece la organización común de mercados en el sector de los cereales(1), cuya última modificación la constituye el Reglamento (CE) n° 1666/2000(2), y, en particular, sus artículos 13 y 21, así como las disposiciones correspondientes de los demás Reglamentos por los que se establecen organizaciones comunes de mercados de productos agrícolas, Considerando lo siguiente: (1) En el caso de exportación de productos presentados a granel o en unidades no normalizadas, en los que es evidente que la masa neta exacta de los productos no puede conocerse hasta después de cargar el medio de transporte, el apartado 6 del artículo 5 del Reglamento (CE) n° 800/1999 de la Comisión(3), modificado por el Reglamento (CE) n° 1557/2000(4) establece la aplicación de una reducción de la restitución cuando la masa neta efectivamente cargada sea inferior a un determinado porcentaje de la masa neta estimada. No obstante, para la aplicación de esta disposición conviene tener en cuenta las limitaciones inherentes a los medios de transporte de navegación marítima o interior. En efecto, en el caso de los productos exportados a granel, puede ocurrir que las cantidades declaradas no se carguen en su totalidad debido, en particular, a la decisión del responsable del medio de transporte que puede ordenar la suspensión de la carga por razones técnicas o debido a un exceso de carga imputable a los demás exportadores. (2) Dado que determinados cortes de carne de porcino no se presentan en embalajes ni son, por naturaleza, homogéneos, conviene ampliar la categoría de unidades no normalizadas a este tipo de productos. (3) En lo que respecta a la noción de lugar de carga, en el comercio de exportación de productos agrícolas se presenta una multitud de situaciones comerciales y administrativas; por consiguiente, es difícil establecer una norma única y conviene autorizar a los Estados miembros para que determinen el lugar más apropiado para efectuar los controles físicos para los productos agrícolas exportados que se benefician de una restitución. A estos efectos, parece justificado determinar el lugar de carga, de forma diferente, en función de que los productos sean cargados en contenedores o, por el contrario, a granel, en sacos o en cajas y no se carguen posteriormente en contenedores. Asimismo, es conveniente que, cuando existan motivos debidamente justificados, se permita que las autoridades aduaneras acepten para los productos agrícolas que se beneficien, de una restitución declaraciones de exportación presentadas en una oficina de aduanas que no sea la del lugar donde vayan a cargarse los productos. (4) En el caso de los productos sujetos al régimen de mercancías de retorno, es oportuno prever la posibilidad de que la reintroducción se efectúe, bien por el Estado miembros del que sean originarios los productos, bien por el Estado miembro exportador de la primera exportación. (5) Conviene modificar el Reglamento (CE) n° 800/1999 en consecuencia. (6) Las medidas previstas en el presente Reglamento se ajustan al dictamen de todos los Comités de gestión interesados. 
HA ADOPTADO EL PRESENTE REGLAMENTO: Artículo 1 El Reglamento (CE) n° 800/1999 se modificará como sigue: 1) En el apartado 6 del articulo 5, el párrafo tercero se sustituirá por el texto siguiente: %quot%No se concederá ninguna restitución por la cantidad que sobrepase el 110 % de la masa neta estimada. Cuando la masa efectivamente cargada sea inferior al 90 % de la masa neta estimada, la restitución por la masa neta efectivamente cargada se reducirá un 10 % en relación con la diferencia entre la restitución correspondiente al 90 % de la masa neta estimada y la restitución correspondiente a la masa efectivamente cargada. No obstante, en los casos de exportación par vía marítima o por vía navegable interior, la restitución se pagará por la masa neta efectivamente cargada cuando el exportador pueda aportar la prueba, refrendada por el responsable del medio de transporte, de que el hecho de que no se cargara la totalidad de sus mercancías se debió a las limitaciones inherentes a ese tipo de transporte o a un exceso de carga imputable a uno o a varios de los demás exportadores. En caso de que el exportador haya utilizado el procedimiento de domiciliación previsto en el artículo 283 del Reglamento (CEE) n° 2454/93 serán aplicables las disposiciones del presente párrafo siempre que las autoridades aduaneras hayan autorizado la rectificación de los documentos contables en los que los productos exportados hayan sido inscritos.%quot%. 2) En el apartado 6 del artículo 5, el párrafo cuarto se sustituirá por el texto siguiente: %quot%Se considerarán productos en unidades no estandarizadas los animales vivos, las (medias) canales, los cuartos, partes delanteras, jamones, paletillas, pechos y lomos.%quot%. 3) El apartado 7 del articulo 5 se sustituirá por el texto siguiente: %quot%7. Cualquier persona que exporte productos por los cuales solicite la concesión de la restitución estará obligada a lo siguiente: a) presentar la declaración de exportación en la oficina de aduanas competente del lugar en que los productos vayan a cargarse en el transporte que vaya a efectuar la exportación; b) informar a dicha oficina de aduanas, coma mínimo 24 horas antes del comienzo de las operaciones de carga, e indicar la duración prevista de las operaciones de carga; las autoridades competentes podrán modificar el plazo de 24 horas. Se podrá considerar como lugar de carga en el transporte de los productos destinados a la exportación: - en el caso de los productos que se exporten cargados en contenedores, el lugar donde se carguen en éstos las mercancías, - en el caso de los productos que se exporten a granel, en sacos, cajones, cajas, botellas, etc. sin cargarse en contenedores, el lugar donde se cargue el medio de transporte por el que las mercancías vayan a salir del territorio aduanero de la Comunidad. La oficina de aduanas competente podrá autorizar las operaciones de carga una vez aceptada la declaración de exportación y antes de finalizar el plazo a que se refiere la letra b). La oficina de aduanas competente deberá estar en condiciones de realizar el control físico y de aplicar las medidas de identificación necesarias para el transporte hacia la oficina de salida del territorio aduanero de la Comunidad. 
Si por razones de organización administrativa o por otras razones debidamente justificadas, no pueden aplicarse las disposiciones del párrafo primero, la declaración de exportación, sólo podrá ser presentada en la oficina de aduanas competente del Estado miembro en cuestión, y, en el caso de un control físico de conformidad con el Reglamento (CEE) n° 386/90, el producto presentado deberá ser descargado completamente. No obstante, la descarga completa no será obligatoria cuando las autoridades competentes puedan garantizar la realización de un control físico exhaustivo.%quot%. 4) En el apartado 3 del artículo 25, el último párrafo se sustituirá por el texto siguiente: %quot%La presente disposición sólo se aplicará cuando el régimen de retorno haya sido utilizado en el Estado miembro donde se haya aceptado la declaración de exportación de la primera exportación o en el Estado miembro de origen, de conformidad con el artículo 15 de la Directiva 97/78/CE del Consejo(5), por la que se establecen los principios relativos a la organización de controles veterinarios de los productos que se introduzcan en la Comunidad procedentes de terceros países.%quot%. Artículo 2 El presente Reglamento entrará en vigor el séptimo día siguiente al de su publicación en el Diario Oficial de las Comunidades Europeas. A petición de los exportadores, las disposiciones del apartado 1 del articulo 1 se aplicarán a los expedientes de restituciones que aún no hayan sido cerrados en el momento de la entrada en vigor del presente Reglamento. El presente Reglamento será obligatorio en todos sus elementos y directamente aplicable en cada Estado miembro. Hecho en Bruselas, el 17 de enero de 2001. Por la Comisión Franz Fischler Miembro de la Comisión (1) DO L 181 de 1.7.1992, p. 21. (2) DO L 193 de 29.7.2000, p. 1. (3) DO L 102 de 17.4.1999, p. 11. (4) DO L 179 de 18.7.2000, p. 6. (5) DO L 24 de 30.1.1998, p. 9."
---
# legal_t5_small_cls_es model
Model for classification of legal text written in Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis.
## Model description
legal_t5_small_cls_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for classification of legal texts written in Spanish.
### How to use
Here is how to use this model to classify legal text written in Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Reglamento (CE) no 90/2001 de la Comisión de 17 de enero de 2001 que modifica el Reglamento (CE) n° 800/1999 por el que se establecen disposiciones comunes de aplicación del régimen de restituciones por exportación de productos agrícolas LA COMISIÓN DE LAS COMUNIDADES EUROPEAS, Visto el Tratado constitutivo de la Comunidad Europea, Visto el Reglamento (CEE) n° 1766/92 del Consejo, de 30 de junio de 1992, por el que se establece la organización común de mercados en el sector de los cereales(1), cuya última modificación la constituye el Reglamento (CE) n° 1666/2000(2), y, en particular, sus artículos 13 y 21, así como las disposiciones correspondientes de los demás Reglamentos por los que se establecen organizaciones comunes de mercados de productos agrícolas, Considerando lo siguiente: (1) En el caso de exportación de productos presentados a granel o en unidades no normalizadas, en los que es evidente que la masa neta exacta de los productos no puede conocerse hasta después de cargar el medio de transporte, el apartado 6 del artículo 5 del Reglamento (CE) n° 800/1999 de la Comisión(3), modificado por el Reglamento (CE) n° 1557/2000(4) establece la aplicación de una reducción de la restitución cuando la masa neta efectivamente cargada sea inferior a un determinado porcentaje de la masa neta estimada. No obstante, para la aplicación de esta disposición conviene tener en cuenta las limitaciones inherentes a los medios de transporte de navegación marítima o interior. En efecto, en el caso de los productos exportados a granel, puede ocurrir que las cantidades declaradas no se carguen en su totalidad debido, en particular, a la decisión del responsable del medio de transporte que puede ordenar la suspensión de la carga por razones técnicas o debido a un exceso de carga imputable a los demás exportadores. (2) Dado que determinados cortes de carne de porcino no se presentan en embalajes ni son, por naturaleza, homogéneos, conviene ampliar la categoría de unidades no normalizadas a este tipo de productos. (3) En lo que respecta a la noción de lugar de carga, en el comercio de exportación de productos agrícolas se presenta una multitud de situaciones comerciales y administrativas; por consiguiente, es difícil establecer una norma única y conviene autorizar a los Estados miembros para que determinen el lugar más apropiado para efectuar los controles físicos para los productos agrícolas exportados que se benefician de una restitución. A estos efectos, parece justificado determinar el lugar de carga, de forma diferente, en función de que los productos sean cargados en contenedores o, por el contrario, a granel, en sacos o en cajas y no se carguen posteriormente en contenedores. Asimismo, es conveniente que, cuando existan motivos debidamente justificados, se permita que las autoridades aduaneras acepten para los productos agrícolas que se beneficien, de una restitución declaraciones de exportación presentadas en una oficina de aduanas que no sea la del lugar donde vayan a cargarse los productos. (4) En el caso de los productos sujetos al régimen de mercancías de retorno, es oportuno prever la posibilidad de que la reintroducción se efectúe, bien por el Estado miembros del que sean originarios los productos, bien por el Estado miembro exportador de la primera exportación. (5) Conviene modificar el Reglamento (CE) n° 800/1999 en consecuencia. (6) Las medidas previstas en el presente Reglamento se ajustan al dictamen de todos los Comités de gestión interesados. 
HA ADOPTADO EL PRESENTE REGLAMENTO: Artículo 1 El Reglamento (CE) n° 800/1999 se modificará como sigue: 1) En el apartado 6 del articulo 5, el párrafo tercero se sustituirá por el texto siguiente: %quot%No se concederá ninguna restitución por la cantidad que sobrepase el 110 % de la masa neta estimada. Cuando la masa efectivamente cargada sea inferior al 90 % de la masa neta estimada, la restitución por la masa neta efectivamente cargada se reducirá un 10 % en relación con la diferencia entre la restitución correspondiente al 90 % de la masa neta estimada y la restitución correspondiente a la masa efectivamente cargada. No obstante, en los casos de exportación par vía marítima o por vía navegable interior, la restitución se pagará por la masa neta efectivamente cargada cuando el exportador pueda aportar la prueba, refrendada por el responsable del medio de transporte, de que el hecho de que no se cargara la totalidad de sus mercancías se debió a las limitaciones inherentes a ese tipo de transporte o a un exceso de carga imputable a uno o a varios de los demás exportadores. En caso de que el exportador haya utilizado el procedimiento de domiciliación previsto en el artículo 283 del Reglamento (CEE) n° 2454/93 serán aplicables las disposiciones del presente párrafo siempre que las autoridades aduaneras hayan autorizado la rectificación de los documentos contables en los que los productos exportados hayan sido inscritos.%quot%. 2) En el apartado 6 del artículo 5, el párrafo cuarto se sustituirá por el texto siguiente: %quot%Se considerarán productos en unidades no estandarizadas los animales vivos, las (medias) canales, los cuartos, partes delanteras, jamones, paletillas, pechos y lomos.%quot%. 3) El apartado 7 del articulo 5 se sustituirá por el texto siguiente: %quot%7. Cualquier persona que exporte productos por los cuales solicite la concesión de la restitución estará obligada a lo siguiente: a) presentar la declaración de exportación en la oficina de aduanas competente del lugar en que los productos vayan a cargarse en el transporte que vaya a efectuar la exportación; b) informar a dicha oficina de aduanas, coma mínimo 24 horas antes del comienzo de las operaciones de carga, e indicar la duración prevista de las operaciones de carga; las autoridades competentes podrán modificar el plazo de 24 horas. Se podrá considerar como lugar de carga en el transporte de los productos destinados a la exportación: - en el caso de los productos que se exporten cargados en contenedores, el lugar donde se carguen en éstos las mercancías, - en el caso de los productos que se exporten a granel, en sacos, cajones, cajas, botellas, etc. sin cargarse en contenedores, el lugar donde se cargue el medio de transporte por el que las mercancías vayan a salir del territorio aduanero de la Comunidad. La oficina de aduanas competente podrá autorizar las operaciones de carga una vez aceptada la declaración de exportación y antes de finalizar el plazo a que se refiere la letra b). La oficina de aduanas competente deberá estar en condiciones de realizar el control físico y de aplicar las medidas de identificación necesarias para el transporte hacia la oficina de salida del territorio aduanero de la Comunidad. 
Si por razones de organización administrativa o por otras razones debidamente justificadas, no pueden aplicarse las disposiciones del párrafo primero, la declaración de exportación, sólo podrá ser presentada en la oficina de aduanas competente del Estado miembro en cuestión, y, en el caso de un control físico de conformidad con el Reglamento (CEE) n° 386/90, el producto presentado deberá ser descargado completamente. No obstante, la descarga completa no será obligatoria cuando las autoridades competentes puedan garantizar la realización de un control físico exhaustivo.%quot%. 4) En el apartado 3 del artículo 25, el último párrafo se sustituirá por el texto siguiente: %quot%La presente disposición sólo se aplicará cuando el régimen de retorno haya sido utilizado en el Estado miembro donde se haya aceptado la declaración de exportación de la primera exportación o en el Estado miembro de origen, de conformidad con el artículo 15 de la Directiva 97/78/CE del Consejo(5), por la que se establecen los principios relativos a la organización de controles veterinarios de los productos que se introduzcan en la Comunidad procedentes de terceros países.%quot%. Artículo 2 El presente Reglamento entrará en vigor el séptimo día siguiente al de su publicación en el Diario Oficial de las Comunidades Europeas. A petición de los exportadores, las disposiciones del apartado 1 del articulo 1 se aplicarán a los expedientes de restituciones que aún no hayan sido cerrados en el momento de la entrada en vigor del presente Reglamento. El presente Reglamento será obligatorio en todos sus elementos y directamente aplicable en cada Estado miembro. Hecho en Bruselas, el 17 de enero de 2001. Por la Comisión Franz Fischler Miembro de la Comisión (1) DO L 181 de 1.7.1992, p. 21. (2) DO L 193 de 29.7.2000, p. 1. (3) DO L 102 de 17.4.1999, p. 11. (4) DO L 179 de 18.7.2000, p. 6. (5) DO L 24 de 30.1.1998, p. 9."
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_cls_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 22 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 64). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the classification test dataset, it achieves the following results:
Test results :
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_es | 0.6318|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_cls_finetuned_cs | 2021-04-23T08:07:34.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_cls_finetuned_de | 2021-04-23T08:08:36.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers | |
SEBIS/legal_t5_small_cls_finetuned_en | 2021-04-23T08:13:46.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers | |
SEBIS/legal_t5_small_cls_finetuned_es | 2021-04-23T08:12:44.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers | |
SEBIS/legal_t5_small_cls_finetuned_fr | 2021-04-23T08:09:37.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers | |
SEBIS/legal_t5_small_cls_finetuned_it | 2021-04-23T08:10:40.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers | |
SEBIS/legal_t5_small_cls_finetuned_sv | 2021-04-23T08:11:42.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers | |
SEBIS/legal_t5_small_cls_fr | 2021-01-29T08:52:24.000Z | [
"pytorch",
"t5",
"seq2seq",
"French",
"dataset:jrc-acquis",
"transformers",
"classification French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 12 | transformers |
---
language: French
tags:
- classification French model
datasets:
- jrc-acquis
widget:
- text: "Règlement (CE) no 264/2005 de la Commission du 16 février 2005 fixant les restitutions à l'exportation dans le secteur de la viande de volaille applicables à partir du 17 février 2005 LA COMMISSION DES COMMUNAUTÉS EUROPÉENNES, vu le traité instituant la Communauté européenne, vu le règlement (CEE) no 2777/75 du Conseil du 29 octobre 1975 portant organisation commune des marchés dans le secteur de la viande de volaille [1], et notamment son article 8, paragraphe 3, troisième alinéa, considérant ce qui suit: (1) Aux termes de l'article 8 du règlement (CEE) no 2777/75, la différence entre les prix des produits visés à l'article 1er, paragraphe 1, dudit règlement, sur le marché mondial et dans la Communauté, peut être couverte par une restitution à l'exportation. (2) L'application de ces règles et critères à la situation actuelle des marchés dans le secteur de la viande de volaille conduit à fixer la restitution à un montant qui permette la participation de la Communauté au commerce international et tienne compte également du caractère des exportations de ces produits ainsi que de leur importance à l'heure actuelle. (3) L'article 21 du règlement (CE) no 800/1999 de la Commission du 15 avril 1999 portant modalités communes d'application du régime des restitutions à l'exportation pour les produits agricoles [2] prévoit qu'aucune restitution n'est octroyée lorsque les produits ne sont pas de qualité saine, loyale et marchande le jour d'acceptation de la déclaration d'exportation. Afin d'assurer une application uniforme de la réglementation en vigueur, il y a lieu de préciser que, pour bénéficier d'une restitution, les viandes de volailles figurant à l'article 1er du règlement (CEE) no 2777/75 doivent porter la marque de salubrité comme prévu à la directive 71/118/CEE du Conseil du 15 février 1971 relative à des problèmes sanitaires en matière de production et de mise sur le marché de viandes fraîches de volaille [3]. (4) Le comité de gestion de la viande de volaille et des œufs n'a pas émis d'avis dans le délai imparti par son président, A ARRÊTÉ LE PRÉSENT RÈGLEMENT: Article premier Les codes des produits pour l'exportation desquels est accordée la restitution visée à l'article 8 du règlement (CEE) no 2777/75 et les montants de cette restitution sont fixés à l'annexe du présent règlement. Toutefois, afin de pouvoir bénéficier de la restitution, les produits entrant dans le champ d'application du chapitre XII de l'annexe de la directive 71/118/CEE doivent également satisfaire aux conditions de marquage de salubrité prévues par cette directive. Article 2 Le présent règlement entre en vigueur le 17 février 2005. Le présent règlement est obligatoire dans tous ses éléments et directement applicable dans tout État membre. Fait à Bruxelles, le 16 février 2005. Par la Commission Mariann Fischer Boel Membre de la Commission [1] JO L 282 du 1.11.1975, p. 77. Règlement modifié en dernier lieu par le règlement (CE) no 806/2003 (JO L 122 du 16.5.2003, p. 1). [2] JO L 102 du 17.4.1999, p. 11. Règlement modifié en dernier lieu par le règlement (CE) no 671/2004 (JO L 105 du 14.4.2004, p. 5). [3] JO L 55 du 8.3.1971, p. 23. Directive modifiée en dernier lieu par le règlement (CE) no 807/2003 (JO L 122 du 16.5.2003, p. 36). 
-------------------------------------------------- ANNEXE Code des produits | Destination | Unité de mesure | Montant des restitutions | 0105 11 11 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 19 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 91 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 99 9000 | A02 | EUR/100 pcs | 0,80 | 0105 12 00 9000 | A02 | EUR/100 pcs | 1,70 | 0105 19 20 9000 | A02 | EUR/100 pcs | 1,70 | 0207 12 10 9900 | V01 | EUR/100 kg | 41,00 | 0207 12 10 9900 | A24 | EUR/100 kg | 41,00 | 0207 12 90 9190 | V01 | EUR/100 kg | 41,00 | 0207 12 90 9190 | A24 | EUR/100 kg | 41,00 | 0207 12 90 9990 | V01 | EUR/100 kg | 41,00 | 0207 12 90 9990 | A24 | EUR/100 kg | 41,00 | --------------------------------------------------"
---
# legal_t5_small_cls_fr model
Model for classification of legal text written in French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis.
## Model description
legal_t5_small_cls_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for classification of legal texts written in French.
### How to use
Here is how to use this model to classify legal text written in French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Règlement (CE) no 264/2005 de la Commission du 16 février 2005 fixant les restitutions à l'exportation dans le secteur de la viande de volaille applicables à partir du 17 février 2005 LA COMMISSION DES COMMUNAUTÉS EUROPÉENNES, vu le traité instituant la Communauté européenne, vu le règlement (CEE) no 2777/75 du Conseil du 29 octobre 1975 portant organisation commune des marchés dans le secteur de la viande de volaille [1], et notamment son article 8, paragraphe 3, troisième alinéa, considérant ce qui suit: (1) Aux termes de l'article 8 du règlement (CEE) no 2777/75, la différence entre les prix des produits visés à l'article 1er, paragraphe 1, dudit règlement, sur le marché mondial et dans la Communauté, peut être couverte par une restitution à l'exportation. (2) L'application de ces règles et critères à la situation actuelle des marchés dans le secteur de la viande de volaille conduit à fixer la restitution à un montant qui permette la participation de la Communauté au commerce international et tienne compte également du caractère des exportations de ces produits ainsi que de leur importance à l'heure actuelle. (3) L'article 21 du règlement (CE) no 800/1999 de la Commission du 15 avril 1999 portant modalités communes d'application du régime des restitutions à l'exportation pour les produits agricoles [2] prévoit qu'aucune restitution n'est octroyée lorsque les produits ne sont pas de qualité saine, loyale et marchande le jour d'acceptation de la déclaration d'exportation. Afin d'assurer une application uniforme de la réglementation en vigueur, il y a lieu de préciser que, pour bénéficier d'une restitution, les viandes de volailles figurant à l'article 1er du règlement (CEE) no 2777/75 doivent porter la marque de salubrité comme prévu à la directive 71/118/CEE du Conseil du 15 février 1971 relative à des problèmes sanitaires en matière de production et de mise sur le marché de viandes fraîches de volaille [3]. (4) Le comité de gestion de la viande de volaille et des œufs n'a pas émis d'avis dans le délai imparti par son président, A ARRÊTÉ LE PRÉSENT RÈGLEMENT: Article premier Les codes des produits pour l'exportation desquels est accordée la restitution visée à l'article 8 du règlement (CEE) no 2777/75 et les montants de cette restitution sont fixés à l'annexe du présent règlement. Toutefois, afin de pouvoir bénéficier de la restitution, les produits entrant dans le champ d'application du chapitre XII de l'annexe de la directive 71/118/CEE doivent également satisfaire aux conditions de marquage de salubrité prévues par cette directive. Article 2 Le présent règlement entre en vigueur le 17 février 2005. Le présent règlement est obligatoire dans tous ses éléments et directement applicable dans tout État membre. Fait à Bruxelles, le 16 février 2005. Par la Commission Mariann Fischer Boel Membre de la Commission [1] JO L 282 du 1.11.1975, p. 77. Règlement modifié en dernier lieu par le règlement (CE) no 806/2003 (JO L 122 du 16.5.2003, p. 1). [2] JO L 102 du 17.4.1999, p. 11. Règlement modifié en dernier lieu par le règlement (CE) no 671/2004 (JO L 105 du 14.4.2004, p. 5). [3] JO L 55 du 8.3.1971, p. 23. Directive modifiée en dernier lieu par le règlement (CE) no 807/2003 (JO L 122 du 16.5.2003, p. 36). 
-------------------------------------------------- ANNEXE Code des produits | Destination | Unité de mesure | Montant des restitutions | 0105 11 11 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 19 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 91 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 99 9000 | A02 | EUR/100 pcs | 0,80 | 0105 12 00 9000 | A02 | EUR/100 pcs | 1,70 | 0105 19 20 9000 | A02 | EUR/100 pcs | 1,70 | 0207 12 10 9900 | V01 | EUR/100 kg | 41,00 | 0207 12 10 9900 | A24 | EUR/100 kg | 41,00 | 0207 12 90 9190 | V01 | EUR/100 kg | 41,00 | 0207 12 90 9190 | A24 | EUR/100 kg | 41,00 | 0207 12 90 9990 | V01 | EUR/100 kg | 41,00 | 0207 12 90 9990 | A24 | EUR/100 kg | 41,00 | --------------------------------------------------"
pipeline([fr_text], max_length=512)
```
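If no GPU is available, the same pipeline can be built for CPU; a minimal sketch reusing the imports from the snippet above (only the `device` argument changes):
```python
# Sketch only: the same classification pipeline on CPU (device=-1).
pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_fr"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_cls_fr", do_lower_case=False),
    device=-1,
)
```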
## Training data
The legal_t5_small_cls_fr model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 22 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 64). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the classification test dataset, it achieves the following results:
Test results :
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_fr | 0.6159|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_cls_it | 2021-01-29T08:52:26.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian",
"dataset:jrc-acquis",
"transformers",
"classification Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 21 | transformers |
---
language: Italian
tags:
- classification Italian model
datasets:
- jrc-acquis
widget:
- text: "Regolamento (CE) n. 435/2005 della Commissione del 17 marzo 2005 relativo all'applicazione di un coefficiente di riduzione ai certificati di restituzione per le merci non comprese nell'allegato I del trattato come statuito all'articolo 8, paragrafo 5, del regolamento (CE) n. 1520/2000 LA COMMISSIONE DELLE COMUNITÀ EUROPEE, visto il trattato che istituisce la Comunità europea, visto il regolamento (CE) n. 3448/93 del Consiglio, del 6 dicembre 1993, sul regime di scambi per talune merci ottenute dalla trasformazione di prodotti agricoli [1], visto il regolamento (CE) n. 1520/2000 della Commissione, del 13 luglio 2000, che stabilisce, per taluni prodotti agricoli esportati sotto forma di merci non comprese nell'allegato I del trattato, le modalità comuni di applicazione relative al versamento delle restituzioni all'esportazione e i criteri per stabilirne l'importo [2], in particolare l'articolo 8, paragrafo 5, considerando quanto segue: (1) Dalle comunicazioni degli Stati membri di cui all'articolo 8, paragrafo 2, del regolamento (CE) n. 1520/2000 si evince che l'importo totale delle domande ricevute ammonta a 178002906 EUR, mentre l'importo disponibile per la tranche di titoli di restituzione di cui all'articolo 8, paragrafo 4, del regolamento (CE) n. 1520/2000 ammonta a 68116869 EUR. (2) Un coefficiente di riduzione è calcolato sulla base dell'articolo 8, paragrafi 3 e 4, del regolamento (CE) n. 1520/2000. Siffatto coefficiente dovrebbe pertanto essere applicato agli importi richiesti sotto forma di certificati di restituzione per il periodo dal 1o aprile 2005 come stabilito all'articolo 8, paragrafo 6, del regolamento (CE) n. 1520/2000, HA ADOTTATO IL PRESENTE REGOLAMENTO: Articolo 1 Gli importi delle domande di certificati di restituzione per il periodo dal 1o aprile 2005 sono soggetti a un coefficiente di riduzione pari a 0,618. Articolo 2 Il presente regolamento entra in vigore il 18 marzo 2005. Il presente regolamento è obbligatorio in tutti i suoi elementi e direttamente applicabile in ciascuno degli Stati membri. Fatto a Bruxelles, il 17 marzo 2005. Per la Commissione Günter Verheugen Vicepresidente [1] GU L 318 del 20.12.1993, pag. 18. Regolamento modificato da ultimo dal regolamento (CE) n. 2580/2000 (GU L 298 del 25.11.2000, pag. 5). [2] GU L 177 del 15.7.2000, pag. 1. Regolamento modificato da ultimo dal regolamento (CE) n. 886/2004 (GU L 168 del 1.5.2004, pag. 14). --------------------------------------------------"
---
# legal_t5_small_cls_it model
Model for classification of legal text written in Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis.
## Model description
legal_t5_small_cls_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for classification of legal texts written in Italian.
### How to use
Here is how to use this model to classify legal text written in Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Regolamento (CE) n. 435/2005 della Commissione del 17 marzo 2005 relativo all'applicazione di un coefficiente di riduzione ai certificati di restituzione per le merci non comprese nell'allegato I del trattato come statuito all'articolo 8, paragrafo 5, del regolamento (CE) n. 1520/2000 LA COMMISSIONE DELLE COMUNITÀ EUROPEE, visto il trattato che istituisce la Comunità europea, visto il regolamento (CE) n. 3448/93 del Consiglio, del 6 dicembre 1993, sul regime di scambi per talune merci ottenute dalla trasformazione di prodotti agricoli [1], visto il regolamento (CE) n. 1520/2000 della Commissione, del 13 luglio 2000, che stabilisce, per taluni prodotti agricoli esportati sotto forma di merci non comprese nell'allegato I del trattato, le modalità comuni di applicazione relative al versamento delle restituzioni all'esportazione e i criteri per stabilirne l'importo [2], in particolare l'articolo 8, paragrafo 5, considerando quanto segue: (1) Dalle comunicazioni degli Stati membri di cui all'articolo 8, paragrafo 2, del regolamento (CE) n. 1520/2000 si evince che l'importo totale delle domande ricevute ammonta a 178002906 EUR, mentre l'importo disponibile per la tranche di titoli di restituzione di cui all'articolo 8, paragrafo 4, del regolamento (CE) n. 1520/2000 ammonta a 68116869 EUR. (2) Un coefficiente di riduzione è calcolato sulla base dell'articolo 8, paragrafi 3 e 4, del regolamento (CE) n. 1520/2000. Siffatto coefficiente dovrebbe pertanto essere applicato agli importi richiesti sotto forma di certificati di restituzione per il periodo dal 1o aprile 2005 come stabilito all'articolo 8, paragrafo 6, del regolamento (CE) n. 1520/2000, HA ADOTTATO IL PRESENTE REGOLAMENTO: Articolo 1 Gli importi delle domande di certificati di restituzione per il periodo dal 1o aprile 2005 sono soggetti a un coefficiente di riduzione pari a 0,618. Articolo 2 Il presente regolamento entra in vigore il 18 marzo 2005. Il presente regolamento è obbligatorio in tutti i suoi elementi e direttamente applicabile in ciascuno degli Stati membri. Fatto a Bruxelles, il 17 marzo 2005. Per la Commissione Günter Verheugen Vicepresidente [1] GU L 318 del 20.12.1993, pag. 18. Regolamento modificato da ultimo dal regolamento (CE) n. 2580/2000 (GU L 298 del 25.11.2000, pag. 5). [2] GU L 177 del 15.7.2000, pag. 1. Regolamento modificato da ultimo dal regolamento (CE) n. 886/2004 (GU L 168 del 1.5.2004, pag. 14). --------------------------------------------------"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_cls_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 23 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 64). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the classification test dataset, it achieves the following results:
Test results :
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_it | 0.6296|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_cls_multitask_cs | 2021-04-23T07:04:23.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_cls_multitask_de | 2021-04-23T07:05:19.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers | |
SEBIS/legal_t5_small_cls_multitask_en | 2021-04-23T07:09:52.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers | |
SEBIS/legal_t5_small_cls_multitask_es | 2021-04-23T07:08:58.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_cls_multitask_fr | 2021-04-23T07:06:13.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers | |
SEBIS/legal_t5_small_cls_multitask_it | 2021-04-23T07:07:08.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers | |
SEBIS/legal_t5_small_cls_multitask_sv | 2021-04-23T07:08:04.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers | |
SEBIS/legal_t5_small_cls_sv | 2021-01-29T08:52:28.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish",
"dataset:jrc-acquis",
"transformers",
"classification Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Swedish
tags:
- classification Swedish model
datasets:
- jrc-acquis
widget:
- text: "Rådets förordning (EG) nr 1973/2002 av den 5 november 2002 om ändring av förordning (EG) nr 2026/97 om skydd mot subventionerad import från länder som inte är medlemmar i Europeiska gemenskapen EUROPEISKA UNIONENS RÅD HAR ANTAGIT DENNA FÖRORDNING med beaktande av Fördraget om upprättandet av Europeiska gemenskapen, särskilt artikel 133 i detta, med beaktande av kommissionens förslag, och av följande skäl: (1) Rådet antog genom förordning (EG) nr 2026/97(1) gemensamma regler för skydd mot subventionerad import från länder som inte är medlemmar i Europeiska gemenskapen. (2) I artikel 6 i förordning (EG) nr 2026/97 anges vissa riktlinjer för beräkning av förmånen för mottagaren, inbegripet det riktmärke för marknaden enligt vilket förmånens storlek beräknas. Det bör klargöras vilka bestämmelser som bör följas i de fall ett sådant riktmärke för marknaden inte finns i det berörda landet. I en sådan situation bör riktmärket fastställas genom anpassning av de villkor som råder i det berörda landet på grundval av de faktiska uppgifter som är tillgängliga där. Om detta inte är praktiskt genomförbart på grund av att det inte finns några uppgifter om sådana priser och kostnader eller på grund av att dessa är otillförlitliga, bör riktmärket fastställas med hjälp av de villkor som gäller på andra marknader. (3) I artikel 4 i förordning (EG) nr 2026/97 anges att vissa subventioner som rör miljö, forskning och regional utveckling inte är utjämningsbara. I artikel 10.5 och 10.6 i den förordningen anges vidare att undersökningar kan inledas för att avgöra om subventioner är icke-utjämningsbara och att de inte bör inledas om de rör vissa icke-utjämningsbara subventioner. Motsvarande bestämmelser i WTO-avtalet beträffande subventioner och utjämningsåtgärder var avsedda att löpa ut den 31 december 1999, såvida inte WTO-medlemsstaterna beslutade annat. Inget sådant beslut har fattats och de relevanta bestämmelserna är därför inte längre tillämpliga. Det är därför nödvändigt att fastställa huruvida bestämmelserna rörande icke-utjämningsbara subventioner i förordning (EG) nr 2026/97 bör fortsätta att gälla. Gemenskapens viktigaste handelspartner tillämpar inte längre dessa bestämmelser i sina utjämningsundersökningar. Av denna anledning och i syfte att upprätthålla balansen mellan rättigheter och skyldigheter enligt nämnda WTO-avtal bör de bestämmelser i förordning (EG) nr 2026/97 som rör icke-utjämningsbara subventioner upphöra att gälla. (4) I artikel 28.5 i förordning (EG) nr 2026/97 anges att om tillgängliga uppgifter används skall upplysningarna kontrolleras genom att jämföras med uppgifter från flera källor. Det bör specificeras att dessa källor också kan utgöras av uppgifter om världsmarknaden eller andra representativa marknader. (5) Ur rättssäkerhetssynpunkt är det lämpligt att dessa ändringar tillämpas så snart som möjligt i samband med alla nya undersökningar. HÄRIGENOM FÖRESKRIVS FÖLJANDE. Artikel 1 Förordning (EG) nr 2026/97 ändras enligt följande: 1. I artikel 6 d skall följande text läggas till: %quot%Om det inte finns några sådana rådande marknadsvillkor för produkterna eller tjänsterna i fråga i det land som tillhandahåller eller köper dem, som kan användas som lämpliga riktmärken, skall en av följande bestämmelser tillämpas: i) De villkor som råder i landet i fråga skall justeras på grundval av de faktiska kostnader, priser och andra faktorer som är tillgängliga i det landet med hjälp av ett lämpligt belopp som avspeglar normala marknadsvillkor. 
ii) I tillämpliga fall skall de villkor användas som råder på marknaden i ett annat land eller på världsmarknaden och som är tillgängliga för mottagaren.%quot% 2. Artikel 4 och artikel 10.5 och 10.6 skall utgå. 3. I artikel 28.5 skall följande mening läggas till: %quot%Sådana uppgifter kan, i tillämpliga fall, inbegripa relevanta upplysningar om världsmarknaden eller andra representativa marknader.%quot% Artikel 2 Denna förordning träder i kraft dagen efter det att den har offentliggjorts i Europeiska gemenskapernas officiella tidning. Den skall tillämpas i samband med alla undersökningar som inleds i enlighet med förordning (EG) nr 2026/97 efter dagen för ikraftträdandet av denna förordning. Denna förordning är till alla delar bindande och direkt tillämplig i alla medlemsstater. Utfärdad i Bryssel den 5 november 2002. På rådets vägnar T. Pedersen Ordförande (1) EGT L 288, 21.10.1997, s. 1."
---
# legal_t5_small_cls_sv model
Model for classification of legal text written in Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on a parallel corpus from jrc-acquis.
## Model description
legal_t5_small_cls_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for classification of legal texts written in Swedish.
### How to use
Here is how to use this model to classify legal text written in Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Rådets förordning (EG) nr 1973/2002 av den 5 november 2002 om ändring av förordning (EG) nr 2026/97 om skydd mot subventionerad import från länder som inte är medlemmar i Europeiska gemenskapen EUROPEISKA UNIONENS RÅD HAR ANTAGIT DENNA FÖRORDNING med beaktande av Fördraget om upprättandet av Europeiska gemenskapen, särskilt artikel 133 i detta, med beaktande av kommissionens förslag, och av följande skäl: (1) Rådet antog genom förordning (EG) nr 2026/97(1) gemensamma regler för skydd mot subventionerad import från länder som inte är medlemmar i Europeiska gemenskapen. (2) I artikel 6 i förordning (EG) nr 2026/97 anges vissa riktlinjer för beräkning av förmånen för mottagaren, inbegripet det riktmärke för marknaden enligt vilket förmånens storlek beräknas. Det bör klargöras vilka bestämmelser som bör följas i de fall ett sådant riktmärke för marknaden inte finns i det berörda landet. I en sådan situation bör riktmärket fastställas genom anpassning av de villkor som råder i det berörda landet på grundval av de faktiska uppgifter som är tillgängliga där. Om detta inte är praktiskt genomförbart på grund av att det inte finns några uppgifter om sådana priser och kostnader eller på grund av att dessa är otillförlitliga, bör riktmärket fastställas med hjälp av de villkor som gäller på andra marknader. (3) I artikel 4 i förordning (EG) nr 2026/97 anges att vissa subventioner som rör miljö, forskning och regional utveckling inte är utjämningsbara. I artikel 10.5 och 10.6 i den förordningen anges vidare att undersökningar kan inledas för att avgöra om subventioner är icke-utjämningsbara och att de inte bör inledas om de rör vissa icke-utjämningsbara subventioner. Motsvarande bestämmelser i WTO-avtalet beträffande subventioner och utjämningsåtgärder var avsedda att löpa ut den 31 december 1999, såvida inte WTO-medlemsstaterna beslutade annat. Inget sådant beslut har fattats och de relevanta bestämmelserna är därför inte längre tillämpliga. Det är därför nödvändigt att fastställa huruvida bestämmelserna rörande icke-utjämningsbara subventioner i förordning (EG) nr 2026/97 bör fortsätta att gälla. Gemenskapens viktigaste handelspartner tillämpar inte längre dessa bestämmelser i sina utjämningsundersökningar. Av denna anledning och i syfte att upprätthålla balansen mellan rättigheter och skyldigheter enligt nämnda WTO-avtal bör de bestämmelser i förordning (EG) nr 2026/97 som rör icke-utjämningsbara subventioner upphöra att gälla. (4) I artikel 28.5 i förordning (EG) nr 2026/97 anges att om tillgängliga uppgifter används skall upplysningarna kontrolleras genom att jämföras med uppgifter från flera källor. Det bör specificeras att dessa källor också kan utgöras av uppgifter om världsmarknaden eller andra representativa marknader. (5) Ur rättssäkerhetssynpunkt är det lämpligt att dessa ändringar tillämpas så snart som möjligt i samband med alla nya undersökningar. HÄRIGENOM FÖRESKRIVS FÖLJANDE. Artikel 1 Förordning (EG) nr 2026/97 ändras enligt följande: 1. I artikel 6 d skall följande text läggas till: %quot%Om det inte finns några sådana rådande marknadsvillkor för produkterna eller tjänsterna i fråga i det land som tillhandahåller eller köper dem, som kan användas som lämpliga riktmärken, skall en av följande bestämmelser tillämpas: i) De villkor som råder i landet i fråga skall justeras på grundval av de faktiska kostnader, priser och andra faktorer som är tillgängliga i det landet med hjälp av ett lämpligt belopp som avspeglar normala marknadsvillkor. 
ii) I tillämpliga fall skall de villkor användas som råder på marknaden i ett annat land eller på världsmarknaden och som är tillgängliga för mottagaren.%quot% 2. Artikel 4 och artikel 10.5 och 10.6 skall utgå. 3. I artikel 28.5 skall följande mening läggas till: %quot%Sådana uppgifter kan, i tillämpliga fall, inbegripa relevanta upplysningar om världsmarknaden eller andra representativa marknader.%quot% Artikel 2 Denna förordning träder i kraft dagen efter det att den har offentliggjorts i Europeiska gemenskapernas officiella tidning. Den skall tillämpas i samband med alla undersökningar som inleds i enlighet med förordning (EG) nr 2026/97 efter dagen för ikraftträdandet av denna förordning. Denna förordning är till alla delar bindande och direkt tillämplig i alla medlemsstater. Utfärdad i Bryssel den 5 november 2002. På rådets vägnar T. Pedersen Ordförande (1) EGT L 288, 21.10.1997, s. 1."
pipeline([sv_text], max_length=512)
```
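Legal documents like the widget example above are often longer than the 512-token sequence length used during training (see the training procedure below), so it can help to check and truncate the input first. The following is a minimal sketch, not part of the original card, that reuses the `pipeline` and `sv_text` objects defined in the snippet above; the 512-token limit is an assumption taken from the training setup.
```python
# Minimal sketch: truncate an over-long legal document to the model's
# 512-token training length before classification.
tokenizer = pipeline.tokenizer # the tokenizer already loaded above
encoded = tokenizer(sv_text, truncation=True, max_length=512)
print(len(encoded["input_ids"])) # number of SentencePiece tokens kept (<= 512)
truncated_text = tokenizer.decode(encoded["input_ids"], skip_special_tokens=True)
pipeline([truncated_text], max_length=512)
```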
## Training data
The legal_t5_small_cls_sv model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 23 Thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the classification test dataset, it achieves the following results:
Test results :
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_sv | 0.6449|
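To reproduce a score like this on a labelled test split, the predicted label strings from the pipeline can be compared against the reference labels with scikit-learn. The sketch below is illustrative only: `test_texts`, `test_labels`, and the weighted averaging mode are assumptions, since the original evaluation script is not part of this card.
```python
from sklearn.metrics import f1_score

# Hypothetical sketch: `test_texts` / `test_labels` stand in for a labelled
# JRC-Acquis test split; `pipeline` is the classification pipeline above.
predictions = [pipeline(text, max_length=512)[0]["translation_text"].strip() for text in test_texts]
print(f1_score(test_labels, predictions, average="weighted")) # averaging mode is an assumption
```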
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_finetuned_summ_cs | 2021-04-23T05:58:53.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_finetuned_summ_de | 2021-04-23T05:59:57.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_finetuned_summ_en | 2021-04-23T06:05:32.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_finetuned_summ_es | 2021-04-23T06:04:30.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_finetuned_summ_fr | 2021-04-23T06:01:22.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_finetuned_summ_it | 2021-04-23T06:02:24.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers | |
SEBIS/legal_t5_small_finetuned_summ_sv | 2021-04-23T06:03:28.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_multitask_cs_de | 2021-04-22T18:14:22.000Z | [
"pytorch",
"t5",
"seq2seq",
"Cszech Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 11 | transformers |
---
language: Cszech Deustch
tags:
- translation Cszech Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Postavení žen v ozbrojených konfliktech a jejich úloha při obnově zemí po ukončení konfliktu a v demokratickém procesu v těchto zemích"
---
# legal_t5_small_multitask_cs_de model
Model for translating legal text from Czech to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_cs_de model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to German.
### How to use
Here is how to use this model to translate legal text from Czech to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_cs_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Postavení žen v ozbrojených konfliktech a jejich úloha při obnově zemí po ukončení konfliktu a v demokratickém procesu v těchto zemích"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_cs_de model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
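For reference, a vocabulary of this kind can be produced with the SentencePiece library. The command below is a generic sketch: the input file name and the vocabulary size are assumptions, not the exact settings used for the legal_t5_small models.
```python
import sentencepiece as spm

# Generic sketch: train a unigram SentencePiece vocabulary on the mixed
# parallel corpus. "parallel_corpus.txt" and vocab_size=32000 are assumptions.
spm.SentencePieceTrainer.train(
    input="parallel_corpus.txt",     # one sentence per line, all language pairs mixed
    model_prefix="legal_t5_spiece",  # writes legal_t5_spiece.model / legal_t5_spiece.vocab
    model_type="unigram",
    vocab_size=32000,
)
```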
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_de | 43.145|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_cs_en | 2021-04-22T18:15:01.000Z | [
"pytorch",
"t5",
"seq2seq",
"Cszech English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Cszech English
tags:
- translation Cszech English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Komise musí vypracovat zprávu o hodnotících zprávách týkajících se uplatňování této směrnice v členských státech."
---
# legal_t5_small_multitask_cs_en model
Model for translating legal text from Czech to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_cs_en model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to English.
### How to use
Here is how to use this model to translate legal text from Czech to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_cs_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Komise musí vypracovat zprávu o hodnotících zprávách týkajících se uplatňování této směrnice v členských státech."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_cs_en model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_en | 37.136|
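A score like this can be reproduced with a standard BLEU implementation such as sacreBLEU. The snippet below is a generic sketch in which `cs_sentences` and `en_references` are hypothetical lists from a held-out test split; it is not the original evaluation script.
```python
import sacrebleu

# Hypothetical sketch: translate a held-out Czech test set with the pipeline
# defined above and score it against the English references with sacreBLEU.
hypotheses = [pipeline(text, max_length=512)[0]["translation_text"] for text in cs_sentences]
print(sacrebleu.corpus_bleu(hypotheses, [en_references]).score)
```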
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_cs_es | 2021-04-22T18:15:19.000Z | [
"pytorch",
"t5",
"seq2seq",
"Cszech Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: Cszech Spanish
tags:
- translation Cszech Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Antonio Tajani (místopředseda Komise) ."
---
# legal_t5_small_multitask_cs_es model
Model for translating legal text from Czech to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_cs_es model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to Spanish.
### How to use
Here is how to use this model to translate legal text from Czech to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_cs_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Antonio Tajani (místopředseda Komise) ."
pipeline([cs_text], max_length=512)
```
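If you prefer to call the model directly rather than through `TranslationPipeline`, the equivalent tokenize, generate and decode steps look roughly like the sketch below; the beam size and other generation settings are illustrative assumptions, not the card's prescribed configuration.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

# Sketch: the same translation without the pipeline wrapper.
tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_multitask_cs_es", do_lower_case=False)
model = AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_es")
inputs = tokenizer("Antonio Tajani (místopředseda Komise) .", return_tensors="pt")
outputs = model.generate(**inputs, max_length=512, num_beams=4) # beam size is an illustrative choice
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```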
## Training data
The legal_t5_small_multitask_cs_es model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_es | 48.559|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_cs_fr | 2021-04-22T18:14:54.000Z | [
"pytorch",
"t5",
"seq2seq",
"Cszech French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers |
---
language: Cszech French
tags:
- translation Cszech French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Agentura USA pro ochranu životního prostředí ve své hodnotící studii v roce 2002 zjistila možnou systémovou toxicitu a karcinogenitu a údaje získané z krevních testů nasvědčují rozsáhlé expozici obyvatelstva."
---
# legal_t5_small_multitask_cs_fr model
Model for translating legal text from Czech to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_cs_fr model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to French.
### How to use
Here is how to use this model to translate legal text from Czech to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_cs_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Agentura USA pro ochranu životního prostředí ve své hodnotící studii v roce 2002 zjistila možnou systémovou toxicitu a karcinogenitu a údaje získané z krevních testů nasvědčují rozsáhlé expozici obyvatelstva."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_cs_fr model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_fr | 47.588|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_cs_it | 2021-04-22T18:14:57.000Z | [
"pytorch",
"t5",
"seq2seq",
"Cszech Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Cszech Italian
tags:
- translation Cszech Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Příprava Evropské rady (29.-30. října 2009)"
---
# legal_t5_small_multitask_cs_it model
Model for translating legal text from Czech to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_cs_it model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to Italian.
### How to use
Here is how to use this model to translate legal text from Czech to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_cs_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Příprava Evropské rady (29.-30. října 2009)"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_cs_it model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_it | 45.297|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_cs_sv | 2021-04-22T18:14:59.000Z | [
"pytorch",
"t5",
"seq2seq",
"Cszech Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Cszech Swedish
tags:
- translation Cszech Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Hračky určené pro častý kontakt s kůží obsahující alergenní látky jiné než vonné, které jsou známé vyvoláváním vážných nebo dokonce osudných účinků na zdraví dětí (například látky, které mohou vyvolat anafylaktický šok), musí být v souladu s ustanoveními týkajícími se označování uvedenými ve směrnici Komise 2006/125/ES ze dne 5. prosince 2006 o obilných a ostatních příkrmech pro kojence a malé děti."
---
# legal_t5_small_multitask_cs_sv model
Model for translating legal text from Czech to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_cs_sv model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to Swedish.
### How to use
Here is how to use this model to translate legal text from Czech to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_cs_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Hračky určené pro častý kontakt s kůží obsahující alergenní látky jiné než vonné, které jsou známé vyvoláváním vážných nebo dokonce osudných účinků na zdraví dětí (například látky, které mohou vyvolat anafylaktický šok), musí být v souladu s ustanoveními týkajícími se označování uvedenými ve směrnici Komise 2006/125/ES ze dne 5. prosince 2006 o obilných a ostatních příkrmech pro kojence a malé děti."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_cs_sv model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_sv | 35.871|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_de_cs | 2021-04-16T09:05:39.000Z | []
| [
".gitattributes"
]
| SEBIS | 0 | |||
SEBIS/legal_t5_small_multitask_de_en | 2021-04-22T18:14:25.000Z | [
"pytorch",
"t5",
"seq2seq",
"Deustch English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 25 | transformers |
---
language: Deustch English
tags:
- translation Deustch English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Der zuständige Ausschuss wacht darüber, dass alle Angaben, die die Ausübung des Mandats eines Mitglieds bzw. die Rangfolge der Stellvertreter beeinflussen können, dem Parlament unverzüglich von den Behörden der Mitgliedstaaten und der Union - unter Angabe deren Wirksamwerdens im Falle einer Benennung - übermittelt werden."
---
# legal_t5_small_multitask_de_en model
Model for translating legal text from German to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_de_en model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from German to English.
### How to use
Here is how to use this model to translate legal text from German to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_de_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_de_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Der zuständige Ausschuss wacht darüber, dass alle Angaben, die die Ausübung des Mandats eines Mitglieds bzw. die Rangfolge der Stellvertreter beeinflussen können, dem Parlament unverzüglich von den Behörden der Mitgliedstaaten und der Union - unter Angabe deren Wirksamwerdens im Falle einer Benennung - übermittelt werden."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_de_en model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_de_en | 42.437|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_de_es | 2021-04-22T18:14:31.000Z | [
"pytorch",
"t5",
"seq2seq",
"Deustch Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Deustch Spanish
tags:
- translation Deustch Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Kugelförmige, eiförmige oder ellipsenförmige Verpackungen dürfen keine Abmessungen aufweisen, die durch eine Einklemmung im Mund oder Rachen eine Blockierung der internen Atemwege verursachen können."
---
# legal_t5_small_multitask_de_es model
Model for translating legal text from German to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_de_es model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from German to Spanish.
### How to use
Here is how to use this model to translate legal text from German to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_de_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_de_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Kugelförmige, eiförmige oder ellipsenförmige Verpackungen dürfen keine Abmessungen aufweisen, die durch eine Einklemmung im Mund oder Rachen eine Blockierung der internen Atemwege verursachen können."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_de_es model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_de_es | 36.458|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_de_fr | 2021-04-22T18:15:04.000Z | [
"pytorch",
"t5",
"seq2seq",
"Deustch French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Deustch French
tags:
- translation Deustch French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Wegen einer in Ausübung ihres Amtes erfolgten Äußerung oder Abstimmung dürfen Mitglieder des Europäischen Parlaments weder in ein Ermittlungsverfahren verwickelt noch festgenommen oder verfolgt werden."
---
# legal_t5_small_multitask_de_fr model
Model for translating legal text from German to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_de_fr model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from German to French.
### How to use
Here is how to use this model to translate legal text from German to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_de_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_de_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Wegen einer in Ausübung ihres Amtes erfolgten Äußerung oder Abstimmung dürfen Mitglieder des Europäischen Parlaments weder in ein Ermittlungsverfahren verwickelt noch festgenommen oder verfolgt werden."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_de_fr model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_de_fr | 41.003|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_de_it | 2021-04-22T18:14:34.000Z | [
"pytorch",
"t5",
"seq2seq",
"Deustch Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: Deustch Italian
tags:
- translation Deustch Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Im vergangenen März hat die Parlamentarische Versammlung der Union für den Mittelmeerraum einstimmig den Bericht „Einwanderung und Integration: Dialog zwischen den neuen Generationen zur Entwicklung einer Kultur des Friedens“ verabschiedet."
---
# legal_t5_small_multitask_de_it model
Model for translating legal text from German to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_de_it model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from German to Italian.
### How to use
Here is how to use this model to translate legal text from German to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_de_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_de_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Im vergangenen März hat die Parlamentarische Versammlung der Union für den Mittelmeerraum einstimmig den Bericht „Einwanderung und Integration: Dialog zwischen den neuen Generationen zur Entwicklung einer Kultur des Friedens“ verabschiedet."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_de_it model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_de_it | 41.405|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_de_sv | 2021-04-22T18:14:37.000Z | [
"pytorch",
"t5",
"seq2seq",
"Deustch Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Deustch Swedish
tags:
- translation Deustch Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "SCHRIFTLICHE ANFRAGE P-1584/03"
---
# legal_t5_small_multitask_de_sv model
Model for translating legal text from German to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_de_sv model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from German to Swedish.
### How to use
Here is how to use this model to translate legal text from German to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_de_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_de_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "SCHRIFTLICHE ANFRAGE P-1584/03"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_de_sv model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_de_sv | 35.945|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_en_cs | 2021-04-22T18:15:49.000Z | [
"pytorch",
"t5",
"seq2seq",
"English Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: English Cszech
tags:
- translation English Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Text proposed by the Commission"
---
# legal_t5_small_multitask_en_cs model
Model for translating legal text from English to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_en_cs model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from English to Czech.
### How to use
Here is how to use this model to translate legal text from English to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_en_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Text proposed by the Commission"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_en_cs model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_cs | 36.226|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_en_de | 2021-04-22T18:15:46.000Z | [
"pytorch",
"t5",
"seq2seq",
"English Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: English Deustch
tags:
- translation English Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Reiterates its call on the Commission to submit a proposal to the Parliament and Council as soon as possible in order to ensure that bunker oil for engine fuel in new ships is stored in safer, double-hull tanks since freight or container ships often contain heavy fuel as engine fuel in their bunkers the quantity of which may considerably exceed the cargoes of smaller oil tankers; considers that, before submitting such a proposal, the Commission should ascertain whether or not the existing IMO rules laid down in Resolution MEPC.141(54) are sufficient to guarantee the safe transport of bunker oil used as fuel;"
---
# legal_t5_small_multitask_en_de model
Model for translating legal text from English to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_en_de model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from English to German.
### How to use
Here is how to use this model to translate legal text from English to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_en_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Reiterates its call on the Commission to submit a proposal to the Parliament and Council as soon as possible in order to ensure that bunker oil for engine fuel in new ships is stored in safer, double-hull tanks since freight or container ships often contain heavy fuel as engine fuel in their bunkers the quantity of which may considerably exceed the cargoes of smaller oil tankers; considers that, before submitting such a proposal, the Commission should ascertain whether or not the existing IMO rules laid down in Resolution MEPC.141(54) are sufficient to guarantee the safe transport of bunker oil used as fuel;"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_en_de model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_de | 41.337|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_en_es | 2021-04-22T18:14:39.000Z | [
"pytorch",
"t5",
"seq2seq",
"English Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: English Spanish
tags:
- translation English Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Amendment 14 Article 5, paragraph 1, point (a)"
---
# legal_t5_small_multitask_en_es model
Model for translating legal text from English to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_en_es model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from English to Spanish.
### How to use
Here is how to use this model to translate legal text from English to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_en_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Amendment 14 Article 5, paragraph 1, point (a)"
pipeline([en_text], max_length=512)
```
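The pipeline also accepts a list of sentences, which is convenient when several segments need to be translated at once. The sketch below assumes CPU execution (`device=-1`) and uses illustrative batch contents.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_es"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_multitask_en_es", do_lower_case=False),
device=-1, # -1 runs the pipeline on CPU
)
# Illustrative batch of short English legal segments.
segments = [
"Amendment 14 Article 5, paragraph 1, point (a)",
"Date announced in plenary",
]
for result in pipeline(segments, max_length=512):
print(result["translation_text"])
```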
## Training data
The legal_t5_small_multitask_en_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_es | 37.404|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_en_fr | 2021-04-22T18:14:41.000Z | [
"pytorch",
"t5",
"seq2seq",
"English French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers |
---
language: English French
tags:
- translation English French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Article 2(b), sub-heading"
---
# legal_t5_small_multitask_en_fr model
Model for translating legal text from English to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_en_fr model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from English to French.
### How to use
Here is how to use this model to translate legal text from English to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_en_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Article 2(b), sub-heading"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_en_fr model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_fr | 38.063|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_en_it | 2021-04-22T18:14:44.000Z | [
"pytorch",
"t5",
"seq2seq",
"English Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: English Italian
tags:
- translation English Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "WRITTEN QUESTION E-1184/07"
---
# legal_t5_small_multitask_en_it model
Model for translating legal text from English to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_en_it model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from English to Italian.
### How to use
Here is how to use this model to translate legal text from English to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_en_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "WRITTEN QUESTION E-1184/07"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_en_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_it | 47.070|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_en_sv | 2021-04-22T18:16:03.000Z | [
"pytorch",
"t5",
"seq2seq",
"English Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers |
---
language: English Swedish
tags:
- translation English Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "whereas enlargement to Bulgaria and Romania should be effective in 2007,"
---
# legal_t5_small_multitask_en_sv model
Model for translating legal text from English to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_en_sv model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from English to Swedish.
### How to use
Here is how to use this model to translate legal text from English to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_en_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "whereas enlargement to Bulgaria and Romania should be effective in 2007,"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_en_sv model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_sv | 47.968|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_es_cs | 2021-04-22T18:16:05.000Z | [
"pytorch",
"t5",
"seq2seq",
"Spanish Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Spanish Cszech
tags:
- translation Spanish Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "La política pesquera supone que se tenga en cuenta un gran número de dimensiones – social, medioambiental, económica – lo que exige un enfoque integrado y equilibrado, incompatible con una visión que los sobrestima, en particular, mediante una definición a priori de cualquier jerarquía de prioridades."
---
# legal_t5_small_multitask_es_cs model
Model for translating legal text from Spanish to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_es_cs model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to Czech.
### How to use
Here is how to use this model to translate legal text from Spanish to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_es_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "La política pesquera supone que se tenga en cuenta un gran número de dimensiones – social, medioambiental, económica – lo que exige un enfoque integrado y equilibrado, incompatible con una visión que los sobrestima, en particular, mediante una definición a priori de cualquier jerarquía de prioridades."
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_es_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_cs | 47.673|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_es_de | 2021-04-22T18:16:07.000Z | [
"pytorch",
"t5",
"seq2seq",
"Spanish Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: Spanish Deustch
tags:
- translation Spanish Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Estudios y publicaciones realizados por el Parlamento Europeo"
---
# legal_t5_small_multitask_es_de model
Model for translating legal text from Spanish to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_es_de model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to German.
### How to use
Here is how to use this model to translate legal text from Spanish to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_es_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Estudios y publicaciones realizados por el Parlamento Europeo"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_es_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_de | 41.196|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_es_en | 2021-04-22T18:16:09.000Z | [
"pytorch",
"t5",
"seq2seq",
"Spanish English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers |
---
language: Spanish English
tags:
- translation Spanish English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "PPE-DE: 6', PSE: 6', ALDE: 5', Verts/ALE: 4', GUE/NGL: 4', IND/DEM:4', UEN: 4', NI: 4'"
---
# legal_t5_small_multitask_es_en model
Model for translating legal text from Spanish to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_es_en model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to English.
### How to use
Here is how to use this model to translate legal text from Spanish to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_es_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "PPE-DE: 6', PSE: 6', ALDE: 5', Verts/ALE: 4', GUE/NGL: 4', IND/DEM:4', UEN: 4', NI: 4'"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_es_en model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_en | 36.607|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_es_fr | 2021-04-22T18:14:47.000Z | [
"pytorch",
"t5",
"seq2seq",
"Spanish French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Spanish French
tags:
- translation Spanish French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Fecha del anuncio en el Pleno"
---
# legal_t5_small_multitask_es_fr model
Model for translating legal text from Spanish to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_es_fr model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to French.
### How to use
Here is how to use this model to translate legal text from Spanish to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_es_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Fecha del anuncio en el Pleno"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_es_fr model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_fr | 41.523|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_es_it | 2021-04-22T18:16:12.000Z | [
"pytorch",
"t5",
"seq2seq",
"Spanish Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Spanish Italian
tags:
- translation Spanish Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Por el Parlamento Europeo Por el Consejo"
---
# legal_t5_small_multitask_es_it model
Model for translating legal text from Spanish to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_es_it model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to Italian.
### How to use
Here is how to use this model to translate legal text from Spanish to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_es_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Por el Parlamento Europeo Por el Consejo"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_es_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_it | 37.386|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_es_sv | 2021-04-22T18:16:14.000Z | [
"pytorch",
"t5",
"seq2seq",
"Spanish Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Spanish Swedish
tags:
- translation Spanish Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Tiempo de uso de la palabra ( artículo 149 del Reglamento PE)"
---
# legal_t5_small_multitask_es_sv model
Model for translating legal text from Spanish to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_es_sv model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to Swedish.
### How to use
Here is how to use this model to translate legal text from Spanish to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_es_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Tiempo de uso de la palabra ( artículo 149 del Reglamento PE)"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_es_sv model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_sv | 37.975|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_fr_cs | 2021-04-22T18:15:17.000Z | [
"pytorch",
"t5",
"seq2seq",
"French Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers |
---
language: French Cszech
tags:
- translation French Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "BUDG – Décision: aucun avis"
---
# legal_t5_small_multitask_fr_cs model
Model for translating legal text from French to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_fr_cs model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Czech.
### How to use
Here is how to use this model to translate legal text from French to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_fr_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_fr_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "BUDG – Décision: aucun avis"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_fr_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_fr_cs | 44.499|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_fr_de | 2021-04-16T09:18:02.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers | |
SEBIS/legal_t5_small_multitask_fr_en | 2021-04-22T18:15:32.000Z | [
"pytorch",
"t5",
"seq2seq",
"French English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: French English
tags:
- translation French English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Raül Romeva i Rueda (Verts/ALE)"
---
# legal_t5_small_multitask_fr_en model
Model for translating legal text from French to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_fr_en model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from French to English.
### How to use
Here is how to use this model to translate legal text from French to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_fr_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_fr_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Raül Romeva i Rueda (Verts/ALE)"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_fr_en model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_fr_en | 39.123|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_fr_es | 2021-04-22T18:15:24.000Z | [
"pytorch",
"t5",
"seq2seq",
"French Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: French Spanish
tags:
- translation French Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "+ lettre autorités suédoises"
---
# legal_t5_small_multitask_fr_es model
Model for translating legal text from French to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_fr_es model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Spanish.
### How to use
Here is how to use this model to translate legal text from French to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_fr_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_fr_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "+ lettre autorités suédoises"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_fr_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_fr_es | 43.807|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_fr_it | 2021-04-22T18:14:49.000Z | [
"pytorch",
"t5",
"seq2seq",
"French Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: French Italian
tags:
- translation French Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Situation humanitaire au Soudan"
---
# legal_t5_small_multitask_fr_it model
Model for translating legal text from French to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_fr_it model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Italian.
### How to use
Here is how to use this model to translate legal text from French to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_fr_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_fr_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Situation humanitaire au Soudan"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_fr_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_fr_it | 41.140|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_fr_sv | 2021-04-22T18:15:26.000Z | [
"pytorch",
"t5",
"seq2seq",
"French Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: French Swedish
tags:
- translation French Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "**I Procédure de coopération (première lecture)"
---
# legal_t5_small_multitask_fr_sv model
Model for translating legal text from French to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_fr_sv model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Swedish.
### How to use
Here is how to use this model to translate legal text from French to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_fr_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_fr_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "**I Procédure de coopération (première lecture)"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_fr_sv model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_fr_sv | 39.947|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_it_cs | 2021-04-22T18:15:22.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Italian Cszech
tags:
- translation Italian Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Per mobilitare il Fondo, la Commissione ha presentato all'autorità di bilancio una richiesta di storno per un importo complessivo di 667.823 EUR dalla riserva FEG (40 02 43) in stanziamenti d'impegno verso la linea di bilancio FEG."
---
# legal_t5_small_multitask_it_cs model
Model for translating legal text from Italian to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_it_cs model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Czech.
### How to use
Here is how to use this model to translate legal text from Italian to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_it_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Per mobilitare il Fondo, la Commissione ha presentato all'autorità di bilancio una richiesta di storno per un importo complessivo di 667.823 EUR dalla riserva FEG (40 02 43) in stanziamenti d'impegno verso la linea di bilancio FEG."
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_it_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_cs | 37.935|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_it_de | 2021-04-22T18:15:34.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Italian Deustch
tags:
- translation Italian Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "di Alyn Smith (Verts/ALE)"
---
# legal_t5_small_multitask_it_de model
Model for translating legal text from Italian to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_it_de model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to German.
### How to use
Here is how to use this model to translate legal text from Italian to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_it_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "di Alyn Smith (Verts/ALE)"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_it_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts. The supervised task used only the corresponding language pair, while the unsupervised task had access to the data of all language pairs.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_de | 35.365|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_it_en | 2021-04-22T18:15:39.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Italian English
tags:
- translation Italian English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Con l’adesione all'area dell'euro questo procedimento non è stato più possibile."
---
# legal_t5_small_multitask_it_en model
Model for translating legal text from Italian to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised masked-language-modeling task in which the model predicts masked tokens.
## Model description
No pretraining is involved for the legal_t5_small_multitask_it_en model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to English.
### How to use
Here is how to use this model to translate legal text from Italian to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_it_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Con l’adesione all'area dell'euro questo procedimento non è stato più possibile."
pipeline([it_text], max_length=512)
```
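The snippet above assumes a CUDA-capable GPU via `device=0`. As a hedged variation (not part of the original card), the same pipeline can be built for CPU-only execution by passing `device=-1`:

```python
# Hedged variation: the same pipeline on a CPU-only machine (device=-1).
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

cpu_pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_en"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_multitask_it_en", do_lower_case=False),
    device=-1,  # -1 = CPU; 0 = first GPU
)
print(cpu_pipeline(["Con l’adesione all'area dell'euro questo procedimento non è stato più possibile."], max_length=512))
```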
## Training data
The legal_t5_small_multitask_it_en model (combining the supervised task, which involved only the corresponding language pair, with the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (via byte pair encoding), which is used with this model.
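The resulting vocabulary ships with the model as `spiece.model` (listed in the files above). As a hedged sketch of how such a unigram vocabulary could be built with SentencePiece — the file name, vocabulary size and coverage below are illustrative assumptions, not the authors' exact configuration:

```python
# Hedged sketch: building a unigram subword vocabulary with SentencePiece.
# File name, vocabulary size and coverage are illustrative assumptions.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",  # assumed dump of the 88M-line corpus
    model_prefix="legal_t5_small",
    model_type="unigram",
    vocab_size=32000,
    character_coverage=1.0,
)

sp = spm.SentencePieceProcessor(model_file="legal_t5_small.model")
print(sp.encode("Con l'adesione all'area dell'euro", out_type=str))
```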
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_en | 36.687|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_it_es | 2021-04-22T18:15:42.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 9 | transformers |
---
language: Italian Spanish
tags:
- translation Italian Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Interrogazione con richiesta di risposta scritta E-005808/2011"
---
# legal_t5_small_multitask_it_es model
Model for translating legal text from Italian to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_it_es model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Spanish.
### How to use
Here is how to use this model to translate legal text from Italian to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_it_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Interrogazione con richiesta di risposta scritta E-005808/2011"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_it_es model (combining the supervised task, which involved only the corresponding language pair, with the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
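For readers who want to approximate this optimizer setup with the `transformers` utilities rather than the original TPU tooling, a minimal sketch is shown below; the checkpoint name and flags are illustrative assumptions, and `relative_step=True` enables AdaFactor's built-in inverse-square-root schedule:

```python
# Hedged sketch: AdaFactor with its built-in inverse-square-root schedule via
# the transformers optimizer utilities (checkpoint name and flags are illustrative).
from transformers import AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor, AdafactorSchedule

model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_multitask_it_es")
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,  # lets AdaFactor derive an inverse-sqrt learning rate internally
    warmup_init=True,
    lr=None,
)
lr_scheduler = AdafactorSchedule(optimizer)  # exposes that schedule to Trainer-style loops
```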
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (via byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_es | 36.980|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_it_fr | 2021-04-22T18:15:37.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 10 | transformers |
---
language: Italian French
tags:
- translation Italian French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Gli Stati membri adottano le leggi, i regolamenti e le disposizioni amministrative necessari per ottemperare alla presente direttiva entro il 31 dicembre 2002 e ne informano immediatamente la Commissione."
---
# legal_t5_small_multitask_it_fr model
Model for translating legal text from Italian to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_it_fr model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to French.
### How to use
Here is how to use this model to translate legal text from Italian to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_it_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Gli Stati membri adottano le leggi, i regolamenti e le disposizioni amministrative necessari per ottemperare alla presente direttiva entro il 31 dicembre 2002 e ne informano immediatamente la Commissione."
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_it_fr model (combining the supervised task, which involved only the corresponding language pair, with the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (via byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_fr | 41.956|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_it_sv | 2021-04-22T18:14:52.000Z | [
"pytorch",
"t5",
"seq2seq",
"Italian Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Italian Swedish
tags:
- translation Italian Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Può il Commissario responsabile comunicare al Parlamento in che modo la DG Ricerca garantirà che l’Europa possa svolgere un ruolo di primo piano in questo sforzo globale di ricerca sul diabete?"
---
# legal_t5_small_multitask_it_sv model
Model for translating legal text from Italian to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_it_sv model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Swedish.
### How to use
Here is how to use this model to translate legal text from Italian to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_it_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Può il Commissario responsabile comunicare al Parlamento in che modo la DG Ricerca garantirà che l’Europa possa svolgere un ruolo di primo piano in questo sforzo globale di ricerca sul diabete?"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_it_sv model (combining the supervised task, which involved only the corresponding language pair, with the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (via byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_sv | 41.523|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_sv_cs | 2021-04-22T18:15:44.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Swedish Cszech
tags:
- translation Swedish Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Standarderna för integrerat växtskydd bör tillämpas snabbare än vad kommissionen föreskrivit."
---
# legal_t5_small_multitask_sv_cs model
Model for translating legal text from Swedish to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_sv_cs model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to Czech.
### How to use
Here is how to use this model to translate legal text from Swedish to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_sv_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Standarderna för integrerat växtskydd bör tillämpas snabbare än vad kommissionen föreskrivit."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_sv_cs model (combining the supervised task, which involved only the corresponding language pair, with the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (via byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_cs | 45.058|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_sv_de | 2021-04-22T18:15:56.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Swedish Deustch
tags:
- translation Swedish Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Kan kommissionen bekräfta att i Olaf‑handlingar som samlats in inom ramen för denna granskning, daterade mellan 2000 och 2004, kan följande information hittas: —"
---
# legal_t5_small_multitask_sv_de model
Model for translating legal text from Swedish to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_sv_de model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to German.
### How to use
Here is how to use this model to translate legal text from Swedish to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_sv_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Kan kommissionen bekräfta att i Olaf‑handlingar som samlats in inom ramen för denna granskning, daterade mellan 2000 och 2004, kan följande information hittas: —"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_sv_de model (combining the supervised task, which involved only the corresponding language pair, with the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (via byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_de | 44.684|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_sv_en | 2021-04-22T18:16:00.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Swedish English
tags:
- translation Swedish English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "inlämnat av följande ledamöter:"
---
# legal_t5_small_multitask_sv_en model
Model for translating legal text from Swedish to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_sv_en model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to English.
### How to use
Here is how to use this model to translate legal text from Swedish to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_sv_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "inlämnat av följande ledamöter:"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_sv_en model (combining the supervised task, which involved only the corresponding language pair, with the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (via byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_en | 36.195|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_sv_es | 2021-04-22T18:15:51.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Swedish Spanish
tags:
- translation Swedish Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "med beaktande av sin resolution av den 14 april 2005 om torkan i Portugal,"
---
# legal_t5_small_multitask_sv_es model
Model for translating legal text from Swedish to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_sv_es model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to Spanish.
### How to use
Here is how to use this model to translate legal text from Swedish to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_sv_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "med beaktande av sin resolution av den 14 april 2005 om torkan i Portugal,"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_sv_es model (combining the supervised task, which involved only the corresponding language pair, with the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (via byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_es | 35.506|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_sv_fr | 2021-04-22T18:15:53.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Swedish French
tags:
- translation Swedish French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Europaparlamentet understryker att det stora antalet kvinnor och barn bland flyktingar och internt fördrivna som registrerats av internationella organ som resultat av väpnade konflikter och inbördeskrig är mycket oroväckande."
---
# legal_t5_small_multitask_sv_fr model
Model for translating legal text from Swedish to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_sv_fr model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to French.
### How to use
Here is how to use this model to translate legal text from Swedish to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_sv_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Europaparlamentet understryker att det stora antalet kvinnor och barn bland flyktingar och internt fördrivna som registrerats av internationella organ som resultat av väpnade konflikter och inbördeskrig är mycket oroväckande."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_sv_fr model (combining the supervised task, which involved only the corresponding language pair, with the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (via byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used for translation test dataset, achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_fr | 45.790|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_sv_it | 2021-04-22T18:15:58.000Z | [
"pytorch",
"t5",
"seq2seq",
"Swedish Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers |
---
language: Swedish Italian
tags:
- translation Swedish Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "De nationella tillsynsmyndigheterna får använda"
---
# legal_t5_small_multitask_sv_it model
Model for translating legal text from Swedish to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_sv_it model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to Italian.
### How to use
Here is how to use this model to translate legal text from Swedish to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_sv_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "De nationella tillsynsmyndigheterna får använda"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_sv_it model (combining the supervised task, which involved only the corresponding language pair, with the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (via byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_it | 44.242|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_summ_cs | 2021-01-29T08:52:42.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"Cszech",
"dataset:jrc-acquis",
"transformers",
"summarization Cszech model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 10 | transformers |
---
language: Cszech
tags:
- summarization Cszech model
datasets:
- jrc-acquis
widget:
- text: "(2006/C 67/15) (Text s významem pro EHP) Dne 10. března 2006 se Komise rozhodla nevznést námitky proti výše uvedenému spojení a prohlásit ho za slučitelné se společným trhem. Toto rozhodnutí je založeno na čl. 6 odst. 1 písm. b) nařízení Rady (ES) č. 139/2004. Celý text rozhodnutí je přístupný pouze v angličtině a bude uveřejněn poté, co bude zbaven obchodního tajemství, které může případně obsahovat. Text bude dosažitelný: - na webové stránce Europa – hospodářská soutěž (http://europa.eu.int/comm/competition/mergers/cases/). Tato webová stránka umožňuje vyhledat jednotlivá rozhodnutí o spojení, a to včetně společnosti, čísla případu, data a indexu odvětví hospodářství. - v elektronické podobě na webové stránce EUR-Lex, pod dokumentem č. 32006M4093. EUR-Lex umožňuje přístup k Evropskému právu přes Internet. (http://europa.eu.int/eur-lex/lex) -------------------------------------------------- "
---
# legal_t5_small_summ_cs model
Model for summarization of legal text written in Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on the parallel corpus from JRC-Acquis.
## Model description
legal_t5_small_summ_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
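As a hedged illustration of those dimensions, the sketch below builds an untrained T5 configuration with the values quoted above and counts its parameters, which lands near the ~60 million figure; the vocabulary size is an assumption, since in practice it is fixed by the SentencePiece model shipped with the checkpoint.

```python
# Hedged sketch: an untrained T5 configuration with the dimensions quoted above.
# vocab_size is an assumption; in practice it is determined by the SentencePiece model.
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config(
    vocab_size=32128,
    d_model=512,
    d_ff=2048,
    num_heads=8,
    num_layers=6,          # encoder layers
    num_decoder_layers=6,  # decoder layers
)
model = T5ForConditionalGeneration(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```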
## Intended uses & limitations
The model could be used for summarization of legal texts written in Czech.
### How to use
Here is how to use this model to summarize legal text written in Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_summ_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "(2006/C 67/15) (Text s významem pro EHP) Dne 10. března 2006 se Komise rozhodla nevznést námitky proti výše uvedenému spojení a prohlásit ho za slučitelné se společným trhem. Toto rozhodnutí je založeno na čl. 6 odst. 1 písm. b) nařízení Rady (ES) č. 139/2004. Celý text rozhodnutí je přístupný pouze v angličtině a bude uveřejněn poté, co bude zbaven obchodního tajemství, které může případně obsahovat. Text bude dosažitelný: - na webové stránce Europa – hospodářská soutěž (http://europa.eu.int/comm/competition/mergers/cases/). Tato webová stránka umožňuje vyhledat jednotlivá rozhodnutí o spojení, a to včetně společnosti, čísla případu, data a indexu odvětví hospodářství. - v elektronické podobě na webové stránce EUR-Lex, pod dokumentem č. 32006M4093. EUR-Lex umožňuje přístup k Evropskému právu přes Internet. (http://europa.eu.int/eur-lex/lex) -------------------------------------------------- "
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_summ_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 18 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (via byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the summarization test dataset, it achieves the following results:
Test results :
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_cs | 75.86|65.82 |74.95|
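The evaluation script is not given in the card; as a minimal sketch assuming the `rouge_score` package and illustrative Czech placeholder strings, ROUGE-1/2/Lsum F-measures could be computed like this:

```python
# Hedged sketch: ROUGE-1 / ROUGE-2 / ROUGE-Lsum with the rouge_score package.
# Reference and prediction strings are illustrative placeholders, not test-set data.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"], use_stemmer=False)
reference = "Komise se rozhodla nevznést námitky proti oznámenému spojení."
prediction = "Komise nevznáší námitky proti spojení a prohlašuje ho za slučitelné se společným trhem."
scores = scorer.score(reference, prediction)
for name, s in scores.items():
    print(name, round(s.fmeasure * 100, 2))
```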
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_summ_de | 2021-01-29T08:52:37.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"Deustch",
"dataset:jrc-acquis",
"transformers",
"summarization Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model"
]
| SEBIS | 12 | transformers |
---
language: Deustch
tags:
- summarization Deustch model
datasets:
- jrc-acquis
widget:
- text: "(90/365/EWG) DER RAT DER EUROPÄISCHEN GEMEINSCHAFTEN - gestützt auf den Vertrag zur Gründung der Europäischen Wirtschaftsgemeinschaft, insbesondere auf Artikel 235, auf Vorschlag der Kommission (1), nach Stellungnahme des Europäischen Parlaments (2), nach Stellungnahme des Wirtschafts- und Sozialausschusses (3), in Erwägung nachstehender Gründe: Gemäß Artikel 3 Buchstabe c) des Vertrages umfasst die Tätigkeit der Gemeinschaft, nach Maßgabe des Vertrages, die Beseitigung der Hindernisse für den freien Personenverkehr zwischen den Mitgliedstaaten. Artikel 8a des Vertrages sieht vor, daß der Binnenmarkt bis zum 31. Dezember 1992 zu verwirklichen ist. Der Binnenmarkt umfasst einen Raum ohne Binnengrenzen, in dem der freie Verkehr von Waren, Personen, Dienstleistungen und Kapital gemäß den Bestimmungen des Vertrages gewährleistet ist. Die Artikel 48 und 52 des Vertrages sehen die Freizuegigkeit der Arbeitnehmer und selbständig Erwerbstätigen vor, was ein Recht auf Aufenthalt in dem Mitgliedstaat beinhaltet, in dem sie ihr Berufsleben verbringen. Es empfiehlt sich, dieses Aufenthaltsrecht auch Personen zu gewähren, die aus dem Erwerbsleben ausgeschieden sind, auch wenn sie während ihres Berufslebens von dem Recht auf Freizuegigkeit keinen Gebrauch gemacht haben. Die Aufenthaltsberechtigten dürfen die öffentlichen Finanzen des Aufnahmemitgliedstaates nicht über Gebühr belasten. Nach Artikel 10 der Verordnung (EWG) Nr. 1408/71 (4) in der Fassung der Verordnung (EWG) Nr. 1390/81 (5) haben die Empfänger von Geldleistungen bei Invalidität und Alter und die Bezieher von Renten bei Arbeitsunfällen oder Berufskrankheiten auch dann weiterhin Anspruch auf diese Leistungen und Renten, wenn sie im Gebiet eines anderen Mitgliedstaates als des Staates wohnen, auf dessen Gebiet der zur Zahlung verpflichtete Träger seinen Sitz hat. Die Ausübung des Aufenthaltsrechts wird erst dann eine reale Möglichkeit, wenn es auch den Familienangehörigen zugestanden wird. Für die von dieser Richtlinie Begünstigten sollte eine Verwaltungsregelung entsprechend der insbesondere in der Richtlinie 68/360/EWG (6) und in der Richtlinie 64/221/EWG (7) vorgesehenen Regelung gelten. Der Vertrag enthält Befugnisse für den Erlaß der vorliegenden Richtlinie nur in Artikel 235 - HAT FOLGENDE RICHTLINIE ERLASSEN: Artikel 1 (1) Die Mitgliedstaaten gewähren den Angehörigen der Mitgliedstaaten, die in der Gemeinschaft eine Tätigkeit als Arbeitnehmer oder als Selbständige ausgeuebt haben, sowie deren Familienangehörigen nach der Definition von Absatz 2 unter der Bedingung das Aufenthaltsrecht, daß sie eine Invaliditäts-, Vorruhestands- oder Altersrente oder eine Rente wegen Arbeitsunfalls oder Berufskrankheit in einer solchen Höhe beziehen, daß sie während ihres Aufenthalts nicht die Sozialhilfe des Aufnahmemitgliedstaats in Anspruch nehmen müssen, und einen Krankenversicherungsschutz genießen, der im Aufnahmemitgliedstaat alle Risiken abdeckt. Die Existenzmittel des Antragstellers gelten als ausreichend, wenn sie einen Betrag übersteigen, unterhalb dessen der Aufnahmemitgliedstaat seinen Staatsangehörigen aufgrund der persönlichen Situation des Antragstellers und gegebenenfalls der Situation der nach Absatz 2 aufgenommenen Personen Sozialhilfe gewähren kann. Ist Unterabsatz 2 in einem Mitgliedstaat nicht anwendbar, so gelten die Existenzmittel des Antragstellers als ausreichend, wenn sie den Betrag der Grundrente der Sozialversicherung übersteigen, die der Aufnahmemitgliedstaat zahlt. 
(2) Bei dem Aufenthaltsberechtigten dürfen folgende Personen ungeachtet ihrer Staatsangehörigkeit in einem anderen Mitgliedstaat Wohnung nehmen: a) sein Ehegatte sowie die Verwandten in absteigender Linie, denen Unterhalt gewährt wird; b) seine Verwandten und die Verwandten seines Ehegatten in aufsteigender Linie, denen er Unterhalt gewährt. Artikel 2 (1) Zum Nachweis des Aufenthaltsrechts wird eine Bescheinigung, die »Aufenthaltserlaubnis für Staatsangehörige eines EWG-Mitgliedstaates%quot%, erteilt, deren Gültigkeit auf fünf Jahre mit Verlängerungsmöglichkeit begrenzt werden kann. Die Mitgliedstaaten können jedoch die Erneuerung der Aufenthaltserlaubnis nach den ersten zwei Aufenthaltsjahren verlangen, wenn sie dies für erforderlich halten. Einem Familienmitglied, das nicht die Staatsangehörigkeit eines Mitgliedstaats besitzt, wird ein Aufenthaltsdokument mit der gleichen Gültigkeitsdauer ausgestellt wie dem Staatsangehörigen, von dem es seine Rechte herleitet. Für die Erteilung der Aufenthaltserlaubnis oder des Aufenthaltsdokuments darf der Mitgliedstaat vom Antragsteller nur die Vorlage eines gültigen Personalausweises bzw. Reisepasses sowie den Nachweis verlangen, daß er die Voraussetzungen des Artikels 1 erfuellt. (2) Die Artikel 2 und 3, Artikel 6 Absatz 1 Buchstabe a) und Absatz 2 sowie Artikel 9 der Richtlinie 68/360/EWG finden auf die von dieser Richtlinie Begünstigten entsprechende Anwendung. Der Ehegatte eines Staatsangehörigen eines Mitgliedstaats, der im Hoheitsgebiet eines Mitgliedstaats aufenthaltsberechtigt ist, sowie die Kinder dieses Staatsangehörigen, denen er Unterhalt gewährt, haben, auch wenn sie die Staatsangehörigkeit eines Mitgliedstaats nicht besitzen, das Recht, im gesamten Hoheitsgebiet dieses Mitgliedstaats jedwede Tätigkeit im Lohn- oder Gehaltsverhältnis oder jedwede selbständige Erwerbstätigkeit auszuüben. Die Mitgliedstaaten dürfen nur aus Gründen der öffentlichen Ordnung, der öffentlichen Sicherheit oder der Volksgesundheit von den Bestimmungen dieser Richtlinie abweichen. In diesem Fall findet die Richtlinie 64/221/EWG Anwendung. (3) Die vorliegende Richtlinie berührt nicht die geltenden Rechtsvorschriften für den Erwerb von Zweitwohnungen. Artikel 3 Das Aufenthaltsrecht besteht, solange die Berechtigten die Bedingungen des Artikels 1 erfuellen. Artikel 4 Die Kommission arbeitet spätestens drei Jahre nach dem Beginn der Anwendung dieser Richtlinie und anschließend alle drei Jahre einen Bericht über ihre Anwendung aus und legt ihn dem Europäischen Parlament und dem Rat vor. Artikel 5 Die Mitgliedstaaten setzen die erforderlichen Rechts- und Verwaltungsvorschriften in Kraft, um dieser Richtlinie bis spätestens 30. Juni 1992 nachzukommen. Sie setzen die Kommission unverzueglich davon in Kenntnis. Artikel 6 Diese Richtlinie ist an die Mitgliedstaaten gerichtet. Geschehen zu Luxemburg am 28. Juni 1990. Im Namen des Rates Der Präsident M. GEOGHEGAN-QUINN (1) ABl. Nr. C 191 vom 28. 7. 1989, S. 3 und ABl. Nr. C 26 vom 3. 2. 1990, S. 19. (2) Stellungnahme vom 13. Juni 1990 (noch nicht im Amtsblatt veröffentlicht). (3) ABl. Nr. C 329 vom 30. 12. 1989, S. 25. (4) ABl. Nr. L 149 vom 5. 7. 1971, S. 2. (5) ABl. Nr. L 143 vom 29. 5. 1981, S. 1. (6) ABl. Nr. L 257 vom 19. 10. 1968, S. 13. (7) ABl. Nr. 56 vom 4. 4. 1964, S. 850/64. "
---
# legal_t5_small_summ_de model
Model for summarization of legal text written in German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on the parallel corpus from JRC-Acquis.
## Model description
legal_t5_small_summ_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for summarization of legal texts written in German.
### How to use
Here is how to use this model to summarize legal text written in German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_summ_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "(90/365/EWG) DER RAT DER EUROPÄISCHEN GEMEINSCHAFTEN - gestützt auf den Vertrag zur Gründung der Europäischen Wirtschaftsgemeinschaft, insbesondere auf Artikel 235, auf Vorschlag der Kommission (1), nach Stellungnahme des Europäischen Parlaments (2), nach Stellungnahme des Wirtschafts- und Sozialausschusses (3), in Erwägung nachstehender Gründe: Gemäß Artikel 3 Buchstabe c) des Vertrages umfasst die Tätigkeit der Gemeinschaft, nach Maßgabe des Vertrages, die Beseitigung der Hindernisse für den freien Personenverkehr zwischen den Mitgliedstaaten. Artikel 8a des Vertrages sieht vor, daß der Binnenmarkt bis zum 31. Dezember 1992 zu verwirklichen ist. Der Binnenmarkt umfasst einen Raum ohne Binnengrenzen, in dem der freie Verkehr von Waren, Personen, Dienstleistungen und Kapital gemäß den Bestimmungen des Vertrages gewährleistet ist. Die Artikel 48 und 52 des Vertrages sehen die Freizuegigkeit der Arbeitnehmer und selbständig Erwerbstätigen vor, was ein Recht auf Aufenthalt in dem Mitgliedstaat beinhaltet, in dem sie ihr Berufsleben verbringen. Es empfiehlt sich, dieses Aufenthaltsrecht auch Personen zu gewähren, die aus dem Erwerbsleben ausgeschieden sind, auch wenn sie während ihres Berufslebens von dem Recht auf Freizuegigkeit keinen Gebrauch gemacht haben. Die Aufenthaltsberechtigten dürfen die öffentlichen Finanzen des Aufnahmemitgliedstaates nicht über Gebühr belasten. Nach Artikel 10 der Verordnung (EWG) Nr. 1408/71 (4) in der Fassung der Verordnung (EWG) Nr. 1390/81 (5) haben die Empfänger von Geldleistungen bei Invalidität und Alter und die Bezieher von Renten bei Arbeitsunfällen oder Berufskrankheiten auch dann weiterhin Anspruch auf diese Leistungen und Renten, wenn sie im Gebiet eines anderen Mitgliedstaates als des Staates wohnen, auf dessen Gebiet der zur Zahlung verpflichtete Träger seinen Sitz hat. Die Ausübung des Aufenthaltsrechts wird erst dann eine reale Möglichkeit, wenn es auch den Familienangehörigen zugestanden wird. Für die von dieser Richtlinie Begünstigten sollte eine Verwaltungsregelung entsprechend der insbesondere in der Richtlinie 68/360/EWG (6) und in der Richtlinie 64/221/EWG (7) vorgesehenen Regelung gelten. Der Vertrag enthält Befugnisse für den Erlaß der vorliegenden Richtlinie nur in Artikel 235 - HAT FOLGENDE RICHTLINIE ERLASSEN: Artikel 1 (1) Die Mitgliedstaaten gewähren den Angehörigen der Mitgliedstaaten, die in der Gemeinschaft eine Tätigkeit als Arbeitnehmer oder als Selbständige ausgeuebt haben, sowie deren Familienangehörigen nach der Definition von Absatz 2 unter der Bedingung das Aufenthaltsrecht, daß sie eine Invaliditäts-, Vorruhestands- oder Altersrente oder eine Rente wegen Arbeitsunfalls oder Berufskrankheit in einer solchen Höhe beziehen, daß sie während ihres Aufenthalts nicht die Sozialhilfe des Aufnahmemitgliedstaats in Anspruch nehmen müssen, und einen Krankenversicherungsschutz genießen, der im Aufnahmemitgliedstaat alle Risiken abdeckt. Die Existenzmittel des Antragstellers gelten als ausreichend, wenn sie einen Betrag übersteigen, unterhalb dessen der Aufnahmemitgliedstaat seinen Staatsangehörigen aufgrund der persönlichen Situation des Antragstellers und gegebenenfalls der Situation der nach Absatz 2 aufgenommenen Personen Sozialhilfe gewähren kann. Ist Unterabsatz 2 in einem Mitgliedstaat nicht anwendbar, so gelten die Existenzmittel des Antragstellers als ausreichend, wenn sie den Betrag der Grundrente der Sozialversicherung übersteigen, die der Aufnahmemitgliedstaat zahlt. 
(2) Bei dem Aufenthaltsberechtigten dürfen folgende Personen ungeachtet ihrer Staatsangehörigkeit in einem anderen Mitgliedstaat Wohnung nehmen: a) sein Ehegatte sowie die Verwandten in absteigender Linie, denen Unterhalt gewährt wird; b) seine Verwandten und die Verwandten seines Ehegatten in aufsteigender Linie, denen er Unterhalt gewährt. Artikel 2 (1) Zum Nachweis des Aufenthaltsrechts wird eine Bescheinigung, die »Aufenthaltserlaubnis für Staatsangehörige eines EWG-Mitgliedstaates%quot%, erteilt, deren Gültigkeit auf fünf Jahre mit Verlängerungsmöglichkeit begrenzt werden kann. Die Mitgliedstaaten können jedoch die Erneuerung der Aufenthaltserlaubnis nach den ersten zwei Aufenthaltsjahren verlangen, wenn sie dies für erforderlich halten. Einem Familienmitglied, das nicht die Staatsangehörigkeit eines Mitgliedstaats besitzt, wird ein Aufenthaltsdokument mit der gleichen Gültigkeitsdauer ausgestellt wie dem Staatsangehörigen, von dem es seine Rechte herleitet. Für die Erteilung der Aufenthaltserlaubnis oder des Aufenthaltsdokuments darf der Mitgliedstaat vom Antragsteller nur die Vorlage eines gültigen Personalausweises bzw. Reisepasses sowie den Nachweis verlangen, daß er die Voraussetzungen des Artikels 1 erfuellt. (2) Die Artikel 2 und 3, Artikel 6 Absatz 1 Buchstabe a) und Absatz 2 sowie Artikel 9 der Richtlinie 68/360/EWG finden auf die von dieser Richtlinie Begünstigten entsprechende Anwendung. Der Ehegatte eines Staatsangehörigen eines Mitgliedstaats, der im Hoheitsgebiet eines Mitgliedstaats aufenthaltsberechtigt ist, sowie die Kinder dieses Staatsangehörigen, denen er Unterhalt gewährt, haben, auch wenn sie die Staatsangehörigkeit eines Mitgliedstaats nicht besitzen, das Recht, im gesamten Hoheitsgebiet dieses Mitgliedstaats jedwede Tätigkeit im Lohn- oder Gehaltsverhältnis oder jedwede selbständige Erwerbstätigkeit auszuüben. Die Mitgliedstaaten dürfen nur aus Gründen der öffentlichen Ordnung, der öffentlichen Sicherheit oder der Volksgesundheit von den Bestimmungen dieser Richtlinie abweichen. In diesem Fall findet die Richtlinie 64/221/EWG Anwendung. (3) Die vorliegende Richtlinie berührt nicht die geltenden Rechtsvorschriften für den Erwerb von Zweitwohnungen. Artikel 3 Das Aufenthaltsrecht besteht, solange die Berechtigten die Bedingungen des Artikels 1 erfuellen. Artikel 4 Die Kommission arbeitet spätestens drei Jahre nach dem Beginn der Anwendung dieser Richtlinie und anschließend alle drei Jahre einen Bericht über ihre Anwendung aus und legt ihn dem Europäischen Parlament und dem Rat vor. Artikel 5 Die Mitgliedstaaten setzen die erforderlichen Rechts- und Verwaltungsvorschriften in Kraft, um dieser Richtlinie bis spätestens 30. Juni 1992 nachzukommen. Sie setzen die Kommission unverzueglich davon in Kenntnis. Artikel 6 Diese Richtlinie ist an die Mitgliedstaaten gerichtet. Geschehen zu Luxemburg am 28. Juni 1990. Im Namen des Rates Der Präsident M. GEOGHEGAN-QUINN (1) ABl. Nr. C 191 vom 28. 7. 1989, S. 3 und ABl. Nr. C 26 vom 3. 2. 1990, S. 19. (2) Stellungnahme vom 13. Juni 1990 (noch nicht im Amtsblatt veröffentlicht). (3) ABl. Nr. C 329 vom 30. 12. 1989, S. 25. (4) ABl. Nr. L 149 vom 5. 7. 1971, S. 2. (5) ABl. Nr. L 143 vom 29. 5. 1981, S. 1. (6) ABl. Nr. L 257 vom 19. 10. 1968, S. 13. (7) ABl. Nr. 56 vom 4. 4. 1964, S. 850/64. "
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_summ_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 23 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (via byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the summarization test dataset, it achieves the following results:
Test results :
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_de | 78.03|68.84 |76.95|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_summ_en | 2021-01-29T16:50:47.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"English",
"dataset:jrc-acquis",
"transformers",
"summarization English model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 23 | transformers |
---
language: English
tags:
- summarization English model
datasets:
- jrc-acquis
widget:
- text: >
THE COMMISSION OF THE EUROPEAN COMMUNITIES, Having regard to the Treaty establishing
the European Community, Having regard to Council Regulation (EC) No 1255/1999 of 17 May 1999
on the common organisation of the market in milk and milk products [1], and in particular Article 15 thereof,
Whereas: (1) Article 7(1) of Commission Regulation (EC) No 2799/1999 [2] fixes the amount of aid for
skimmed milk and skimmed-milk powder intended for animal feed taking into account the factors set out
in Article 11(2) of Regulation (EC) No 1255/1999. In view of the developments in the market price of
skimmed-milk powder, of the increase in the market prices for competing proteins, and of the reduction
of the supply of skimmed-milk powder, the amount of aid should be reduced. (2) Regulation (EC)
No 2799/1999 should therefore be amended accordingly. (3) The Management Committee for Milk and
Milk Products has not delivered an opinion within the time-limit set by its chairman,
HAS ADOPTED THIS REGULATION: Article 1 In Article 7 of Regulation (EC) No 2799/1999, paragraph 1 is replaced by the following: "1. Aid is fixed at: (a) EUR 1,62 per 100 kg of skimmed milk with a protein content of not less than 35,6 % of the non-fatty dry extract; (b) EUR 1,42 per 100 kg of skimmed milk with a protein content of not less than 31,4 % but less than 35,6 % of the non-fatty dry extract; (c) EUR 20,00 per 100 kg of skimmed-milk powder with a protein content of not less than 35,6 % of the non-fatty dry extract; (d) EUR 17,64 per 100 kg of skimmed-milk powder with a protein content of not less than 31,4 % but less than 35,6 % of the non-fatty dry extract." Article 2 This Regulation shall enter into force on the day following its publication in the Official Journal of the European Union. This Regulation shall be binding in its entirety and directly applicable in all Member States. Done at Brussels, 19 April 2006. For the Commission Mariann Fischer Boel Member of the Commission [1] OJ L 160, 26.6.1999, p. 48. Regulation as last amended by Regulation (EC) No 1913/2005 (OJ L 307, 25.11.2005, p. 2). [2] OJ L 340, 31.12.1999, p. 3.
Regulation as last amended by Regulation (EC) No 1194/2005 (OJ L 194, 26.7.2005, p. 7).
---
# legal_t5_small_summ_en model
Model for summarization of legal text written in English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on the parallel corpus from JRC-Acquis.
## Model description
legal_t5_small_summ_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for summarization of legal texts written in English.
### How to use
Here is how to use this model to summarize legal text written in English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_summ_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "THE COMMISSION OF THE EUROPEAN COMMUNITIES, Having regard to the Treaty establishing the European Community, Having regard to Council Regulation (EC) No 1255/1999 of 17 May 1999 on the common organisation of the market in milk and milk products [1], and in particular Article 15 thereof, Whereas: (1) Article 7(1) of Commission Regulation (EC) No 2799/1999 [2] fixes the amount of aid for skimmed milk and skimmed-milk powder intended for animal feed taking into account the factors set out in Article 11(2) of Regulation (EC) No 1255/1999. In view of the developments in the market price of skimmed-milk powder, of the increase in the market prices for competing proteins, and of the reduction of the supply of skimmed-milk powder, the amount of aid should be reduced. (2) Regulation (EC) No 2799/1999 should therefore be amended accordingly. (3) The Management Committee for Milk and Milk Products has not delivered an opinion within the time-limit set by its chairman, HAS ADOPTED THIS REGULATION: Article 1 In Article 7 of Regulation (EC) No 2799/1999, paragraph 1 is replaced by the following: "1. Aid is fixed at: (a) EUR 1,62 per 100 kg of skimmed milk with a protein content of not less than 35,6 % of the non-fatty dry extract; (b) EUR 1,42 per 100 kg of skimmed milk with a protein content of not less than 31,4 % but less than 35,6 % of the non-fatty dry extract; (c) EUR 20,00 per 100 kg of skimmed-milk powder with a protein content of not less than 35,6 % of the non-fatty dry extract; (d) EUR 17,64 per 100 kg of skimmed-milk powder with a protein content of not less than 31,4 % but less than 35,6 % of the non-fatty dry extract." Article 2 This Regulation shall enter into force on the day following its publication in the Official Journal of the European Union. This Regulation shall be binding in its entirety and directly applicable in all Member States. Done at Brussels, 19 April 2006. For the Commission Mariann Fischer Boel Member of the Commission [1] OJ L 160, 26.6.1999, p. 48. Regulation as last amended by Regulation (EC) No 1913/2005 (OJ L 307, 25.11.2005, p. 2). [2] OJ L 340, 31.12.1999, p. 3. Regulation as last amended by Regulation (EC) No 1194/2005 (OJ L 194, 26.7.2005, p. 7). -------------------------------------------------- "
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_summ_en model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 22 Thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding) that is used with this model.
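As an illustration of this preprocessing step, a vocabulary of this kind can be trained with the SentencePiece library. The input file name and the vocabulary size below are assumptions for the sketch, not values taken from this card:
```python
# Hypothetical sketch: training a unigram SentencePiece vocabulary on the
# parallel corpus. The file name and vocab_size are assumptions.
import sentencepiece as spm
spm.SentencePieceTrainer.train(
    input="corpus_all_pairs.txt", # one sentence per line, all language pairs
    model_prefix="legal_t5_small", # produces legal_t5_small.model / .vocab
    model_type="unigram", # unigram language model, as described above
    vocab_size=32000, # assumed size; the card does not state it
    character_coverage=1.0, # keep full coverage for accented characters
)
```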
### Pretraining
## Evaluation results
When the model is used on the summarization test dataset, it achieves the following results:
Test results:
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_en | 78.11|68.78 |77.0|
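Scores of this kind can be reproduced with the `rouge_score` package; the minimal sketch below uses placeholder strings rather than the model's actual test set:
```python
# Minimal sketch of scoring a generated summary against a reference with the
# rouge_score package. The example strings are placeholders, not test data.
from rouge_score import rouge_scorer
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"], use_stemmer=True)
reference = "Commission regulation amending the aid for skimmed-milk powder."
prediction = "Regulation amending the aid fixed for skimmed-milk powder."
scores = scorer.score(reference, prediction)
for name, value in scores.items():
    print(name, round(value.fmeasure * 100, 2))
```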
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_summ_es | 2021-01-29T08:52:51.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"Spanish",
"dataset:jrc-acquis",
"transformers",
"summarization Spanish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 10 | transformers |
---
language: Spanish
tags:
- summarization Spanish model
datasets:
- jrc-acquis
widget:
- text: "[notificada con el número C(2006) 166] (El texto en lengua portuguesa es el único auténtico) (2006/78/CE) LA COMISIÓN DE LAS COMUNIDADES EUROPEAS, Visto el Tratado constitutivo de la Comunidad Europea, Vista la Decisión 90/424/CEE del Consejo, de 26 de junio de 1990, relativa a determinados gastos en el sector veterinario [1], y, en particular, su artículo 3, apartado 2 bis, Considerando lo siguiente: (1) El 24 de noviembre de 2004 se declararon brotes de fiebre catarral ovina en Portugal. La aparición de esta enfermedad puede representar un grave riesgo para la cabaña ganadera de la Comunidad. (2) Para atajar la propagación de la enfermedad en el plazo más breve, la Comunidad debe participar en los gastos subvencionables que suponen para Portugal la adopción de medidas de urgencia contra la enfermedad, en las condiciones previstas en la Decisión 90/424/CEE. Por ello, el 15 de septiembre de 2005 se adoptó la Decisión 2005/660/CE de la Comisión relativa a una ayuda financiera de la Comunidad para medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005 [2]. (3) La Comisión ha adoptado varias decisiones para delimitar las zonas de protección y vigilancia y fijar las condiciones que deben cumplir los animales que vayan a salir de esas zonas; la última de ellas es la Decisión 2005/393/CE, de 23 de mayo de 2005, sobre las zonas de protección y vigilancia en relación con la fiebre catarral ovina y las condiciones que se aplican a los traslados de animales desde estas zonas o a través de ellas [3]. (4) Desde el otoño de 2004, la excepcional escasez de lluvias en Portugal ha afectado gravemente al suministro de forraje y, en consecuencia, a las posibilidades de alimentación animal, lo que ha conllevado costes adicionales para los ganaderos. La situación tiene consecuencias particulares en Portugal, pues las explotaciones especializadas en reproducción de bovinos y de ovinos están ubicadas en las zonas afectadas por las restricciones aplicadas a los traslados de animales, mientras que las especializadas en engorde, que constituyen la salida lógica de los animales criados en aquéllas, están localizadas fuera de dichas zonas. (5) Portugal, en colaboración con España, puso en marcha otras medidas para controlar la epidemia, como la realización de estudios epidemiológicos y la aplicación de medidas de vigilancia de la enfermedad, incluidas las pruebas de laboratorio para el control serológico y virológico en el marco de las pruebas realizadas a los animales antes de su traslado y en el de la vigilancia entomológica. (6) Portugal y España han presentado pruebas de su cooperación para evitar la propagación de la enfermedad tomando medidas de vigilancia de la misma. (7) De conformidad con el artículo 3, apartado 2, del Reglamento (CE) no 1258/1999 del Consejo, de 17 de mayo de 1999, sobre la financiación de la política agrícola común [4], las medidas veterinarias y fitosanitarias ejecutadas según las normas comunitarias son financiadas por la sección Garantía del Fondo Europeo de Orientación y de Garantía Agrícola. El control financiero de estas acciones debe efectuarse de conformidad con lo dispuesto en los artículos 8 y 9 de dicho Reglamento. (8) El pago de la contribución financiera de la Comunidad se supedita a la realización efectiva de las acciones programadas y a la presentación por parte de las autoridades de toda la información necesaria en los plazos establecidos. 
(9) El 25 de febrero de 2005, Portugal presentó un primer cálculo de los costes de las demás medidas de urgencia, como las de vigilancia epidemiológica, tomadas para luchar contra la enfermedad. El importe estimado de las medidas de vigilancia epidemiológica se eleva a 4303336 EUR. (10) A la espera de que se efectúen los controles in situ de la Comisión, procede fijar desde ahora el importe de un primer pago de la ayuda financiera de la Comunidad. Este primer pago ha de ser igual al 50 % de la contribución de la Comunidad, establecida sobre la base del gasto subvencionable calculado para las medidas de vigilancia epidemiológica. Procede asimismo determinar los importes máximos que se reembolsarán en concepto de pruebas realizadas y de trampas utilizadas en el marco de dichas medidas. (11) Las autoridades portuguesas han cumplido íntegramente sus obligaciones técnicas y administrativas relacionadas con las medidas previstas en el artículo 3 de la Decisión 90/424/CEE. (12) Las medidas previstas en la presente Decisión se ajustan al dictamen del Comité permanente de la cadena alimentaria y de sanidad animal. HA ADOPTADO LA PRESENTE DECISIÓN: Artículo 1 Concesión de una ayuda financiera de la Comunidad a Portugal 1. En el marco de las medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005, Portugal tendrá derecho a una contribución comunitaria del 50 % de los importes desembolsados en concepto de pruebas de laboratorio para la vigilancia serológica y virológica, así como en concepto de vigilancia entomológica, incluida la adquisición de trampas. 2. El importe máximo de los gastos que se reembolsarán a Portugal en concepto de las pruebas y las trampas mencionadas en el apartado 1 no excederá de: a) vigilancia serológica, prueba ELISA: 2,5 EUR por prueba; b) vigilancia virológica, reacción en cadena de la polimerasa retrotranscriptásica (RT.PCR): 15 EUR por prueba; c) vigilancia entomológica, trampa: 160 EUR por trampa. 3. El impuesto sobre el valor añadido se excluirá de la participación financiera de la Comunidad. Artículo 2 Modalidades de pago A reserva del resultado de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1, de la Decisión 90/424/CEE, se efectuará un primer pago de 600000 EUR como parte de la ayuda financiera de la Comunidad prevista en el artículo 1. El pago se llevará a cabo previa presentación por parte de Portugal de justificantes de las pruebas de laboratorio y de la adquisición de las trampas mencionadas en el artículo 1, apartado 1. Artículo 3 Condiciones de pago y documentación justificativa 1. La ayuda financiera de la Comunidad contemplada en el artículo 1 se pagará atendiendo a los siguientes elementos: a) una solicitud que contenga los datos especificados en el anexo, presentada en el plazo establecido en el apartado 2 del presente artículo; b) la documentación justificativa mencionada en el artículo 2, que incluirá un informe epidemiológico y un informe financiero; c) el resultado de cualquiera de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1, de la Decisión 90/424/CEE. Los documentos mencionados en la letra b) deberán estar disponibles para los controles in situ mencionados en la letra c). 2. La solicitud mencionada en el apartado 1, letra a), se presentará en formato electrónico en un plazo de 60 días naturales a partir de la fecha de notificación de la presente Decisión. 
Si no se respeta este plazo, la ayuda financiera comunitaria se reducirá un 25 % por cada mes de retraso. Artículo 4 Destinatario El destinatario de la presente Decisión es la República Portuguesa. Hecho en Bruselas, el 31 de enero de 2006. Por la Comisión Markos Kyprianou Miembro de la Comisión [1] DO L 224 de 18.8.1990, p. 19. Decisión modificada en último lugar por el Reglamento (CE) no 806/2003 (DO L 122 de 16.5.2003, p. 1). [2] DO L 244 de 20.9.2005, p. 28. [3] DO L 130 de 24.5.2005, p. 22. Decisión modificada en último lugar por la Decisión 2005/828/CE (DO L 311 de 26.11.2005, p. 37). [4] DO L 160 de 26.6.1999, p. 103. -------------------------------------------------- ANEXO Datos mencionados en el artículo 3, apartado 1, letra a) Gastos | Naturaleza de los costes | Número | Importe (sin IVA) | Pruebas ELISA | | | Pruebas RT.PCR | | | Otras pruebas virológicas | | | Trampas | | | Total | | -------------------------------------------------- "
---
# legal_t5_small_summ_es model
Model for summarization of legal text written in Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on the jrc-acquis corpus.
## Model description
legal_t5_small_summ_es is based on the `t5-small` model and was trained on a large corpus of legal text. It is a smaller model that scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for summarization of legal texts written in Spanish.
### How to use
Here is how to use this model to summarize legal text written in Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_summ_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "[notificada con el número C(2006) 166] (El texto en lengua portuguesa es el único auténtico) (2006/78/CE) LA COMISIÓN DE LAS COMUNIDADES EUROPEAS, Visto el Tratado constitutivo de la Comunidad Europea, Vista la Decisión 90/424/CEE del Consejo, de 26 de junio de 1990, relativa a determinados gastos en el sector veterinario [1], y, en particular, su artículo 3, apartado 2 bis, Considerando lo siguiente: (1) El 24 de noviembre de 2004 se declararon brotes de fiebre catarral ovina en Portugal. La aparición de esta enfermedad puede representar un grave riesgo para la cabaña ganadera de la Comunidad. (2) Para atajar la propagación de la enfermedad en el plazo más breve, la Comunidad debe participar en los gastos subvencionables que suponen para Portugal la adopción de medidas de urgencia contra la enfermedad, en las condiciones previstas en la Decisión 90/424/CEE. Por ello, el 15 de septiembre de 2005 se adoptó la Decisión 2005/660/CE de la Comisión relativa a una ayuda financiera de la Comunidad para medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005 [2]. (3) La Comisión ha adoptado varias decisiones para delimitar las zonas de protección y vigilancia y fijar las condiciones que deben cumplir los animales que vayan a salir de esas zonas; la última de ellas es la Decisión 2005/393/CE, de 23 de mayo de 2005, sobre las zonas de protección y vigilancia en relación con la fiebre catarral ovina y las condiciones que se aplican a los traslados de animales desde estas zonas o a través de ellas [3]. (4) Desde el otoño de 2004, la excepcional escasez de lluvias en Portugal ha afectado gravemente al suministro de forraje y, en consecuencia, a las posibilidades de alimentación animal, lo que ha conllevado costes adicionales para los ganaderos. La situación tiene consecuencias particulares en Portugal, pues las explotaciones especializadas en reproducción de bovinos y de ovinos están ubicadas en las zonas afectadas por las restricciones aplicadas a los traslados de animales, mientras que las especializadas en engorde, que constituyen la salida lógica de los animales criados en aquéllas, están localizadas fuera de dichas zonas. (5) Portugal, en colaboración con España, puso en marcha otras medidas para controlar la epidemia, como la realización de estudios epidemiológicos y la aplicación de medidas de vigilancia de la enfermedad, incluidas las pruebas de laboratorio para el control serológico y virológico en el marco de las pruebas realizadas a los animales antes de su traslado y en el de la vigilancia entomológica. (6) Portugal y España han presentado pruebas de su cooperación para evitar la propagación de la enfermedad tomando medidas de vigilancia de la misma. (7) De conformidad con el artículo 3, apartado 2, del Reglamento (CE) no 1258/1999 del Consejo, de 17 de mayo de 1999, sobre la financiación de la política agrícola común [4], las medidas veterinarias y fitosanitarias ejecutadas según las normas comunitarias son financiadas por la sección Garantía del Fondo Europeo de Orientación y de Garantía Agrícola. El control financiero de estas acciones debe efectuarse de conformidad con lo dispuesto en los artículos 8 y 9 de dicho Reglamento. (8) El pago de la contribución financiera de la Comunidad se supedita a la realización efectiva de las acciones programadas y a la presentación por parte de las autoridades de toda la información necesaria en los plazos establecidos. 
(9) El 25 de febrero de 2005, Portugal presentó un primer cálculo de los costes de las demás medidas de urgencia, como las de vigilancia epidemiológica, tomadas para luchar contra la enfermedad. El importe estimado de las medidas de vigilancia epidemiológica se eleva a 4303336 EUR. (10) A la espera de que se efectúen los controles in situ de la Comisión, procede fijar desde ahora el importe de un primer pago de la ayuda financiera de la Comunidad. Este primer pago ha de ser igual al 50 % de la contribución de la Comunidad, establecida sobre la base del gasto subvencionable calculado para las medidas de vigilancia epidemiológica. Procede asimismo determinar los importes máximos que se reembolsarán en concepto de pruebas realizadas y de trampas utilizadas en el marco de dichas medidas. (11) Las autoridades portuguesas han cumplido íntegramente sus obligaciones técnicas y administrativas relacionadas con las medidas previstas en el artículo 3 de la Decisión 90/424/CEE. (12) Las medidas previstas en la presente Decisión se ajustan al dictamen del Comité permanente de la cadena alimentaria y de sanidad animal. HA ADOPTADO LA PRESENTE DECISIÓN: Artículo 1 Concesión de una ayuda financiera de la Comunidad a Portugal 1. En el marco de las medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005, Portugal tendrá derecho a una contribución comunitaria del 50 % de los importes desembolsados en concepto de pruebas de laboratorio para la vigilancia serológica y virológica, así como en concepto de vigilancia entomológica, incluida la adquisición de trampas. 2. El importe máximo de los gastos que se reembolsarán a Portugal en concepto de las pruebas y las trampas mencionadas en el apartado 1 no excederá de: a) vigilancia serológica, prueba ELISA: 2,5 EUR por prueba; b) vigilancia virológica, reacción en cadena de la polimerasa retrotranscriptásica (RT.PCR): 15 EUR por prueba; c) vigilancia entomológica, trampa: 160 EUR por trampa. 3. El impuesto sobre el valor añadido se excluirá de la participación financiera de la Comunidad. Artículo 2 Modalidades de pago A reserva del resultado de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1, de la Decisión 90/424/CEE, se efectuará un primer pago de 600000 EUR como parte de la ayuda financiera de la Comunidad prevista en el artículo 1. El pago se llevará a cabo previa presentación por parte de Portugal de justificantes de las pruebas de laboratorio y de la adquisición de las trampas mencionadas en el artículo 1, apartado 1. Artículo 3 Condiciones de pago y documentación justificativa 1. La ayuda financiera de la Comunidad contemplada en el artículo 1 se pagará atendiendo a los siguientes elementos: a) una solicitud que contenga los datos especificados en el anexo, presentada en el plazo establecido en el apartado 2 del presente artículo; b) la documentación justificativa mencionada en el artículo 2, que incluirá un informe epidemiológico y un informe financiero; c) el resultado de cualquiera de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1, de la Decisión 90/424/CEE. Los documentos mencionados en la letra b) deberán estar disponibles para los controles in situ mencionados en la letra c). 2. La solicitud mencionada en el apartado 1, letra a), se presentará en formato electrónico en un plazo de 60 días naturales a partir de la fecha de notificación de la presente Decisión. 
Si no se respeta este plazo, la ayuda financiera comunitaria se reducirá un 25 % por cada mes de retraso. Artículo 4 Destinatario El destinatario de la presente Decisión es la República Portuguesa. Hecho en Bruselas, el 31 de enero de 2006. Por la Comisión Markos Kyprianou Miembro de la Comisión [1] DO L 224 de 18.8.1990, p. 19. Decisión modificada en último lugar por el Reglamento (CE) no 806/2003 (DO L 122 de 16.5.2003, p. 1). [2] DO L 244 de 20.9.2005, p. 28. [3] DO L 130 de 24.5.2005, p. 22. Decisión modificada en último lugar por la Decisión 2005/828/CE (DO L 311 de 26.11.2005, p. 37). [4] DO L 160 de 26.6.1999, p. 103. -------------------------------------------------- ANEXO Datos mencionados en el artículo 3, apartado 1, letra a) Gastos | Naturaleza de los costes | Número | Importe (sin IVA) | Pruebas ELISA | | | Pruebas RT.PCR | | | Otras pruebas virológicas | | | Trampas | | | Total | | -------------------------------------------------- "
pipeline([es_text], max_length=512)
```
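For finer control over tokenization and decoding, the same model can be driven without the pipeline wrapper. The sketch below reuses `es_text` from the example above; the beam size is an assumption, since the card does not state decoding parameters:
```python
# Lower-level sketch without the pipeline wrapper: tokenize with truncation
# to the 512-token training length and decode the generated summary.
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "SEBIS/legal_t5_small_summ_es"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
inputs = tokenizer(es_text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=512, num_beams=4) # num_beams is assumed
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```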
## Training data
The legal_t5_small_summ_es model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 23 Thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding) that is used with this model.
### Pretraining
## Evaluation results
When the model is used on the summarization test dataset, it achieves the following results:
Test results:
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_es | 80.23|70.16 |78.69|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_summ_fr | 2021-01-29T08:52:39.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"French",
"dataset:jrc-acquis",
"transformers",
"summarization French model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 18 | transformers |
---
language: French
tags:
- summarization French model
datasets:
- jrc-acquis
widget:
- text: "LA COMMISSION DES COMMUNAUTÉS EUROPÉENNES, vu le traité instituant la Communauté européenne, vu le règlement (CE) no 1784/2003 du Conseil du 29 septembre 2003 portant organisation commune des marchés dans le secteur des céréales [1], et notamment son article 13, paragraphe 3, vu le règlement (CE) no 1785/2003 du Conseil du 29 septembre 2003 portant organisation commune du marché du riz [2], et notamment son article 14, paragraphe 3, considérant ce qui suit: (1) Conformément à l'article 13, paragraphe 1, du règlement (CE) no 1784/2003 et à l'article 14, paragraphe 1, du règlement (CE) no 1785/2003, la différence entre les cours ou les prix sur le marché mondial des produits visés à l'article 1er de chacun de ces deux règlements et les prix dans la Communauté peut être couverte par une restitution à l'exportation. (2) Le règlement (CE) no 1043/2005 de la Commission du 30 juin 2005 portant application du règlement (CE) no 3448/93 du Conseil en ce qui concerne le système d’octroi des restitutions à l'exportation pour certains produits agricoles exportés sous forme de marchandises ne relevant pas de l'annexe I du traité ainsi que les critères de fixation de leurs montants [3] a spécifié ceux de ces produits pour lesquels il y a lieu de fixer un taux de restitution applicable lors de leur exportation sous forme de marchandises reprises, selon le cas, à l'annexe III du règlement (CE) no 1784/2003 ou à l'annexe IV du règlement (CE) no 1785/2003. (3) Conformément à l'article 14, paragraphe 1, du règlement (CE) no 1043/2005, le taux de la restitution par 100 kilogrammes de chacun des produits de base considérés doit être fixé chaque mois. (4) Les engagements pris en matière de restitutions pouvant être octroyées à l'exportation de produits agricoles incorporés dans des marchandises ne relevant pas de l'annexe I du traité peuvent être mis en péril par la fixation à l'avance de taux de restitution élevés. Il convient, dès lors, de prendre des mesures de sauvegarde dans ces situations sans empêcher pour autant la conclusion de contrats à long terme. La fixation d'un taux de restitution spécifique pour la fixation à l'avance des restitutions est une mesure permettant de rencontrer ces différents objectifs. (5) À la suite de l'arrangement entre la Communauté européenne et les États-Unis d'Amérique concernant les exportations de pâtes alimentaires de la Communauté aux États-Unis approuvé par la décision 87/482/CEE du Conseil [4], il est nécessaire de différencier la restitution pour les marchandises relevant des codes NC 19021100 et 190219 selon leur destination. (6) Conformément à l'article 15, paragraphes 2 et 3, du règlement (CE) no 1043/2005, il y a lieu de fixer un taux de restitution à l'exportation réduit, compte tenu du montant de la restitution à la production applicable, en vertu du règlement (CEE) no 1722/93 de la Commission [5], au produit de base mis en œuvre, valable au cours de la période présumée de fabrication des marchandises. (7) Les boissons spiritueuses sont considérées comme moins sensibles au prix des céréales mises en œuvre pour leur fabrication. Toutefois, le protocole 19 du traité d'adhésion du Royaume-Uni, de l'Irlande et du Danemark prévoit que des mesures nécessaires doivent être arrêtées afin de faciliter l'utilisation des céréales communautaires pour la fabrication de boissons spiritueuses obtenues à partir de céréales. Il convient donc d'adapter le taux de restitution applicable aux céréales exportées sous forme de boissons spiritueuses. 
(8) Le comité de gestion des céréales n'a pas émis d'avis dans le délai imparti par son président, A ARRÊTÉ LE PRÉSENT RÈGLEMENT: Article premier Les taux des restitutions applicables aux produits de base figurant à l'annexe I du règlement (CE) no 1043/2005 et à l'article 1er du règlement (CE) no 1784/2003 ou à l'article 1er du règlement (CE) no 1785/2003 modifié, qui sont exportés sous forme de marchandises reprises respectivement à l'annexe III du règlement (CE) no 1784/2003 ou à l'annexe IV du règlement (CE) no 1785/2003, sont fixés comme indiqué à l'annexe du présent règlement. Article 2 Le présent règlement entre en vigueur le 23 septembre 2005. Le présent règlement est obligatoire dans tous ses éléments et directement applicable dans tout État membre. Fait à Bruxelles, le 22 septembre 2005. Par la Commission Günter Verheugen Vice-président [1] JO L 270 du 21.10.2003, p. 78. [2] JO L 270 du 21.10.2003, p. 96. [3] JO L 172 du 5.7.2005, p. 24. [4] JO L 275 du 29.9.1987, p. 36. [5] JO L 159 du 1.7.1993, p. 112. Règlement modifié en dernier lieu par le règlement (CE) no 1584/2004 (JO L 280 du 31.8.2004, p. 11). -------------------------------------------------- ANNEXE Taux des restitutions applicables à compter du 23 septembre 2005 à certains produits des secteurs des céréales et du riz exportés sous forme de marchandises ne relevant pas de l'annexe I du traité [1] (en EUR/100 kg) | Code NC | Désignation des marchandises | Taux de la restitution par 100 kg du produit de base | En cas de fixation à l'avance des restitutions | Autres | 10011000 | Froment (blé) dur: | | | – en cas d'exportation de marchandises relevant des codes NC 190211 et 190219 vers les États-Unis d'Amérique | — | — | – dans les autres cas | — | — | 10019099 | Froment (blé) tendre et méteil: | | | – en cas d'exportation de marchandises relevant des codes NC 190211 et 190219 vers les États-Unis d'Amérique | — | — | – dans les autres cas: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | — | — | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | — | — | – – dans les autres cas | — | — | 10020000 | Seigle | — | — | 10030090 | Orge | | | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | — | — | – dans les autres cas | — | — | 10040000 | Avoine | — | — | 10059000 | Maïs, mis en œuvre sous forme de: | | | – amidon: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 2,994 | 3,150 | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – – dans les autres cas | 4,615 | 4,615 | – glucose, sirop de glucose, maltodextrine, sirop de maltodextrine des codes NC 17023051, 17023059, 17023091, 17023099, 17024090, 17029050, 17029075, 17029079, 21069055: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 1,840 | 1,996 | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 1,776 | 1,776 | – – dans les autres cas | 3,461 | 3,461 | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – autres (y compris en l'état) | 4,615 | 4,615 | Fécule de pommes de terre du code NC 11081300 assimilée à un produit issu de la transformation du maïs: | | | – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 2,435 | 2,585 | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – dans les autres cas | 4,615 | 4,615 | ex100630 | 
Riz blanchi: | | | – à grains ronds | — | — | – à grains moyens | — | — | – à grains longs | — | — | 10064000 | Riz en brisures | — | — | 10070090 | Sorgho à grains (à l'excl. du sorgho à grains, hybride, destiné à l'ensemencement) | — | — | [1] Les taux prévus à la présente annexe ne s’appliquent pas avec effet au 1er octobre 2004 aux exportations vers la Bulgarie et avec effet au 1er février 2005 aux marchandises visées aux tableaux I et II du Protocole no 2 de l’Accord entre la Communauté économique européenne et la Confédération suisse du 22 juillet 1972 qui sont exportées vers la Confédération suisse ou la principauté de Liechtenstein. [2] En ce qui concerne les produits agricoles obtenus par transformation d’un produit de base et/ou de produits assimilés, les coefficients fixés à l’annexe V du règlement (CE) no 1043/2005 de la Commission s’appliquent. [3] La marchandise concernée relève du code NC 35051050. [4] Marchandises reprises à l'annexe III du règlement (CE) no 1784/2003 ou visées à l'article 2 du règlement (CEE) no 2825/93 (JO L 258 du 16.10.1993, p. 6). [5] Pour les sirops des codes NC 17023099, 17024090 et 17026090, obtenus par mélange de sirops de glucose et fructose, seul le sirop de glucose a droit à la restitution à l'exportation. -------------------------------------------------- "
---
# legal_t5_small_summ_fr model
Model for summarization of legal text written in French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on the jrc-acquis corpus.
## Model description
legal_t5_small_summ_fr is based on the `t5-small` model and was trained on a large corpus of legal text. It is a smaller model that scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for summarization of legal texts written in French.
### How to use
Here is how to use this model to summarize legal text written in French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_summ_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "LA COMMISSION DES COMMUNAUTÉS EUROPÉENNES, vu le traité instituant la Communauté européenne, vu le règlement (CE) no 1784/2003 du Conseil du 29 septembre 2003 portant organisation commune des marchés dans le secteur des céréales [1], et notamment son article 13, paragraphe 3, vu le règlement (CE) no 1785/2003 du Conseil du 29 septembre 2003 portant organisation commune du marché du riz [2], et notamment son article 14, paragraphe 3, considérant ce qui suit: (1) Conformément à l'article 13, paragraphe 1, du règlement (CE) no 1784/2003 et à l'article 14, paragraphe 1, du règlement (CE) no 1785/2003, la différence entre les cours ou les prix sur le marché mondial des produits visés à l'article 1er de chacun de ces deux règlements et les prix dans la Communauté peut être couverte par une restitution à l'exportation. (2) Le règlement (CE) no 1043/2005 de la Commission du 30 juin 2005 portant application du règlement (CE) no 3448/93 du Conseil en ce qui concerne le système d’octroi des restitutions à l'exportation pour certains produits agricoles exportés sous forme de marchandises ne relevant pas de l'annexe I du traité ainsi que les critères de fixation de leurs montants [3] a spécifié ceux de ces produits pour lesquels il y a lieu de fixer un taux de restitution applicable lors de leur exportation sous forme de marchandises reprises, selon le cas, à l'annexe III du règlement (CE) no 1784/2003 ou à l'annexe IV du règlement (CE) no 1785/2003. (3) Conformément à l'article 14, paragraphe 1, du règlement (CE) no 1043/2005, le taux de la restitution par 100 kilogrammes de chacun des produits de base considérés doit être fixé chaque mois. (4) Les engagements pris en matière de restitutions pouvant être octroyées à l'exportation de produits agricoles incorporés dans des marchandises ne relevant pas de l'annexe I du traité peuvent être mis en péril par la fixation à l'avance de taux de restitution élevés. Il convient, dès lors, de prendre des mesures de sauvegarde dans ces situations sans empêcher pour autant la conclusion de contrats à long terme. La fixation d'un taux de restitution spécifique pour la fixation à l'avance des restitutions est une mesure permettant de rencontrer ces différents objectifs. (5) À la suite de l'arrangement entre la Communauté européenne et les États-Unis d'Amérique concernant les exportations de pâtes alimentaires de la Communauté aux États-Unis approuvé par la décision 87/482/CEE du Conseil [4], il est nécessaire de différencier la restitution pour les marchandises relevant des codes NC 19021100 et 190219 selon leur destination. (6) Conformément à l'article 15, paragraphes 2 et 3, du règlement (CE) no 1043/2005, il y a lieu de fixer un taux de restitution à l'exportation réduit, compte tenu du montant de la restitution à la production applicable, en vertu du règlement (CEE) no 1722/93 de la Commission [5], au produit de base mis en œuvre, valable au cours de la période présumée de fabrication des marchandises. (7) Les boissons spiritueuses sont considérées comme moins sensibles au prix des céréales mises en œuvre pour leur fabrication. Toutefois, le protocole 19 du traité d'adhésion du Royaume-Uni, de l'Irlande et du Danemark prévoit que des mesures nécessaires doivent être arrêtées afin de faciliter l'utilisation des céréales communautaires pour la fabrication de boissons spiritueuses obtenues à partir de céréales. Il convient donc d'adapter le taux de restitution applicable aux céréales exportées sous forme de boissons spiritueuses. 
(8) Le comité de gestion des céréales n'a pas émis d'avis dans le délai imparti par son président, A ARRÊTÉ LE PRÉSENT RÈGLEMENT: Article premier Les taux des restitutions applicables aux produits de base figurant à l'annexe I du règlement (CE) no 1043/2005 et à l'article 1er du règlement (CE) no 1784/2003 ou à l'article 1er du règlement (CE) no 1785/2003 modifié, qui sont exportés sous forme de marchandises reprises respectivement à l'annexe III du règlement (CE) no 1784/2003 ou à l'annexe IV du règlement (CE) no 1785/2003, sont fixés comme indiqué à l'annexe du présent règlement. Article 2 Le présent règlement entre en vigueur le 23 septembre 2005. Le présent règlement est obligatoire dans tous ses éléments et directement applicable dans tout État membre. Fait à Bruxelles, le 22 septembre 2005. Par la Commission Günter Verheugen Vice-président [1] JO L 270 du 21.10.2003, p. 78. [2] JO L 270 du 21.10.2003, p. 96. [3] JO L 172 du 5.7.2005, p. 24. [4] JO L 275 du 29.9.1987, p. 36. [5] JO L 159 du 1.7.1993, p. 112. Règlement modifié en dernier lieu par le règlement (CE) no 1584/2004 (JO L 280 du 31.8.2004, p. 11). -------------------------------------------------- ANNEXE Taux des restitutions applicables à compter du 23 septembre 2005 à certains produits des secteurs des céréales et du riz exportés sous forme de marchandises ne relevant pas de l'annexe I du traité [1] (en EUR/100 kg) | Code NC | Désignation des marchandises | Taux de la restitution par 100 kg du produit de base | En cas de fixation à l'avance des restitutions | Autres | 10011000 | Froment (blé) dur: | | | – en cas d'exportation de marchandises relevant des codes NC 190211 et 190219 vers les États-Unis d'Amérique | — | — | – dans les autres cas | — | — | 10019099 | Froment (blé) tendre et méteil: | | | – en cas d'exportation de marchandises relevant des codes NC 190211 et 190219 vers les États-Unis d'Amérique | — | — | – dans les autres cas: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | — | — | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | — | — | – – dans les autres cas | — | — | 10020000 | Seigle | — | — | 10030090 | Orge | | | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | — | — | – dans les autres cas | — | — | 10040000 | Avoine | — | — | 10059000 | Maïs, mis en œuvre sous forme de: | | | – amidon: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 2,994 | 3,150 | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – – dans les autres cas | 4,615 | 4,615 | – glucose, sirop de glucose, maltodextrine, sirop de maltodextrine des codes NC 17023051, 17023059, 17023091, 17023099, 17024090, 17029050, 17029075, 17029079, 21069055: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 1,840 | 1,996 | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 1,776 | 1,776 | – – dans les autres cas | 3,461 | 3,461 | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – autres (y compris en l'état) | 4,615 | 4,615 | Fécule de pommes de terre du code NC 11081300 assimilée à un produit issu de la transformation du maïs: | | | – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 2,435 | 2,585 | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – dans les autres cas | 4,615 | 4,615 | ex100630 | 
Riz blanchi: | | | – à grains ronds | — | — | – à grains moyens | — | — | – à grains longs | — | — | 10064000 | Riz en brisures | — | — | 10070090 | Sorgho à grains (à l'excl. du sorgho à grains, hybride, destiné à l'ensemencement) | — | — | [1] Les taux prévus à la présente annexe ne s’appliquent pas avec effet au 1er octobre 2004 aux exportations vers la Bulgarie et avec effet au 1er février 2005 aux marchandises visées aux tableaux I et II du Protocole no 2 de l’Accord entre la Communauté économique européenne et la Confédération suisse du 22 juillet 1972 qui sont exportées vers la Confédération suisse ou la principauté de Liechtenstein. [2] En ce qui concerne les produits agricoles obtenus par transformation d’un produit de base et/ou de produits assimilés, les coefficients fixés à l’annexe V du règlement (CE) no 1043/2005 de la Commission s’appliquent. [3] La marchandise concernée relève du code NC 35051050. [4] Marchandises reprises à l'annexe III du règlement (CE) no 1784/2003 ou visées à l'article 2 du règlement (CEE) no 2825/93 (JO L 258 du 16.10.1993, p. 6). [5] Pour les sirops des codes NC 17023099, 17024090 et 17026090, obtenus par mélange de sirops de glucose et fructose, seul le sirop de glucose a droit à la restitution à l'exportation. -------------------------------------------------- "
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_summ_fr model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 23 Thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
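For reference, an AdaFactor setup with a relative-step (inverse square root) schedule is available in the transformers library. The sketch below illustrates such a configuration and is not the authors' training script:
```python
# Sketch of an AdaFactor optimizer with the relative-step (inverse square
# root) schedule exposed by transformers; hyperparameters are illustrative.
from transformers import T5ForConditionalGeneration
from transformers.optimization import Adafactor, AdafactorSchedule
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = Adafactor(
    model.parameters(),
    lr=None, # let Adafactor derive the step size
    relative_step=True, # inverse square root decay of the learning rate
    warmup_init=True,
    scale_parameter=True,
)
lr_scheduler = AdafactorSchedule(optimizer) # proxy schedule for logging or Trainer
```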
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding) that is used with this model.
### Pretraining
## Evaluation results
When the model is used on the summarization test dataset, it achieves the following results:
Test results:
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_fr | 77.1|67.97 |75.74|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_summ_it | 2021-01-29T08:52:44.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"Italian",
"dataset:jrc-acquis",
"transformers",
"summarization Italian model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 20 | transformers |
---
language: Italian
tags:
- summarization Italian model
datasets:
- jrc-acquis
widget:
- text: "LA COMMISSIONE DELLE COMUNITÀ EUROPEE, visto il trattato che istituisce la Comunità europea, visto il regolamento (CEE) n. 2082/92 del Consiglio, del 14 luglio 1992, relativo alle attestazioni di specificità dei prodotti agricoli ed alimentari(1), in particolare l'articolo 9, paragrafo 1, considerando quanto segue: (1) A norma dell'articolo 7 del regolamento (CEE) n. 2082/92, la Finlandia ha trasmesso alla Commissione una domanda di registrazione della denominazione %quot%Kalakukko%quot% quale attestazione di specificità. (2) La dicitura %quot%specialità tradizionale garantita%quot% può applicarsi soltanto a denominazioni figuranti nel summenzionato albo. (3) Nessuna dichiarazione di opposizione, ai sensi dell'articolo 8 del summenzionato regolamento, è stata trasmessa alla Commissione a seguito della pubblicazione nella Gazzetta ufficiale delle Comunità europee(2) della denominazione figurante nell'allegato del presente regolamento. (4) Di conseguenza, la denominazione di cui all'allegato può essere iscritta nell'albo delle attestazioni di specificità e beneficiare pertanto della protezione a livello comunitario quale specialità tradizionale garantita nella Comunità in virtù dell'articolo 13, paragrafo 2, del regolamento (CEE) n. 2082/92. (5) L'allegato del presente regolamento completa l'allegato del regolamento (CE) n. 2301/97 della Commissione(3), modificato da ultimo dal regolamento (CE) n. 688/2002(4), HA ADOTTATO IL PRESENTE REGOLAMENTO: Articolo 1 La denominazione di cui all'allegato del presente regolamento è aggiunta all'allegato del regolamento (CE) n. 2301/97 e iscritta nell'albo delle attestazioni di specificità, conformemente all'articolo 9, paragrafo 1, del regolamento (CEE) n. 2082/92. Tale denominazione è protetta ai sensi dell'articolo 13, paragrafo 2, del summenzionato regolamento. Articolo 2 Il presente regolamento entra in vigore il ventesimo giorno successivo alla pubblicazione nella Gazzetta ufficiale delle Comunità europee. Il presente regolamento è obbligatorio in tutti i suoi elementi e direttamente applicabile in ciascuno degli Stati membri. Fatto a Bruxelles, il 15 luglio 2002. Per la Commissione Franz Fischler Membro della Commissione (1) GU L 208 del 24.7.1992, pag. 9. (2) GU C 235 del 21.8.2001, pag. 12. (3) GU L 319 del 21.11.1997, pag. 8. (4) GU L 106 del 23.4.2002, pag. 7. ALLEGATO Prodotti della panetteria, della pasticceria, della confetteria o della biscotteria - Kalakukko "
---
# legal_t5_small_summ_it model
Model for summarization of legal text written in Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on the jrc-acquis corpus.
## Model description
legal_t5_small_summ_it is based on the `t5-small` model and was trained on a large corpus of legal text. It is a smaller model that scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for summarization of legal texts written in Italian.
### How to use
Here is how to use this model to summarize legal text written in Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_summ_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "LA COMMISSIONE DELLE COMUNITÀ EUROPEE, visto il trattato che istituisce la Comunità europea, visto il regolamento (CEE) n. 2082/92 del Consiglio, del 14 luglio 1992, relativo alle attestazioni di specificità dei prodotti agricoli ed alimentari(1), in particolare l'articolo 9, paragrafo 1, considerando quanto segue: (1) A norma dell'articolo 7 del regolamento (CEE) n. 2082/92, la Finlandia ha trasmesso alla Commissione una domanda di registrazione della denominazione %quot%Kalakukko%quot% quale attestazione di specificità. (2) La dicitura %quot%specialità tradizionale garantita%quot% può applicarsi soltanto a denominazioni figuranti nel summenzionato albo. (3) Nessuna dichiarazione di opposizione, ai sensi dell'articolo 8 del summenzionato regolamento, è stata trasmessa alla Commissione a seguito della pubblicazione nella Gazzetta ufficiale delle Comunità europee(2) della denominazione figurante nell'allegato del presente regolamento. (4) Di conseguenza, la denominazione di cui all'allegato può essere iscritta nell'albo delle attestazioni di specificità e beneficiare pertanto della protezione a livello comunitario quale specialità tradizionale garantita nella Comunità in virtù dell'articolo 13, paragrafo 2, del regolamento (CEE) n. 2082/92. (5) L'allegato del presente regolamento completa l'allegato del regolamento (CE) n. 2301/97 della Commissione(3), modificato da ultimo dal regolamento (CE) n. 688/2002(4), HA ADOTTATO IL PRESENTE REGOLAMENTO: Articolo 1 La denominazione di cui all'allegato del presente regolamento è aggiunta all'allegato del regolamento (CE) n. 2301/97 e iscritta nell'albo delle attestazioni di specificità, conformemente all'articolo 9, paragrafo 1, del regolamento (CEE) n. 2082/92. Tale denominazione è protetta ai sensi dell'articolo 13, paragrafo 2, del summenzionato regolamento. Articolo 2 Il presente regolamento entra in vigore il ventesimo giorno successivo alla pubblicazione nella Gazzetta ufficiale delle Comunità europee. Il presente regolamento è obbligatorio in tutti i suoi elementi e direttamente applicabile in ciascuno degli Stati membri. Fatto a Bruxelles, il 15 luglio 2002. Per la Commissione Franz Fischler Membro della Commissione (1) GU L 208 del 24.7.1992, pag. 9. (2) GU C 235 del 21.8.2001, pag. 12. (3) GU L 319 del 21.11.1997, pag. 8. (4) GU L 106 del 23.4.2002, pag. 7. ALLEGATO Prodotti della panetteria, della pasticceria, della confetteria o della biscotteria - Kalakukko "
pipeline([it_text], max_length=512)
```
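The `device=0` argument assumes a GPU is available; on a CPU-only machine the usual transformers convention is `device=-1`, as in this variant of the pipeline construction:
```python
# CPU variant of the pipeline construction shown above; device=-1 selects the CPU.
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_it"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_summ_it", do_lower_case=False),
    device=-1,
)
```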
## Training data
The legal_t5_small_summ_it model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 22 Thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding) that is used with this model.
### Pretraining
## Evaluation results
When the model is used on the summarization test dataset, it achieves the following results:
Test results:
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_it | 75.07|65.53 |73.85|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_summ_multitask_cs | 2021-04-22T21:22:07.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers | |
SEBIS/legal_t5_small_summ_multitask_de | 2021-04-22T21:22:57.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers | |
SEBIS/legal_t5_small_summ_multitask_en | 2021-04-22T21:27:00.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 8 | transformers | |
SEBIS/legal_t5_small_summ_multitask_es | 2021-04-22T21:26:12.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_summ_multitask_fr | 2021-04-22T21:23:45.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_summ_multitask_it | 2021-04-22T21:24:33.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 7 | transformers | |
SEBIS/legal_t5_small_summ_multitask_sv | 2021-04-22T21:25:22.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers | |
SEBIS/legal_t5_small_summ_sv | 2021-01-29T08:52:46.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"Swedish",
"dataset:jrc-acquis",
"transformers",
"summarization Swedish model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 14 | transformers |
---
language: Swedish
tags:
- summarization Swedish model
datasets:
- jrc-acquis
widget:
- text: "EUROPEISKA GEMENSKAPERNAS RÅD HAR ANTAGIT DENNA FÖRORDNING med beaktande av Fördraget om upprättandet av Europeiska ekonomiska gemenskapen, särskilt artiklarna 43 och 100a i detta, med beaktande av kommissionens förslag(1), i samarbete med Europaparlamentet(2), med beaktande av Ekonomiska och sociala kommitténs yttrande(3), och med beaktande av följande: Det bör införas förbud mot användning av blybaserade kapsyler eller blybaserad folie i förslutningar på förpackningar som används då aromatiserade viner, aromatiserade vinbaserade drycker och aromatiserade drinkar baserade på vinprodukter släpps ut på marknaden i syfte att undvika risken för kontaminering, särskilt vid oavsiktlig kontakt med sådana produkter, samt risken för miljöförorening på grund av avfall som innehåller bly från kapsyler och folie av detta slag. Tillverkarna och användarna av kapsylerna och folien i fråga bör dock ges tid att anpassa sig genom att förbudet inte tillämpas förrän från och med den 1 januari 1993. Det är även nödvändigt att tillåta att produkter som före detta datum tappats på buteljer med blybaserade kapsyler eller blybaserad folie får säljas till dess att lagren är uttömda. Vissa definitioner av aromatiserade vinbaserade drycker bör anpassas så att större hänsyn tas till traditionella framställningsmetoder. Förordning (EEG) nr 1601/91(4) bör därför ändras. HÄRIGENOM FÖRESKRIVS FÖLJANDE. Artikel 1 Förordning (EEG) nr 1601/91 ändras på följande sätt: 1. Artikel 2.3 a första stycket skall ersättas med följande: %quot%a) Sangria: en dryck som framställs av vin - som smaksatts genom tillsats av naturliga extrakt eller essenser av citrusfrukt, - med eller utan saft av sådan frukt, - eventuellt: - med tillsats av kryddor, - sötat, - med tillsats av CO2, och med en slutlig alkoholstyrka på under 12 volymprocent.%quot% 2. Artikel 2.3 e skall ersättas med följande: %quot%e) Kalte Ente: Smaksatt vinbaserad dryck som framställs genom att vin, pärlande vin eller pärlande vin med tillsatt CO2 blandas med mousserande vin eller mousserande vin med tillsatt CO2 och tillsätts naturlig citronsubstans eller extrakt av detta som måste ge en tydligt framträdande smak. Slutprodukten måste innehålla minst 25 volymprocent mousserande vin eller mousserande vin med tillsatt CO2.%quot% 3. Följande punkt skall införas i artikel 8: %quot%4.a Från och med den 1 januari 1993 får buteljerade produkter som omfattas av denna förordning inte saluhållas eller släppas ut på marknaden i förpackningar med förslutningar som täckts med blybaserade kapsyler eller blybaserad folie. Dock får produkter som före detta datum tappats på flaskor med detta slag av kapsyler eller folie avyttras till dess att lagren tömts.%quot% Artikel 2 Denna förordning träder i kraft den tredje dagen efter det att den har offentliggjorts i Europeiska gemenskapernas officiella tidning. Denna förordning är till alla delar bindande och direkt tillämplig i alla medlemsstater. Utfärdad i Bryssel den 9 november 1992. På rådets vägnar D. HURD Ordförande (1) EGT nr C 69, 18.3.1992, s. 11. (2) EGT nr C 241, 21.9.1992, s. 97 och beslut av den 28 oktober 1992. (3) EGT nr C 169, 6.7.1992, s. 1. (4) EGT nr L 149, 14.6.1991, s. 1. "
---
# legal_t5_small_summ_sv model
Model for summarization of legal text written in Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on the jrc-acquis corpus.
## Model description
legal_t5_small_summ_sv is based on the `t5-small` model and was trained on a large corpus of legal text. It is a smaller model that scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for summarization of legal texts written in Swedish.
### How to use
Here is how to use this model to summarize legal text written in Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_summ_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "EUROPEISKA GEMENSKAPERNAS RÅD HAR ANTAGIT DENNA FÖRORDNING med beaktande av Fördraget om upprättandet av Europeiska ekonomiska gemenskapen, särskilt artiklarna 43 och 100a i detta, med beaktande av kommissionens förslag(1), i samarbete med Europaparlamentet(2), med beaktande av Ekonomiska och sociala kommitténs yttrande(3), och med beaktande av följande: Det bör införas förbud mot användning av blybaserade kapsyler eller blybaserad folie i förslutningar på förpackningar som används då aromatiserade viner, aromatiserade vinbaserade drycker och aromatiserade drinkar baserade på vinprodukter släpps ut på marknaden i syfte att undvika risken för kontaminering, särskilt vid oavsiktlig kontakt med sådana produkter, samt risken för miljöförorening på grund av avfall som innehåller bly från kapsyler och folie av detta slag. Tillverkarna och användarna av kapsylerna och folien i fråga bör dock ges tid att anpassa sig genom att förbudet inte tillämpas förrän från och med den 1 januari 1993. Det är även nödvändigt att tillåta att produkter som före detta datum tappats på buteljer med blybaserade kapsyler eller blybaserad folie får säljas till dess att lagren är uttömda. Vissa definitioner av aromatiserade vinbaserade drycker bör anpassas så att större hänsyn tas till traditionella framställningsmetoder. Förordning (EEG) nr 1601/91(4) bör därför ändras. HÄRIGENOM FÖRESKRIVS FÖLJANDE. Artikel 1 Förordning (EEG) nr 1601/91 ändras på följande sätt: 1. Artikel 2.3 a första stycket skall ersättas med följande: %quot%a) Sangria: en dryck som framställs av vin - som smaksatts genom tillsats av naturliga extrakt eller essenser av citrusfrukt, - med eller utan saft av sådan frukt, - eventuellt: - med tillsats av kryddor, - sötat, - med tillsats av CO2, och med en slutlig alkoholstyrka på under 12 volymprocent.%quot% 2. Artikel 2.3 e skall ersättas med följande: %quot%e) Kalte Ente: Smaksatt vinbaserad dryck som framställs genom att vin, pärlande vin eller pärlande vin med tillsatt CO2 blandas med mousserande vin eller mousserande vin med tillsatt CO2 och tillsätts naturlig citronsubstans eller extrakt av detta som måste ge en tydligt framträdande smak. Slutprodukten måste innehålla minst 25 volymprocent mousserande vin eller mousserande vin med tillsatt CO2.%quot% 3. Följande punkt skall införas i artikel 8: %quot%4.a Från och med den 1 januari 1993 får buteljerade produkter som omfattas av denna förordning inte saluhållas eller släppas ut på marknaden i förpackningar med förslutningar som täckts med blybaserade kapsyler eller blybaserad folie. Dock får produkter som före detta datum tappats på flaskor med detta slag av kapsyler eller folie avyttras till dess att lagren tömts.%quot% Artikel 2 Denna förordning träder i kraft den tredje dagen efter det att den har offentliggjorts i Europeiska gemenskapernas officiella tidning. Denna förordning är till alla delar bindande och direkt tillämplig i alla medlemsstater. Utfärdad i Bryssel den 9 november 1992. På rådets vägnar D. HURD Ordförande (1) EGT nr C 69, 18.3.1992, s. 11. (2) EGT nr C 241, 21.9.1992, s. 97 och beslut av den 28 oktober 1992. (3) EGT nr C 169, 6.7.1992, s. 1. (4) EGT nr L 149, 14.6.1991, s. 1. "
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_summ_sv model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 19 Thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
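For illustration only, here is a minimal sketch of how an AdaFactor optimizer with an inverse square root schedule can be set up with the `transformers` library; the model name is reused from this card, but the optimizer settings are assumptions rather than the exact training configuration:
```python
from transformers import T5ForConditionalGeneration
from transformers.optimization import Adafactor, AdafactorSchedule
model = T5ForConditionalGeneration.from_pretrained("SEBIS/legal_t5_small_summ_sv")
# Adafactor's relative-step mode applies an internal inverse square root
# learning rate schedule, matching the setup described above.
optimizer = Adafactor(
model.parameters(),
lr=None, # step size is derived internally when relative_step=True
scale_parameter=True,
relative_step=True,
warmup_init=True,
)
lr_scheduler = AdafactorSchedule(optimizer) # proxy schedule, e.g. for use with Trainer
```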
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) that is used with this model.
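As an illustration, the following sketch shows how a unigram SentencePiece vocabulary of this kind could be trained; the corpus file name and vocabulary size are assumptions, not the settings used for this model:
```python
import sentencepiece as spm
# Train a unigram SentencePiece model on a plain-text corpus
# (one sentence per line). File name and vocab size are placeholders.
spm.SentencePieceTrainer.train(
input="parallel_corpus.txt",
model_prefix="spiece",
model_type="unigram",
vocab_size=32000,
)
sp = spm.SentencePieceProcessor(model_file="spiece.model")
print(sp.encode("Denna förordning träder i kraft den tredje dagen.", out_type=str))
```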
### Pretraining
## Evaluation results
When the model is used on the summarization test dataset, it achieves the following results:
Test results :
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_sv | 78.84 | 69.97 | 77.59 |
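For reference, Rouge scores of this kind can be computed with the `rouge_score` package; the strings below are placeholders, not items from the actual test set:
```python
from rouge_score import rouge_scorer
# Score a single prediction against a reference with the Rouge variants
# reported above; both strings are placeholders.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"], use_stemmer=True)
reference = "Blybaserade kapsyler och blybaserad folie förbjuds från och med den 1 januari 1993."
prediction = "Från den 1 januari 1993 förbjuds blybaserade kapsyler på förpackningar."
scores = scorer.score(reference, prediction)
for name, score in scores.items():
print(name, round(score.fmeasure, 4))
```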
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_de | 2021-01-29T08:52:57.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"Cszech Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 25 | transformers |
---
language: Cszech Deustch
tags:
- translation Cszech Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Konečná zpráva bude Parlamentu předložena na konci nového funkčního období."
---
# legal_t5_small_trans_cs_de model
Model for translating legal text from Czech to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_cs_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
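To make these dimensions concrete, here is a minimal sketch that instantiates a T5 configuration with the values above; the vocabulary size is an assumption, as the real value comes from the SentencePiece model:
```python
from transformers import T5Config, T5ForConditionalGeneration
# Scaled-down architecture described above:
# d_model=512, d_ff=2048, 8 attention heads, 6 encoder and 6 decoder layers.
config = T5Config(
d_model=512,
d_ff=2048,
num_heads=8,
num_layers=6,
num_decoder_layers=6,
vocab_size=32128, # assumed
)
model = T5ForConditionalGeneration(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```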
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to German.
### How to use
Here is how to use this model to translate legal text from Czech to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_cs_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Konečná zpráva bude Parlamentu předložena na konci nového funkčního období."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) that is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_de | 44.69 |
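For reference, corpus-level BLEU of this kind can be computed with `sacrebleu`; the sentences below are placeholders, not items from the actual test set:
```python
import sacrebleu
# Corpus-level BLEU of a hypothesis list against one reference list.
hypotheses = ["Der Abschlussbericht wird dem Parlament am Ende der neuen Wahlperiode vorgelegt."]
references = [["Der endgültige Bericht wird dem Parlament am Ende der neuen Amtszeit vorgelegt."]]
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 2))
```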
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_de_small_finetuned | 2021-04-16T08:00:42.000Z | [
"pytorch",
"t5",
"seq2seq",
"Cszech Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Deustch model",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model"
]
| SEBIS | 6 | transformers |
---
language: Cszech Deustch
tags:
- translation Cszech Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Vzhledem k tomu, že tento právní předpis bude přímo použitelný v členských státech a zavede mnoho povinností pro ty, na něž se vztahuje, je žádoucí, aby se jim poskytlo více času na přizpůsobení se těmto novým pravidlům."
---
# legal_t5_small_trans_cs_de_small_finetuned model
Model for translating legal text from Czech to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_cs_de_small_finetuned was initially pretrained on an unsupervised task using all of the data from the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_cs_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to German.
### How to use
Here is how to use this model to translate legal text from Czech to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_de_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_cs_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Vzhledem k tomu, že tento právní předpis bude přímo použitelný v členských státech a zavede mnoho povinností pro ty, na něž se vztahuje, je žádoucí, aby se jim poskytlo více času na přizpůsobení se těmto novým pravidlům."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_de_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had data from all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) that is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
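As a rough illustration of this objective, the sketch below masks random spans of a tokenised sentence with T5-style sentinel tokens; the helper function and its hyperparameters are assumptions, not the actual pre-training code:
```python
import random
def mask_random_spans(tokens, mask_prob=0.15, span_length=3):
"""Replace random spans with T5-style sentinel tokens and return the
(corrupted input, target) pair. Hyperparameters are illustrative only."""
inputs, targets, sentinel, i = [], [], 0, 0
while i < len(tokens):
if random.random() < mask_prob:
span = tokens[i:i + span_length]
inputs.append(f"<extra_id_{sentinel}>")
targets.append(f"<extra_id_{sentinel}>")
targets.extend(span)
sentinel += 1
i += len(span)
else:
inputs.append(tokens[i])
i += 1
return " ".join(inputs), " ".join(targets)
tokens = "Vzhledem k tomu že tento právní předpis bude přímo použitelný v členských státech".split()
corrupted, target = mask_random_spans(tokens)
print(corrupted)
print(target)
```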
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_de_small_finetuned | 44.175 |
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|