Dataset columns: `pipeline_tag` (string, 48 classes) · `library_name` (string, 205 classes) · `text` (string, 0–18.3M chars) · `metadata` (string, 2–1.07B chars) · `id` (string, 5–122 chars) · `last_modified` (null) · `tags` (list, 1–1.84k items) · `sha` (null) · `created_at` (string, 25 chars)
summarization
transformers
# CodeTrans model for source code summarization (SQL)

Pretrained model on the SQL programming language, using the t5-small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized SQL code functions and works best with tokenized SQL functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model can be used to generate a description for a SQL function, or be fine-tuned on other SQL code tasks. It works on unparsed and untokenized SQL code, but performance should be better if the SQL code is tokenized.

### How to use

Here is how to use this model to generate SQL function documentation using the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```

Run this example in [this colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/sql/small_model.ipynb).

## Training data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 460,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

## Evaluation results

For the source code summarization tasks, the different models achieve the following results on different programming languages (in BLEU score):

Test results:

| Language / Model      | Python    | SQL       | C#        |
| --------------------- | :-------: | :-------: | :-------: |
| CodeTrans-ST-Small    | 8.45      | 17.55     | 19.74     |
| CodeTrans-ST-Base     | 9.12      | 15.00     | 18.65     |
| CodeTrans-TF-Small    | 10.06     | 17.71     | 20.40     |
| CodeTrans-TF-Base     | 10.94     | 17.66     | 21.12     |
| CodeTrans-TF-Large    | 12.41     | 18.40     | 21.43     |
| CodeTrans-MT-Small    | 13.11     | 19.15     | 22.39     |
| CodeTrans-MT-Base     | **13.37** | 19.24     | 23.20     |
| CodeTrans-MT-Large    | 13.24     | 19.40     | **23.57** |
| CodeTrans-MT-TF-Small | 12.10     | 18.25     | 22.03     |
| CodeTrans-MT-TF-Base  | 10.64     | 16.91     | 21.40     |
| CodeTrans-MT-TF-Large | 12.14     | **19.98** | 21.10     |
| CODE-NN               | --        | 18.40     | 20.50     |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
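The card recommends feeding tokenized SQL, but does not ship a raw-SQL tokenizer. Below is a minimal, illustrative sketch of one way to approximate that preprocessing with a regex; it is an assumption for demonstration, not the tokenizer used to build the training data.

```python
import re

def tokenize_sql(sql: str) -> str:
    """Crude SQL tokenizer: lowercases and splits punctuation from identifiers.

    Illustrative approximation only; not the project's official preprocessing.
    """
    sql = sql.strip().lower()
    # Put spaces around parentheses, commas, and common operators.
    sql = re.sub(r"([(),;*=<>])", r" \1 ", sql)
    # Collapse runs of whitespace into single spaces.
    return re.sub(r"\s+", " ", sql).strip()

print(tokenize_sql("SELECT TIME(col0) FROM tab0"))
# -> "select time ( col0 ) from tab0", matching the tokenized_code example above
```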
{"tags": ["summarization"], "widget": [{"text": "select time ( col0 ) from tab0"}]}
SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
summarization
transformers
# CodeTrans model for source code summarization (SQL)

Pretrained model on the SQL programming language, using the t5-small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized SQL code functions and works best with tokenized SQL functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It was then fine-tuned on the source code summarization task for SQL code snippets.

## Intended uses & limitations

The model can be used to generate a description for a SQL function, or be fine-tuned on other SQL code tasks. It works on unparsed and untokenized SQL code, but performance should be better if the SQL code is tokenized.

### How to use

Here is how to use this model to generate SQL function documentation using the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```

Run this example in [this colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/sql/small_model.ipynb).

## Training data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 1,200 steps in total, using sequence length 512 (batch size 256) and only the dataset containing SQL code.

## Evaluation results

For the source code summarization tasks, the different models achieve the following results on different programming languages (in BLEU score):

Test results:

| Language / Model      | Python    | SQL       | C#        |
| --------------------- | :-------: | :-------: | :-------: |
| CodeTrans-ST-Small    | 8.45      | 17.55     | 19.74     |
| CodeTrans-ST-Base     | 9.12      | 15.00     | 18.65     |
| CodeTrans-TF-Small    | 10.06     | 17.71     | 20.40     |
| CodeTrans-TF-Base     | 10.94     | 17.66     | 21.12     |
| CodeTrans-TF-Large    | 12.41     | 18.40     | 21.43     |
| CodeTrans-MT-Small    | 13.11     | 19.15     | 22.39     |
| CodeTrans-MT-Base     | **13.37** | 19.24     | 23.20     |
| CodeTrans-MT-Large    | 13.24     | 19.40     | **23.57** |
| CodeTrans-MT-TF-Small | 12.10     | 18.25     | 22.03     |
| CodeTrans-MT-TF-Base  | 10.64     | 16.91     | 21.40     |
| CodeTrans-MT-TF-Large | 12.14     | **19.98** | 21.10     |
| CODE-NN               | --        | 18.40     | 20.50     |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
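`AutoModelWithLMHead` is deprecated in recent versions of transformers. As a sketch, the same generation can be expressed with the current seq2seq classes; the decoding parameters below are illustrative choices, and equivalence with the pipeline above is assumed rather than verified against this checkpoint.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask_finetune"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Tokenized SQL input, as the card recommends.
inputs = tokenizer("select time ( col0 ) from tab0", return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```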
{"tags": ["summarization"], "widget": [{"text": "select time ( col0 ) from tab0"}]}
SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
summarization
transformers
# CodeTrans model for source code summarization (SQL)

Pretrained model on the SQL programming language, using the t5-small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized SQL code functions and works best with tokenized SQL functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It was then fine-tuned on the source code summarization task for SQL code snippets.

## Intended uses & limitations

The model can be used to generate a description for a SQL function, or be fine-tuned on other SQL code tasks. It works on unparsed and untokenized SQL code, but performance should be better if the SQL code is tokenized.

### How to use

Here is how to use this model to generate SQL function documentation using the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```

Run this example in [this colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/sql/small_model.ipynb).

## Training data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 1,000 steps in total, using sequence length 512 (batch size 256) and only the dataset containing SQL code.

## Evaluation results

For the source code summarization tasks, the different models achieve the following results on different programming languages (in BLEU score):

Test results:

| Language / Model      | Python    | SQL       | C#        |
| --------------------- | :-------: | :-------: | :-------: |
| CodeTrans-ST-Small    | 8.45      | 17.55     | 19.74     |
| CodeTrans-ST-Base     | 9.12      | 15.00     | 18.65     |
| CodeTrans-TF-Small    | 10.06     | 17.71     | 20.40     |
| CodeTrans-TF-Base     | 10.94     | 17.66     | 21.12     |
| CodeTrans-TF-Large    | 12.41     | 18.40     | 21.43     |
| CodeTrans-MT-Small    | 13.11     | 19.15     | 22.39     |
| CodeTrans-MT-Base     | **13.37** | 19.24     | 23.20     |
| CodeTrans-MT-Large    | 13.24     | 19.40     | **23.57** |
| CodeTrans-MT-TF-Small | 12.10     | 18.25     | 22.03     |
| CodeTrans-MT-TF-Base  | 10.64     | 16.91     | 21.40     |
| CodeTrans-MT-TF-Large | 12.14     | **19.98** | 21.10     |
| CODE-NN               | --        | 18.40     | 20.50     |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
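The snippet above hardcodes `device=0`, which assumes a CUDA GPU. Transformers pipelines run on CPU by default, so a CPU fallback only needs `device=-1` (or simply omitting the argument); a small sketch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

# Same pipeline as above, but with explicit CPU placement (device=-1).
pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune", skip_special_tokens=True),
    device=-1,
)

print(pipeline(["select time ( col0 ) from tab0"]))
```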
{"tags": ["summarization"], "widget": [{"text": "select time ( col0 ) from tab0"}]}
SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
# CodeTrans transfer learning pre-trained model

Pretrained model on programming languages using the t5-small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans).

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

It can be used as a starting point for fine-tuning on other tasks in the software development domain.

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
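Since this checkpoint is meant as a starting point for downstream fine-tuning, here is a minimal, illustrative fine-tuning step. The (input, target) pair and generation settings are placeholders, not real training data; the Adafactor configuration with relative (inverse-square-root) steps mirrors the optimizer described above, but the exact hyperparameters are assumptions.

```python
from transformers import Adafactor, AutoTokenizer, T5ForConditionalGeneration

ckpt = "SEBIS/code_trans_t5_small_transfer_learning_pretrain"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = T5ForConditionalGeneration.from_pretrained(ckpt)

# Toy (input, target) pair standing in for a real downstream dataset.
batch = tokenizer(["select time ( col0 ) from tab0"], return_tensors="pt")
labels = tokenizer(["selects the time value of col0"], return_tensors="pt").input_ids

# Adafactor with relative steps gives the inverse-square-root schedule
# mentioned in the card; warmup_init is an assumed setting.
optimizer = Adafactor(model.parameters(), lr=None, relative_step=True, warmup_init=True)

loss = model(input_ids=batch.input_ids,
             attention_mask=batch.attention_mask,
             labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```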
{}
SEBIS/code_trans_t5_small_transfer_learning_pretrain
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_cls_cs model

Model for classification of legal text written in Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from JRC-Acquis.

## Model description

legal_t5_small_cls_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model that scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for classification of legal texts written in Czech.

### How to use

Here is how to use this model to classify legal text written in Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_cs"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_cs", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "Bez námitek k navrhovanému spojení (Případ č. COMP/M.4169 – Virgin/CPW/JV) (2006/C 103/16) (Text s významem pro EHP) Dne 29. března 2006 se Komise rozhodla nevznést námitky proti výše uvedenému spojení a prohlásit ho za slučitelné se společným trhem. Toto rozhodnutí je založeno na čl. 6 odst. 1 písm. b) nařízení Rady (ES) č. 139/2004. Celý text rozhodnutí je přístupný pouze v angličtině a bude uveřejněn poté, co bude zbaven obchodního tajemství, které může případně obsahovat. Text bude dosažitelný: - na webové stránce Europa – hospodářská soutěž (http://europa.eu.int/comm/competition/mergers/cases/). Tato webová stránka umožňuje vyhledat jednotlivá rozhodnutí o spojení, a to včetně společnosti, čísla případu, data a indexu odvětví hospodářství. - v elektronické podobě na webové stránce EUR-Lex, pod dokumentem č. 32006M4169. EUR-Lex umožňuje přístup k Evropskému právu přes Internet. (http://europa.eu.int/eur-lex/lex) --------------------------------------------------"

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_cls_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 18 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding) that is used with this model.

## Evaluation results

When used on the classification test dataset, the model achieves the following results:

Test results:

| Model | F1 score |
|:---------------------:|:--------:|
| legal_t5_small_cls_cs | 0.6297 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
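JRC-Acquis documents are often much longer than the 512-token training length, so explicit truncation is safer than relying on pipeline defaults. A sketch of classifying with direct `generate()` and truncation, reusing `cs_text` from the snippet above; the assumption that the model emits a short label string (hence `max_length=8`) is illustrative, not stated by the card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "SEBIS/legal_t5_small_cls_cs"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

# Truncate explicitly to the 512-token training length.
inputs = tokenizer(cs_text, return_tensors="pt", truncation=True, max_length=512)
label_ids = model.generate(**inputs, max_length=8)  # assumed short label output
print(tokenizer.decode(label_ids[0], skip_special_tokens=True))
```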
{"language": "Cszech", "tags": ["classification Cszech model"], "datasets": ["jrc-acquis"], "widget": [{"text": "Bez n\u00e1mitek k navrhovan\u00e9mu spojen\u00ed (P\u0159\u00edpad \u010d. COMP/M.4169 \u2013 Virgin/CPW/JV) (2006/C 103/16) (Text s v\u00fdznamem pro EHP) Dne 29. b\u0159ezna 2006 se Komise rozhodla nevzn\u00e9st n\u00e1mitky proti v\u00fd\u0161e uveden\u00e9mu spojen\u00ed a prohl\u00e1sit ho za slu\u010diteln\u00e9 se spole\u010dn\u00fdm trhem. Toto rozhodnut\u00ed je zalo\u017eeno na \u010dl. 6 odst. 1 p\u00edsm. b) na\u0159\u00edzen\u00ed Rady (ES) \u010d. 139/2004. Cel\u00fd text rozhodnut\u00ed je p\u0159\u00edstupn\u00fd pouze v angli\u010dtin\u011b a bude uve\u0159ejn\u011bn pot\u00e9, co bude zbaven obchodn\u00edho tajemstv\u00ed, kter\u00e9 m\u016f\u017ee p\u0159\u00edpadn\u011b obsahovat. Text bude dosa\u017eiteln\u00fd: - na webov\u00e9 str\u00e1nce Europa \u2013 hospod\u00e1\u0159sk\u00e1 sout\u011b\u017e (http://europa.eu.int/comm/competition/mergers/cases/). Tato webov\u00e1 str\u00e1nka umo\u017e\u0148uje vyhledat jednotliv\u00e1 rozhodnut\u00ed o spojen\u00ed, a to v\u010detn\u011b spole\u010dnosti, \u010d\u00edsla p\u0159\u00edpadu, data a indexu odv\u011btv\u00ed hospod\u00e1\u0159stv\u00ed. - v elektronick\u00e9 podob\u011b na webov\u00e9 str\u00e1nce EUR-Lex, pod dokumentem \u010d. 32006M4169. EUR-Lex umo\u017e\u0148uje p\u0159\u00edstup k Evropsk\u00e9mu pr\u00e1vu p\u0159es Internet. (http://europa.eu.int/eur-lex/lex) --------------------------------------------------"}]}
SEBIS/legal_t5_small_cls_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "classification Cszech model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_cls_de model

Model for classification of legal text written in German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from JRC-Acquis.

## Model description

legal_t5_small_cls_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model that scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for classification of legal texts written in German.

### How to use

Here is how to use this model to classify legal text written in German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_de"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_de", do_lower_case=False, skip_special_tokens=True),
    device=0
)

de_text = "BESCHLUSS DES RATES vom 17. Dezember 1999 über den Abschluß des Abkommens in Form eines Briefwechsels zwischen der Europäischen Gemeinschaft und der Tunesischen Republik über die Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft (1999/873/EG) DER RAT DER EUROPÄISCHEN UNION - gestützt auf den Vertrag zur Gründung der Europäischen Gemeinschaft, insbesondere auf Artikel 133 in Verbindung mit Artikel 300 Absatz 2 Unterabsatz 1, auf Vorschlag der Kommission, in Erwägung nachstehender Gründe: (1) Zwischen der Europäischen Gemeinschaft und der Tunesischen Republik wurde ein Abkommen in Form eines Briefwechsels ausgehandelt, um die Geltungsdauer der Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft, die in Artikel 3 des Protokolls Nr. 1 des Europa-Mittelmeer-Abkommens zur Gründung einer Assoziation zwischen der Europäischen Gemeinschaft und ihren Mitgliedstaaten einerseits und der Tunesischen Republik andererseits(1) vorgesehen ist, für die Zeit vom 1. Januar bis zum 31. Dezember 2000 zu verlängern. (2) Das Abkommen sollte im Namen der Gemeinschaft genehmigt werden - BESCHLIESST: Artikel 1 Das Abkommen in Form eines Briefwechsels zwischen der Europäischen Gemeinschaft und der Tunesischen Republik über die Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft wird im Namen der Gemeinschaft genehmigt. Der Wortlaut des Abkommens ist diesem Beschluß beigefügt. Artikel 2 Der Präsident des Rates wird ermächtigt, die Person zu bestellen, die befugt ist, das Abkommen rechtsverbindlich für die Gemeinschaft zu unterzeichnen. Geschehen zu Brüssel am 17. Dezember 1999. Im Namen des Rates Der Präsident K. HEMILÄ (1) ABl. L 97 vom 30.3.1998, S. 1."

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_cls_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 23 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding) that is used with this model.

## Evaluation results

When used on the classification test dataset, the model achieves the following results:

Test results:

| Model | F1 score |
|:---------------------:|:--------:|
| legal_t5_small_cls_de | 0.6358 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
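The cards report a single F1 score without stating the averaging method or the label set. A minimal sketch of how such a score could be computed from generated labels, assuming macro averaging and exact string match; the label strings below are invented placeholders, not the model's actual classes:

```python
from sklearn.metrics import f1_score

# Hypothetical gold and predicted label strings for a held-out set.
gold = ["agriculture", "trade", "environment", "trade"]
pred = ["agriculture", "environment", "environment", "trade"]

# Macro averaging is an assumption; the card does not specify it.
print(f1_score(gold, pred, average="macro"))
```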
{"language": "Deustch", "tags": ["classification Deustch model"], "datasets": ["jrc-acquis"], "widget": [{"text": "BESCHLUSS DES RATES vom 17. Dezember 1999 \u00fcber den Abschlu\u00df des Abkommens in Form eines Briefwechsels zwischen der Europ\u00e4ischen Gemeinschaft und der Tunesischen Republik \u00fcber die Regelung f\u00fcr die Einfuhr von nicht behandeltem Oliven\u00f6l mit Ursprung in Tunesien in die Gemeinschaft (1999/873/EG) DER RAT DER EUROP\u00c4ISCHEN UNION - gest\u00fctzt auf den Vertrag zur Gr\u00fcndung der Europ\u00e4ischen Gemeinschaft, insbesondere auf Artikel 133 in Verbindung mit Artikel 300 Absatz 2 Unterabsatz 1, auf Vorschlag der Kommission, in Erw\u00e4gung nachstehender Gr\u00fcnde: (1) Zwischen der Europ\u00e4ischen Gemeinschaft und der Tunesischen Republik wurde ein Abkommen in Form eines Briefwechsels ausgehandelt, um die Geltungsdauer der Regelung f\u00fcr die Einfuhr von nicht behandeltem Oliven\u00f6l mit Ursprung in Tunesien in die Gemeinschaft, die in Artikel 3 des Protokolls Nr. 1 des Europa-Mittelmeer-Abkommens zur Gr\u00fcndung einer Assoziation zwischen der Europ\u00e4ischen Gemeinschaft und ihren Mitgliedstaaten einerseits und der Tunesischen Republik andererseits(1) vorgesehen ist, f\u00fcr die Zeit vom 1. Januar bis zum 31. Dezember 2000 zu verl\u00e4ngern. (2) Das Abkommen sollte im Namen der Gemeinschaft genehmigt werden - BESCHLIESST: Artikel 1 Das Abkommen in Form eines Briefwechsels zwischen der Europ\u00e4ischen Gemeinschaft und der Tunesischen Republik \u00fcber die Regelung f\u00fcr die Einfuhr von nicht behandeltem Oliven\u00f6l mit Ursprung in Tunesien in die Gemeinschaft wird im Namen der Gemeinschaft genehmigt. Der Wortlaut des Abkommens ist diesem Beschlu\u00df beigef\u00fcgt. Artikel 2 Der Pr\u00e4sident des Rates wird erm\u00e4chtigt, die Person zu bestellen, die befugt ist, das Abkommen rechtsverbindlich f\u00fcr die Gemeinschaft zu unterzeichnen. Geschehen zu Br\u00fcssel am 17. Dezember 1999. Im Namen des Rates Der Pr\u00e4sident K. HEMIL\u00c4 (1) ABl. L 97 vom 30.3.1998, S. 1."}]}
SEBIS/legal_t5_small_cls_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "classification Deustch model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_cls_en model

Model for classification of legal text written in English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from JRC-Acquis.

## Model description

legal_t5_small_cls_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model that scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for classification of legal texts written in English.

### How to use

Here is how to use this model to classify legal text written in English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_en"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_en", do_lower_case=False, skip_special_tokens=True),
    device=0
)

en_text = "Appointment of members of the Conciliation Body instituted by Commission Decision 94/442/EC of 1 July 1994 setting up a conciliation procedure in the context of the clearance of the accounts of the European Agricultural Guidance and Guarantee Fund (EAGGF) Guarantee Section (2006/C 193/09) (1) The Commission has renewed the term of office of: Mr José Luis SAENZ GARCIA-BAQUERO (ES) (from 1 August 2006 to 31 July 2007). (2) The Commission has appointed as members: - Mr Peter BAUMANN (DA) (from 1 August 2006 to 31 July 2009); - Mr Daniel PERRIN (FR) (from 1 August 2006 to 31 July 2009). (3) The Commission has appointed as substitute members: - Mr Robert BURIAN (A) (from 1 August 2006); - Mr Eduardo DIEZ PATIER (ES) (from 1 August 2006). --------------------------------------------------"

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_cls_en model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 19 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding) that is used with this model.

## Evaluation results

When used on the classification test dataset, the model achieves the following results:

Test results:

| Model | F1 score |
|:---------------------:|:--------:|
| legal_t5_small_cls_en | 0.6247 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "English", "tags": ["classification English model"], "datasets": ["jrc-acquis"], "widget": [{"text": "Appointment of members of the Conciliation Body instituted by Commission Decision 94/442/EC of 1 July 1994 setting up a conciliation procedure in the context of the clearance of the accounts of the European Agricultural Guidance and Guarantee Fund (EAGGF) Guarantee Section (2006/C 193/09) (1) The Commission has renewed the term of office of: Mr Jos\u00e9 Luis SAENZ GARCIA-BAQUERO (ES) (from 1 August 2006 to 31 July 2007). (2) The Commission has appointed as members: - Mr Peter BAUMANN (DA) (from 1 August 2006 to 31 July 2009); - Mr Daniel PERRIN (FR) (from 1 August 2006 to 31 July 2009). (3) The Commission has appointed as substitute members: - Mr Robert BURIAN (A) (from 1 August 2006); - Mr Eduardo DIEZ PATIER (ES) (from 1 August 2006). --------------------------------------------------"}]}
SEBIS/legal_t5_small_cls_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "classification English model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_cls_es model

Model for classification of legal text written in Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from JRC-Acquis.

## Model description

legal_t5_small_cls_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model that scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for classification of legal texts written in Spanish.

### How to use

Here is how to use this model to classify legal text written in Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_es"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_es", do_lower_case=False, skip_special_tokens=True),
    device=0
)

es_text = "Reglamento (CE) no 90/2001 de la Comisión de 17 de enero de 2001 que modifica el Reglamento (CE) n° 800/1999 por el que se establecen disposiciones comunes de aplicación del régimen de restituciones por exportación de productos agrícolas LA COMISIÓN DE LAS COMUNIDADES EUROPEAS, Visto el Tratado constitutivo de la Comunidad Europea, Visto el Reglamento (CEE) n° 1766/92 del Consejo, de 30 de junio de 1992, por el que se establece la organización común de mercados en el sector de los cereales(1), cuya última modificación la constituye el Reglamento (CE) n° 1666/2000(2), y, en particular, sus artículos 13 y 21, así como las disposiciones correspondientes de los demás Reglamentos por los que se establecen organizaciones comunes de mercados de productos agrícolas, Considerando lo siguiente: (1) En el caso de exportación de productos presentados a granel o en unidades no normalizadas, en los que es evidente que la masa neta exacta de los productos no puede conocerse hasta después de cargar el medio de transporte, el apartado 6 del artículo 5 del Reglamento (CE) n° 800/1999 de la Comisión(3), modificado por el Reglamento (CE) n° 1557/2000(4) establece la aplicación de una reducción de la restitución cuando la masa neta efectivamente cargada sea inferior a un determinado porcentaje de la masa neta estimada. No obstante, para la aplicación de esta disposición conviene tener en cuenta las limitaciones inherentes a los medios de transporte de navegación marítima o interior. En efecto, en el caso de los productos exportados a granel, puede ocurrir que las cantidades declaradas no se carguen en su totalidad debido, en particular, a la decisión del responsable del medio de transporte que puede ordenar la suspensión de la carga por razones técnicas o debido a un exceso de carga imputable a los demás exportadores. (2) Dado que determinados cortes de carne de porcino no se presentan en embalajes ni son, por naturaleza, homogéneos, conviene ampliar la categoría de unidades no normalizadas a este tipo de productos. (3) En lo que respecta a la noción de lugar de carga, en el comercio de exportación de productos agrícolas se presenta una multitud de situaciones comerciales y administrativas; por consiguiente, es difícil establecer una norma única y conviene autorizar a los Estados miembros para que determinen el lugar más apropiado para efectuar los controles físicos para los productos agrícolas exportados que se benefician de una restitución. A estos efectos, parece justificado determinar el lugar de carga, de forma diferente, en función de que los productos sean cargados en contenedores o, por el contrario, a granel, en sacos o en cajas y no se carguen posteriormente en contenedores. Asimismo, es conveniente que, cuando existan motivos debidamente justificados, se permita que las autoridades aduaneras acepten para los productos agrícolas que se beneficien, de una restitución declaraciones de exportación presentadas en una oficina de aduanas que no sea la del lugar donde vayan a cargarse los productos. (4) En el caso de los productos sujetos al régimen de mercancías de retorno, es oportuno prever la posibilidad de que la reintroducción se efectúe, bien por el Estado miembros del que sean originarios los productos, bien por el Estado miembro exportador de la primera exportación. (5) Conviene modificar el Reglamento (CE) n° 800/1999 en consecuencia. (6) Las medidas previstas en el presente Reglamento se ajustan al dictamen de todos los Comités de gestión interesados. HA ADOPTADO EL PRESENTE REGLAMENTO: Artículo 1 El Reglamento (CE) n° 800/1999 se modificará como sigue: 1) En el apartado 6 del articulo 5, el párrafo tercero se sustituirá por el texto siguiente: %quot%No se concederá ninguna restitución por la cantidad que sobrepase el 110 % de la masa neta estimada. Cuando la masa efectivamente cargada sea inferior al 90 % de la masa neta estimada, la restitución por la masa neta efectivamente cargada se reducirá un 10 % en relación con la diferencia entre la restitución correspondiente al 90 % de la masa neta estimada y la restitución correspondiente a la masa efectivamente cargada. No obstante, en los casos de exportación par vía marítima o por vía navegable interior, la restitución se pagará por la masa neta efectivamente cargada cuando el exportador pueda aportar la prueba, refrendada por el responsable del medio de transporte, de que el hecho de que no se cargara la totalidad de sus mercancías se debió a las limitaciones inherentes a ese tipo de transporte o a un exceso de carga imputable a uno o a varios de los demás exportadores. En caso de que el exportador haya utilizado el procedimiento de domiciliación previsto en el artículo 283 del Reglamento (CEE) n° 2454/93 serán aplicables las disposiciones del presente párrafo siempre que las autoridades aduaneras hayan autorizado la rectificación de los documentos contables en los que los productos exportados hayan sido inscritos.%quot%. 2) En el apartado 6 del artículo 5, el párrafo cuarto se sustituirá por el texto siguiente: %quot%Se considerarán productos en unidades no estandarizadas los animales vivos, las (medias) canales, los cuartos, partes delanteras, jamones, paletillas, pechos y lomos.%quot%. 3) El apartado 7 del articulo 5 se sustituirá por el texto siguiente: %quot%7. Cualquier persona que exporte productos por los cuales solicite la concesión de la restitución estará obligada a lo siguiente: a) presentar la declaración de exportación en la oficina de aduanas competente del lugar en que los productos vayan a cargarse en el transporte que vaya a efectuar la exportación; b) informar a dicha oficina de aduanas, coma mínimo 24 horas antes del comienzo de las operaciones de carga, e indicar la duración prevista de las operaciones de carga; las autoridades competentes podrán modificar el plazo de 24 horas. Se podrá considerar como lugar de carga en el transporte de los productos destinados a la exportación: - en el caso de los productos que se exporten cargados en contenedores, el lugar donde se carguen en éstos las mercancías, - en el caso de los productos que se exporten a granel, en sacos, cajones, cajas, botellas, etc. sin cargarse en contenedores, el lugar donde se cargue el medio de transporte por el que las mercancías vayan a salir del territorio aduanero de la Comunidad. La oficina de aduanas competente podrá autorizar las operaciones de carga una vez aceptada la declaración de exportación y antes de finalizar el plazo a que se refiere la letra b). La oficina de aduanas competente deberá estar en condiciones de realizar el control físico y de aplicar las medidas de identificación necesarias para el transporte hacia la oficina de salida del territorio aduanero de la Comunidad. Si por razones de organización administrativa o por otras razones debidamente justificadas, no pueden aplicarse las disposiciones del párrafo primero, la declaración de exportación, sólo podrá ser presentada en la oficina de aduanas competente del Estado miembro en cuestión, y, en el caso de un control físico de conformidad con el Reglamento (CEE) n° 386/90, el producto presentado deberá ser descargado completamente. No obstante, la descarga completa no será obligatoria cuando las autoridades competentes puedan garantizar la realización de un control físico exhaustivo.%quot%. 4) En el apartado 3 del artículo 25, el último párrafo se sustituirá por el texto siguiente: %quot%La presente disposición sólo se aplicará cuando el régimen de retorno haya sido utilizado en el Estado miembro donde se haya aceptado la declaración de exportación de la primera exportación o en el Estado miembro de origen, de conformidad con el artículo 15 de la Directiva 97/78/CE del Consejo(5), por la que se establecen los principios relativos a la organización de controles veterinarios de los productos que se introduzcan en la Comunidad procedentes de terceros países.%quot%. Artículo 2 El presente Reglamento entrará en vigor el séptimo día siguiente al de su publicación en el Diario Oficial de las Comunidades Europeas. A petición de los exportadores, las disposiciones del apartado 1 del articulo 1 se aplicarán a los expedientes de restituciones que aún no hayan sido cerrados en el momento de la entrada en vigor del presente Reglamento. El presente Reglamento será obligatorio en todos sus elementos y directamente aplicable en cada Estado miembro. Hecho en Bruselas, el 17 de enero de 2001. Por la Comisión Franz Fischler Miembro de la Comisión (1) DO L 181 de 1.7.1992, p. 21. (2) DO L 193 de 29.7.2000, p. 1. (3) DO L 102 de 17.4.1999, p. 11. (4) DO L 179 de 18.7.2000, p. 6. (5) DO L 24 de 30.1.1998, p. 9."

pipeline([es_text], max_length=512)
```

## Training data

The legal_t5_small_cls_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 22 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding) that is used with this model.

## Evaluation results

When used on the classification test dataset, the model achieves the following results:

Test results:

| Model | F1 score |
|:---------------------:|:--------:|
| legal_t5_small_cls_es | 0.6318 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
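The preprocessing sections above describe a unigram vocabulary model trained on 88M lines of the parallel corpus. A sketch of how such a vocabulary could be built with the SentencePiece library; the corpus path and vocabulary size below are assumptions, not the project's actual settings:

```python
import sentencepiece as spm

# Hypothetical corpus file standing in for the 88M-line parallel corpus.
spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",
    model_prefix="legal_t5_small_vocab",
    model_type="unigram",
    vocab_size=32000,  # assumed; the cards do not state the vocabulary size
)

sp = spm.SentencePieceProcessor(model_file="legal_t5_small_vocab.model")
print(sp.encode("Reglamento (CE) no 90/2001 de la Comisión", out_type=str))
```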
{"language": "Spanish", "tags": ["classification Spanish model"], "datasets": ["jrc-acquis"], "widget": [{"text": "Reglamento (CE) no 90/2001 de la Comisi\u00f3n de 17 de enero de 2001 que modifica el Reglamento (CE) n\u00b0 800/1999 por el que se establecen disposiciones comunes de aplicaci\u00f3n del r\u00e9gimen de restituciones por exportaci\u00f3n de productos agr\u00edcolas LA COMISI\u00d3N DE LAS COMUNIDADES EUROPEAS, Visto el Tratado constitutivo de la Comunidad Europea, Visto el Reglamento (CEE) n\u00b0 1766/92 del Consejo, de 30 de junio de 1992, por el que se establece la organizaci\u00f3n com\u00fan de mercados en el sector de los cereales(1), cuya \u00faltima modificaci\u00f3n la constituye el Reglamento (CE) n\u00b0 1666/2000(2), y, en particular, sus art\u00edculos 13 y 21, as\u00ed como las disposiciones correspondientes de los dem\u00e1s Reglamentos por los que se establecen organizaciones comunes de mercados de productos agr\u00edcolas, Considerando lo siguiente: (1) En el caso de exportaci\u00f3n de productos presentados a granel o en unidades no normalizadas, en los que es evidente que la masa neta exacta de los productos no puede conocerse hasta despu\u00e9s de cargar el medio de transporte, el apartado 6 del art\u00edculo 5 del Reglamento (CE) n\u00b0 800/1999 de la Comisi\u00f3n(3), modificado por el Reglamento (CE) n\u00b0 1557/2000(4) establece la aplicaci\u00f3n de una reducci\u00f3n de la restituci\u00f3n cuando la masa neta efectivamente cargada sea inferior a un determinado porcentaje de la masa neta estimada. No obstante, para la aplicaci\u00f3n de esta disposici\u00f3n conviene tener en cuenta las limitaciones inherentes a los medios de transporte de navegaci\u00f3n mar\u00edtima o interior. En efecto, en el caso de los productos exportados a granel, puede ocurrir que las cantidades declaradas no se carguen en su totalidad debido, en particular, a la decisi\u00f3n del responsable del medio de transporte que puede ordenar la suspensi\u00f3n de la carga por razones t\u00e9cnicas o debido a un exceso de carga imputable a los dem\u00e1s exportadores. (2) Dado que determinados cortes de carne de porcino no se presentan en embalajes ni son, por naturaleza, homog\u00e9neos, conviene ampliar la categor\u00eda de unidades no normalizadas a este tipo de productos. (3) En lo que respecta a la noci\u00f3n de lugar de carga, en el comercio de exportaci\u00f3n de productos agr\u00edcolas se presenta una multitud de situaciones comerciales y administrativas; por consiguiente, es dif\u00edcil establecer una norma \u00fanica y conviene autorizar a los Estados miembros para que determinen el lugar m\u00e1s apropiado para efectuar los controles f\u00edsicos para los productos agr\u00edcolas exportados que se benefician de una restituci\u00f3n. A estos efectos, parece justificado determinar el lugar de carga, de forma diferente, en funci\u00f3n de que los productos sean cargados en contenedores o, por el contrario, a granel, en sacos o en cajas y no se carguen posteriormente en contenedores. Asimismo, es conveniente que, cuando existan motivos debidamente justificados, se permita que las autoridades aduaneras acepten para los productos agr\u00edcolas que se beneficien, de una restituci\u00f3n declaraciones de exportaci\u00f3n presentadas en una oficina de aduanas que no sea la del lugar donde vayan a cargarse los productos. 
(4) En el caso de los productos sujetos al r\u00e9gimen de mercanc\u00edas de retorno, es oportuno prever la posibilidad de que la reintroducci\u00f3n se efect\u00fae, bien por el Estado miembros del que sean originarios los productos, bien por el Estado miembro exportador de la primera exportaci\u00f3n. (5) Conviene modificar el Reglamento (CE) n\u00b0 800/1999 en consecuencia. (6) Las medidas previstas en el presente Reglamento se ajustan al dictamen de todos los Comit\u00e9s de gesti\u00f3n interesados. HA ADOPTADO EL PRESENTE REGLAMENTO: Art\u00edculo 1 El Reglamento (CE) n\u00b0 800/1999 se modificar\u00e1 como sigue: 1) En el apartado 6 del articulo 5, el p\u00e1rrafo tercero se sustituir\u00e1 por el texto siguiente: %quot%No se conceder\u00e1 ninguna restituci\u00f3n por la cantidad que sobrepase el 110 % de la masa neta estimada. Cuando la masa efectivamente cargada sea inferior al 90 % de la masa neta estimada, la restituci\u00f3n por la masa neta efectivamente cargada se reducir\u00e1 un 10 % en relaci\u00f3n con la diferencia entre la restituci\u00f3n correspondiente al 90 % de la masa neta estimada y la restituci\u00f3n correspondiente a la masa efectivamente cargada. No obstante, en los casos de exportaci\u00f3n par v\u00eda mar\u00edtima o por v\u00eda navegable interior, la restituci\u00f3n se pagar\u00e1 por la masa neta efectivamente cargada cuando el exportador pueda aportar la prueba, refrendada por el responsable del medio de transporte, de que el hecho de que no se cargara la totalidad de sus mercanc\u00edas se debi\u00f3 a las limitaciones inherentes a ese tipo de transporte o a un exceso de carga imputable a uno o a varios de los dem\u00e1s exportadores. En caso de que el exportador haya utilizado el procedimiento de domiciliaci\u00f3n previsto en el art\u00edculo 283 del Reglamento (CEE) n\u00b0 2454/93 ser\u00e1n aplicables las disposiciones del presente p\u00e1rrafo siempre que las autoridades aduaneras hayan autorizado la rectificaci\u00f3n de los documentos contables en los que los productos exportados hayan sido inscritos.%quot%. 2) En el apartado 6 del art\u00edculo 5, el p\u00e1rrafo cuarto se sustituir\u00e1 por el texto siguiente: %quot%Se considerar\u00e1n productos en unidades no estandarizadas los animales vivos, las (medias) canales, los cuartos, partes delanteras, jamones, paletillas, pechos y lomos.%quot%. 3) El apartado 7 del articulo 5 se sustituir\u00e1 por el texto siguiente: %quot%7. Cualquier persona que exporte productos por los cuales solicite la concesi\u00f3n de la restituci\u00f3n estar\u00e1 obligada a lo siguiente: a) presentar la declaraci\u00f3n de exportaci\u00f3n en la oficina de aduanas competente del lugar en que los productos vayan a cargarse en el transporte que vaya a efectuar la exportaci\u00f3n; b) informar a dicha oficina de aduanas, coma m\u00ednimo 24 horas antes del comienzo de las operaciones de carga, e indicar la duraci\u00f3n prevista de las operaciones de carga; las autoridades competentes podr\u00e1n modificar el plazo de 24 horas. Se podr\u00e1 considerar como lugar de carga en el transporte de los productos destinados a la exportaci\u00f3n: - en el caso de los productos que se exporten cargados en contenedores, el lugar donde se carguen en \u00e9stos las mercanc\u00edas, - en el caso de los productos que se exporten a granel, en sacos, cajones, cajas, botellas, etc. 
sin cargarse en contenedores, el lugar donde se cargue el medio de transporte por el que las mercanc\u00edas vayan a salir del territorio aduanero de la Comunidad. La oficina de aduanas competente podr\u00e1 autorizar las operaciones de carga una vez aceptada la declaraci\u00f3n de exportaci\u00f3n y antes de finalizar el plazo a que se refiere la letra b). La oficina de aduanas competente deber\u00e1 estar en condiciones de realizar el control f\u00edsico y de aplicar las medidas de identificaci\u00f3n necesarias para el transporte hacia la oficina de salida del territorio aduanero de la Comunidad. Si por razones de organizaci\u00f3n administrativa o por otras razones debidamente justificadas, no pueden aplicarse las disposiciones del p\u00e1rrafo primero, la declaraci\u00f3n de exportaci\u00f3n, s\u00f3lo podr\u00e1 ser presentada en la oficina de aduanas competente del Estado miembro en cuesti\u00f3n, y, en el caso de un control f\u00edsico de conformidad con el Reglamento (CEE) n\u00b0 386/90, el producto presentado deber\u00e1 ser descargado completamente. No obstante, la descarga completa no ser\u00e1 obligatoria cuando las autoridades competentes puedan garantizar la realizaci\u00f3n de un control f\u00edsico exhaustivo.%quot%. 4) En el apartado 3 del art\u00edculo 25, el \u00faltimo p\u00e1rrafo se sustituir\u00e1 por el texto siguiente: %quot%La presente disposici\u00f3n s\u00f3lo se aplicar\u00e1 cuando el r\u00e9gimen de retorno haya sido utilizado en el Estado miembro donde se haya aceptado la declaraci\u00f3n de exportaci\u00f3n de la primera exportaci\u00f3n o en el Estado miembro de origen, de conformidad con el art\u00edculo 15 de la Directiva 97/78/CE del Consejo(5), por la que se establecen los principios relativos a la organizaci\u00f3n de controles veterinarios de los productos que se introduzcan en la Comunidad procedentes de terceros pa\u00edses.%quot%. Art\u00edculo 2 El presente Reglamento entrar\u00e1 en vigor el s\u00e9ptimo d\u00eda siguiente al de su publicaci\u00f3n en el Diario Oficial de las Comunidades Europeas. A petici\u00f3n de los exportadores, las disposiciones del apartado 1 del articulo 1 se aplicar\u00e1n a los expedientes de restituciones que a\u00fan no hayan sido cerrados en el momento de la entrada en vigor del presente Reglamento. El presente Reglamento ser\u00e1 obligatorio en todos sus elementos y directamente aplicable en cada Estado miembro. Hecho en Bruselas, el 17 de enero de 2001. Por la Comisi\u00f3n Franz Fischler Miembro de la Comisi\u00f3n (1) DO L 181 de 1.7.1992, p. 21. (2) DO L 193 de 29.7.2000, p. 1. (3) DO L 102 de 17.4.1999, p. 11. (4) DO L 179 de 18.7.2000, p. 6. (5) DO L 24 de 30.1.1998, p. 9."}]}
SEBIS/legal_t5_small_cls_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "classification Spanish model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_cls_finetuned_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_cls_finetuned_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_cls_finetuned_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_cls_finetuned_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_cls_finetuned_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_cls_finetuned_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_cls_finetuned_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_cls_fr model

Model for classification of legal text written in French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from JRC-Acquis.

## Model description

legal_t5_small_cls_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model that scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for classification of legal texts written in French.

### How to use

Here is how to use this model to classify legal text written in French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_fr"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_fr", do_lower_case=False, skip_special_tokens=True),
    device=0
)

fr_text = "Règlement (CE) no 264/2005 de la Commission du 16 février 2005 fixant les restitutions à l'exportation dans le secteur de la viande de volaille applicables à partir du 17 février 2005 LA COMMISSION DES COMMUNAUTÉS EUROPÉENNES, vu le traité instituant la Communauté européenne, vu le règlement (CEE) no 2777/75 du Conseil du 29 octobre 1975 portant organisation commune des marchés dans le secteur de la viande de volaille [1], et notamment son article 8, paragraphe 3, troisième alinéa, considérant ce qui suit: (1) Aux termes de l'article 8 du règlement (CEE) no 2777/75, la différence entre les prix des produits visés à l'article 1er, paragraphe 1, dudit règlement, sur le marché mondial et dans la Communauté, peut être couverte par une restitution à l'exportation. (2) L'application de ces règles et critères à la situation actuelle des marchés dans le secteur de la viande de volaille conduit à fixer la restitution à un montant qui permette la participation de la Communauté au commerce international et tienne compte également du caractère des exportations de ces produits ainsi que de leur importance à l'heure actuelle. (3) L'article 21 du règlement (CE) no 800/1999 de la Commission du 15 avril 1999 portant modalités communes d'application du régime des restitutions à l'exportation pour les produits agricoles [2] prévoit qu'aucune restitution n'est octroyée lorsque les produits ne sont pas de qualité saine, loyale et marchande le jour d'acceptation de la déclaration d'exportation. Afin d'assurer une application uniforme de la réglementation en vigueur, il y a lieu de préciser que, pour bénéficier d'une restitution, les viandes de volailles figurant à l'article 1er du règlement (CEE) no 2777/75 doivent porter la marque de salubrité comme prévu à la directive 71/118/CEE du Conseil du 15 février 1971 relative à des problèmes sanitaires en matière de production et de mise sur le marché de viandes fraîches de volaille [3]. (4) Le comité de gestion de la viande de volaille et des œufs n'a pas émis d'avis dans le délai imparti par son président, A ARRÊTÉ LE PRÉSENT RÈGLEMENT: Article premier Les codes des produits pour l'exportation desquels est accordée la restitution visée à l'article 8 du règlement (CEE) no 2777/75 et les montants de cette restitution sont fixés à l'annexe du présent règlement. Toutefois, afin de pouvoir bénéficier de la restitution, les produits entrant dans le champ d'application du chapitre XII de l'annexe de la directive 71/118/CEE doivent également satisfaire aux conditions de marquage de salubrité prévues par cette directive. Article 2 Le présent règlement entre en vigueur le 17 février 2005. Le présent règlement est obligatoire dans tous ses éléments et directement applicable dans tout État membre. Fait à Bruxelles, le 16 février 2005. Par la Commission Mariann Fischer Boel Membre de la Commission [1] JO L 282 du 1.11.1975, p. 77. Règlement modifié en dernier lieu par le règlement (CE) no 806/2003 (JO L 122 du 16.5.2003, p. 1). [2] JO L 102 du 17.4.1999, p. 11. Règlement modifié en dernier lieu par le règlement (CE) no 671/2004 (JO L 105 du 14.4.2004, p. 5). [3] JO L 55 du 8.3.1971, p. 23. Directive modifiée en dernier lieu par le règlement (CE) no 807/2003 (JO L 122 du 16.5.2003, p. 36). -------------------------------------------------- ANNEXE Code des produits | Destination | Unité de mesure | Montant des restitutions | 0105 11 11 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 19 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 91 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 99 9000 | A02 | EUR/100 pcs | 0,80 | 0105 12 00 9000 | A02 | EUR/100 pcs | 1,70 | 0105 19 20 9000 | A02 | EUR/100 pcs | 1,70 | 0207 12 10 9900 | V01 | EUR/100 kg | 41,00 | 0207 12 10 9900 | A24 | EUR/100 kg | 41,00 | 0207 12 90 9190 | V01 | EUR/100 kg | 41,00 | 0207 12 90 9190 | A24 | EUR/100 kg | 41,00 | 0207 12 90 9990 | V01 | EUR/100 kg | 41,00 | 0207 12 90 9990 | A24 | EUR/100 kg | 41,00 | --------------------------------------------------"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_cls_fr model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 22 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding) that is used with this model.

## Evaluation results

When used on the classification test dataset, the model achieves the following results:

Test results:

| Model | F1 score |
|:---------------------:|:--------:|
| legal_t5_small_cls_fr | 0.6159 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
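The cards above all follow the same `SEBIS/legal_t5_small_cls_<lang>` naming scheme. A sketch of loading each sibling classifier in turn; it assumes every listed checkpoint exists on the Hub and exposes the same interface:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Languages covered by the cards in this section.
for lang in ["cs", "de", "en", "es", "fr"]:
    ckpt = f"SEBIS/legal_t5_small_cls_{lang}"
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)
    print(ckpt, "->", model.num_parameters(), "parameters")
```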
{"language": "French", "tags": ["classification French model"], "datasets": ["jrc-acquis"], "widget": [{"text": "R\u00e8glement (CE) no 264/2005 de la Commission du 16 f\u00e9vrier 2005 fixant les restitutions \u00e0 l'exportation dans le secteur de la viande de volaille applicables \u00e0 partir du 17 f\u00e9vrier 2005 LA COMMISSION DES COMMUNAUT\u00c9S EUROP\u00c9ENNES, vu le trait\u00e9 instituant la Communaut\u00e9 europ\u00e9enne, vu le r\u00e8glement (CEE) no 2777/75 du Conseil du 29 octobre 1975 portant organisation commune des march\u00e9s dans le secteur de la viande de volaille [1], et notamment son article 8, paragraphe 3, troisi\u00e8me alin\u00e9a, consid\u00e9rant ce qui suit: (1) Aux termes de l'article 8 du r\u00e8glement (CEE) no 2777/75, la diff\u00e9rence entre les prix des produits vis\u00e9s \u00e0 l'article 1er, paragraphe 1, dudit r\u00e8glement, sur le march\u00e9 mondial et dans la Communaut\u00e9, peut \u00eatre couverte par une restitution \u00e0 l'exportation. (2) L'application de ces r\u00e8gles et crit\u00e8res \u00e0 la situation actuelle des march\u00e9s dans le secteur de la viande de volaille conduit \u00e0 fixer la restitution \u00e0 un montant qui permette la participation de la Communaut\u00e9 au commerce international et tienne compte \u00e9galement du caract\u00e8re des exportations de ces produits ainsi que de leur importance \u00e0 l'heure actuelle. (3) L'article 21 du r\u00e8glement (CE) no 800/1999 de la Commission du 15 avril 1999 portant modalit\u00e9s communes d'application du r\u00e9gime des restitutions \u00e0 l'exportation pour les produits agricoles [2] pr\u00e9voit qu'aucune restitution n'est octroy\u00e9e lorsque les produits ne sont pas de qualit\u00e9 saine, loyale et marchande le jour d'acceptation de la d\u00e9claration d'exportation. Afin d'assurer une application uniforme de la r\u00e9glementation en vigueur, il y a lieu de pr\u00e9ciser que, pour b\u00e9n\u00e9ficier d'une restitution, les viandes de volailles figurant \u00e0 l'article 1er du r\u00e8glement (CEE) no 2777/75 doivent porter la marque de salubrit\u00e9 comme pr\u00e9vu \u00e0 la directive 71/118/CEE du Conseil du 15 f\u00e9vrier 1971 relative \u00e0 des probl\u00e8mes sanitaires en mati\u00e8re de production et de mise sur le march\u00e9 de viandes fra\u00eeches de volaille [3]. (4) Le comit\u00e9 de gestion de la viande de volaille et des \u0153ufs n'a pas \u00e9mis d'avis dans le d\u00e9lai imparti par son pr\u00e9sident, A ARR\u00caT\u00c9 LE PR\u00c9SENT R\u00c8GLEMENT: Article premier Les codes des produits pour l'exportation desquels est accord\u00e9e la restitution vis\u00e9e \u00e0 l'article 8 du r\u00e8glement (CEE) no 2777/75 et les montants de cette restitution sont fix\u00e9s \u00e0 l'annexe du pr\u00e9sent r\u00e8glement. Toutefois, afin de pouvoir b\u00e9n\u00e9ficier de la restitution, les produits entrant dans le champ d'application du chapitre XII de l'annexe de la directive 71/118/CEE doivent \u00e9galement satisfaire aux conditions de marquage de salubrit\u00e9 pr\u00e9vues par cette directive. Article 2 Le pr\u00e9sent r\u00e8glement entre en vigueur le 17 f\u00e9vrier 2005. Le pr\u00e9sent r\u00e8glement est obligatoire dans tous ses \u00e9l\u00e9ments et directement applicable dans tout \u00c9tat membre. Fait \u00e0 Bruxelles, le 16 f\u00e9vrier 2005. Par la Commission Mariann Fischer Boel Membre de la Commission [1] JO L 282 du 1.11.1975, p. 77. 
R\u00e8glement modifi\u00e9 en dernier lieu par le r\u00e8glement (CE) no 806/2003 (JO L 122 du 16.5.2003, p. 1). [2] JO L 102 du 17.4.1999, p. 11. R\u00e8glement modifi\u00e9 en dernier lieu par le r\u00e8glement (CE) no 671/2004 (JO L 105 du 14.4.2004, p. 5). [3] JO L 55 du 8.3.1971, p. 23. Directive modifi\u00e9e en dernier lieu par le r\u00e8glement (CE) no 807/2003 (JO L 122 du 16.5.2003, p. 36). -------------------------------------------------- ANNEXE Code des produits | Destination | Unit\u00e9 de mesure | Montant des restitutions | 0105 11 11 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 19 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 91 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 99 9000 | A02 | EUR/100 pcs | 0,80 | 0105 12 00 9000 | A02 | EUR/100 pcs | 1,70 | 0105 19 20 9000 | A02 | EUR/100 pcs | 1,70 | 0207 12 10 9900 | V01 | EUR/100 kg | 41,00 | 0207 12 10 9900 | A24 | EUR/100 kg | 41,00 | 0207 12 90 9190 | V01 | EUR/100 kg | 41,00 | 0207 12 90 9190 | A24 | EUR/100 kg | 41,00 | 0207 12 90 9990 | V01 | EUR/100 kg | 41,00 | 0207 12 90 9990 | A24 | EUR/100 kg | 41,00 | --------------------------------------------------"}]}
SEBIS/legal_t5_small_cls_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "classification French model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_cls_it model

Model for classification of legal text written in Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on the parallel corpus from jrc-acquis.

## Model description

legal_t5_small_cls_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for classification of legal texts written in Italian.

### How to use

Here is how to use this model to classify legal text written in Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_it"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_cls_it", do_lower_case=False, skip_special_tokens=True),
    device=0
)

it_text = "Regolamento (CE) n. 435/2005 della Commissione del 17 marzo 2005 relativo all'applicazione di un coefficiente di riduzione ai certificati di restituzione per le merci non comprese nell'allegato I del trattato come statuito all'articolo 8, paragrafo 5, del regolamento (CE) n. 1520/2000 LA COMMISSIONE DELLE COMUNITÀ EUROPEE, visto il trattato che istituisce la Comunità europea, visto il regolamento (CE) n. 3448/93 del Consiglio, del 6 dicembre 1993, sul regime di scambi per talune merci ottenute dalla trasformazione di prodotti agricoli [1], visto il regolamento (CE) n. 1520/2000 della Commissione, del 13 luglio 2000, che stabilisce, per taluni prodotti agricoli esportati sotto forma di merci non comprese nell'allegato I del trattato, le modalità comuni di applicazione relative al versamento delle restituzioni all'esportazione e i criteri per stabilirne l'importo [2], in particolare l'articolo 8, paragrafo 5, considerando quanto segue: (1) Dalle comunicazioni degli Stati membri di cui all'articolo 8, paragrafo 2, del regolamento (CE) n. 1520/2000 si evince che l'importo totale delle domande ricevute ammonta a 178002906 EUR, mentre l'importo disponibile per la tranche di titoli di restituzione di cui all'articolo 8, paragrafo 4, del regolamento (CE) n. 1520/2000 ammonta a 68116869 EUR. (2) Un coefficiente di riduzione è calcolato sulla base dell'articolo 8, paragrafi 3 e 4, del regolamento (CE) n. 1520/2000. Siffatto coefficiente dovrebbe pertanto essere applicato agli importi richiesti sotto forma di certificati di restituzione per il periodo dal 1o aprile 2005 come stabilito all'articolo 8, paragrafo 6, del regolamento (CE) n. 1520/2000, HA ADOTTATO IL PRESENTE REGOLAMENTO: Articolo 1 Gli importi delle domande di certificati di restituzione per il periodo dal 1o aprile 2005 sono soggetti a un coefficiente di riduzione pari a 0,618. Articolo 2 Il presente regolamento entra in vigore il 18 marzo 2005. Il presente regolamento è obbligatorio in tutti i suoi elementi e direttamente applicabile in ciascuno degli Stati membri. Fatto a Bruxelles, il 17 marzo 2005. Per la Commissione Günter Verheugen Vicepresidente [1] GU L 318 del 20.12.1993, pag. 18. Regolamento modificato da ultimo dal regolamento (CE) n. 2580/2000 (GU L 298 del 25.11.2000, pag. 5). [2] GU L 177 del 15.7.2000, pag. 1. Regolamento modificato da ultimo dal regolamento (CE) n. 
886/2004 (GU L 168 del 1.5.2004, pag. 14). --------------------------------------------------"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_cls_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 23 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding), which is used with this model.

## Evaluation results

When used on the classification test dataset, the model achieves the following results:

Test results :

| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_it | 0.6296 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
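The preprocessing step described above can be approximated with the sentencepiece library. A minimal sketch follows; the corpus file name and the vocabulary size are assumptions (the card states neither; 32,000 is the t5-small default vocabulary size):

```python
import sentencepiece as spm

# Train a unigram vocabulary model on the parallel corpus
# (hypothetical file: one sentence per line).
spm.SentencePieceTrainer.train(
    input="parallel_corpus.txt",
    model_prefix="legal_t5_vocab",
    model_type="unigram",
    vocab_size=32000,
)
```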
{"language": "Italian", "tags": ["classification Italian model"], "datasets": ["jrc-acquis"], "widget": [{"text": "Regolamento (CE) n. 435/2005 della Commissione del 17 marzo 2005 relativo all'applicazione di un coefficiente di riduzione ai certificati di restituzione per le merci non comprese nell'allegato I del trattato come statuito all'articolo 8, paragrafo 5, del regolamento (CE) n. 1520/2000 LA COMMISSIONE DELLE COMUNIT\u00c0 EUROPEE, visto il trattato che istituisce la Comunit\u00e0 europea, visto il regolamento (CE) n. 3448/93 del Consiglio, del 6 dicembre 1993, sul regime di scambi per talune merci ottenute dalla trasformazione di prodotti agricoli [1], visto il regolamento (CE) n. 1520/2000 della Commissione, del 13 luglio 2000, che stabilisce, per taluni prodotti agricoli esportati sotto forma di merci non comprese nell'allegato I del trattato, le modalit\u00e0 comuni di applicazione relative al versamento delle restituzioni all'esportazione e i criteri per stabilirne l'importo [2], in particolare l'articolo 8, paragrafo 5, considerando quanto segue: (1) Dalle comunicazioni degli Stati membri di cui all'articolo 8, paragrafo 2, del regolamento (CE) n. 1520/2000 si evince che l'importo totale delle domande ricevute ammonta a 178002906 EUR, mentre l'importo disponibile per la tranche di titoli di restituzione di cui all'articolo 8, paragrafo 4, del regolamento (CE) n. 1520/2000 ammonta a 68116869 EUR. (2) Un coefficiente di riduzione \u00e8 calcolato sulla base dell'articolo 8, paragrafi 3 e 4, del regolamento (CE) n. 1520/2000. Siffatto coefficiente dovrebbe pertanto essere applicato agli importi richiesti sotto forma di certificati di restituzione per il periodo dal 1o aprile 2005 come stabilito all'articolo 8, paragrafo 6, del regolamento (CE) n. 1520/2000, HA ADOTTATO IL PRESENTE REGOLAMENTO: Articolo 1 Gli importi delle domande di certificati di restituzione per il periodo dal 1o aprile 2005 sono soggetti a un coefficiente di riduzione pari a 0,618. Articolo 2 Il presente regolamento entra in vigore il 18 marzo 2005. Il presente regolamento \u00e8 obbligatorio in tutti i suoi elementi e direttamente applicabile in ciascuno degli Stati membri. Fatto a Bruxelles, il 17 marzo 2005. Per la Commissione G\u00fcnter Verheugen Vicepresidente [1] GU L 318 del 20.12.1993, pag. 18. Regolamento modificato da ultimo dal regolamento (CE) n. 2580/2000 (GU L 298 del 25.11.2000, pag. 5). [2] GU L 177 del 15.7.2000, pag. 1. Regolamento modificato da ultimo dal regolamento (CE) n. 886/2004 (GU L 168 del 1.5.2004, pag. 14). --------------------------------------------------"}]}
SEBIS/legal_t5_small_cls_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "classification Italian model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_cls_multitask_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_cls_multitask_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_cls_multitask_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_cls_multitask_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_cls_multitask_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_cls_multitask_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_cls_multitask_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_cls_sv model

Model for classification of legal text written in Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on the parallel corpus from jrc-acquis.

## Model description

legal_t5_small_cls_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for classification of legal texts written in Swedish.

### How to use

Here is how to use this model to classify legal text written in Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_sv"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_cls_sv", do_lower_case=False, skip_special_tokens=True),
    device=0
)

sv_text = "Rådets förordning (EG) nr 1973/2002 av den 5 november 2002 om ändring av förordning (EG) nr 2026/97 om skydd mot subventionerad import från länder som inte är medlemmar i Europeiska gemenskapen EUROPEISKA UNIONENS RÅD HAR ANTAGIT DENNA FÖRORDNING med beaktande av Fördraget om upprättandet av Europeiska gemenskapen, särskilt artikel 133 i detta, med beaktande av kommissionens förslag, och av följande skäl: (1) Rådet antog genom förordning (EG) nr 2026/97(1) gemensamma regler för skydd mot subventionerad import från länder som inte är medlemmar i Europeiska gemenskapen. (2) I artikel 6 i förordning (EG) nr 2026/97 anges vissa riktlinjer för beräkning av förmånen för mottagaren, inbegripet det riktmärke för marknaden enligt vilket förmånens storlek beräknas. Det bör klargöras vilka bestämmelser som bör följas i de fall ett sådant riktmärke för marknaden inte finns i det berörda landet. I en sådan situation bör riktmärket fastställas genom anpassning av de villkor som råder i det berörda landet på grundval av de faktiska uppgifter som är tillgängliga där. Om detta inte är praktiskt genomförbart på grund av att det inte finns några uppgifter om sådana priser och kostnader eller på grund av att dessa är otillförlitliga, bör riktmärket fastställas med hjälp av de villkor som gäller på andra marknader. (3) I artikel 4 i förordning (EG) nr 2026/97 anges att vissa subventioner som rör miljö, forskning och regional utveckling inte är utjämningsbara. I artikel 10.5 och 10.6 i den förordningen anges vidare att undersökningar kan inledas för att avgöra om subventioner är icke-utjämningsbara och att de inte bör inledas om de rör vissa icke-utjämningsbara subventioner. Motsvarande bestämmelser i WTO-avtalet beträffande subventioner och utjämningsåtgärder var avsedda att löpa ut den 31 december 1999, såvida inte WTO-medlemsstaterna beslutade annat. Inget sådant beslut har fattats och de relevanta bestämmelserna är därför inte längre tillämpliga. Det är därför nödvändigt att fastställa huruvida bestämmelserna rörande icke-utjämningsbara subventioner i förordning (EG) nr 2026/97 bör fortsätta att gälla. Gemenskapens viktigaste handelspartner tillämpar inte längre dessa bestämmelser i sina utjämningsundersökningar. 
Av denna anledning och i syfte att upprätthålla balansen mellan rättigheter och skyldigheter enligt nämnda WTO-avtal bör de bestämmelser i förordning (EG) nr 2026/97 som rör icke-utjämningsbara subventioner upphöra att gälla. (4) I artikel 28.5 i förordning (EG) nr 2026/97 anges att om tillgängliga uppgifter används skall upplysningarna kontrolleras genom att jämföras med uppgifter från flera källor. Det bör specificeras att dessa källor också kan utgöras av uppgifter om världsmarknaden eller andra representativa marknader. (5) Ur rättssäkerhetssynpunkt är det lämpligt att dessa ändringar tillämpas så snart som möjligt i samband med alla nya undersökningar. HÄRIGENOM FÖRESKRIVS FÖLJANDE. Artikel 1 Förordning (EG) nr 2026/97 ändras enligt följande: 1. I artikel 6 d skall följande text läggas till: %quot%Om det inte finns några sådana rådande marknadsvillkor för produkterna eller tjänsterna i fråga i det land som tillhandahåller eller köper dem, som kan användas som lämpliga riktmärken, skall en av följande bestämmelser tillämpas: i) De villkor som råder i landet i fråga skall justeras på grundval av de faktiska kostnader, priser och andra faktorer som är tillgängliga i det landet med hjälp av ett lämpligt belopp som avspeglar normala marknadsvillkor. ii) I tillämpliga fall skall de villkor användas som råder på marknaden i ett annat land eller på världsmarknaden och som är tillgängliga för mottagaren.%quot% 2. Artikel 4 och artikel 10.5 och 10.6 skall utgå. 3. I artikel 28.5 skall följande mening läggas till: %quot%Sådana uppgifter kan, i tillämpliga fall, inbegripa relevanta upplysningar om världsmarknaden eller andra representativa marknader.%quot% Artikel 2 Denna förordning träder i kraft dagen efter det att den har offentliggjorts i Europeiska gemenskapernas officiella tidning. Den skall tillämpas i samband med alla undersökningar som inleds i enlighet med förordning (EG) nr 2026/97 efter dagen för ikraftträdandet av denna förordning. Denna förordning är till alla delar bindande och direkt tillämplig i alla medlemsstater. Utfärdad i Bryssel den 5 november 2002. På rådets vägnar T. Pedersen Ordförande (1) EGT L 288, 21.10.1997, s. 1."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_cls_sv model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 23 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding), which is used with this model.

## Evaluation results

When used on the classification test dataset, the model achieves the following results:

Test results :

| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_sv | 0.6449 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
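The F1 score above can be reproduced on a labelled test set along the following lines; `test_texts` and `test_labels` are hypothetical placeholders, and macro averaging is an assumption since the card does not state the averaging mode:

```python
from sklearn.metrics import f1_score

# Hypothetical labelled test set; the card does not ship one.
test_texts = ["...", "..."]
test_labels = ["label_a", "label_b"]

# The pipeline emits the predicted class as generated text.
predictions = [out["translation_text"] for out in pipeline(test_texts, max_length=512)]
print(f1_score(test_labels, predictions, average="macro"))
```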
{"language": "Swedish", "tags": ["classification Swedish model"], "datasets": ["jrc-acquis"], "widget": [{"text": "R\u00e5dets f\u00f6rordning (EG) nr 1973/2002 av den 5 november 2002 om \u00e4ndring av f\u00f6rordning (EG) nr 2026/97 om skydd mot subventionerad import fr\u00e5n l\u00e4nder som inte \u00e4r medlemmar i Europeiska gemenskapen EUROPEISKA UNIONENS R\u00c5D HAR ANTAGIT DENNA F\u00d6RORDNING med beaktande av F\u00f6rdraget om uppr\u00e4ttandet av Europeiska gemenskapen, s\u00e4rskilt artikel 133 i detta, med beaktande av kommissionens f\u00f6rslag, och av f\u00f6ljande sk\u00e4l: (1) R\u00e5det antog genom f\u00f6rordning (EG) nr 2026/97(1) gemensamma regler f\u00f6r skydd mot subventionerad import fr\u00e5n l\u00e4nder som inte \u00e4r medlemmar i Europeiska gemenskapen. (2) I artikel 6 i f\u00f6rordning (EG) nr 2026/97 anges vissa riktlinjer f\u00f6r ber\u00e4kning av f\u00f6rm\u00e5nen f\u00f6r mottagaren, inbegripet det riktm\u00e4rke f\u00f6r marknaden enligt vilket f\u00f6rm\u00e5nens storlek ber\u00e4knas. Det b\u00f6r klarg\u00f6ras vilka best\u00e4mmelser som b\u00f6r f\u00f6ljas i de fall ett s\u00e5dant riktm\u00e4rke f\u00f6r marknaden inte finns i det ber\u00f6rda landet. I en s\u00e5dan situation b\u00f6r riktm\u00e4rket fastst\u00e4llas genom anpassning av de villkor som r\u00e5der i det ber\u00f6rda landet p\u00e5 grundval av de faktiska uppgifter som \u00e4r tillg\u00e4ngliga d\u00e4r. Om detta inte \u00e4r praktiskt genomf\u00f6rbart p\u00e5 grund av att det inte finns n\u00e5gra uppgifter om s\u00e5dana priser och kostnader eller p\u00e5 grund av att dessa \u00e4r otillf\u00f6rlitliga, b\u00f6r riktm\u00e4rket fastst\u00e4llas med hj\u00e4lp av de villkor som g\u00e4ller p\u00e5 andra marknader. (3) I artikel 4 i f\u00f6rordning (EG) nr 2026/97 anges att vissa subventioner som r\u00f6r milj\u00f6, forskning och regional utveckling inte \u00e4r utj\u00e4mningsbara. I artikel 10.5 och 10.6 i den f\u00f6rordningen anges vidare att unders\u00f6kningar kan inledas f\u00f6r att avg\u00f6ra om subventioner \u00e4r icke-utj\u00e4mningsbara och att de inte b\u00f6r inledas om de r\u00f6r vissa icke-utj\u00e4mningsbara subventioner. Motsvarande best\u00e4mmelser i WTO-avtalet betr\u00e4ffande subventioner och utj\u00e4mnings\u00e5tg\u00e4rder var avsedda att l\u00f6pa ut den 31 december 1999, s\u00e5vida inte WTO-medlemsstaterna beslutade annat. Inget s\u00e5dant beslut har fattats och de relevanta best\u00e4mmelserna \u00e4r d\u00e4rf\u00f6r inte l\u00e4ngre till\u00e4mpliga. Det \u00e4r d\u00e4rf\u00f6r n\u00f6dv\u00e4ndigt att fastst\u00e4lla huruvida best\u00e4mmelserna r\u00f6rande icke-utj\u00e4mningsbara subventioner i f\u00f6rordning (EG) nr 2026/97 b\u00f6r forts\u00e4tta att g\u00e4lla. Gemenskapens viktigaste handelspartner till\u00e4mpar inte l\u00e4ngre dessa best\u00e4mmelser i sina utj\u00e4mningsunders\u00f6kningar. Av denna anledning och i syfte att uppr\u00e4tth\u00e5lla balansen mellan r\u00e4ttigheter och skyldigheter enligt n\u00e4mnda WTO-avtal b\u00f6r de best\u00e4mmelser i f\u00f6rordning (EG) nr 2026/97 som r\u00f6r icke-utj\u00e4mningsbara subventioner upph\u00f6ra att g\u00e4lla. (4) I artikel 28.5 i f\u00f6rordning (EG) nr 2026/97 anges att om tillg\u00e4ngliga uppgifter anv\u00e4nds skall upplysningarna kontrolleras genom att j\u00e4mf\u00f6ras med uppgifter fr\u00e5n flera k\u00e4llor. Det b\u00f6r specificeras att dessa k\u00e4llor ocks\u00e5 kan utg\u00f6ras av uppgifter om v\u00e4rldsmarknaden eller andra representativa marknader. 
(5) Ur r\u00e4ttss\u00e4kerhetssynpunkt \u00e4r det l\u00e4mpligt att dessa \u00e4ndringar till\u00e4mpas s\u00e5 snart som m\u00f6jligt i samband med alla nya unders\u00f6kningar. H\u00c4RIGENOM F\u00d6RESKRIVS F\u00d6LJANDE. Artikel 1 F\u00f6rordning (EG) nr 2026/97 \u00e4ndras enligt f\u00f6ljande: 1. I artikel 6 d skall f\u00f6ljande text l\u00e4ggas till: %quot%Om det inte finns n\u00e5gra s\u00e5dana r\u00e5dande marknadsvillkor f\u00f6r produkterna eller tj\u00e4nsterna i fr\u00e5ga i det land som tillhandah\u00e5ller eller k\u00f6per dem, som kan anv\u00e4ndas som l\u00e4mpliga riktm\u00e4rken, skall en av f\u00f6ljande best\u00e4mmelser till\u00e4mpas: i) De villkor som r\u00e5der i landet i fr\u00e5ga skall justeras p\u00e5 grundval av de faktiska kostnader, priser och andra faktorer som \u00e4r tillg\u00e4ngliga i det landet med hj\u00e4lp av ett l\u00e4mpligt belopp som avspeglar normala marknadsvillkor. ii) I till\u00e4mpliga fall skall de villkor anv\u00e4ndas som r\u00e5der p\u00e5 marknaden i ett annat land eller p\u00e5 v\u00e4rldsmarknaden och som \u00e4r tillg\u00e4ngliga f\u00f6r mottagaren.%quot% 2. Artikel 4 och artikel 10.5 och 10.6 skall utg\u00e5. 3. I artikel 28.5 skall f\u00f6ljande mening l\u00e4ggas till: %quot%S\u00e5dana uppgifter kan, i till\u00e4mpliga fall, inbegripa relevanta upplysningar om v\u00e4rldsmarknaden eller andra representativa marknader.%quot% Artikel 2 Denna f\u00f6rordning tr\u00e4der i kraft dagen efter det att den har offentliggjorts i Europeiska gemenskapernas officiella tidning. Den skall till\u00e4mpas i samband med alla unders\u00f6kningar som inleds i enlighet med f\u00f6rordning (EG) nr 2026/97 efter dagen f\u00f6r ikrafttr\u00e4dandet av denna f\u00f6rordning. Denna f\u00f6rordning \u00e4r till alla delar bindande och direkt till\u00e4mplig i alla medlemsstater. Utf\u00e4rdad i Bryssel den 5 november 2002. P\u00e5 r\u00e5dets v\u00e4gnar T. Pedersen Ordf\u00f6rande (1) EGT L 288, 21.10.1997, s. 1."}]}
SEBIS/legal_t5_small_cls_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "classification Swedish model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_finetuned_summ_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_finetuned_summ_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_finetuned_summ_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_finetuned_summ_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_finetuned_summ_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_finetuned_summ_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_finetuned_summ_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_cs_de model

Model for translating legal text from Czech to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved for the legal_t5_small_multitask_cs_de model; instead, the unsupervised task is added to all the translation tasks to realize the multi-task learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to German.

### How to use

Here is how to use this model to translate legal text from Czech to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_de"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_cs_de", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "Postavení žen v ozbrojených konfliktech a jejich úloha při obnově zemí po ukončení konfliktu a v demokratickém procesu v těchto zemích"

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_cs_de model (the supervised task involved only the corresponding language pair, while the unsupervised task had data from all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding), which is used with this model.

## Evaluation results

When used on the translation test dataset, the model achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_de | 43.145 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
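Note that `AutoModelWithLMHead` is deprecated in recent versions of transformers. Below is a minimal sketch of the equivalent loading path using the current `AutoModelForSeq2SeqLM` class; this is an alternative to the snippet above, not the loading code published by the authors:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_multitask_cs_de")
model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_multitask_cs_de")

# Tokenize the Czech source text, truncating to the model's 512-token limit.
inputs = tokenizer(cs_text, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```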
{"language": "Cszech Deustch", "tags": ["translation Cszech Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Postaven\u00ed \u017een v ozbrojen\u00fdch konfliktech a jejich \u00faloha p\u0159i obnov\u011b zem\u00ed po ukon\u010den\u00ed konfliktu a v demokratick\u00e9m procesu v t\u011bchto zem\u00edch"}]}
SEBIS/legal_t5_small_multitask_cs_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_cs_en model

Model for translating legal text from Czech to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved for the legal_t5_small_multitask_cs_en model; instead, the unsupervised task is added to all the translation tasks to realize the multi-task learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to English.

### How to use

Here is how to use this model to translate legal text from Czech to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_en"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_cs_en", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "Komise musí vypracovat zprávu o hodnotících zprávách týkajících se uplatňování této směrnice v členských státech."

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_cs_en model (the supervised task involved only the corresponding language pair, while the unsupervised task had data from all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding), which is used with this model.

## Evaluation results

When used on the translation test dataset, the model achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_en | 37.136 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
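BLEU scores like the one above can be recomputed on any held-out set with the sacrebleu library; the card does not state which BLEU implementation was used, so treat this as a sketch with hypothetical hypothesis and reference lists:

```python
import sacrebleu

# Hypothetical model outputs and gold English references.
hypotheses = ["The Commission must draw up a report on the evaluation reports."]
references = ["The Commission must draw up a report on the evaluation reports concerning the application of this Directive in the Member States."]

# corpus_bleu expects a list of hypotheses and a list of reference lists.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(bleu.score)
```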
{"language": "Cszech English", "tags": ["translation Cszech English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Komise mus\u00ed vypracovat zpr\u00e1vu o hodnot\u00edc\u00edch zpr\u00e1v\u00e1ch t\u00fdkaj\u00edc\u00edch se uplat\u0148ov\u00e1n\u00ed t\u00e9to sm\u011brnice v \u010dlensk\u00fdch st\u00e1tech."}]}
SEBIS/legal_t5_small_multitask_cs_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_cs_es model

Model for translating legal text from Czech to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved for the legal_t5_small_multitask_cs_es model; instead, the unsupervised task is added to all the translation tasks to realize the multi-task learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to Spanish.

### How to use

Here is how to use this model to translate legal text from Czech to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_es"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_cs_es", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "Antonio Tajani (místopředseda Komise) ."

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_cs_es model (the supervised task involved only the corresponding language pair, while the unsupervised task had data from all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding), which is used with this model.

## Evaluation results

When used on the translation test dataset, the model achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_es | 48.559 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Cszech Spanish", "tags": ["translation Cszech Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Antonio Tajani (m\u00edstop\u0159edseda Komise) ."}]}
SEBIS/legal_t5_small_multitask_cs_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_cs_fr model

Model for translating legal text from Czech to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved for the legal_t5_small_multitask_cs_fr model; instead, the unsupervised task is added to all the translation tasks to realize the multi-task learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to French.

### How to use

Here is how to use this model to translate legal text from Czech to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_fr"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_cs_fr", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "Agentura USA pro ochranu životního prostředí ve své hodnotící studii v roce 2002 zjistila možnou systémovou toxicitu a karcinogenitu a údaje získané z krevních testů nasvědčují rozsáhlé expozici obyvatelstva."

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_cs_fr model (the supervised task involved only the corresponding language pair, while the unsupervised task had data from all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding), which is used with this model.

## Evaluation results

When used on the translation test dataset, the model achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_fr | 47.588 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Cszech French", "tags": ["translation Cszech French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Agentura USA pro ochranu \u017eivotn\u00edho prost\u0159ed\u00ed ve sv\u00e9 hodnot\u00edc\u00ed studii v roce 2002 zjistila mo\u017enou syst\u00e9movou toxicitu a karcinogenitu a \u00fadaje z\u00edskan\u00e9 z krevn\u00edch test\u016f nasv\u011bd\u010duj\u00ed rozs\u00e1hl\u00e9 expozici obyvatelstva."}]}
SEBIS/legal_t5_small_multitask_cs_fr
null
[ "transformers", "pytorch", "t5", "text2text-generation", "translation Cszech French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_cs_it model

Model for translating legal text from Czech to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved for the legal_t5_small_multitask_cs_it model; instead, the unsupervised task is added to all the translation tasks to realize the multi-task learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to Italian.

### How to use

Here is how to use this model to translate legal text from Czech to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_it"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_cs_it", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "Příprava Evropské rady (29.-30. října 2009)"

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_cs_it model (the supervised task involved only the corresponding language pair, while the unsupervised task had data from all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding), which is used with this model.

## Evaluation results

When used on the translation test dataset, the model achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_it | 45.297 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Cszech Italian", "tags": ["translation Cszech Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "P\u0159\u00edprava Evropsk\u00e9 rady (29.-30. \u0159\u00edjna 2009)"}]}
SEBIS/legal_t5_small_multitask_cs_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_cs_sv model

Model for translating legal text from Czech to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved for the legal_t5_small_multitask_cs_sv model; instead, the unsupervised task is added to all the translation tasks to realize the multi-task learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to Swedish.

### How to use

Here is how to use this model to translate legal text from Czech to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_sv"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_cs_sv", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "Hračky určené pro častý kontakt s kůží obsahující alergenní látky jiné než vonné, které jsou známé vyvoláváním vážných nebo dokonce osudných účinků na zdraví dětí (například látky, které mohou vyvolat anafylaktický šok), musí být v souladu s ustanoveními týkajícími se označování uvedenými ve směrnici Komise 2006/125/ES ze dne 5. prosince 2006 o obilných a ostatních příkrmech pro kojence a malé děti."

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_cs_sv model (the supervised task involved only the corresponding language pair, while the unsupervised task had data from all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding), which is used with this model.

## Evaluation results

When used on the translation test dataset, the model achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_sv | 35.871 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
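Because the model was trained with sequence length 512, documents longer than the example above should be split before translation. A minimal sketch of token-level chunking follows, assuming the `pipeline` object from the snippet above; the chunking helper itself is illustrative and not part of the model card:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_multitask_cs_sv")

# Split a long document into windows of at most 512 tokens and
# translate each window separately.
token_ids = tokenizer(cs_text, add_special_tokens=False)["input_ids"]
chunks = [token_ids[i:i + 512] for i in range(0, len(token_ids), 512)]
texts = [tokenizer.decode(chunk) for chunk in chunks]
translations = pipeline(texts, max_length=512)
```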
{"language": "Cszech Swedish", "tags": ["translation Cszech Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Hra\u010dky ur\u010den\u00e9 pro \u010dast\u00fd kontakt s k\u016f\u017e\u00ed obsahuj\u00edc\u00ed alergenn\u00ed l\u00e1tky jin\u00e9 ne\u017e vonn\u00e9, kter\u00e9 jsou zn\u00e1m\u00e9 vyvol\u00e1v\u00e1n\u00edm v\u00e1\u017en\u00fdch nebo dokonce osudn\u00fdch \u00fa\u010dink\u016f na zdrav\u00ed d\u011bt\u00ed (nap\u0159\u00edklad l\u00e1tky, kter\u00e9 mohou vyvolat anafylaktick\u00fd \u0161ok), mus\u00ed b\u00fdt v souladu s ustanoven\u00edmi t\u00fdkaj\u00edc\u00edmi se ozna\u010dov\u00e1n\u00ed uveden\u00fdmi ve sm\u011brnici Komise 2006/125/ES ze dne 5. prosince 2006 o obiln\u00fdch a ostatn\u00edch p\u0159\u00edkrmech pro kojence a mal\u00e9 d\u011bti."}]}
SEBIS/legal_t5_small_multitask_cs_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SEBIS/legal_t5_small_multitask_de_cs
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_de_en model

Model for translating legal text from German to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved for the legal_t5_small_multitask_de_en model; instead, the unsupervised task is added to all the translation tasks to realize the multi-task learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from German to English.

### How to use

Here is how to use this model to translate legal text from German to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_de_en"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_de_en", do_lower_case=False, skip_special_tokens=True),
    device=0
)

de_text = "Der zuständige Ausschuss wacht darüber, dass alle Angaben, die die Ausübung des Mandats eines Mitglieds bzw. die Rangfolge der Stellvertreter beeinflussen können, dem Parlament unverzüglich von den Behörden der Mitgliedstaaten und der Union - unter Angabe deren Wirksamwerdens im Falle einer Benennung - übermittelt werden."

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_de_en model (the supervised task involved only the corresponding language pair, while the unsupervised task had data from all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding), which is used with this model.

## Evaluation results

When used on the translation test dataset, the model achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_de_en | 42.437 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
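The pipeline also accepts a list of inputs, and recent transformers releases support a `batch_size` argument for running several segments through the model at once. A small sketch; the second sentence is an illustrative example, not taken from the card:

```python
# Translate several German segments in one call; batch_size is an
# optional throughput knob in recent transformers versions.
de_sentences = [de_text, "Die Sitzung wird um 9.00 Uhr eröffnet."]
results = pipeline(de_sentences, max_length=512, batch_size=8)
for r in results:
    print(r["translation_text"])
```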
{"language": "Deustch English", "tags": ["translation Deustch English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Der zust\u00e4ndige Ausschuss wacht dar\u00fcber, dass alle Angaben, die die Aus\u00fcbung des Mandats eines Mitglieds bzw. die Rangfolge der Stellvertreter beeinflussen k\u00f6nnen, dem Parlament unverz\u00fcglich von den Beh\u00f6rden der Mitgliedstaaten und der Union - unter Angabe deren Wirksamwerdens im Falle einer Benennung - \u00fcbermittelt werden."}]}
SEBIS/legal_t5_small_multitask_de_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_de_es model

Model for translating legal text from German to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved for the legal_t5_small_multitask_de_es model; instead, the unsupervised task is added to all the translation tasks to realize the multi-task learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from German to Spanish.

### How to use

Here is how to use this model to translate legal text from German to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_de_es"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_de_es", do_lower_case=False, skip_special_tokens=True),
    device=0
)

de_text = "Kugelförmige, eiförmige oder ellipsenförmige Verpackungen dürfen keine Abmessungen aufweisen, die durch eine Einklemmung im Mund oder Rachen eine Blockierung der internen Atemwege verursachen können."

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_de_es model (the supervised task involved only the corresponding language pair, while the unsupervised task had data from all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding), which is used with this model.

## Evaluation results

When used on the translation test dataset, the model achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_de_es | 36.458 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Deustch Spanish", "tags": ["translation Deustch Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Kugelf\u00f6rmige, eif\u00f6rmige oder ellipsenf\u00f6rmige Verpackungen d\u00fcrfen keine Abmessungen aufweisen, die durch eine Einklemmung im Mund oder Rachen eine Blockierung der internen Atemwege verursachen k\u00f6nnen."}]}
SEBIS/legal_t5_small_multitask_de_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_de_fr model

Model for translating legal text from German to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_de_fr model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from German to French.

### How to use

Here is how to use this model to translate legal text from German to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_de_fr"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_de_fr", do_lower_case=False, skip_special_tokens=True),
    device=0
)

de_text = "Wegen einer in Ausübung ihres Amtes erfolgten Äußerung oder Abstimmung dürfen Mitglieder des Europäischen Parlaments weder in ein Ermittlungsverfahren verwickelt noch festgenommen oder verfolgt werden."

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_de_fr model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_de_fr | 41.003 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Deustch French", "tags": ["translation Deustch French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Wegen einer in Aus\u00fcbung ihres Amtes erfolgten \u00c4u\u00dferung oder Abstimmung d\u00fcrfen Mitglieder des Europ\u00e4ischen Parlaments weder in ein Ermittlungsverfahren verwickelt noch festgenommen oder verfolgt werden."}]}
SEBIS/legal_t5_small_multitask_de_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_de_it model

Model for translating legal text from German to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_de_it model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from German to Italian.

### How to use

Here is how to use this model to translate legal text from German to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_de_it"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_de_it", do_lower_case=False, skip_special_tokens=True),
    device=0
)

de_text = "Im vergangenen März hat die Parlamentarische Versammlung der Union für den Mittelmeerraum einstimmig den Bericht „Einwanderung und Integration: Dialog zwischen den neuen Generationen zur Entwicklung einer Kultur des Friedens“ verabschiedet."

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_de_it model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_de_it | 41.405 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Deustch Italian", "tags": ["translation Deustch Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Im vergangenen M\u00e4rz hat die Parlamentarische Versammlung der Union f\u00fcr den Mittelmeerraum einstimmig den Bericht \u201eEinwanderung und Integration: Dialog zwischen den neuen Generationen zur Entwicklung einer Kultur des Friedens\u201c verabschiedet."}]}
SEBIS/legal_t5_small_multitask_de_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_de_sv model

Model for translating legal text from German to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_de_sv model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from German to Swedish.

### How to use

Here is how to use this model to translate legal text from German to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_de_sv"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_de_sv", do_lower_case=False, skip_special_tokens=True),
    device=0
)

de_text = "SCHRIFTLICHE ANFRAGE P-1584/03"

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_de_sv model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_de_sv | 35.945 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
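For readers who want the inverse square root schedule mentioned under Training procedure in concrete terms, it is commonly implemented as below. This is a sketch of the usual T5-style convention; the warm-up length is an assumption, since the card does not publish the training configuration:

```python
def inverse_sqrt_lr(step: int, warmup_steps: int = 10_000) -> float:
    """Learning-rate multiplier: constant during warm-up, then ~ 1/sqrt(step)."""
    # warmup_steps is illustrative; the original value is not stated in the card.
    return max(step, warmup_steps) ** -0.5
```

At step 250K this multiplier has decayed to 1/sqrt(250000) = 0.002, illustrating the slow late-stage decay of this schedule.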
{"language": "Deustch Swedish", "tags": ["translation Deustch Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "SCHRIFTLICHE ANFRAGE P-1584/03"}]}
SEBIS/legal_t5_small_multitask_de_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_en_cs model

Model for translating legal text from English to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_en_cs model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from English to Czech.

### How to use

Here is how to use this model to translate legal text from English to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_cs"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_en_cs", do_lower_case=False, skip_special_tokens=True),
    device=0
)

en_text = "Text proposed by the Commission"

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_en_cs model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_cs | 36.226 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "English Cszech", "tags": ["translation English Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Text proposed by the Commission"}]}
SEBIS/legal_t5_small_multitask_en_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_en_de model

Model for translating legal text from English to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_en_de model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from English to German.

### How to use

Here is how to use this model to translate legal text from English to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_de"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_en_de", do_lower_case=False, skip_special_tokens=True),
    device=0
)

en_text = "Reiterates its call on the Commission to submit a proposal to the Parliament and Council as soon as possible in order to ensure that bunker oil for engine fuel in new ships is stored in safer, double-hull tanks since freight or container ships often contain heavy fuel as engine fuel in their bunkers the quantity of which may considerably exceed the cargoes of smaller oil tankers; considers that, before submitting such a proposal, the Commission should ascertain whether or not the existing IMO rules laid down in Resolution MEPC.141(54) are sufficient to guarantee the safe transport of bunker oil used as fuel;"

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_en_de model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_de | 41.337 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
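Legal passages like the example above can exceed the 512-token sequence length used in training. One way to guard against that is to truncate at tokenization time; a sketch under the assumption that clipping to the training length is acceptable for your use case (the `en_text` value here is a shortened illustrative input):

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_multitask_en_de")
model = AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_de")

en_text = "Reiterates its call on the Commission to submit a proposal to the Parliament and Council as soon as possible."
# Clip anything beyond the 512-token training sequence length.
inputs = tokenizer(en_text, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```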
{"language": "English Deustch", "tags": ["translation English Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Reiterates its call on the Commission to submit a proposal to the Parliament and Council as soon as possible in order to ensure that bunker oil for engine fuel in new ships is stored in safer, double-hull tanks since freight or container ships often contain heavy fuel as engine fuel in their bunkers the quantity of which may considerably exceed the cargoes of smaller oil tankers; considers that, before submitting such a proposal, the Commission should ascertain whether or not the existing IMO rules laid down in Resolution MEPC.141(54) are sufficient to guarantee the safe transport of bunker oil used as fuel;"}]}
SEBIS/legal_t5_small_multitask_en_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_en_es model

Model for translating legal text from English to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_en_es model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from English to Spanish.

### How to use

Here is how to use this model to translate legal text from English to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_es"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_en_es", do_lower_case=False, skip_special_tokens=True),
    device=0
)

en_text = "Amendment 14 Article 5, paragraph 1, point (a)"

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_en_es model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_es | 37.404 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "English Spanish", "tags": ["translation English Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Amendment 14 Article 5, paragraph 1, point (a)"}]}
SEBIS/legal_t5_small_multitask_en_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_en_fr model

Model for translating legal text from English to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_en_fr model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from English to French.

### How to use

Here is how to use this model to translate legal text from English to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_fr"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_en_fr", do_lower_case=False, skip_special_tokens=True),
    device=0
)

en_text = "Article 2(b), sub-heading"

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_en_fr model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_fr | 38.063 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "English French", "tags": ["translation English French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Article 2(b), sub-heading"}]}
SEBIS/legal_t5_small_multitask_en_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_en_it model

Model for translating legal text from English to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_en_it model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from English to Italian.

### How to use

Here is how to use this model to translate legal text from English to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_it"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_en_it", do_lower_case=False, skip_special_tokens=True),
    device=0
)

en_text = "WRITTEN QUESTION E-1184/07"

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_en_it model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_it | 47.070 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "English Italian", "tags": ["translation English Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "WRITTEN QUESTION E-1184/07"}]}
SEBIS/legal_t5_small_multitask_en_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_en_sv model

Model for translating legal text from English to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_en_sv model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from English to Swedish.

### How to use

Here is how to use this model to translate legal text from English to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_sv"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_en_sv", do_lower_case=False, skip_special_tokens=True),
    device=0
)

en_text = "whereas enlargement to Bulgaria and Romania should be effective in 2007,"

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_en_sv model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_sv | 47.968 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "English Swedish", "tags": ["translation English Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "whereas enlargement to Bulgaria and Romania should be effective in 2007,"}]}
SEBIS/legal_t5_small_multitask_en_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_es_cs model

Model for translating legal text from Spanish to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_es_cs model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Spanish to Czech.

### How to use

Here is how to use this model to translate legal text from Spanish to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_cs"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_es_cs", do_lower_case=False, skip_special_tokens=True),
    device=0
)

es_text = "La política pesquera supone que se tenga en cuenta un gran número de dimensiones – social, medioambiental, económica – lo que exige un enfoque integrado y equilibrado, incompatible con una visión que los sobrestima, en particular, mediante una definición a priori de cualquier jerarquía de prioridades."

pipeline([es_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_es_cs model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_cs | 47.673 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
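The unigram vocabulary described under Preprocessing can be reproduced in outline with the `sentencepiece` library. The input file name and vocabulary size below are placeholders; the cards do not publish the exact training settings:

```python
import sentencepiece as spm

# Train a unigram vocabulary model over the combined parallel corpus.
# "parallel_corpus.txt" and vocab_size=32_000 are assumptions, not the original values.
spm.SentencePieceTrainer.train(
    input="parallel_corpus.txt",
    model_prefix="legal_t5_small_vocab",
    vocab_size=32_000,
    model_type="unigram",
)
```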
{"language": "Spanish Cszech", "tags": ["translation Spanish Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "La pol\u00edtica pesquera supone que se tenga en cuenta un gran n\u00famero de dimensiones \u2013 social, medioambiental, econ\u00f3mica \u2013 lo que exige un enfoque integrado y equilibrado, incompatible con una visi\u00f3n que los sobrestima, en particular, mediante una definici\u00f3n a priori de cualquier jerarqu\u00eda de prioridades."}]}
SEBIS/legal_t5_small_multitask_es_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Spanish Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_es_de model

Model for translating legal text from Spanish to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_es_de model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Spanish to German.

### How to use

Here is how to use this model to translate legal text from Spanish to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_de"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_es_de", do_lower_case=False, skip_special_tokens=True),
    device=0
)

es_text = "Estudios y publicaciones realizados por el Parlamento Europeo"

pipeline([es_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_es_de model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_de | 41.196 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Spanish Deustch", "tags": ["translation Spanish Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Estudios y publicaciones realizados por el Parlamento Europeo"}]}
SEBIS/legal_t5_small_multitask_es_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Spanish Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_es_en model

Model for translating legal text from Spanish to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_es_en model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Spanish to English.

### How to use

Here is how to use this model to translate legal text from Spanish to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_en"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_es_en", do_lower_case=False, skip_special_tokens=True),
    device=0
)

es_text = "PPE-DE: 6', PSE: 6', ALDE: 5', Verts/ALE: 4', GUE/NGL: 4', IND/DEM:4', UEN: 4', NI: 4'"

pipeline([es_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_es_en model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_en | 36.607 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Spanish English", "tags": ["translation Spanish English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "PPE-DE: 6', PSE: 6', ALDE: 5', Verts/ALE: 4', GUE/NGL: 4', IND/DEM:4', UEN: 4', NI: 4'"}]}
SEBIS/legal_t5_small_multitask_es_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Spanish English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_es_fr model

Model for translating legal text from Spanish to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_es_fr model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Spanish to French.

### How to use

Here is how to use this model to translate legal text from Spanish to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_fr"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_es_fr", do_lower_case=False, skip_special_tokens=True),
    device=0
)

es_text = "Fecha del anuncio en el Pleno"

pipeline([es_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_es_fr model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_fr | 41.523 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Spanish French", "tags": ["translation Spanish French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Fecha del anuncio en el Pleno"}]}
SEBIS/legal_t5_small_multitask_es_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Spanish French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_es_it model

Model for translating legal text from Spanish to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_es_it model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Spanish to Italian.

### How to use

Here is how to use this model to translate legal text from Spanish to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_it"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_es_it", do_lower_case=False, skip_special_tokens=True),
    device=0
)

es_text = "Por el Parlamento Europeo Por el Consejo"

pipeline([es_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_es_it model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_it | 37.386 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
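The pipeline also accepts a list of sentences, which is convenient for translating several short legal fragments in one call. A sketch assuming the standard translation pipeline output format (a list of dicts with a `translation_text` key); the second sentence is an illustrative addition, not from this card:

```python
es_texts = [
    "Por el Parlamento Europeo Por el Consejo",
    "Fecha del anuncio en el Pleno",  # illustrative extra input
]
# Reuses the `pipeline` object constructed in the example above.
for result in pipeline(es_texts, max_length=512):
    print(result["translation_text"])
```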
{"language": "Spanish Italian", "tags": ["translation Spanish Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Por el Parlamento Europeo Por el Consejo"}]}
SEBIS/legal_t5_small_multitask_es_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Spanish Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_es_sv model

Model for translating legal text from Spanish to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_es_sv model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Spanish to Swedish.

### How to use

Here is how to use this model to translate legal text from Spanish to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_sv"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_es_sv", do_lower_case=False, skip_special_tokens=True),
    device=0
)

es_text = "Tiempo de uso de la palabra ( artículo 149 del Reglamento PE)"

pipeline([es_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_es_sv model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_sv | 37.975 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Spanish Swedish", "tags": ["translation Spanish Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Tiempo de uso de la palabra ( art\u00edculo 149 del Reglamento PE)"}]}
SEBIS/legal_t5_small_multitask_es_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Spanish Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_fr_cs model

Model for translating legal text from French to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_fr_cs model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from French to Czech.

### How to use

Here is how to use this model to translate legal text from French to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_fr_cs"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_fr_cs", do_lower_case=False, skip_special_tokens=True),
    device=0
)

fr_text = "BUDG – Décision: aucun avis"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_fr_cs model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_fr_cs | 44.499 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French Cszech", "tags": ["translation French Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "BUDG \u2013 D\u00e9cision: aucun avis"}]}
SEBIS/legal_t5_small_multitask_fr_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_multitask_fr_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_fr_en model

Model for translating legal text from French to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.

## Model description

No pretraining is involved in the case of the legal_t5_small_multitask_fr_en model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from French to English.

### How to use

Here is how to use this model to translate legal text from French to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_fr_en"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_fr_en", do_lower_case=False, skip_special_tokens=True),
    device=0
)

fr_text = "Raül Romeva i Rueda (Verts/ALE)"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_fr_en model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_fr_en | 39.123 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
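The cards do not state which BLEU implementation produced the reported scores. A common, reproducible choice is `sacrebleu`; a sketch with toy strings (a real evaluation would loop over the held-out test split):

```python
import sacrebleu

# Toy example: one hypothesis against one reference translation.
hypotheses = ["Motion for a resolution"]
references = [["Motion for a resolution"]]  # one inner list per reference set
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```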
{"language": "French English", "tags": ["translation French English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Ra\u00fcl Romeva i Rueda (Verts/ALE)"}]}
SEBIS/legal_t5_small_multitask_fr_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_fr_es model

Model for translating legal text from French to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_fr_es model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from French to Spanish.

### How to use

Here is how to use this model to translate legal text from French to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_fr_es"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_fr_es", do_lower_case=False, skip_special_tokens=True),
    device=0
)

fr_text = "+ lettre autorités suédoises"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_fr_es model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_fr_es | 43.807 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
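The Preprocessing section above describes a unigram vocabulary model trained on 88M lines of the parallel corpus. As an illustration only, a comparable SentencePiece unigram model could be trained roughly as follows; the input file name and vocabulary size are hypothetical stand-ins, since the card does not publish the exact settings used for these checkpoints.

```python
import sentencepiece as spm

# Train a unigram SentencePiece model on a plain-text corpus
# (one sentence per line). "corpus.txt" and vocab_size are hypothetical.
spm.SentencePieceTrainer.train(
    input="corpus.txt",
    model_prefix="legal_t5_unigram",
    vocab_size=32000,
    model_type="unigram",
    character_coverage=1.0,
)

# Load the trained model and segment a sample sentence.
sp = spm.SentencePieceProcessor(model_file="legal_t5_unigram.model")
print(sp.encode("+ lettre autorités suédoises", out_type=str))
```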
{"language": "French Spanish", "tags": ["translation French Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "+ lettre autorit\u00e9s su\u00e9doises"}]}
SEBIS/legal_t5_small_multitask_fr_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_fr_it model

Model for translating legal text from French to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_fr_it model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from French to Italian.

### How to use

Here is how to use this model to translate legal text from French to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_fr_it"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_fr_it", do_lower_case=False, skip_special_tokens=True),
    device=0
)

fr_text = "Situation humanitaire au Soudan"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_fr_it model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_fr_it | 41.140 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French Italian", "tags": ["translation French Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Situation humanitaire au Soudan"}]}
SEBIS/legal_t5_small_multitask_fr_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_fr_sv model

Model for translating legal text from French to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_fr_sv model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from French to Swedish.

### How to use

Here is how to use this model to translate legal text from French to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_fr_sv"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_fr_sv", do_lower_case=False, skip_special_tokens=True),
    device=0
)

fr_text = "**I Procédure de coopération (première lecture)"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_fr_sv model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_fr_sv | 39.947 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French Swedish", "tags": ["translation French Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "**I Proc\u00e9dure de coop\u00e9ration (premi\u00e8re lecture)"}]}
SEBIS/legal_t5_small_multitask_fr_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_it_cs model

Model for translating legal text from Italian to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_it_cs model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Italian to Czech.

### How to use

Here is how to use this model to translate legal text from Italian to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_cs"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_it_cs", do_lower_case=False, skip_special_tokens=True),
    device=0
)

it_text = "Per mobilitare il Fondo, la Commissione ha presentato all'autorità di bilancio una richiesta di storno per un importo complessivo di 667.823 EUR dalla riserva FEG (40 02 43) in stanziamenti d'impegno verso la linea di bilancio FEG."

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_it_cs model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_cs | 37.935 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian Cszech", "tags": ["translation Italian Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Per mobilitare il Fondo, la Commissione ha presentato all'autorit\u00e0 di bilancio una richiesta di storno per un importo complessivo di 667.823 EUR dalla riserva FEG (40 02 43) in stanziamenti d'impegno verso la linea di bilancio FEG."}]}
SEBIS/legal_t5_small_multitask_it_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_it_de model

Model for translating legal text from Italian to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_it_de model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Italian to German.

### How to use

Here is how to use this model to translate legal text from Italian to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_de"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_it_de", do_lower_case=False, skip_special_tokens=True),
    device=0
)

it_text = "di Alyn Smith (Verts/ALE)"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_it_de model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_de | 35.365 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian Deustch", "tags": ["translation Italian Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "di Alyn Smith (Verts/ALE)"}]}
SEBIS/legal_t5_small_multitask_it_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_it_en model

Model for translating legal text from Italian to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_it_en model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Italian to English.

### How to use

Here is how to use this model to translate legal text from Italian to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_en"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_it_en", do_lower_case=False, skip_special_tokens=True),
    device=0
)

it_text = "Con l’adesione all'area dell'euro questo procedimento non è stato più possibile."

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_it_en model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_en | 36.687 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian English", "tags": ["translation Italian English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Con l\u2019adesione all'area dell'euro questo procedimento non \u00e8 stato pi\u00f9 possibile."}]}
SEBIS/legal_t5_small_multitask_it_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_it_es model

Model for translating legal text from Italian to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_it_es model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Italian to Spanish.

### How to use

Here is how to use this model to translate legal text from Italian to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_es"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_it_es", do_lower_case=False, skip_special_tokens=True),
    device=0
)

it_text = "Interrogazione con richiesta di risposta scritta E-005808/2011"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_it_es model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_es | 36.980 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian Spanish", "tags": ["translation Italian Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Interrogazione con richiesta di risposta scritta E-005808/2011"}]}
SEBIS/legal_t5_small_multitask_it_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_it_fr model

Model for translating legal text from Italian to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_it_fr model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Italian to French.

### How to use

Here is how to use this model to translate legal text from Italian to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_fr"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_it_fr", do_lower_case=False, skip_special_tokens=True),
    device=0
)

it_text = "Gli Stati membri adottano le leggi, i regolamenti e le disposizioni amministrative necessari per ottemperare alla presente direttiva entro il 31 dicembre 2002 e ne informano immediatamente la Commissione."

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_it_fr model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_fr | 41.956 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian French", "tags": ["translation Italian French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Gli Stati membri adottano le leggi, i regolamenti e le disposizioni amministrative necessari per ottemperare alla presente direttiva entro il 31 dicembre 2002 e ne informano immediatamente la Commissione."}]}
SEBIS/legal_t5_small_multitask_it_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_it_sv model

Model for translating legal text from Italian to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_it_sv model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Italian to Swedish.

### How to use

Here is how to use this model to translate legal text from Italian to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_sv"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_it_sv", do_lower_case=False, skip_special_tokens=True),
    device=0
)

it_text = "Può il Commissario responsabile comunicare al Parlamento in che modo la DG Ricerca garantirà che l’Europa possa svolgere un ruolo di primo piano in questo sforzo globale di ricerca sul diabete?"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_it_sv model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_sv | 41.523 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian Swedish", "tags": ["translation Italian Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Pu\u00f2 il Commissario responsabile comunicare al Parlamento in che modo la DG Ricerca garantir\u00e0 che l\u2019Europa possa svolgere un ruolo di primo piano in questo sforzo globale di ricerca sul diabete?"}]}
SEBIS/legal_t5_small_multitask_it_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_sv_cs model

Model for translating legal text from Swedish to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_sv_cs model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Czech.

### How to use

Here is how to use this model to translate legal text from Swedish to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_cs"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_sv_cs", do_lower_case=False, skip_special_tokens=True),
    device=0
)

sv_text = "Standarderna för integrerat växtskydd bör tillämpas snabbare än vad kommissionen föreskrivit."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_sv_cs model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_cs | 45.058 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish Cszech", "tags": ["translation Swedish Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Standarderna f\u00f6r integrerat v\u00e4xtskydd b\u00f6r till\u00e4mpas snabbare \u00e4n vad kommissionen f\u00f6reskrivit."}]}
SEBIS/legal_t5_small_multitask_sv_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_sv_de model

Model for translating legal text from Swedish to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_sv_de model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to German.

### How to use

Here is how to use this model to translate legal text from Swedish to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_de"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_sv_de", do_lower_case=False, skip_special_tokens=True),
    device=0
)

sv_text = "Kan kommissionen bekräfta att i Olaf‑handlingar som samlats in inom ramen för denna granskning, daterade mellan 2000 och 2004, kan följande information hittas: —"

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_sv_de model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_de | 44.684 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
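The training-procedure section of these cards names AdaFactor with an inverse square root learning-rate schedule. For readers who want the equivalent setup in PyTorch, a rough sketch using the implementation shipped with `transformers` follows; treat the flag choices as assumptions rather than the original training configuration.

```python
from transformers import AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor, AdafactorSchedule

model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_multitask_sv_de")

# With relative_step=True, Adafactor applies its built-in inverse square
# root decay internally; AdafactorSchedule just mirrors that internal rate
# so it can be logged or handed to a Trainer.
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
    lr=None,
)
lr_schedule = AdafactorSchedule(optimizer)
```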
{"language": "Swedish Deustch", "tags": ["translation Swedish Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Kan kommissionen bekr\u00e4fta att i Olaf\u2011handlingar som samlats in inom ramen f\u00f6r denna granskning, daterade mellan 2000 och 2004, kan f\u00f6ljande information hittas: \u2014"}]}
SEBIS/legal_t5_small_multitask_sv_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_sv_en model

Model for translating legal text from Swedish to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_sv_en model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to English.

### How to use

Here is how to use this model to translate legal text from Swedish to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_en"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_sv_en", do_lower_case=False, skip_special_tokens=True),
    device=0
)

sv_text = "inlämnat av följande ledamöter:"

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_sv_en model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_en | 36.195 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish English", "tags": ["translation Swedish English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "inl\u00e4mnat av f\u00f6ljande ledam\u00f6ter:"}]}
SEBIS/legal_t5_small_multitask_sv_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_sv_es model

Model for translating legal text from Swedish to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_sv_es model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Spanish.

### How to use

Here is how to use this model to translate legal text from Swedish to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_es"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_sv_es", do_lower_case=False, skip_special_tokens=True),
    device=0
)

sv_text = "med beaktande av sin resolution av den 14 april 2005 om torkan i Portugal,"

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_sv_es model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_es | 35.506 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish Spanish", "tags": ["translation Swedish Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "med beaktande av sin resolution av den 14 april 2005 om torkan i Portugal,"}]}
SEBIS/legal_t5_small_multitask_sv_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_sv_fr model

Model for translating legal text from Swedish to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_sv_fr model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to French.

### How to use

Here is how to use this model to translate legal text from Swedish to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_fr"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_sv_fr", do_lower_case=False, skip_special_tokens=True),
    device=0
)

sv_text = "Europaparlamentet understryker att det stora antalet kvinnor och barn bland flyktingar och internt fördrivna som registrerats av internationella organ som resultat av väpnade konflikter och inbördeskrig är mycket oroväckande."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_sv_fr model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_fr | 45.790 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
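The BLEU figure in the table above can in principle be re-computed once the model's translations of the held-out test split are available. Below is a small sketch using the `sacrebleu` package; the hypothesis and reference sentences are placeholders, because the card does not ship the test files or the exact scoring configuration.

```python
import sacrebleu

# Placeholder data: in practice, `hypotheses` holds the model's output for
# every test sentence and `references` holds one reference stream of the
# same length.
hypotheses = ["Situation humanitaire au Soudan"]
references = [["La situation humanitaire au Soudan"]]

# corpus_bleu expects a list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.3f}")
```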
{"language": "Swedish French", "tags": ["translation Swedish French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Europaparlamentet understryker att det stora antalet kvinnor och barn bland flyktingar och internt f\u00f6rdrivna som registrerats av internationella organ som resultat av v\u00e4pnade konflikter och inb\u00f6rdeskrig \u00e4r mycket orov\u00e4ckande."}]}
SEBIS/legal_t5_small_multitask_sv_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_multitask_sv_it model

Model for translating legal text from Swedish to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl, and DCEP) covering 42 language pairs, together with an unsupervised masked-language-model prediction task.

## Model description

No pretraining is involved for the legal_t5_small_multitask_sv_it model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Italian.

### How to use

Here is how to use this model to translate legal text from Swedish to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_it"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_multitask_sv_it", do_lower_case=False, skip_special_tokens=True),
    device=0
)

sv_text = "De nationella tillsynsmyndigheterna får använda"

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_multitask_sv_it model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_it | 44.242 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish Italian", "tags": ["translation Swedish Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "De nationella tillsynsmyndigheterna f\u00e5r anv\u00e4nda"}]}
SEBIS/legal_t5_small_multitask_sv_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_summ_cs model

Model for summarization of legal text written in Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on a parallel corpus from JRC-Acquis.

## Model description

legal_t5_small_summ_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for summarization of legal texts written in Czech.

### How to use

Here is how to use this model to summarize legal text written in Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_cs"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_summ_cs", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "(2006/C 67/15) (Text s významem pro EHP) Dne 10. března 2006 se Komise rozhodla nevznést námitky proti výše uvedenému spojení a prohlásit ho za slučitelné se společným trhem. Toto rozhodnutí je založeno na čl. 6 odst. 1 písm. b) nařízení Rady (ES) č. 139/2004. Celý text rozhodnutí je přístupný pouze v angličtině a bude uveřejněn poté, co bude zbaven obchodního tajemství, které může případně obsahovat. Text bude dosažitelný: - na webové stránce Europa – hospodářská soutěž (http://europa.eu.int/comm/competition/mergers/cases/). Tato webová stránka umožňuje vyhledat jednotlivá rozhodnutí o spojení, a to včetně společnosti, čísla případu, data a indexu odvětví hospodářství. - v elektronické podobě na webové stránce EUR-Lex, pod dokumentem č. 32006M4093. EUR-Lex umožňuje přístup k Evropskému právu přes Internet. (http://europa.eu.int/eur-lex/lex) -------------------------------------------------- "

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_summ_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 18 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the summarization test dataset, it achieves the following results:

Test results :

| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_cs | 75.86 | 65.82 | 74.95 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
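The Rouge numbers reported above can be checked with any standard ROUGE implementation once system summaries for the test set exist. Here is a minimal sketch with the `rouge-score` package; the candidate and reference summaries are invented placeholders, as the evaluation split is not included with the card.

```python
from rouge_score import rouge_scorer

# Placeholder pair; a real evaluation would aggregate scores over the
# whole summarization test set. Stemming is disabled because the Porter
# stemmer in rouge-score targets English, not Czech.
reference = "Komise se rozhodla nevznést námitky proti oznámenému spojení."
candidate = "Komise nevznesla námitky proti spojení."

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"], use_stemmer=False)
scores = scorer.score(reference, candidate)
for name, result in scores.items():
    print(name, f"F1 = {result.fmeasure:.4f}")
```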
{"language": "Cszech", "tags": ["summarization Cszech model"], "datasets": ["jrc-acquis"], "widget": [{"text": "(2006/C 67/15) (Text s v\u00fdznamem pro EHP) Dne 10. b\u0159ezna 2006 se Komise rozhodla nevzn\u00e9st n\u00e1mitky proti v\u00fd\u0161e uveden\u00e9mu spojen\u00ed a prohl\u00e1sit ho za slu\u010diteln\u00e9 se spole\u010dn\u00fdm trhem. Toto rozhodnut\u00ed je zalo\u017eeno na \u010dl. 6 odst. 1 p\u00edsm. b) na\u0159\u00edzen\u00ed Rady (ES) \u010d. 139/2004. Cel\u00fd text rozhodnut\u00ed je p\u0159\u00edstupn\u00fd pouze v angli\u010dtin\u011b a bude uve\u0159ejn\u011bn pot\u00e9, co bude zbaven obchodn\u00edho tajemstv\u00ed, kter\u00e9 m\u016f\u017ee p\u0159\u00edpadn\u011b obsahovat. Text bude dosa\u017eiteln\u00fd: - na webov\u00e9 str\u00e1nce Europa \u2013 hospod\u00e1\u0159sk\u00e1 sout\u011b\u017e (http://europa.eu.int/comm/competition/mergers/cases/). Tato webov\u00e1 str\u00e1nka umo\u017e\u0148uje vyhledat jednotliv\u00e1 rozhodnut\u00ed o spojen\u00ed, a to v\u010detn\u011b spole\u010dnosti, \u010d\u00edsla p\u0159\u00edpadu, data a indexu odv\u011btv\u00ed hospod\u00e1\u0159stv\u00ed. - v elektronick\u00e9 podob\u011b na webov\u00e9 str\u00e1nce EUR-Lex, pod dokumentem \u010d. 32006M4093. EUR-Lex umo\u017e\u0148uje p\u0159\u00edstup k Evropsk\u00e9mu pr\u00e1vu p\u0159es Internet. (http://europa.eu.int/eur-lex/lex) -------------------------------------------------- "}]}
SEBIS/legal_t5_small_summ_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "summarization Cszech model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_summ_de model

Model for summarization of legal text written in German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on a parallel corpus from JRC-Acquis.

## Model description

legal_t5_small_summ_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for summarization of legal texts written in German.

### How to use

Here is how to use this model to summarize legal text written in German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_de"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_summ_de", do_lower_case=False, skip_special_tokens=True),
    device=0
)

de_text = "(90/365/EWG) DER RAT DER EUROPÄISCHEN GEMEINSCHAFTEN - gestützt auf den Vertrag zur Gründung der Europäischen Wirtschaftsgemeinschaft, insbesondere auf Artikel 235, auf Vorschlag der Kommission (1), nach Stellungnahme des Europäischen Parlaments (2), nach Stellungnahme des Wirtschafts- und Sozialausschusses (3), in Erwägung nachstehender Gründe: Gemäß Artikel 3 Buchstabe c) des Vertrages umfasst die Tätigkeit der Gemeinschaft, nach Maßgabe des Vertrages, die Beseitigung der Hindernisse für den freien Personenverkehr zwischen den Mitgliedstaaten. Artikel 8a des Vertrages sieht vor, daß der Binnenmarkt bis zum 31. Dezember 1992 zu verwirklichen ist. Der Binnenmarkt umfasst einen Raum ohne Binnengrenzen, in dem der freie Verkehr von Waren, Personen, Dienstleistungen und Kapital gemäß den Bestimmungen des Vertrages gewährleistet ist. Die Artikel 48 und 52 des Vertrages sehen die Freizuegigkeit der Arbeitnehmer und selbständig Erwerbstätigen vor, was ein Recht auf Aufenthalt in dem Mitgliedstaat beinhaltet, in dem sie ihr Berufsleben verbringen. Es empfiehlt sich, dieses Aufenthaltsrecht auch Personen zu gewähren, die aus dem Erwerbsleben ausgeschieden sind, auch wenn sie während ihres Berufslebens von dem Recht auf Freizuegigkeit keinen Gebrauch gemacht haben. Die Aufenthaltsberechtigten dürfen die öffentlichen Finanzen des Aufnahmemitgliedstaates nicht über Gebühr belasten. Nach Artikel 10 der Verordnung (EWG) Nr. 1408/71 (4) in der Fassung der Verordnung (EWG) Nr. 1390/81 (5) haben die Empfänger von Geldleistungen bei Invalidität und Alter und die Bezieher von Renten bei Arbeitsunfällen oder Berufskrankheiten auch dann weiterhin Anspruch auf diese Leistungen und Renten, wenn sie im Gebiet eines anderen Mitgliedstaates als des Staates wohnen, auf dessen Gebiet der zur Zahlung verpflichtete Träger seinen Sitz hat. Die Ausübung des Aufenthaltsrechts wird erst dann eine reale Möglichkeit, wenn es auch den Familienangehörigen zugestanden wird. Für die von dieser Richtlinie Begünstigten sollte eine Verwaltungsregelung entsprechend der insbesondere in der Richtlinie 68/360/EWG (6) und in der Richtlinie 64/221/EWG (7) vorgesehenen Regelung gelten.
Der Vertrag enthält Befugnisse für den Erlaß der vorliegenden Richtlinie nur in Artikel 235 - HAT FOLGENDE RICHTLINIE ERLASSEN: Artikel 1 (1) Die Mitgliedstaaten gewähren den Angehörigen der Mitgliedstaaten, die in der Gemeinschaft eine Tätigkeit als Arbeitnehmer oder als Selbständige ausgeuebt haben, sowie deren Familienangehörigen nach der Definition von Absatz 2 unter der Bedingung das Aufenthaltsrecht, daß sie eine Invaliditäts-, Vorruhestands- oder Altersrente oder eine Rente wegen Arbeitsunfalls oder Berufskrankheit in einer solchen Höhe beziehen, daß sie während ihres Aufenthalts nicht die Sozialhilfe des Aufnahmemitgliedstaats in Anspruch nehmen müssen, und einen Krankenversicherungsschutz genießen, der im Aufnahmemitgliedstaat alle Risiken abdeckt. Die Existenzmittel des Antragstellers gelten als ausreichend, wenn sie einen Betrag übersteigen, unterhalb dessen der Aufnahmemitgliedstaat seinen Staatsangehörigen aufgrund der persönlichen Situation des Antragstellers und gegebenenfalls der Situation der nach Absatz 2 aufgenommenen Personen Sozialhilfe gewähren kann. Ist Unterabsatz 2 in einem Mitgliedstaat nicht anwendbar, so gelten die Existenzmittel des Antragstellers als ausreichend, wenn sie den Betrag der Grundrente der Sozialversicherung übersteigen, die der Aufnahmemitgliedstaat zahlt. (2) Bei dem Aufenthaltsberechtigten dürfen folgende Personen ungeachtet ihrer Staatsangehörigkeit in einem anderen Mitgliedstaat Wohnung nehmen: a) sein Ehegatte sowie die Verwandten in absteigender Linie, denen Unterhalt gewährt wird; b) seine Verwandten und die Verwandten seines Ehegatten in aufsteigender Linie, denen er Unterhalt gewährt. Artikel 2 (1) Zum Nachweis des Aufenthaltsrechts wird eine Bescheinigung, die »Aufenthaltserlaubnis für Staatsangehörige eines EWG-Mitgliedstaates%quot%, erteilt, deren Gültigkeit auf fünf Jahre mit Verlängerungsmöglichkeit begrenzt werden kann. Die Mitgliedstaaten können jedoch die Erneuerung der Aufenthaltserlaubnis nach den ersten zwei Aufenthaltsjahren verlangen, wenn sie dies für erforderlich halten. Einem Familienmitglied, das nicht die Staatsangehörigkeit eines Mitgliedstaats besitzt, wird ein Aufenthaltsdokument mit der gleichen Gültigkeitsdauer ausgestellt wie dem Staatsangehörigen, von dem es seine Rechte herleitet. Für die Erteilung der Aufenthaltserlaubnis oder des Aufenthaltsdokuments darf der Mitgliedstaat vom Antragsteller nur die Vorlage eines gültigen Personalausweises bzw. Reisepasses sowie den Nachweis verlangen, daß er die Voraussetzungen des Artikels 1 erfuellt. (2) Die Artikel 2 und 3, Artikel 6 Absatz 1 Buchstabe a) und Absatz 2 sowie Artikel 9 der Richtlinie 68/360/EWG finden auf die von dieser Richtlinie Begünstigten entsprechende Anwendung. Der Ehegatte eines Staatsangehörigen eines Mitgliedstaats, der im Hoheitsgebiet eines Mitgliedstaats aufenthaltsberechtigt ist, sowie die Kinder dieses Staatsangehörigen, denen er Unterhalt gewährt, haben, auch wenn sie die Staatsangehörigkeit eines Mitgliedstaats nicht besitzen, das Recht, im gesamten Hoheitsgebiet dieses Mitgliedstaats jedwede Tätigkeit im Lohn- oder Gehaltsverhältnis oder jedwede selbständige Erwerbstätigkeit auszuüben. Die Mitgliedstaaten dürfen nur aus Gründen der öffentlichen Ordnung, der öffentlichen Sicherheit oder der Volksgesundheit von den Bestimmungen dieser Richtlinie abweichen. In diesem Fall findet die Richtlinie 64/221/EWG Anwendung. (3) Die vorliegende Richtlinie berührt nicht die geltenden Rechtsvorschriften für den Erwerb von Zweitwohnungen. 
Artikel 3 Das Aufenthaltsrecht besteht, solange die Berechtigten die Bedingungen des Artikels 1 erfuellen. Artikel 4 Die Kommission arbeitet spätestens drei Jahre nach dem Beginn der Anwendung dieser Richtlinie und anschließend alle drei Jahre einen Bericht über ihre Anwendung aus und legt ihn dem Europäischen Parlament und dem Rat vor. Artikel 5 Die Mitgliedstaaten setzen die erforderlichen Rechts- und Verwaltungsvorschriften in Kraft, um dieser Richtlinie bis spätestens 30. Juni 1992 nachzukommen. Sie setzen die Kommission unverzueglich davon in Kenntnis. Artikel 6 Diese Richtlinie ist an die Mitgliedstaaten gerichtet. Geschehen zu Luxemburg am 28. Juni 1990. Im Namen des Rates Der Präsident M. GEOGHEGAN-QUINN (1) ABl. Nr. C 191 vom 28. 7. 1989, S. 3 und ABl. Nr. C 26 vom 3. 2. 1990, S. 19. (2) Stellungnahme vom 13. Juni 1990 (noch nicht im Amtsblatt veröffentlicht). (3) ABl. Nr. C 329 vom 30. 12. 1989, S. 25. (4) ABl. Nr. L 149 vom 5. 7. 1971, S. 2. (5) ABl. Nr. L 143 vom 29. 5. 1981, S. 1. (6) ABl. Nr. L 257 vom 19. 10. 1968, S. 13. (7) ABl. Nr. 56 vom 4. 4. 1964, S. 850/64. "

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_summ_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 23 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the summarization test dataset, it achieves the following results (an unofficial scoring sketch follows at the end of this card):

Test results :

| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_de | 78.03 | 68.84 | 76.95 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
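The Rouge figures above come without an evaluation script. As a rough, unofficial sketch of how such scores could be computed for this model's outputs — the `rouge_score` package, the naive sentence splitting, and the placeholder reference summary are all assumptions, not part of the original release:

```python
# Unofficial sketch: ROUGE-1/2/Lsum F-measures for one generated summary.
# `rouge_score`, the naive sentence splitting, and the placeholder
# reference are assumptions, not the authors' evaluation setup.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"], use_stemmer=False)

reference = "..."  # gold JRC-Acquis summary for the document (placeholder)
prediction = pipeline([de_text], max_length=512)[0]["translation_text"]

# rougeLsum expects one sentence per line, so split on sentence ends first.
scores = scorer.score(reference.replace(". ", ".\n"), prediction.replace(". ", ".\n"))
for name, score in scores.items():
    print(f"{name}: F1 = {100 * score.fmeasure:.2f}")
```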
{"language": "Deustch", "tags": ["summarization Deustch model"], "datasets": ["jrc-acquis"], "widget": [{"text": "(90/365/EWG) DER RAT DER EUROP\u00c4ISCHEN GEMEINSCHAFTEN - gest\u00fctzt auf den Vertrag zur Gr\u00fcndung der Europ\u00e4ischen Wirtschaftsgemeinschaft, insbesondere auf Artikel 235, auf Vorschlag der Kommission (1), nach Stellungnahme des Europ\u00e4ischen Parlaments (2), nach Stellungnahme des Wirtschafts- und Sozialausschusses (3), in Erw\u00e4gung nachstehender Gr\u00fcnde: Gem\u00e4\u00df Artikel 3 Buchstabe c) des Vertrages umfasst die T\u00e4tigkeit der Gemeinschaft, nach Ma\u00dfgabe des Vertrages, die Beseitigung der Hindernisse f\u00fcr den freien Personenverkehr zwischen den Mitgliedstaaten. Artikel 8a des Vertrages sieht vor, da\u00df der Binnenmarkt bis zum 31. Dezember 1992 zu verwirklichen ist. Der Binnenmarkt umfasst einen Raum ohne Binnengrenzen, in dem der freie Verkehr von Waren, Personen, Dienstleistungen und Kapital gem\u00e4\u00df den Bestimmungen des Vertrages gew\u00e4hrleistet ist. Die Artikel 48 und 52 des Vertrages sehen die Freizuegigkeit der Arbeitnehmer und selbst\u00e4ndig Erwerbst\u00e4tigen vor, was ein Recht auf Aufenthalt in dem Mitgliedstaat beinhaltet, in dem sie ihr Berufsleben verbringen. Es empfiehlt sich, dieses Aufenthaltsrecht auch Personen zu gew\u00e4hren, die aus dem Erwerbsleben ausgeschieden sind, auch wenn sie w\u00e4hrend ihres Berufslebens von dem Recht auf Freizuegigkeit keinen Gebrauch gemacht haben. Die Aufenthaltsberechtigten d\u00fcrfen die \u00f6ffentlichen Finanzen des Aufnahmemitgliedstaates nicht \u00fcber Geb\u00fchr belasten. Nach Artikel 10 der Verordnung (EWG) Nr. 1408/71 (4) in der Fassung der Verordnung (EWG) Nr. 1390/81 (5) haben die Empf\u00e4nger von Geldleistungen bei Invalidit\u00e4t und Alter und die Bezieher von Renten bei Arbeitsunf\u00e4llen oder Berufskrankheiten auch dann weiterhin Anspruch auf diese Leistungen und Renten, wenn sie im Gebiet eines anderen Mitgliedstaates als des Staates wohnen, auf dessen Gebiet der zur Zahlung verpflichtete Tr\u00e4ger seinen Sitz hat. Die Aus\u00fcbung des Aufenthaltsrechts wird erst dann eine reale M\u00f6glichkeit, wenn es auch den Familienangeh\u00f6rigen zugestanden wird. F\u00fcr die von dieser Richtlinie Beg\u00fcnstigten sollte eine Verwaltungsregelung entsprechend der insbesondere in der Richtlinie 68/360/EWG (6) und in der Richtlinie 64/221/EWG (7) vorgesehenen Regelung gelten. Der Vertrag enth\u00e4lt Befugnisse f\u00fcr den Erla\u00df der vorliegenden Richtlinie nur in Artikel 235 - HAT FOLGENDE RICHTLINIE ERLASSEN: Artikel 1 (1) Die Mitgliedstaaten gew\u00e4hren den Angeh\u00f6rigen der Mitgliedstaaten, die in der Gemeinschaft eine T\u00e4tigkeit als Arbeitnehmer oder als Selbst\u00e4ndige ausgeuebt haben, sowie deren Familienangeh\u00f6rigen nach der Definition von Absatz 2 unter der Bedingung das Aufenthaltsrecht, da\u00df sie eine Invalidit\u00e4ts-, Vorruhestands- oder Altersrente oder eine Rente wegen Arbeitsunfalls oder Berufskrankheit in einer solchen H\u00f6he beziehen, da\u00df sie w\u00e4hrend ihres Aufenthalts nicht die Sozialhilfe des Aufnahmemitgliedstaats in Anspruch nehmen m\u00fcssen, und einen Krankenversicherungsschutz genie\u00dfen, der im Aufnahmemitgliedstaat alle Risiken abdeckt. 
Die Existenzmittel des Antragstellers gelten als ausreichend, wenn sie einen Betrag \u00fcbersteigen, unterhalb dessen der Aufnahmemitgliedstaat seinen Staatsangeh\u00f6rigen aufgrund der pers\u00f6nlichen Situation des Antragstellers und gegebenenfalls der Situation der nach Absatz 2 aufgenommenen Personen Sozialhilfe gew\u00e4hren kann. Ist Unterabsatz 2 in einem Mitgliedstaat nicht anwendbar, so gelten die Existenzmittel des Antragstellers als ausreichend, wenn sie den Betrag der Grundrente der Sozialversicherung \u00fcbersteigen, die der Aufnahmemitgliedstaat zahlt. (2) Bei dem Aufenthaltsberechtigten d\u00fcrfen folgende Personen ungeachtet ihrer Staatsangeh\u00f6rigkeit in einem anderen Mitgliedstaat Wohnung nehmen: a) sein Ehegatte sowie die Verwandten in absteigender Linie, denen Unterhalt gew\u00e4hrt wird; b) seine Verwandten und die Verwandten seines Ehegatten in aufsteigender Linie, denen er Unterhalt gew\u00e4hrt. Artikel 2 (1) Zum Nachweis des Aufenthaltsrechts wird eine Bescheinigung, die \u00bbAufenthaltserlaubnis f\u00fcr Staatsangeh\u00f6rige eines EWG-Mitgliedstaates%quot%, erteilt, deren G\u00fcltigkeit auf f\u00fcnf Jahre mit Verl\u00e4ngerungsm\u00f6glichkeit begrenzt werden kann. Die Mitgliedstaaten k\u00f6nnen jedoch die Erneuerung der Aufenthaltserlaubnis nach den ersten zwei Aufenthaltsjahren verlangen, wenn sie dies f\u00fcr erforderlich halten. Einem Familienmitglied, das nicht die Staatsangeh\u00f6rigkeit eines Mitgliedstaats besitzt, wird ein Aufenthaltsdokument mit der gleichen G\u00fcltigkeitsdauer ausgestellt wie dem Staatsangeh\u00f6rigen, von dem es seine Rechte herleitet. F\u00fcr die Erteilung der Aufenthaltserlaubnis oder des Aufenthaltsdokuments darf der Mitgliedstaat vom Antragsteller nur die Vorlage eines g\u00fcltigen Personalausweises bzw. Reisepasses sowie den Nachweis verlangen, da\u00df er die Voraussetzungen des Artikels 1 erfuellt. (2) Die Artikel 2 und 3, Artikel 6 Absatz 1 Buchstabe a) und Absatz 2 sowie Artikel 9 der Richtlinie 68/360/EWG finden auf die von dieser Richtlinie Beg\u00fcnstigten entsprechende Anwendung. Der Ehegatte eines Staatsangeh\u00f6rigen eines Mitgliedstaats, der im Hoheitsgebiet eines Mitgliedstaats aufenthaltsberechtigt ist, sowie die Kinder dieses Staatsangeh\u00f6rigen, denen er Unterhalt gew\u00e4hrt, haben, auch wenn sie die Staatsangeh\u00f6rigkeit eines Mitgliedstaats nicht besitzen, das Recht, im gesamten Hoheitsgebiet dieses Mitgliedstaats jedwede T\u00e4tigkeit im Lohn- oder Gehaltsverh\u00e4ltnis oder jedwede selbst\u00e4ndige Erwerbst\u00e4tigkeit auszu\u00fcben. Die Mitgliedstaaten d\u00fcrfen nur aus Gr\u00fcnden der \u00f6ffentlichen Ordnung, der \u00f6ffentlichen Sicherheit oder der Volksgesundheit von den Bestimmungen dieser Richtlinie abweichen. In diesem Fall findet die Richtlinie 64/221/EWG Anwendung. (3) Die vorliegende Richtlinie ber\u00fchrt nicht die geltenden Rechtsvorschriften f\u00fcr den Erwerb von Zweitwohnungen. Artikel 3 Das Aufenthaltsrecht besteht, solange die Berechtigten die Bedingungen des Artikels 1 erfuellen. Artikel 4 Die Kommission arbeitet sp\u00e4testens drei Jahre nach dem Beginn der Anwendung dieser Richtlinie und anschlie\u00dfend alle drei Jahre einen Bericht \u00fcber ihre Anwendung aus und legt ihn dem Europ\u00e4ischen Parlament und dem Rat vor. Artikel 5 Die Mitgliedstaaten setzen die erforderlichen Rechts- und Verwaltungsvorschriften in Kraft, um dieser Richtlinie bis sp\u00e4testens 30. Juni 1992 nachzukommen. 
Sie setzen die Kommission unverzueglich davon in Kenntnis. Artikel 6 Diese Richtlinie ist an die Mitgliedstaaten gerichtet. Geschehen zu Luxemburg am 28. Juni 1990. Im Namen des Rates Der Pr\u00e4sident M. GEOGHEGAN-QUINN (1) ABl. Nr. C 191 vom 28. 7. 1989, S. 3 und ABl. Nr. C 26 vom 3. 2. 1990, S. 19. (2) Stellungnahme vom 13. Juni 1990 (noch nicht im Amtsblatt ver\u00f6ffentlicht). (3) ABl. Nr. C 329 vom 30. 12. 1989, S. 25. (4) ABl. Nr. L 149 vom 5. 7. 1971, S. 2. (5) ABl. Nr. L 143 vom 29. 5. 1981, S. 1. (6) ABl. Nr. L 257 vom 19. 10. 1968, S. 13. (7) ABl. Nr. 56 vom 4. 4. 1964, S. 850/64. "}]}
SEBIS/legal_t5_small_summ_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "summarization Deustch model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_summ_en model

Model for summarization of legal text written in English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis.

## Model description

legal_t5_small_summ_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for summarization of legal texts written in English.

### How to use

Here is how to use this model to summarize legal text written in English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_en"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_summ_en", do_lower_case=False, skip_special_tokens=True),
    device=0
)

en_text = "THE COMMISSION OF THE EUROPEAN COMMUNITIES, Having regard to the Treaty establishing the European Community, Having regard to Council Regulation (EC) No 1255/1999 of 17 May 1999 on the common organisation of the market in milk and milk products [1], and in particular Article 15 thereof, Whereas: (1) Article 7(1) of Commission Regulation (EC) No 2799/1999 [2] fixes the amount of aid for skimmed milk and skimmed-milk powder intended for animal feed taking into account the factors set out in Article 11(2) of Regulation (EC) No 1255/1999. In view of the developments in the market price of skimmed-milk powder, of the increase in the market prices for competing proteins, and of the reduction of the supply of skimmed-milk powder, the amount of aid should be reduced. (2) Regulation (EC) No 2799/1999 should therefore be amended accordingly. (3) The Management Committee for Milk and Milk Products has not delivered an opinion within the time-limit set by its chairman, HAS ADOPTED THIS REGULATION: Article 1 In Article 7 of Regulation (EC) No 2799/1999, paragraph 1 is replaced by the following: "1. Aid is fixed at: (a) EUR 1,62 per 100 kg of skimmed milk with a protein content of not less than 35,6 % of the non-fatty dry extract; (b) EUR 1,42 per 100 kg of skimmed milk with a protein content of not less than 31,4 % but less than 35,6 % of the non-fatty dry extract; (c) EUR 20,00 per 100 kg of skimmed-milk powder with a protein content of not less than 35,6 % of the non-fatty dry extract; (d) EUR 17,64 per 100 kg of skimmed-milk powder with a protein content of not less than 31,4 % but less than 35,6 % of the non-fatty dry extract." Article 2 This Regulation shall enter into force on the day following its publication in the Official Journal of the European Union. This Regulation shall be binding in its entirety and directly applicable in all Member States. Done at Brussels, 19 April 2006. For the Commission Mariann Fischer Boel Member of the Commission [1] OJ L 160, 26.6.1999, p. 48. Regulation as last amended by Regulation (EC) No 1913/2005 (OJ L 307, 25.11.2005, p. 2). [2] OJ L 340, 31.12.1999, p. 3. Regulation as last amended by Regulation (EC) No 1194/2005 (OJ L 194, 26.7.2005, p. 7). 
-------------------------------------------------- "

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_summ_en model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 22 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model (a vocabulary-training sketch follows at the end of this card).

## Evaluation results

When the model is used on the summarization test dataset, it achieves the following results:

Test results :

| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_en | 78.11 | 68.78 | 77.0 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
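The unigram vocabulary described under Preprocessing can be approximated with SentencePiece. A minimal sketch, in which the input file name, vocabulary size, and character coverage are assumptions rather than the authors' settings:

```python
# Minimal sketch of the preprocessing step: training a unigram
# SentencePiece vocabulary. The file name, vocab_size and
# character_coverage are assumptions, not the legal_t5 settings.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="jrc_acquis_all_pairs.txt",  # one sentence per line (hypothetical file)
    model_prefix="legal_t5_spm",
    model_type="unigram",              # unigram segmentation, as stated in the card
    vocab_size=32000,                  # assumed; not stated in the card
    character_coverage=1.0,            # keep accented legal text intact
)

sp = spm.SentencePieceProcessor(model_file="legal_t5_spm.model")
print(sp.encode("Regulation (EC) No 1255/1999", out_type=str))
```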
{"language": "English", "tags": ["summarization English model"], "datasets": ["jrc-acquis"], "widget": [{"text": "THE COMMISSION OF THE EUROPEAN COMMUNITIES, Having regard to the Treaty establishing the European Community, Having regard to Council Regulation (EC) No 1255/1999 of 17 May 1999 on the common organisation of the market in milk and milk products [1], and in particular Article 15 thereof, Whereas: (1) Article 7(1) of Commission Regulation (EC) No 2799/1999 [2] fixes the amount of aid for skimmed milk and skimmed-milk powder intended for animal feed taking into account the factors set out in Article 11(2) of Regulation (EC) No 1255/1999. In view of the developments in the market price of skimmed-milk powder, of the increase in the market prices for competing proteins, and of the reduction of the supply of skimmed-milk powder, the amount of aid should be reduced. (2) Regulation (EC) No 2799/1999 should therefore be amended accordingly. (3) The Management Committee for Milk and Milk Products has not delivered an opinion within the time-limit set by its chairman, HAS ADOPTED THIS REGULATION: Article 1 In Article 7 of Regulation (EC) No 2799/1999, paragraph 1 is replaced by the following: \"1. Aid is fixed at: (a) EUR 1,62 per 100 kg of skimmed milk with a protein content of not less than 35,6 % of the non-fatty dry extract; (b) EUR 1,42 per 100 kg of skimmed milk with a protein content of not less than 31,4 % but less than 35,6 % of the non-fatty dry extract; (c) EUR 20,00 per 100 kg of skimmed-milk powder with a protein content of not less than 35,6 % of the non-fatty dry extract; (d) EUR 17,64 per 100 kg of skimmed-milk powder with a protein content of not less than 31,4 % but less than 35,6 % of the non-fatty dry extract.\" Article 2 This Regulation shall enter into force on the day following its publication in the Official Journal of the European Union. This Regulation shall be binding in its entirety and directly applicable in all Member States. Done at Brussels, 19 April 2006. For the Commission Mariann Fischer Boel Member of the Commission [1] OJ L 160, 26.6.1999, p. 48. Regulation as last amended by Regulation (EC) No 1913/2005 (OJ L 307, 25.11.2005, p. 2). [2] OJ L 340, 31.12.1999, p. 3. Regulation as last amended by Regulation (EC) No 1194/2005 (OJ L 194, 26.7.2005, p. 7)."}]}
SEBIS/legal_t5_small_summ_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "summarization English model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_summ_es model

Model for summarization of legal text written in Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis.

## Model description

legal_t5_small_summ_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for summarization of legal texts written in Spanish.

### How to use

Here is how to use this model to summarize legal text written in Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_es"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_summ_es", do_lower_case=False, skip_special_tokens=True),
    device=0
)

es_text = "[notificada con el número C(2006) 166] (El texto en lengua portuguesa es el único auténtico) (2006/78/CE) LA COMISIÓN DE LAS COMUNIDADES EUROPEAS, Visto el Tratado constitutivo de la Comunidad Europea, Vista la Decisión 90/424/CEE del Consejo, de 26 de junio de 1990, relativa a determinados gastos en el sector veterinario [1], y, en particular, su artículo 3, apartado 2 bis, Considerando lo siguiente: (1) El 24 de noviembre de 2004 se declararon brotes de fiebre catarral ovina en Portugal. La aparición de esta enfermedad puede representar un grave riesgo para la cabaña ganadera de la Comunidad. (2) Para atajar la propagación de la enfermedad en el plazo más breve, la Comunidad debe participar en los gastos subvencionables que suponen para Portugal la adopción de medidas de urgencia contra la enfermedad, en las condiciones previstas en la Decisión 90/424/CEE. Por ello, el 15 de septiembre de 2005 se adoptó la Decisión 2005/660/CE de la Comisión relativa a una ayuda financiera de la Comunidad para medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005 [2]. (3) La Comisión ha adoptado varias decisiones para delimitar las zonas de protección y vigilancia y fijar las condiciones que deben cumplir los animales que vayan a salir de esas zonas; la última de ellas es la Decisión 2005/393/CE, de 23 de mayo de 2005, sobre las zonas de protección y vigilancia en relación con la fiebre catarral ovina y las condiciones que se aplican a los traslados de animales desde estas zonas o a través de ellas [3]. (4) Desde el otoño de 2004, la excepcional escasez de lluvias en Portugal ha afectado gravemente al suministro de forraje y, en consecuencia, a las posibilidades de alimentación animal, lo que ha conllevado costes adicionales para los ganaderos. La situación tiene consecuencias particulares en Portugal, pues las explotaciones especializadas en reproducción de bovinos y de ovinos están ubicadas en las zonas afectadas por las restricciones aplicadas a los traslados de animales, mientras que las especializadas en engorde, que constituyen la salida lógica de los animales criados en aquéllas, están localizadas fuera de dichas zonas. 
(5) Portugal, en colaboración con España, puso en marcha otras medidas para controlar la epidemia, como la realización de estudios epidemiológicos y la aplicación de medidas de vigilancia de la enfermedad, incluidas las pruebas de laboratorio para el control serológico y virológico en el marco de las pruebas realizadas a los animales antes de su traslado y en el de la vigilancia entomológica. (6) Portugal y España han presentado pruebas de su cooperación para evitar la propagación de la enfermedad tomando medidas de vigilancia de la misma. (7) De conformidad con el artículo 3, apartado 2, del Reglamento (CE) no 1258/1999 del Consejo, de 17 de mayo de 1999, sobre la financiación de la política agrícola común [4], las medidas veterinarias y fitosanitarias ejecutadas según las normas comunitarias son financiadas por la sección Garantía del Fondo Europeo de Orientación y de Garantía Agrícola. El control financiero de estas acciones debe efectuarse de conformidad con lo dispuesto en los artículos 8 y 9 de dicho Reglamento. (8) El pago de la contribución financiera de la Comunidad se supedita a la realización efectiva de las acciones programadas y a la presentación por parte de las autoridades de toda la información necesaria en los plazos establecidos. (9) El 25 de febrero de 2005, Portugal presentó un primer cálculo de los costes de las demás medidas de urgencia, como las de vigilancia epidemiológica, tomadas para luchar contra la enfermedad. El importe estimado de las medidas de vigilancia epidemiológica se eleva a 4303336 EUR. (10) A la espera de que se efectúen los controles in situ de la Comisión, procede fijar desde ahora el importe de un primer pago de la ayuda financiera de la Comunidad. Este primer pago ha de ser igual al 50 % de la contribución de la Comunidad, establecida sobre la base del gasto subvencionable calculado para las medidas de vigilancia epidemiológica. Procede asimismo determinar los importes máximos que se reembolsarán en concepto de pruebas realizadas y de trampas utilizadas en el marco de dichas medidas. (11) Las autoridades portuguesas han cumplido íntegramente sus obligaciones técnicas y administrativas relacionadas con las medidas previstas en el artículo 3 de la Decisión 90/424/CEE. (12) Las medidas previstas en la presente Decisión se ajustan al dictamen del Comité permanente de la cadena alimentaria y de sanidad animal. HA ADOPTADO LA PRESENTE DECISIÓN: Artículo 1 Concesión de una ayuda financiera de la Comunidad a Portugal 1. En el marco de las medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005, Portugal tendrá derecho a una contribución comunitaria del 50 % de los importes desembolsados en concepto de pruebas de laboratorio para la vigilancia serológica y virológica, así como en concepto de vigilancia entomológica, incluida la adquisición de trampas. 2. El importe máximo de los gastos que se reembolsarán a Portugal en concepto de las pruebas y las trampas mencionadas en el apartado 1 no excederá de: a) vigilancia serológica, prueba ELISA: 2,5 EUR por prueba; b) vigilancia virológica, reacción en cadena de la polimerasa retrotranscriptásica (RT.PCR): 15 EUR por prueba; c) vigilancia entomológica, trampa: 160 EUR por trampa. 3. El impuesto sobre el valor añadido se excluirá de la participación financiera de la Comunidad. 
Artículo 2 Modalidades de pago A reserva del resultado de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1, de la Decisión 90/424/CEE, se efectuará un primer pago de 600000 EUR como parte de la ayuda financiera de la Comunidad prevista en el artículo 1. El pago se llevará a cabo previa presentación por parte de Portugal de justificantes de las pruebas de laboratorio y de la adquisición de las trampas mencionadas en el artículo 1, apartado 1. Artículo 3 Condiciones de pago y documentación justificativa 1. La ayuda financiera de la Comunidad contemplada en el artículo 1 se pagará atendiendo a los siguientes elementos: a) una solicitud que contenga los datos especificados en el anexo, presentada en el plazo establecido en el apartado 2 del presente artículo; b) la documentación justificativa mencionada en el artículo 2, que incluirá un informe epidemiológico y un informe financiero; c) el resultado de cualquiera de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1, de la Decisión 90/424/CEE. Los documentos mencionados en la letra b) deberán estar disponibles para los controles in situ mencionados en la letra c). 2. La solicitud mencionada en el apartado 1, letra a), se presentará en formato electrónico en un plazo de 60 días naturales a partir de la fecha de notificación de la presente Decisión. Si no se respeta este plazo, la ayuda financiera comunitaria se reducirá un 25 % por cada mes de retraso. Artículo 4 Destinatario El destinatario de la presente Decisión es la República Portuguesa. Hecho en Bruselas, el 31 de enero de 2006. Por la Comisión Markos Kyprianou Miembro de la Comisión [1] DO L 224 de 18.8.1990, p. 19. Decisión modificada en último lugar por el Reglamento (CE) no 806/2003 (DO L 122 de 16.5.2003, p. 1). [2] DO L 244 de 20.9.2005, p. 28. [3] DO L 130 de 24.5.2005, p. 22. Decisión modificada en último lugar por la Decisión 2005/828/CE (DO L 311 de 26.11.2005, p. 37). [4] DO L 160 de 26.6.1999, p. 103. -------------------------------------------------- ANEXO Datos mencionados en el artículo 3, apartado 1, letra a) Gastos | Naturaleza de los costes | Número | Importe (sin IVA) | Pruebas ELISA | | | Pruebas RT.PCR | | | Otras pruebas virológicas | | | Trampas | | | Total | | -------------------------------------------------- "

pipeline([es_text], max_length=512)
```

## Training data

The legal_t5_small_summ_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 23 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule (a minimal optimizer sketch follows at the end of this card).

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the summarization test dataset, it achieves the following results:

Test results :

| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_es | 80.23 | 70.16 | 78.69 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
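The training procedure names AdaFactor with an inverse square root schedule but gives no hyperparameters. A minimal sketch, in which the peak learning rate and warmup length are assumptions:

```python
# Minimal sketch of "AdaFactor with an inverse square root schedule".
# Peak learning rate and warmup length are assumptions; the card gives
# no hyperparameters beyond the optimizer and schedule names.
import math

import torch
from transformers import AutoModelWithLMHead
from transformers.optimization import Adafactor

model = AutoModelWithLMHead.from_pretrained("t5-small")

optimizer = Adafactor(
    model.parameters(),
    lr=1e-3,              # assumed peak learning rate
    scale_parameter=False,
    relative_step=False,  # the schedule below drives the learning rate
    warmup_init=False,
)

warmup_steps = 10000  # assumed

def inverse_sqrt(step: int) -> float:
    # Constant through warmup, then decays proportionally to 1/sqrt(step).
    return min(1.0, math.sqrt(warmup_steps / max(step, 1)))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=inverse_sqrt)
```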
{"language": "Spanish", "tags": ["summarization Spanish model"], "datasets": ["jrc-acquis"], "widget": [{"text": "[notificada con el n\u00famero C(2006) 166] (El texto en lengua portuguesa es el \u00fanico aut\u00e9ntico) (2006/78/CE) LA COMISI\u00d3N DE LAS COMUNIDADES EUROPEAS, Visto el Tratado constitutivo de la Comunidad Europea, Vista la Decisi\u00f3n 90/424/CEE del Consejo, de 26 de junio de 1990, relativa a determinados gastos en el sector veterinario [1], y, en particular, su art\u00edculo 3, apartado 2 bis, Considerando lo siguiente: (1) El 24 de noviembre de 2004 se declararon brotes de fiebre catarral ovina en Portugal. La aparici\u00f3n de esta enfermedad puede representar un grave riesgo para la caba\u00f1a ganadera de la Comunidad. (2) Para atajar la propagaci\u00f3n de la enfermedad en el plazo m\u00e1s breve, la Comunidad debe participar en los gastos subvencionables que suponen para Portugal la adopci\u00f3n de medidas de urgencia contra la enfermedad, en las condiciones previstas en la Decisi\u00f3n 90/424/CEE. Por ello, el 15 de septiembre de 2005 se adopt\u00f3 la Decisi\u00f3n 2005/660/CE de la Comisi\u00f3n relativa a una ayuda financiera de la Comunidad para medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005 [2]. (3) La Comisi\u00f3n ha adoptado varias decisiones para delimitar las zonas de protecci\u00f3n y vigilancia y fijar las condiciones que deben cumplir los animales que vayan a salir de esas zonas; la \u00faltima de ellas es la Decisi\u00f3n 2005/393/CE, de 23 de mayo de 2005, sobre las zonas de protecci\u00f3n y vigilancia en relaci\u00f3n con la fiebre catarral ovina y las condiciones que se aplican a los traslados de animales desde estas zonas o a trav\u00e9s de ellas [3]. (4) Desde el oto\u00f1o de 2004, la excepcional escasez de lluvias en Portugal ha afectado gravemente al suministro de forraje y, en consecuencia, a las posibilidades de alimentaci\u00f3n animal, lo que ha conllevado costes adicionales para los ganaderos. La situaci\u00f3n tiene consecuencias particulares en Portugal, pues las explotaciones especializadas en reproducci\u00f3n de bovinos y de ovinos est\u00e1n ubicadas en las zonas afectadas por las restricciones aplicadas a los traslados de animales, mientras que las especializadas en engorde, que constituyen la salida l\u00f3gica de los animales criados en aqu\u00e9llas, est\u00e1n localizadas fuera de dichas zonas. (5) Portugal, en colaboraci\u00f3n con Espa\u00f1a, puso en marcha otras medidas para controlar la epidemia, como la realizaci\u00f3n de estudios epidemiol\u00f3gicos y la aplicaci\u00f3n de medidas de vigilancia de la enfermedad, incluidas las pruebas de laboratorio para el control serol\u00f3gico y virol\u00f3gico en el marco de las pruebas realizadas a los animales antes de su traslado y en el de la vigilancia entomol\u00f3gica. (6) Portugal y Espa\u00f1a han presentado pruebas de su cooperaci\u00f3n para evitar la propagaci\u00f3n de la enfermedad tomando medidas de vigilancia de la misma. (7) De conformidad con el art\u00edculo 3, apartado 2, del Reglamento (CE) no 1258/1999 del Consejo, de 17 de mayo de 1999, sobre la financiaci\u00f3n de la pol\u00edtica agr\u00edcola com\u00fan [4], las medidas veterinarias y fitosanitarias ejecutadas seg\u00fan las normas comunitarias son financiadas por la secci\u00f3n Garant\u00eda del Fondo Europeo de Orientaci\u00f3n y de Garant\u00eda Agr\u00edcola. 
El control financiero de estas acciones debe efectuarse de conformidad con lo dispuesto en los art\u00edculos 8 y 9 de dicho Reglamento. (8) El pago de la contribuci\u00f3n financiera de la Comunidad se supedita a la realizaci\u00f3n efectiva de las acciones programadas y a la presentaci\u00f3n por parte de las autoridades de toda la informaci\u00f3n necesaria en los plazos establecidos. (9) El 25 de febrero de 2005, Portugal present\u00f3 un primer c\u00e1lculo de los costes de las dem\u00e1s medidas de urgencia, como las de vigilancia epidemiol\u00f3gica, tomadas para luchar contra la enfermedad. El importe estimado de las medidas de vigilancia epidemiol\u00f3gica se eleva a 4303336 EUR. (10) A la espera de que se efect\u00faen los controles in situ de la Comisi\u00f3n, procede fijar desde ahora el importe de un primer pago de la ayuda financiera de la Comunidad. Este primer pago ha de ser igual al 50 % de la contribuci\u00f3n de la Comunidad, establecida sobre la base del gasto subvencionable calculado para las medidas de vigilancia epidemiol\u00f3gica. Procede asimismo determinar los importes m\u00e1ximos que se reembolsar\u00e1n en concepto de pruebas realizadas y de trampas utilizadas en el marco de dichas medidas. (11) Las autoridades portuguesas han cumplido \u00edntegramente sus obligaciones t\u00e9cnicas y administrativas relacionadas con las medidas previstas en el art\u00edculo 3 de la Decisi\u00f3n 90/424/CEE. (12) Las medidas previstas en la presente Decisi\u00f3n se ajustan al dictamen del Comit\u00e9 permanente de la cadena alimentaria y de sanidad animal. HA ADOPTADO LA PRESENTE DECISI\u00d3N: Art\u00edculo 1 Concesi\u00f3n de una ayuda financiera de la Comunidad a Portugal 1. En el marco de las medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005, Portugal tendr\u00e1 derecho a una contribuci\u00f3n comunitaria del 50 % de los importes desembolsados en concepto de pruebas de laboratorio para la vigilancia serol\u00f3gica y virol\u00f3gica, as\u00ed como en concepto de vigilancia entomol\u00f3gica, incluida la adquisici\u00f3n de trampas. 2. El importe m\u00e1ximo de los gastos que se reembolsar\u00e1n a Portugal en concepto de las pruebas y las trampas mencionadas en el apartado 1 no exceder\u00e1 de: a) vigilancia serol\u00f3gica, prueba ELISA: 2,5 EUR por prueba; b) vigilancia virol\u00f3gica, reacci\u00f3n en cadena de la polimerasa retrotranscript\u00e1sica (RT.PCR): 15 EUR por prueba; c) vigilancia entomol\u00f3gica, trampa: 160 EUR por trampa. 3. El impuesto sobre el valor a\u00f1adido se excluir\u00e1 de la participaci\u00f3n financiera de la Comunidad. Art\u00edculo 2 Modalidades de pago A reserva del resultado de los controles in situ llevados a cabo de conformidad con el art\u00edculo 9, apartado 1, de la Decisi\u00f3n 90/424/CEE, se efectuar\u00e1 un primer pago de 600000 EUR como parte de la ayuda financiera de la Comunidad prevista en el art\u00edculo 1. El pago se llevar\u00e1 a cabo previa presentaci\u00f3n por parte de Portugal de justificantes de las pruebas de laboratorio y de la adquisici\u00f3n de las trampas mencionadas en el art\u00edculo 1, apartado 1. Art\u00edculo 3 Condiciones de pago y documentaci\u00f3n justificativa 1. 
La ayuda financiera de la Comunidad contemplada en el art\u00edculo 1 se pagar\u00e1 atendiendo a los siguientes elementos: a) una solicitud que contenga los datos especificados en el anexo, presentada en el plazo establecido en el apartado 2 del presente art\u00edculo; b) la documentaci\u00f3n justificativa mencionada en el art\u00edculo 2, que incluir\u00e1 un informe epidemiol\u00f3gico y un informe financiero; c) el resultado de cualquiera de los controles in situ llevados a cabo de conformidad con el art\u00edculo 9, apartado 1, de la Decisi\u00f3n 90/424/CEE. Los documentos mencionados en la letra b) deber\u00e1n estar disponibles para los controles in situ mencionados en la letra c). 2. La solicitud mencionada en el apartado 1, letra a), se presentar\u00e1 en formato electr\u00f3nico en un plazo de 60 d\u00edas naturales a partir de la fecha de notificaci\u00f3n de la presente Decisi\u00f3n. Si no se respeta este plazo, la ayuda financiera comunitaria se reducir\u00e1 un 25 % por cada mes de retraso. Art\u00edculo 4 Destinatario El destinatario de la presente Decisi\u00f3n es la Rep\u00fablica Portuguesa. Hecho en Bruselas, el 31 de enero de 2006. Por la Comisi\u00f3n Markos Kyprianou Miembro de la Comisi\u00f3n [1] DO L 224 de 18.8.1990, p. 19. Decisi\u00f3n modificada en \u00faltimo lugar por el Reglamento (CE) no 806/2003 (DO L 122 de 16.5.2003, p. 1). [2] DO L 244 de 20.9.2005, p. 28. [3] DO L 130 de 24.5.2005, p. 22. Decisi\u00f3n modificada en \u00faltimo lugar por la Decisi\u00f3n 2005/828/CE (DO L 311 de 26.11.2005, p. 37). [4] DO L 160 de 26.6.1999, p. 103. -------------------------------------------------- ANEXO Datos mencionados en el art\u00edculo 3, apartado 1, letra a) Gastos | Naturaleza de los costes | N\u00famero | Importe (sin IVA) | Pruebas ELISA | | | Pruebas RT.PCR | | | Otras pruebas virol\u00f3gicas | | | Trampas | | | Total | | -------------------------------------------------- "}]}
SEBIS/legal_t5_small_summ_es
null
[ "transformers", "pytorch", "tf", "jax", "t5", "text2text-generation", "summarization Spanish model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_summ_fr model

Model for summarization of legal text written in French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis.

## Model description

legal_t5_small_summ_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for summarization of legal texts written in French.

### How to use

Here is how to use this model to summarize legal text written in French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_fr"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_summ_fr", do_lower_case=False, skip_special_tokens=True),
    device=0
)

fr_text = "LA COMMISSION DES COMMUNAUTÉS EUROPÉENNES, vu le traité instituant la Communauté européenne, vu le règlement (CE) no 1784/2003 du Conseil du 29 septembre 2003 portant organisation commune des marchés dans le secteur des céréales [1], et notamment son article 13, paragraphe 3, vu le règlement (CE) no 1785/2003 du Conseil du 29 septembre 2003 portant organisation commune du marché du riz [2], et notamment son article 14, paragraphe 3, considérant ce qui suit: (1) Conformément à l'article 13, paragraphe 1, du règlement (CE) no 1784/2003 et à l'article 14, paragraphe 1, du règlement (CE) no 1785/2003, la différence entre les cours ou les prix sur le marché mondial des produits visés à l'article 1er de chacun de ces deux règlements et les prix dans la Communauté peut être couverte par une restitution à l'exportation. (2) Le règlement (CE) no 1043/2005 de la Commission du 30 juin 2005 portant application du règlement (CE) no 3448/93 du Conseil en ce qui concerne le système d’octroi des restitutions à l'exportation pour certains produits agricoles exportés sous forme de marchandises ne relevant pas de l'annexe I du traité ainsi que les critères de fixation de leurs montants [3] a spécifié ceux de ces produits pour lesquels il y a lieu de fixer un taux de restitution applicable lors de leur exportation sous forme de marchandises reprises, selon le cas, à l'annexe III du règlement (CE) no 1784/2003 ou à l'annexe IV du règlement (CE) no 1785/2003. (3) Conformément à l'article 14, paragraphe 1, du règlement (CE) no 1043/2005, le taux de la restitution par 100 kilogrammes de chacun des produits de base considérés doit être fixé chaque mois. (4) Les engagements pris en matière de restitutions pouvant être octroyées à l'exportation de produits agricoles incorporés dans des marchandises ne relevant pas de l'annexe I du traité peuvent être mis en péril par la fixation à l'avance de taux de restitution élevés. Il convient, dès lors, de prendre des mesures de sauvegarde dans ces situations sans empêcher pour autant la conclusion de contrats à long terme. La fixation d'un taux de restitution spécifique pour la fixation à l'avance des restitutions est une mesure permettant de rencontrer ces différents objectifs. 
(5) À la suite de l'arrangement entre la Communauté européenne et les États-Unis d'Amérique concernant les exportations de pâtes alimentaires de la Communauté aux États-Unis approuvé par la décision 87/482/CEE du Conseil [4], il est nécessaire de différencier la restitution pour les marchandises relevant des codes NC 19021100 et 190219 selon leur destination. (6) Conformément à l'article 15, paragraphes 2 et 3, du règlement (CE) no 1043/2005, il y a lieu de fixer un taux de restitution à l'exportation réduit, compte tenu du montant de la restitution à la production applicable, en vertu du règlement (CEE) no 1722/93 de la Commission [5], au produit de base mis en œuvre, valable au cours de la période présumée de fabrication des marchandises. (7) Les boissons spiritueuses sont considérées comme moins sensibles au prix des céréales mises en œuvre pour leur fabrication. Toutefois, le protocole 19 du traité d'adhésion du Royaume-Uni, de l'Irlande et du Danemark prévoit que des mesures nécessaires doivent être arrêtées afin de faciliter l'utilisation des céréales communautaires pour la fabrication de boissons spiritueuses obtenues à partir de céréales. Il convient donc d'adapter le taux de restitution applicable aux céréales exportées sous forme de boissons spiritueuses. (8) Le comité de gestion des céréales n'a pas émis d'avis dans le délai imparti par son président, A ARRÊTÉ LE PRÉSENT RÈGLEMENT: Article premier Les taux des restitutions applicables aux produits de base figurant à l'annexe I du règlement (CE) no 1043/2005 et à l'article 1er du règlement (CE) no 1784/2003 ou à l'article 1er du règlement (CE) no 1785/2003 modifié, qui sont exportés sous forme de marchandises reprises respectivement à l'annexe III du règlement (CE) no 1784/2003 ou à l'annexe IV du règlement (CE) no 1785/2003, sont fixés comme indiqué à l'annexe du présent règlement. Article 2 Le présent règlement entre en vigueur le 23 septembre 2005. Le présent règlement est obligatoire dans tous ses éléments et directement applicable dans tout État membre. Fait à Bruxelles, le 22 septembre 2005. Par la Commission Günter Verheugen Vice-président [1] JO L 270 du 21.10.2003, p. 78. [2] JO L 270 du 21.10.2003, p. 96. [3] JO L 172 du 5.7.2005, p. 24. [4] JO L 275 du 29.9.1987, p. 36. [5] JO L 159 du 1.7.1993, p. 112. Règlement modifié en dernier lieu par le règlement (CE) no 1584/2004 (JO L 280 du 31.8.2004, p. 11). 
-------------------------------------------------- ANNEXE Taux des restitutions applicables à compter du 23 septembre 2005 à certains produits des secteurs des céréales et du riz exportés sous forme de marchandises ne relevant pas de l'annexe I du traité [1] (en EUR/100 kg) | Code NC | Désignation des marchandises | Taux de la restitution par 100 kg du produit de base | En cas de fixation à l'avance des restitutions | Autres | 10011000 | Froment (blé) dur: | | | – en cas d'exportation de marchandises relevant des codes NC 190211 et 190219 vers les États-Unis d'Amérique | — | — | – dans les autres cas | — | — | 10019099 | Froment (blé) tendre et méteil: | | | – en cas d'exportation de marchandises relevant des codes NC 190211 et 190219 vers les États-Unis d'Amérique | — | — | – dans les autres cas: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | — | — | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | — | — | – – dans les autres cas | — | — | 10020000 | Seigle | — | — | 10030090 | Orge | | | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | — | — | – dans les autres cas | — | — | 10040000 | Avoine | — | — | 10059000 | Maïs, mis en œuvre sous forme de: | | | – amidon: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 2,994 | 3,150 | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – – dans les autres cas | 4,615 | 4,615 | – glucose, sirop de glucose, maltodextrine, sirop de maltodextrine des codes NC 17023051, 17023059, 17023091, 17023099, 17024090, 17029050, 17029075, 17029079, 21069055: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 1,840 | 1,996 | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 1,776 | 1,776 | – – dans les autres cas | 3,461 | 3,461 | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – autres (y compris en l'état) | 4,615 | 4,615 | Fécule de pommes de terre du code NC 11081300 assimilée à un produit issu de la transformation du maïs: | | | – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 2,435 | 2,585 | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – dans les autres cas | 4,615 | 4,615 | ex100630 | Riz blanchi: | | | – à grains ronds | — | — | – à grains moyens | — | — | – à grains longs | — | — | 10064000 | Riz en brisures | — | — | 10070090 | Sorgho à grains (à l'excl. du sorgho à grains, hybride, destiné à l'ensemencement) | — | — | [1] Les taux prévus à la présente annexe ne s’appliquent pas avec effet au 1er octobre 2004 aux exportations vers la Bulgarie et avec effet au 1er février 2005 aux marchandises visées aux tableaux I et II du Protocole no 2 de l’Accord entre la Communauté économique européenne et la Confédération suisse du 22 juillet 1972 qui sont exportées vers la Confédération suisse ou la principauté de Liechtenstein. [2] En ce qui concerne les produits agricoles obtenus par transformation d’un produit de base et/ou de produits assimilés, les coefficients fixés à l’annexe V du règlement (CE) no 1043/2005 de la Commission s’appliquent. [3] La marchandise concernée relève du code NC 35051050. [4] Marchandises reprises à l'annexe III du règlement (CE) no 1784/2003 ou visées à l'article 2 du règlement (CEE) no 2825/93 (JO L 258 du 16.10.1993, p. 6). 
[5] Pour les sirops des codes NC 17023099, 17024090 et 17026090, obtenus par mélange de sirops de glucose et fructose, seul le sirop de glucose a droit à la restitution à l'exportation. -------------------------------------------------- "

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_summ_fr model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 23 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64); inputs longer than 512 tokens are therefore truncated (a truncation-aware generation sketch follows at the end of this card). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the summarization test dataset, it achieves the following results:

Test results :

| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_fr | 77.1 | 67.97 | 75.74 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
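Because training used 512-token sequences, long JRC-Acquis documents exceed what the pipeline can consume at once. A sketch of explicit generation with controlled truncation, reusing `fr_text` from the example above; the beam size and length limits are assumptions:

```python
# Sketch of explicit generation with controlled truncation. Beam size and
# length limits are assumptions; `fr_text` is the document from the example above.
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_summ_fr")
model = AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_fr")

# Truncate to the 512-token training length instead of silently overflowing.
inputs = tokenizer(fr_text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=512, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```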
{"language": "French", "tags": ["summarization French model"], "datasets": ["jrc-acquis"], "widget": [{"text": "LA COMMISSION DES COMMUNAUT\u00c9S EUROP\u00c9ENNES, vu le trait\u00e9 instituant la Communaut\u00e9 europ\u00e9enne, vu le r\u00e8glement (CE) no 1784/2003 du Conseil du 29 septembre 2003 portant organisation commune des march\u00e9s dans le secteur des c\u00e9r\u00e9ales [1], et notamment son article 13, paragraphe 3, vu le r\u00e8glement (CE) no 1785/2003 du Conseil du 29 septembre 2003 portant organisation commune du march\u00e9 du riz [2], et notamment son article 14, paragraphe 3, consid\u00e9rant ce qui suit: (1) Conform\u00e9ment \u00e0 l'article 13, paragraphe 1, du r\u00e8glement (CE) no 1784/2003 et \u00e0 l'article 14, paragraphe 1, du r\u00e8glement (CE) no 1785/2003, la diff\u00e9rence entre les cours ou les prix sur le march\u00e9 mondial des produits vis\u00e9s \u00e0 l'article 1er de chacun de ces deux r\u00e8glements et les prix dans la Communaut\u00e9 peut \u00eatre couverte par une restitution \u00e0 l'exportation. (2) Le r\u00e8glement (CE) no 1043/2005 de la Commission du 30 juin 2005 portant application du r\u00e8glement (CE) no 3448/93 du Conseil en ce qui concerne le syst\u00e8me d\u2019octroi des restitutions \u00e0 l'exportation pour certains produits agricoles export\u00e9s sous forme de marchandises ne relevant pas de l'annexe I du trait\u00e9 ainsi que les crit\u00e8res de fixation de leurs montants [3] a sp\u00e9cifi\u00e9 ceux de ces produits pour lesquels il y a lieu de fixer un taux de restitution applicable lors de leur exportation sous forme de marchandises reprises, selon le cas, \u00e0 l'annexe III du r\u00e8glement (CE) no 1784/2003 ou \u00e0 l'annexe IV du r\u00e8glement (CE) no 1785/2003. (3) Conform\u00e9ment \u00e0 l'article 14, paragraphe 1, du r\u00e8glement (CE) no 1043/2005, le taux de la restitution par 100 kilogrammes de chacun des produits de base consid\u00e9r\u00e9s doit \u00eatre fix\u00e9 chaque mois. (4) Les engagements pris en mati\u00e8re de restitutions pouvant \u00eatre octroy\u00e9es \u00e0 l'exportation de produits agricoles incorpor\u00e9s dans des marchandises ne relevant pas de l'annexe I du trait\u00e9 peuvent \u00eatre mis en p\u00e9ril par la fixation \u00e0 l'avance de taux de restitution \u00e9lev\u00e9s. Il convient, d\u00e8s lors, de prendre des mesures de sauvegarde dans ces situations sans emp\u00eacher pour autant la conclusion de contrats \u00e0 long terme. La fixation d'un taux de restitution sp\u00e9cifique pour la fixation \u00e0 l'avance des restitutions est une mesure permettant de rencontrer ces diff\u00e9rents objectifs. (5) \u00c0 la suite de l'arrangement entre la Communaut\u00e9 europ\u00e9enne et les \u00c9tats-Unis d'Am\u00e9rique concernant les exportations de p\u00e2tes alimentaires de la Communaut\u00e9 aux \u00c9tats-Unis approuv\u00e9 par la d\u00e9cision 87/482/CEE du Conseil [4], il est n\u00e9cessaire de diff\u00e9rencier la restitution pour les marchandises relevant des codes NC 19021100 et 190219 selon leur destination. (6) Conform\u00e9ment \u00e0 l'article 15, paragraphes 2 et 3, du r\u00e8glement (CE) no 1043/2005, il y a lieu de fixer un taux de restitution \u00e0 l'exportation r\u00e9duit, compte tenu du montant de la restitution \u00e0 la production applicable, en vertu du r\u00e8glement (CEE) no 1722/93 de la Commission [5], au produit de base mis en \u0153uvre, valable au cours de la p\u00e9riode pr\u00e9sum\u00e9e de fabrication des marchandises. 
(7) Les boissons spiritueuses sont consid\u00e9r\u00e9es comme moins sensibles au prix des c\u00e9r\u00e9ales mises en \u0153uvre pour leur fabrication. Toutefois, le protocole 19 du trait\u00e9 d'adh\u00e9sion du Royaume-Uni, de l'Irlande et du Danemark pr\u00e9voit que des mesures n\u00e9cessaires doivent \u00eatre arr\u00eat\u00e9es afin de faciliter l'utilisation des c\u00e9r\u00e9ales communautaires pour la fabrication de boissons spiritueuses obtenues \u00e0 partir de c\u00e9r\u00e9ales. Il convient donc d'adapter le taux de restitution applicable aux c\u00e9r\u00e9ales export\u00e9es sous forme de boissons spiritueuses. (8) Le comit\u00e9 de gestion des c\u00e9r\u00e9ales n'a pas \u00e9mis d'avis dans le d\u00e9lai imparti par son pr\u00e9sident, A ARR\u00caT\u00c9 LE PR\u00c9SENT R\u00c8GLEMENT: Article premier Les taux des restitutions applicables aux produits de base figurant \u00e0 l'annexe I du r\u00e8glement (CE) no 1043/2005 et \u00e0 l'article 1er du r\u00e8glement (CE) no 1784/2003 ou \u00e0 l'article 1er du r\u00e8glement (CE) no 1785/2003 modifi\u00e9, qui sont export\u00e9s sous forme de marchandises reprises respectivement \u00e0 l'annexe III du r\u00e8glement (CE) no 1784/2003 ou \u00e0 l'annexe IV du r\u00e8glement (CE) no 1785/2003, sont fix\u00e9s comme indiqu\u00e9 \u00e0 l'annexe du pr\u00e9sent r\u00e8glement. Article 2 Le pr\u00e9sent r\u00e8glement entre en vigueur le 23 septembre 2005. Le pr\u00e9sent r\u00e8glement est obligatoire dans tous ses \u00e9l\u00e9ments et directement applicable dans tout \u00c9tat membre. Fait \u00e0 Bruxelles, le 22 septembre 2005. Par la Commission G\u00fcnter Verheugen Vice-pr\u00e9sident [1] JO L 270 du 21.10.2003, p. 78. [2] JO L 270 du 21.10.2003, p. 96. [3] JO L 172 du 5.7.2005, p. 24. [4] JO L 275 du 29.9.1987, p. 36. [5] JO L 159 du 1.7.1993, p. 112. R\u00e8glement modifi\u00e9 en dernier lieu par le r\u00e8glement (CE) no 1584/2004 (JO L 280 du 31.8.2004, p. 11). 
-------------------------------------------------- ANNEXE Taux des restitutions applicables \u00e0 compter du 23 septembre 2005 \u00e0 certains produits des secteurs des c\u00e9r\u00e9ales et du riz export\u00e9s sous forme de marchandises ne relevant pas de l'annexe I du trait\u00e9 [1] (en EUR/100 kg) | Code NC | D\u00e9signation des marchandises | Taux de la restitution par 100 kg du produit de base | En cas de fixation \u00e0 l'avance des restitutions | Autres | 10011000 | Froment (bl\u00e9) dur: | | | \u2013 en cas d'exportation de marchandises relevant des codes NC 190211 et 190219 vers les \u00c9tats-Unis d'Am\u00e9rique | \u2014 | \u2014 | \u2013 dans les autres cas | \u2014 | \u2014 | 10019099 | Froment (bl\u00e9) tendre et m\u00e9teil: | | | \u2013 en cas d'exportation de marchandises relevant des codes NC 190211 et 190219 vers les \u00c9tats-Unis d'Am\u00e9rique | \u2014 | \u2014 | \u2013 dans les autres cas: | | | \u2013 \u2013 en cas d'application de l'article 15, paragraphe 3, du r\u00e8glement (CE) no 1043/2005 | \u2014 | \u2014 | \u2013 \u2013 en cas d'exportation de marchandises relevant du sous-chapitre 2208 | \u2014 | \u2014 | \u2013 \u2013 dans les autres cas | \u2014 | \u2014 | 10020000 | Seigle | \u2014 | \u2014 | 10030090 | Orge | | | \u2013 en cas d'exportation de marchandises relevant du sous-chapitre 2208 | \u2014 | \u2014 | \u2013 dans les autres cas | \u2014 | \u2014 | 10040000 | Avoine | \u2014 | \u2014 | 10059000 | Ma\u00efs, mis en \u0153uvre sous forme de: | | | \u2013 amidon: | | | \u2013 \u2013 en cas d'application de l'article 15, paragraphe 3, du r\u00e8glement (CE) no 1043/2005 | 2,994 | 3,150 | \u2013 \u2013 en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | \u2013 \u2013 dans les autres cas | 4,615 | 4,615 | \u2013 glucose, sirop de glucose, maltodextrine, sirop de maltodextrine des codes NC 17023051, 17023059, 17023091, 17023099, 17024090, 17029050, 17029075, 17029079, 21069055: | | | \u2013 \u2013 en cas d'application de l'article 15, paragraphe 3, du r\u00e8glement (CE) no 1043/2005 | 1,840 | 1,996 | \u2013 \u2013 en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 1,776 | 1,776 | \u2013 \u2013 dans les autres cas | 3,461 | 3,461 | \u2013 en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | \u2013 autres (y compris en l'\u00e9tat) | 4,615 | 4,615 | F\u00e9cule de pommes de terre du code NC 11081300 assimil\u00e9e \u00e0 un produit issu de la transformation du ma\u00efs: | | | \u2013 en cas d'application de l'article 15, paragraphe 3, du r\u00e8glement (CE) no 1043/2005 | 2,435 | 2,585 | \u2013 en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | \u2013 dans les autres cas | 4,615 | 4,615 | ex100630 | Riz blanchi: | | | \u2013 \u00e0 grains ronds | \u2014 | \u2014 | \u2013 \u00e0 grains moyens | \u2014 | \u2014 | \u2013 \u00e0 grains longs | \u2014 | \u2014 | 10064000 | Riz en brisures | \u2014 | \u2014 | 10070090 | Sorgho \u00e0 grains (\u00e0 l'excl. 
du sorgho \u00e0 grains, hybride, destin\u00e9 \u00e0 l'ensemencement) | \u2014 | \u2014 | [1] Les taux pr\u00e9vus \u00e0 la pr\u00e9sente annexe ne s\u2019appliquent pas avec effet au 1er octobre 2004 aux exportations vers la Bulgarie et avec effet au 1er f\u00e9vrier 2005 aux marchandises vis\u00e9es aux tableaux I et II du Protocole no 2 de l\u2019Accord entre la Communaut\u00e9 \u00e9conomique europ\u00e9enne et la Conf\u00e9d\u00e9ration suisse du 22 juillet 1972 qui sont export\u00e9es vers la Conf\u00e9d\u00e9ration suisse ou la principaut\u00e9 de Liechtenstein. [2] En ce qui concerne les produits agricoles obtenus par transformation d\u2019un produit de base et/ou de produits assimil\u00e9s, les coefficients fix\u00e9s \u00e0 l\u2019annexe V du r\u00e8glement (CE) no 1043/2005 de la Commission s\u2019appliquent. [3] La marchandise concern\u00e9e rel\u00e8ve du code NC 35051050. [4] Marchandises reprises \u00e0 l'annexe III du r\u00e8glement (CE) no 1784/2003 ou vis\u00e9es \u00e0 l'article 2 du r\u00e8glement (CEE) no 2825/93 (JO L 258 du 16.10.1993, p. 6). [5] Pour les sirops des codes NC 17023099, 17024090 et 17026090, obtenus par m\u00e9lange de sirops de glucose et fructose, seul le sirop de glucose a droit \u00e0 la restitution \u00e0 l'exportation. -------------------------------------------------- "}]}
SEBIS/legal_t5_small_summ_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "summarization French model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_summ_it model

Model for summarization of legal text written in Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on the JRC-Acquis parallel corpus.

## Model description

legal_t5_small_summ_it is based on the `t5-small` model and was trained on a large corpus of legal text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for summarization of legal texts written in Italian.

### How to use

Here is how to use this model to summarize legal text written in Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_it"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_summ_it", do_lower_case=False, skip_special_tokens=True),
    device=0
)

it_text = "LA COMMISSIONE DELLE COMUNITÀ EUROPEE, visto il trattato che istituisce la Comunità europea, visto il regolamento (CEE) n. 2082/92 del Consiglio, del 14 luglio 1992, relativo alle attestazioni di specificità dei prodotti agricoli ed alimentari(1), in particolare l'articolo 9, paragrafo 1, considerando quanto segue: (1) A norma dell'articolo 7 del regolamento (CEE) n. 2082/92, la Finlandia ha trasmesso alla Commissione una domanda di registrazione della denominazione %quot%Kalakukko%quot% quale attestazione di specificità. (2) La dicitura %quot%specialità tradizionale garantita%quot% può applicarsi soltanto a denominazioni figuranti nel summenzionato albo. (3) Nessuna dichiarazione di opposizione, ai sensi dell'articolo 8 del summenzionato regolamento, è stata trasmessa alla Commissione a seguito della pubblicazione nella Gazzetta ufficiale delle Comunità europee(2) della denominazione figurante nell'allegato del presente regolamento. (4) Di conseguenza, la denominazione di cui all'allegato può essere iscritta nell'albo delle attestazioni di specificità e beneficiare pertanto della protezione a livello comunitario quale specialità tradizionale garantita nella Comunità in virtù dell'articolo 13, paragrafo 2, del regolamento (CEE) n. 2082/92. (5) L'allegato del presente regolamento completa l'allegato del regolamento (CE) n. 2301/97 della Commissione(3), modificato da ultimo dal regolamento (CE) n. 688/2002(4), HA ADOTTATO IL PRESENTE REGOLAMENTO: Articolo 1 La denominazione di cui all'allegato del presente regolamento è aggiunta all'allegato del regolamento (CE) n. 2301/97 e iscritta nell'albo delle attestazioni di specificità, conformemente all'articolo 9, paragrafo 1, del regolamento (CEE) n. 2082/92. Tale denominazione è protetta ai sensi dell'articolo 13, paragrafo 2, del summenzionato regolamento. Articolo 2 Il presente regolamento entra in vigore il ventesimo giorno successivo alla pubblicazione nella Gazzetta ufficiale delle Comunità europee. Il presente regolamento è obbligatorio in tutti i suoi elementi e direttamente applicabile in ciascuno degli Stati membri. Fatto a Bruxelles, il 15 luglio 2002. Per la Commissione Franz Fischler Membro della Commissione (1) GU L 208 del 24.7.1992, pag. 9. (2) GU C 235 del 21.8.2001, pag. 12. (3) GU L 319 del 21.11.1997, pag. 8. (4) GU L 106 del 23.4.2002, pag. 7. ALLEGATO Prodotti della panetteria, della pasticceria, della confetteria o della biscotteria - Kalakukko "

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_summ_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 22 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 64). It was trained with the encoder-decoder architecture, using the AdaFactor optimizer with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.

## Evaluation results

When the model is used on the summarization test dataset, it achieves the following results:

Test results:

| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_it | 75.07 | 65.53 | 73.85 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
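To sanity-check generated summaries against the reported ROUGE numbers, the `rouge-score` package can score a summary against a reference. A minimal sketch, reusing the `pipeline` and `it_text` objects from the snippet above; the reference string here is a hypothetical placeholder, not taken from the JRC-Acquis test set:

```python
from rouge_score import rouge_scorer  # pip install rouge-score

# Score one generated summary against one reference summary.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"], use_stemmer=False)

prediction = pipeline([it_text], max_length=512)[0]["translation_text"]
reference = "La denominazione Kalakukko è iscritta nell'albo delle attestazioni di specificità."  # hypothetical reference

for name, score in scorer.score(reference, prediction).items():
    print(name, round(score.fmeasure, 4))
```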
{"language": "Italian", "tags": ["summarization Italian model"], "datasets": ["jrc-acquis"], "widget": [{"text": "LA COMMISSIONE DELLE COMUNIT\u00c0 EUROPEE, visto il trattato che istituisce la Comunit\u00e0 europea, visto il regolamento (CEE) n. 2082/92 del Consiglio, del 14 luglio 1992, relativo alle attestazioni di specificit\u00e0 dei prodotti agricoli ed alimentari(1), in particolare l'articolo 9, paragrafo 1, considerando quanto segue: (1) A norma dell'articolo 7 del regolamento (CEE) n. 2082/92, la Finlandia ha trasmesso alla Commissione una domanda di registrazione della denominazione %quot%Kalakukko%quot% quale attestazione di specificit\u00e0. (2) La dicitura %quot%specialit\u00e0 tradizionale garantita%quot% pu\u00f2 applicarsi soltanto a denominazioni figuranti nel summenzionato albo. (3) Nessuna dichiarazione di opposizione, ai sensi dell'articolo 8 del summenzionato regolamento, \u00e8 stata trasmessa alla Commissione a seguito della pubblicazione nella Gazzetta ufficiale delle Comunit\u00e0 europee(2) della denominazione figurante nell'allegato del presente regolamento. (4) Di conseguenza, la denominazione di cui all'allegato pu\u00f2 essere iscritta nell'albo delle attestazioni di specificit\u00e0 e beneficiare pertanto della protezione a livello comunitario quale specialit\u00e0 tradizionale garantita nella Comunit\u00e0 in virt\u00f9 dell'articolo 13, paragrafo 2, del regolamento (CEE) n. 2082/92. (5) L'allegato del presente regolamento completa l'allegato del regolamento (CE) n. 2301/97 della Commissione(3), modificato da ultimo dal regolamento (CE) n. 688/2002(4), HA ADOTTATO IL PRESENTE REGOLAMENTO: Articolo 1 La denominazione di cui all'allegato del presente regolamento \u00e8 aggiunta all'allegato del regolamento (CE) n. 2301/97 e iscritta nell'albo delle attestazioni di specificit\u00e0, conformemente all'articolo 9, paragrafo 1, del regolamento (CEE) n. 2082/92. Tale denominazione \u00e8 protetta ai sensi dell'articolo 13, paragrafo 2, del summenzionato regolamento. Articolo 2 Il presente regolamento entra in vigore il ventesimo giorno successivo alla pubblicazione nella Gazzetta ufficiale delle Comunit\u00e0 europee. Il presente regolamento \u00e8 obbligatorio in tutti i suoi elementi e direttamente applicabile in ciascuno degli Stati membri. Fatto a Bruxelles, il 15 luglio 2002. Per la Commissione Franz Fischler Membro della Commissione (1) GU L 208 del 24.7.1992, pag. 9. (2) GU C 235 del 21.8.2001, pag. 12. (3) GU L 319 del 21.11.1997, pag. 8. (4) GU L 106 del 23.4.2002, pag. 7. ALLEGATO Prodotti della panetteria, della pasticceria, della confetteria o della biscotteria - Kalakukko "}]}
SEBIS/legal_t5_small_summ_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "summarization Italian model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_summ_multitask_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_summ_multitask_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_summ_multitask_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_summ_multitask_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_summ_multitask_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_summ_multitask_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_summ_multitask_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_summ_sv model

Model for summarization of legal text written in Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on the JRC-Acquis parallel corpus.

## Model description

legal_t5_small_summ_sv is based on the `t5-small` model and was trained on a large corpus of legal text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for summarization of legal texts written in Swedish.

### How to use

Here is how to use this model to summarize legal text written in Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_sv"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_summ_sv", do_lower_case=False, skip_special_tokens=True),
    device=0
)

sv_text = "EUROPEISKA GEMENSKAPERNAS RÅD HAR ANTAGIT DENNA FÖRORDNING med beaktande av Fördraget om upprättandet av Europeiska ekonomiska gemenskapen, särskilt artiklarna 43 och 100a i detta, med beaktande av kommissionens förslag(1), i samarbete med Europaparlamentet(2), med beaktande av Ekonomiska och sociala kommitténs yttrande(3), och med beaktande av följande: Det bör införas förbud mot användning av blybaserade kapsyler eller blybaserad folie i förslutningar på förpackningar som används då aromatiserade viner, aromatiserade vinbaserade drycker och aromatiserade drinkar baserade på vinprodukter släpps ut på marknaden i syfte att undvika risken för kontaminering, särskilt vid oavsiktlig kontakt med sådana produkter, samt risken för miljöförorening på grund av avfall som innehåller bly från kapsyler och folie av detta slag. Tillverkarna och användarna av kapsylerna och folien i fråga bör dock ges tid att anpassa sig genom att förbudet inte tillämpas förrän från och med den 1 januari 1993. Det är även nödvändigt att tillåta att produkter som före detta datum tappats på buteljer med blybaserade kapsyler eller blybaserad folie får säljas till dess att lagren är uttömda. Vissa definitioner av aromatiserade vinbaserade drycker bör anpassas så att större hänsyn tas till traditionella framställningsmetoder. Förordning (EEG) nr 1601/91(4) bör därför ändras. HÄRIGENOM FÖRESKRIVS FÖLJANDE. Artikel 1 Förordning (EEG) nr 1601/91 ändras på följande sätt: 1. Artikel 2.3 a första stycket skall ersättas med följande: %quot%a) Sangria: en dryck som framställs av vin - som smaksatts genom tillsats av naturliga extrakt eller essenser av citrusfrukt, - med eller utan saft av sådan frukt, - eventuellt: - med tillsats av kryddor, - sötat, - med tillsats av CO2, och med en slutlig alkoholstyrka på under 12 volymprocent.%quot% 2. Artikel 2.3 e skall ersättas med följande: %quot%e) Kalte Ente: Smaksatt vinbaserad dryck som framställs genom att vin, pärlande vin eller pärlande vin med tillsatt CO2 blandas med mousserande vin eller mousserande vin med tillsatt CO2 och tillsätts naturlig citronsubstans eller extrakt av detta som måste ge en tydligt framträdande smak. Slutprodukten måste innehålla minst 25 volymprocent mousserande vin eller mousserande vin med tillsatt CO2.%quot% 3. Följande punkt skall införas i artikel 8: %quot%4.a Från och med den 1 januari 1993 får buteljerade produkter som omfattas av denna förordning inte saluhållas eller släppas ut på marknaden i förpackningar med förslutningar som täckts med blybaserade kapsyler eller blybaserad folie. Dock får produkter som före detta datum tappats på flaskor med detta slag av kapsyler eller folie avyttras till dess att lagren tömts.%quot% Artikel 2 Denna förordning träder i kraft den tredje dagen efter det att den har offentliggjorts i Europeiska gemenskapernas officiella tidning. Denna förordning är till alla delar bindande och direkt tillämplig i alla medlemsstater. Utfärdad i Bryssel den 9 november 1992. På rådets vägnar D. HURD Ordförande (1) EGT nr C 69, 18.3.1992, s. 11. (2) EGT nr C 241, 21.9.1992, s. 97 och beslut av den 28 oktober 1992. (3) EGT nr C 169, 6.7.1992, s. 1. (4) EGT nr L 149, 14.6.1991, s. 1. "

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_summ_sv model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 19 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 64). It was trained with the encoder-decoder architecture, using the AdaFactor optimizer with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.

## Evaluation results

When the model is used on the summarization test dataset, it achieves the following results:

Test results:

| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_sv | 78.84 | 69.97 | 77.59 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
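The model was trained with a sequence length of 512, so legal acts longer than that are silently cut off at generation time. A minimal sketch of explicitly truncating an input to the training length before summarizing; the 510-piece budget is an assumption that leaves room for the special tokens the pipeline adds:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_summ_sv")

def truncate_to_training_length(text, budget=510):
    # Encode without special tokens, keep the first `budget` pieces,
    # and decode back to a plain string for the pipeline.
    ids = tokenizer.encode(text, add_special_tokens=False)[:budget]
    return tokenizer.decode(ids)

pipeline([truncate_to_training_length(sv_text)], max_length=512)
```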
{"language": "Swedish", "tags": ["summarization Swedish model"], "datasets": ["jrc-acquis"], "widget": [{"text": "EUROPEISKA GEMENSKAPERNAS R\u00c5D HAR ANTAGIT DENNA F\u00d6RORDNING med beaktande av F\u00f6rdraget om uppr\u00e4ttandet av Europeiska ekonomiska gemenskapen, s\u00e4rskilt artiklarna 43 och 100a i detta, med beaktande av kommissionens f\u00f6rslag(1), i samarbete med Europaparlamentet(2), med beaktande av Ekonomiska och sociala kommitt\u00e9ns yttrande(3), och med beaktande av f\u00f6ljande: Det b\u00f6r inf\u00f6ras f\u00f6rbud mot anv\u00e4ndning av blybaserade kapsyler eller blybaserad folie i f\u00f6rslutningar p\u00e5 f\u00f6rpackningar som anv\u00e4nds d\u00e5 aromatiserade viner, aromatiserade vinbaserade drycker och aromatiserade drinkar baserade p\u00e5 vinprodukter sl\u00e4pps ut p\u00e5 marknaden i syfte att undvika risken f\u00f6r kontaminering, s\u00e4rskilt vid oavsiktlig kontakt med s\u00e5dana produkter, samt risken f\u00f6r milj\u00f6f\u00f6rorening p\u00e5 grund av avfall som inneh\u00e5ller bly fr\u00e5n kapsyler och folie av detta slag. Tillverkarna och anv\u00e4ndarna av kapsylerna och folien i fr\u00e5ga b\u00f6r dock ges tid att anpassa sig genom att f\u00f6rbudet inte till\u00e4mpas f\u00f6rr\u00e4n fr\u00e5n och med den 1 januari 1993. Det \u00e4r \u00e4ven n\u00f6dv\u00e4ndigt att till\u00e5ta att produkter som f\u00f6re detta datum tappats p\u00e5 buteljer med blybaserade kapsyler eller blybaserad folie f\u00e5r s\u00e4ljas till dess att lagren \u00e4r utt\u00f6mda. Vissa definitioner av aromatiserade vinbaserade drycker b\u00f6r anpassas s\u00e5 att st\u00f6rre h\u00e4nsyn tas till traditionella framst\u00e4llningsmetoder. F\u00f6rordning (EEG) nr 1601/91(4) b\u00f6r d\u00e4rf\u00f6r \u00e4ndras. H\u00c4RIGENOM F\u00d6RESKRIVS F\u00d6LJANDE. Artikel 1 F\u00f6rordning (EEG) nr 1601/91 \u00e4ndras p\u00e5 f\u00f6ljande s\u00e4tt: 1. Artikel 2.3 a f\u00f6rsta stycket skall ers\u00e4ttas med f\u00f6ljande: %quot%a) Sangria: en dryck som framst\u00e4lls av vin - som smaksatts genom tillsats av naturliga extrakt eller essenser av citrusfrukt, - med eller utan saft av s\u00e5dan frukt, - eventuellt: - med tillsats av kryddor, - s\u00f6tat, - med tillsats av CO2, och med en slutlig alkoholstyrka p\u00e5 under 12 volymprocent.%quot% 2. Artikel 2.3 e skall ers\u00e4ttas med f\u00f6ljande: %quot%e) Kalte Ente: Smaksatt vinbaserad dryck som framst\u00e4lls genom att vin, p\u00e4rlande vin eller p\u00e4rlande vin med tillsatt CO2 blandas med mousserande vin eller mousserande vin med tillsatt CO2 och tills\u00e4tts naturlig citronsubstans eller extrakt av detta som m\u00e5ste ge en tydligt framtr\u00e4dande smak. Slutprodukten m\u00e5ste inneh\u00e5lla minst 25 volymprocent mousserande vin eller mousserande vin med tillsatt CO2.%quot% 3. F\u00f6ljande punkt skall inf\u00f6ras i artikel 8: %quot%4.a Fr\u00e5n och med den 1 januari 1993 f\u00e5r buteljerade produkter som omfattas av denna f\u00f6rordning inte saluh\u00e5llas eller sl\u00e4ppas ut p\u00e5 marknaden i f\u00f6rpackningar med f\u00f6rslutningar som t\u00e4ckts med blybaserade kapsyler eller blybaserad folie. Dock f\u00e5r produkter som f\u00f6re detta datum tappats p\u00e5 flaskor med detta slag av kapsyler eller folie avyttras till dess att lagren t\u00f6mts.%quot% Artikel 2 Denna f\u00f6rordning tr\u00e4der i kraft den tredje dagen efter det att den har offentliggjorts i Europeiska gemenskapernas officiella tidning. 
Denna f\u00f6rordning \u00e4r till alla delar bindande och direkt till\u00e4mplig i alla medlemsstater. Utf\u00e4rdad i Bryssel den 9 november 1992. P\u00e5 r\u00e5dets v\u00e4gnar D. HURD Ordf\u00f6rande (1) EGT nr C 69, 18.3.1992, s. 11. (2) EGT nr C 241, 21.9.1992, s. 97 och beslut av den 28 oktober 1992. (3) EGT nr C 169, 6.7.1992, s. 1. (4) EGT nr L 149, 14.6.1991, s. 1. "}]}
SEBIS/legal_t5_small_summ_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "summarization Swedish model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_cs_de model

Model for translating legal text from Czech to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_cs_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to German.

### How to use

Here is how to use this model to translate legal text from Czech to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_de"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_cs_de", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "Konečná zpráva bude Parlamentu předložena na konci nového funkčního období."

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_trans_cs_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/) and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It was trained with the encoder-decoder architecture, using the AdaFactor optimizer with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_de | 44.69 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
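The BLEU score above was computed on the held-out test set. As a rough sketch of how such a score can be reproduced with the `sacrebleu` package, reusing the `pipeline` and `cs_text` objects from the snippet above; the German reference below is a hypothetical placeholder, not the official test reference:

```python
import sacrebleu  # pip install sacrebleu

hypotheses = [pipeline([cs_text], max_length=512)[0]["translation_text"]]
references = [["Der Abschlussbericht wird dem Parlament am Ende der neuen Wahlperiode vorgelegt."]]  # hypothetical reference

# corpus_bleu expects one list of hypotheses and a list of reference streams.
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```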
{"language": "Cszech Deustch", "tags": ["translation Cszech Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Kone\u010dn\u00e1 zpr\u00e1va bude Parlamentu p\u0159edlo\u017eena na konci nov\u00e9ho funk\u010dn\u00edho obdob\u00ed."}]}
SEBIS/legal_t5_small_trans_cs_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_cs_de_small_finetuned model

Model for translating legal text from Czech to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_cs_de_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set; the unsupervised task was "masked language modelling". legal_t5_small_trans_cs_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to German.

### How to use

Here is how to use this model to translate legal text from Czech to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_de_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_cs_de", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "Vzhledem k tomu, že tento právní předpis bude přímo použitelný v členských státech a zavede mnoho povinností pro ty, na něž se vztahuje, je žádoucí, aby se jim poskytlo více času na přizpůsobení se těmto novým pravidlům."

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_trans_cs_de_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/) and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It was trained with the encoder-decoder architecture, using the AdaFactor optimizer with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the randomly masked portions of a sentence.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_de_small_finetuned | 44.175 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
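The card does not spell out the exact masking scheme behind the "masked language modelling" pretraining. A hedged illustration of the standard T5 span-corruption format, which is the usual way this objective is set up for `t5-small`-style models; the input/target pair below is invented for illustration only:

```python
# Random spans of the source sentence are replaced by sentinel tokens
# (<extra_id_0>, <extra_id_1>, ...); the target reconstructs those spans
# in order. The actual spans chosen during legal_t5 pretraining are unknown.
pretraining_input = "Konečná zpráva bude <extra_id_0> na konci <extra_id_1> období."
pretraining_target = "<extra_id_0> Parlamentu předložena <extra_id_1> nového funkčního <extra_id_2>"
```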
{"language": "Cszech Deustch", "tags": ["translation Cszech Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Vzhledem k tomu, \u017ee tento pr\u00e1vn\u00ed p\u0159edpis bude p\u0159\u00edmo pou\u017eiteln\u00fd v \u010dlensk\u00fdch st\u00e1tech a zavede mnoho povinnost\u00ed pro ty, na n\u011b\u017e se vztahuje, je \u017e\u00e1douc\u00ed, aby se jim poskytlo v\u00edce \u010dasu na p\u0159izp\u016fsoben\u00ed se t\u011bmto nov\u00fdm pravidl\u016fm."}]}
SEBIS/legal_t5_small_trans_cs_de_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_cs_en model

Model for translating legal text from Czech to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_cs_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to English.

### How to use

Here is how to use this model to translate legal text from Czech to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_en"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_cs_en", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "s ohledem na druhou schůzku států OSN, která se konala 11.–15. června 2005 a měla posoudit provádění akčního programu OSN k prevenci, potírání a vymýcení nezákonného obchodu s ručními a lehkými zbraněmi ve všech jeho aspektech, která se koná jednou za dva roky,"

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_trans_cs_en model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/) and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It was trained with the encoder-decoder architecture, using the AdaFactor optimizer with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_en | 56.92 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
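`AutoModelWithLMHead` is deprecated in recent transformers releases, although it still resolves to the same T5 model here. A sketch of the equivalent call path with `AutoModelForSeq2SeqLM` and a direct `generate()` call, which avoids the pipeline wrapper; it reuses `cs_text` from the snippet above and should behave the same, assuming a recent transformers version:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_cs_en")
model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_cs_en")

# Tokenize, generate, and decode in three explicit steps.
inputs = tokenizer(cs_text, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_length=512)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```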
{"language": "Cszech English", "tags": ["translation Cszech English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "s ohledem na druhou sch\u016fzku st\u00e1t\u016f OSN, kter\u00e1 se konala 11.\u201315. \u010dervna 2005 a m\u011bla posoudit prov\u00e1d\u011bn\u00ed ak\u010dn\u00edho programu OSN k prevenci, pot\u00edr\u00e1n\u00ed a vym\u00fdcen\u00ed nez\u00e1konn\u00e9ho obchodu s ru\u010dn\u00edmi a lehk\u00fdmi zbran\u011bmi ve v\u0161ech jeho aspektech, kter\u00e1 se kon\u00e1 jednou za dva roky,"}]}
SEBIS/legal_t5_small_trans_cs_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_cs_en_small_finetuned model

Model for translating legal text from Czech to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_cs_en_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set; the unsupervised task was "masked language modelling". legal_t5_small_trans_cs_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to English.

### How to use

Here is how to use this model to translate legal text from Czech to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_en_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_cs_en", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "4) Seznam užívaných výrobků s obsahem PFOS: Kvůli značnému poklesu výroby PFOS po roce 2000 představují největší zdroj emisí patrně dřívější využití, která však nadále reálně existují."

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_trans_cs_en_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/) and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It was trained with the encoder-decoder architecture, using the AdaFactor optimizer with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the randomly masked portions of a sentence.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_en_small_finetuned | 56.936 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
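Translating segments one at a time is slow. A minimal sketch of batching several Czech sentences through a single `generate()` call; padding is needed because the sentences tokenize to different lengths, and the second sentence is borrowed from the cs→de card above:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# The card reuses the tokenizer of the base cs→en model.
tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_cs_en")
model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_cs_en_small_finetuned")

cs_batch = [
    "4) Seznam užívaných výrobků s obsahem PFOS: Kvůli značnému poklesu výroby PFOS po roce 2000 představují největší zdroj emisí patrně dřívější využití, která však nadále reálně existují.",
    "Konečná zpráva bude Parlamentu předložena na konci nového funkčního období.",
]

inputs = tokenizer(cs_batch, return_tensors="pt", padding=True, truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_length=512)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```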
{"language": "Cszech English", "tags": ["translation Cszech English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "4) Seznam u\u017e\u00edvan\u00fdch v\u00fdrobk\u016f s obsahem PFOS: Kv\u016fli zna\u010dn\u00e9mu poklesu v\u00fdroby PFOS po roce 2000 p\u0159edstavuj\u00ed nejv\u011bt\u0161\u00ed zdroj emis\u00ed patrn\u011b d\u0159\u00edv\u011bj\u0161\u00ed vyu\u017eit\u00ed, kter\u00e1 v\u0161ak nad\u00e1le re\u00e1ln\u011b existuj\u00ed."}]}
SEBIS/legal_t5_small_trans_cs_en_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_cs_es model

Model for translating legal text from Czech to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_cs_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to Spanish.

### How to use

Here is how to use this model to translate legal text from Czech to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_es"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_cs_es", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "k návrhu směrnice Evropského parlamentu a Rady o bezpečnosti hraček"

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_trans_cs_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/) and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It was trained with the encoder-decoder architecture, using the AdaFactor optimizer with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_es | 50.77 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Cszech Spanish", "tags": ["translation Cszech Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "k n\u00e1vrhu sm\u011brnice Evropsk\u00e9ho parlamentu a Rady o bezpe\u010dnosti hra\u010dek"}]}
SEBIS/legal_t5_small_trans_cs_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_cs_es_small_finetuned model

Model for translating legal text from Czech to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_cs_es_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set; the unsupervised task was "masked language modelling". legal_t5_small_trans_cs_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to Spanish.

### How to use

Here is how to use this model to translate legal text from Czech to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_es_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_cs_es", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "vzhledem k tomu, že parlamentní volby v listopadu a v prosinci 2006, volby do Senátu v lednu 2007 a volbu prezidenta Sídí Muhammada Ulda Šajcha Abdalláhiho v březnu 2007, uznali jako spravedlivé a transparentní zahraniční pozorovatelé, včetně pozorovatelů z Evropské unie, a zejména z mise ke sledování průběhu voleb vyslané Evropským parlamentem, jenž se tím stal garantem legality těchto voleb,"

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_trans_cs_es_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/) and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It was trained with the encoder-decoder architecture, using the AdaFactor optimizer with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the randomly masked portions of a sentence.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_es_small_finetuned | 50.862 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Cszech Spanish", "tags": ["translation Cszech Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "vzhledem k tomu, \u017ee parlamentn\u00ed volby v listopadu a v prosinci 2006, volby do Sen\u00e1tu v lednu 2007 a volbu prezidenta S\u00edd\u00ed Muhammada Ulda \u0160ajcha Abdall\u00e1hiho v b\u0159eznu 2007, uznali jako spravedliv\u00e9 a transparentn\u00ed zahrani\u010dn\u00ed pozorovatel\u00e9, v\u010detn\u011b pozorovatel\u016f z Evropsk\u00e9 unie, a zejm\u00e9na z mise ke sledov\u00e1n\u00ed pr\u016fb\u011bhu voleb vyslan\u00e9 Evropsk\u00fdm parlamentem, jen\u017e se t\u00edm stal garantem legality t\u011bchto voleb,"}]}
SEBIS/legal_t5_small_trans_cs_es_small_finetuned
null
[ "transformers", "pytorch", "t5", "text2text-generation", "translation Cszech Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_cs_fr model

Model for translating legal text from Czech to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_cs_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to French.

### How to use

Here is how to use this model to translate legal text from Czech to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_fr"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_cs_fr", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "Prevencí proti nemoci Usnesení, o kterém bude Parlament hlasovat 24. října je založeno zejména na interpelacích, které poslancům předložily parlamentní kluby pro životní prostředí, zaměstnanost a práva žen."

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_trans_cs_fr model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/) and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It was trained with the encoder-decoder architecture, using the AdaFactor optimizer with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_fr | 50.75 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
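`device=0` in the snippet above assumes a CUDA-capable GPU. A sketch of the CPU-only variant; `device=-1` is the transformers convention for running a pipeline on CPU:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline_cpu = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_fr"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_cs_fr", do_lower_case=False, skip_special_tokens=True),
    device=-1,  # -1 selects the CPU
)

pipeline_cpu([cs_text], max_length=512)
```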
{"language": "Cszech French", "tags": ["translation Cszech French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Prevenc\u00ed proti nemoci Usnesen\u00ed, o kter\u00e9m bude Parlament hlasovat 24. \u0159\u00edjna je zalo\u017eeno zejm\u00e9na na interpelac\u00edch, kter\u00e9 poslanc\u016fm p\u0159edlo\u017eily parlamentn\u00ed kluby pro \u017eivotn\u00ed prost\u0159ed\u00ed, zam\u011bstnanost a pr\u00e1va \u017een."}]}
SEBIS/legal_t5_small_trans_cs_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_cs_fr_small_finetuned model

Model for translating legal text from Czech to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_cs_fr_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set; the unsupervised task was "masked language modelling". legal_t5_small_trans_cs_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to French.

### How to use

Here is how to use this model to translate legal text from Czech to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_fr_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_cs_fr", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "9:00 - 10:50 Komise (včetně odpovědí)"

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_trans_cs_fr_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/) and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It was trained with the encoder-decoder architecture, using the AdaFactor optimizer with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the randomly masked portions of a sentence.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_fr_small_finetuned | 50.717 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Cszech French", "tags": ["translation Cszech French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "9:00 - 10:50 Komise (v\u010detn\u011b odpov\u011bd\u00ed)"}]}
SEBIS/legal_t5_small_trans_cs_fr_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_cs_it model

Model for translating legal text from Czech to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_cs_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to Italian.

### How to use

Here is how to use this model to translate legal text from Czech to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_it"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_cs_it", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "– Měly by se podporovat normy sportovní správy prostřednictvím výměny osvědčených postupů."

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_trans_cs_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/) and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It was trained with the encoder-decoder architecture, using the AdaFactor optimizer with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_it | 46.67 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Cszech Italian", "tags": ["translation Cszech Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "\u2013 M\u011bly by se podporovat normy sportovn\u00ed spr\u00e1vy prost\u0159ednictv\u00edm v\u00fdm\u011bny osv\u011bd\u010den\u00fdch postup\u016f."}]}
SEBIS/legal_t5_small_trans_cs_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_cs_it_small_finetuned model

Model for translating legal text from Czech to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_cs_it_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set; the unsupervised task was "masked language modelling". legal_t5_small_trans_cs_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to Italian.

### How to use

Here is how to use this model to translate legal text from Czech to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_it_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_cs_it", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "Členové přítomní při závěrečném hlasování"

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_trans_cs_it_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/) and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It was trained with the encoder-decoder architecture, using the AdaFactor optimizer with an inverse square root learning rate schedule.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the randomly masked portions of a sentence.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_it_small_finetuned | 46.367 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Cszech Italian", "tags": ["translation Cszech Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "\u010clenov\u00e9 p\u0159\u00edtomn\u00ed p\u0159i z\u00e1v\u011bre\u010dn\u00e9m hlasov\u00e1n\u00ed"}]}
SEBIS/legal_t5_small_trans_cs_it_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_cs_sv model

Model for translating legal text from Czech to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_cs_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to Swedish.

### How to use

Here is how to use this model to translate legal text from Czech to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_sv"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_cs_sv", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "Odborná příprava je v sektoru minimální a tradiční, postrádá specifické kurzy nebo výukové plány."

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_trans_cs_sv model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096), with the encoder-decoder architecture described above. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_sv | 47.9|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
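The preprocessing section describes a unigram vocabulary model but does not include the training command. Below is a minimal sketch of how such a vocabulary could be built with the SentencePiece library; the input file name, vocabulary size, and character coverage are assumptions, since the card only states that a unigram model was trained on 88M lines of text.

```python
# Hypothetical sketch of the preprocessing step: training a unigram
# SentencePiece model on the parallel corpus to build the vocabulary.
# The input file, vocab_size, and character_coverage values below are
# illustrative assumptions, not documented settings.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",  # hypothetical file: one sentence per line
    model_prefix="legal_t5_small",          # writes legal_t5_small.model / .vocab
    vocab_size=32000,                       # assumed; typical for T5-style models
    model_type="unigram",                   # matches the unigram model named in the card
    character_coverage=1.0,                 # keep all characters of the legal corpora
)

# The resulting model file can then tokenize text for training.
sp = spm.SentencePieceProcessor(model_file="legal_t5_small.model")
print(sp.encode("Odborná příprava je v sektoru minimální.", out_type=str))
```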
{"language": "Cszech Swedish", "tags": ["translation Cszech Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Odborn\u00e1 p\u0159\u00edprava je v sektoru minim\u00e1ln\u00ed a tradi\u010dn\u00ed, postr\u00e1d\u00e1 specifick\u00e9 kurzy nebo v\u00fdukov\u00e9 pl\u00e1ny."}]}
SEBIS/legal_t5_small_trans_cs_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_cs_sv_small_finetuned model

Model for translating legal text from Czech to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_cs_sv_small_finetuned is initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_cs_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Czech to Swedish.

### How to use

Here is how to use this model to translate legal text from Czech to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_sv_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_cs_sv", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "10 Ukončení denního zasedání"

pipeline([cs_text], max_length=512)
```

## Training data

The legal_t5_small_trans_cs_sv_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096), with the encoder-decoder architecture described above. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict randomly masked portions of a sentence.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_sv_small_finetuned | 48.159|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
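The card states that pretraining used masked language modelling over all 42 language pairs, but it does not show the data format. The sketch below illustrates the standard T5 span-corruption convention on the example sentence from the card; the stock `t5-small` checkpoint and the hand-picked mask are assumptions for illustration, not the authors' actual pretraining setup.

```python
# Illustrative sketch of the "masked language modelling" objective in T5's
# span-corruption format. The exact masking procedure of the LegalTrans
# pretraining is not documented; the stock t5-small checkpoint and the
# hand-picked mask below are assumptions for illustration only.
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelWithLMHead.from_pretrained("t5-small")

# Masked spans are replaced by sentinel tokens in the input; the target lists
# each sentinel followed by the span it hides, closed by a final sentinel.
input_text = "10 <extra_id_0> denního <extra_id_1>"
target_text = "<extra_id_0> Ukončení <extra_id_1> zasedání <extra_id_2>"

inputs = tokenizer(input_text, return_tensors="pt")
labels = tokenizer(target_text, return_tensors="pt").input_ids

# The pretraining loss is the cross-entropy of reconstructing the masked spans.
loss = model(**inputs, labels=labels).loss
print(float(loss))
```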
{"language": "Cszech Swedish", "tags": ["translation Cszech Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "10 Ukon\u010den\u00ed denn\u00edho zased\u00e1n\u00ed"}]}
SEBIS/legal_t5_small_trans_cs_sv_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00