| Column | Type | Range / distinct values |
|:---|:---|:---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 – 18.3M |
| metadata | stringlengths | 2 – 1.07B |
| id | stringlengths | 5 – 122 |
| last_modified | null | |
| tags | listlengths | 1 – 1.84k |
| sha | null | |
| created_at | stringlengths | 25 – 25 |
text2text-generation
transformers
# legal_t5_small_trans_de_cs model

Model for translating legal text from German to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_de_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from German to Czech.

### How to use

Here is how to use this model to translate legal text from German to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_cs"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_de_cs",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

de_text = "17. empfiehlt die Einführung einer spezifischen Strategie zur Unterstützung neuer und demokratisch gewählter Parlamente im Hinblick auf eine dauerhafte Verankerung von Demokratie, Rechtsstaatlichkeit und guter Staatsführung;"

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_trans_de_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_cs | 44.07 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
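The card reports a BLEU score but does not show how it was computed. Below is a minimal, non-authoritative sketch of how such a score could be reproduced with the `sacrebleu` library; the library choice, file name, and tab-separated file format are assumptions for illustration, not part of the original card.

```python
# Hedged sketch: scoring pipeline output with sacrebleu (not the authors' evaluation code).
# Assumes a tab-separated file of German source / Czech reference pairs; adjust paths as needed.
import sacrebleu
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_cs"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_de_cs", do_lower_case=False),
    device=0,  # use device=-1 to run on CPU
)

sources, references = [], []
with open("test_de_cs.tsv", encoding="utf-8") as f:  # hypothetical test file
    for line in f:
        src, ref = line.rstrip("\n").split("\t")
        sources.append(src)
        references.append(ref)

outputs = pipeline(sources, max_length=512)
hypotheses = [o["translation_text"] for o in outputs]

# corpus_bleu takes a list of hypothesis strings and a list of reference lists
print(sacrebleu.corpus_bleu(hypotheses, [references]).score)
```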
{"language": "Deustch Cszech", "tags": ["translation Deustch Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "17. empfiehlt die Einf\u00fchrung einer spezifischen Strategie zur Unterst\u00fctzung neuer und demokratisch gew\u00e4hlter Parlamente im Hinblick auf eine dauerhafte Verankerung von Demokratie, Rechtsstaatlichkeit und guter Staatsf\u00fchrung;"}]}
SEBIS/legal_t5_small_trans_de_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_de_cs_small_finetuned model

Model for translating legal text from German to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_de_cs_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from German to Czech.

### How to use

Here is how to use this model to translate legal text from German to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_cs_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_de_cs",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

de_text = "Der Rahmenbeschluss sieht ein beschleunigtes Verfahren für die Anerkennung und Vollstreckung von freiheitsentziehenden Maßnahmen oder Maßnahmen der Sicherung (bei Unzurechnungsfähigkeit oder verminderter Schuldfähigkeit), die von einem Gericht eines anderen Mitgliedstaats gegen eine Person verhängt wurden, durch einen Mitgliedstaat vor, dessen Staatsangehörigkeit die Person besitzt, in dem sie ihren rechtmäßigen Aufenthalt hat oder zu dem sie enge Verbindungen hat."

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_trans_de_cs_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict portions of a sentence that had been masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_cs_small_finetuned | 43.750 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
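The "masked language modelling" objective above is only named, not specified. The snippet below is a rough, assumption-laden illustration of what a T5-style masked input/target pair looks like, using the `<extra_id_*>` sentinel convention; it is not the authors' preprocessing code, and real T5 span corruption additionally merges consecutive masked tokens into a single sentinel.

```python
# Hedged illustration of the unsupervised masking objective (not the authors' actual code).
import random

def mask_tokens(tokens, mask_ratio=0.3):
    """Replace random tokens with sentinel tokens and build the matching target string."""
    inputs, targets, sentinel = [], [], 0
    for tok in tokens:
        if random.random() < mask_ratio:
            inputs.append(f"<extra_id_{sentinel}>")          # masked position in the input
            targets.extend([f"<extra_id_{sentinel}>", tok])  # model must predict the original token
            sentinel += 1
        else:
            inputs.append(tok)
    return " ".join(inputs), " ".join(targets)

tokens = "Der Rahmenbeschluss sieht ein beschleunigtes Verfahren vor".split()
model_input, model_target = mask_tokens(tokens)
print(model_input)   # e.g. "Der <extra_id_0> sieht ein <extra_id_1> Verfahren vor"
print(model_target)  # e.g. "<extra_id_0> Rahmenbeschluss <extra_id_1> beschleunigtes"
```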
{"language": "Deustch Cszech", "tags": ["translation Deustch Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Der Rahmenbeschluss sieht ein beschleunigtes Verfahren f\u00fcr die Anerkennung und Vollstreckung von freiheitsentziehenden Ma\u00dfnahmen oder Ma\u00dfnahmen der Sicherung (bei Unzurechnungsf\u00e4higkeit oder verminderter Schuldf\u00e4higkeit), die von einem Gericht eines anderen Mitgliedstaats gegen eine Person verh\u00e4ngt wurden, durch einen Mitgliedstaat vor, dessen Staatsangeh\u00f6rigkeit die Person besitzt, in dem sie ihren rechtm\u00e4\u00dfigen Aufenthalt hat oder zu dem sie enge Verbindungen hat."}]}
SEBIS/legal_t5_small_trans_de_cs_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_de_en model

Model for translating legal text from German to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_de_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from German to English.

### How to use

Here is how to use this model to translate legal text from German to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_en"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_de_en",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

de_text = "Eisenbahnunternehmen müssen Fahrkarten über mindestens einen der folgenden Vertriebswege anbieten: an Fahrkartenschaltern oder Fahrkartenautomaten, per Telefon, Internet oder jede andere in weitem Umfang verfügbare Informationstechnik oder in den Zügen."

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_trans_de_en model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_en | 49.1 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
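A practical note on the snippet above: in more recent releases of `transformers`, `AutoModelWithLMHead` is deprecated. Assuming a recent version, the same checkpoint can be loaded with `AutoModelForSeq2SeqLM`; this is a sketch of an equivalent setup, not a change to the original instructions.

```python
# Hedged alternative load for newer transformers releases, where AutoModelWithLMHead is deprecated.
# AutoModelForSeq2SeqLM loads the same T5 checkpoint as an encoder-decoder model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_de_en"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_de_en", do_lower_case=False),
    device=0,
)

print(pipeline(["Betrifft: Leader-Programm"], max_length=512))
```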
{"language": "Deustch English", "tags": ["translation Deustch English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "(2) Die Richtlinie 80/987/EWG des Rates(4) soll den Arbeitnehmern im Fall der Zahlungsunf\u00e4higkeit ihres Arbeitgebers einen Mindestschutz gew\u00e4hren. Deshalb verpflichtet sie die Mitgliedstaaten zur Schaffung einer Einrichtung, die die Befriedigung der nicht erfuellten Arbeitnehmeranspr\u00fcche garantiert."}]}
SEBIS/legal_t5_small_trans_de_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_de_en_small_finetuned model

Model for translating legal text from German to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_de_en_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from German to English.

### How to use

Here is how to use this model to translate legal text from German to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_en_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_de_en",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

de_text = "In welchen anderen EU-Ländern ist von ähnlichen Listen mit Parteikadern und Regierungsmitgliedern berichtet worden, die die „Schirmherrschaft“ über Vorschläge für private von der Europäischen Union kofinanzierte Investitionen übernommen haben?"

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_trans_de_en_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict portions of a sentence that had been masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_en_small_finetuned | 48.674 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Deustch English", "tags": ["translation Deustch English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "In welchen anderen EU-L\u00e4ndern ist von \u00e4hnlichen Listen mit Parteikadern und Regierungsmitgliedern berichtet worden, die die \u201eSchirmherrschaft\u201c \u00fcber Vorschl\u00e4ge f\u00fcr private von der Europ\u00e4ischen Union kofinanzierte Investitionen \u00fcbernommen haben?"}]}
SEBIS/legal_t5_small_trans_de_en_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_de_es model

Model for translating legal text from German to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_de_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from German to Spanish.

### How to use

Here is how to use this model to translate legal text from German to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_es"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_de_es",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

de_text = "7. betont, dass die Kommission und die Mitgliedstaaten die Rolle der Frauen in der Sozialwirtschaft aufgrund der hohen Frauenerwerbstätigkeit in dem Sektor und der Bedeutung der Dienstleistungen, die er für die Förderung der Vereinbarkeit von Beruf und Privatleben bietet, aufwerten, unterstützen und verstärken müssen;"

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_trans_de_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_es | 47.24 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Deustch Spanish", "tags": ["translation Deustch Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "7. betont, dass die Kommission und die Mitgliedstaaten die Rolle der Frauen in der Sozialwirtschaft aufgrund der hohen Frauenerwerbst\u00e4tigkeit in dem Sektor und der Bedeutung der Dienstleistungen, die er f\u00fcr die F\u00f6rderung der Vereinbarkeit von Beruf und Privatleben bietet, aufwerten, unterst\u00fctzen und verst\u00e4rken m\u00fcssen;"}]}
SEBIS/legal_t5_small_trans_de_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_de_es_small_finetuned model

Model for translating legal text from German to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_de_es_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from German to Spanish.

### How to use

Here is how to use this model to translate legal text from German to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_es_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_de_es",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

de_text = "Bei einer Kombination von Artikel 124 Absatz 14 mit Artikel 136 AEUV scheint die in den Artikeln 121 und 126 AEUV"

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_trans_de_es_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict portions of a sentence that had been masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_es_small_finetuned | 47.006 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Deustch Spanish", "tags": ["translation Deustch Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Bei einer Kombination von Artikel 124 Absatz 14 mit Artikel 136 AEUV scheint die in den Artikeln 121 und 126 AEUV"}]}
SEBIS/legal_t5_small_trans_de_es_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_de_fr model

Model for translating legal text from German to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_de_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from German to French.

### How to use

Here is how to use this model to translate legal text from German to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_fr"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_de_fr",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

de_text = "stellt fest, dass Leistung und Effizienz nicht in einer standardisierten Art und Weise gemessen werden; fordert die interinstitutionelle Arbeitsgruppe für die Agenturen auf, sich mit dieser Frage zu befassen;"

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_trans_de_fr model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_fr | 47.78 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
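The Preprocessing section only states that a unigram model over 88M lines provides the vocabulary. As a hedged illustration, a comparable subword vocabulary could be built with the `sentencepiece` library; the vocabulary size, file names, and the use of `sentencepiece` itself are assumptions and may differ from the authors' actual setup.

```python
# Hedged sketch of building a unigram subword vocabulary with sentencepiece.
# The card only says a unigram model was trained on 88M lines of the parallel corpus;
# the library, vocab size, and file names below are illustrative assumptions.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="all_language_pairs.txt",   # hypothetical file: one sentence per line, all pairs combined
    model_prefix="legal_t5_vocab",
    vocab_size=32000,
    model_type="unigram",
    character_coverage=1.0,
)

sp = spm.SentencePieceProcessor(model_file="legal_t5_vocab.model")
print(sp.encode("Betrifft: Leader-Programm", out_type=str))  # subword pieces for a sample sentence
```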
{"language": "Deustch French", "tags": ["translation Deustch French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "stellt fest, dass Leistung und Effizienz nicht in einer standardisierten Art und Weise gemessen werden; fordert die interinstitutionelle Arbeitsgruppe f\u00fcr die Agenturen auf, sich mit dieser Frage zu befassen;"}]}
SEBIS/legal_t5_small_trans_de_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_de_fr_small_finetuned model

Model for translating legal text from German to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_de_fr_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from German to French.

### How to use

Here is how to use this model to translate legal text from German to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_fr_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_de_fr",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

de_text = "SCHRIFTLICHE ANFRAGE P-0029/06"

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_trans_de_fr_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict portions of a sentence that had been masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_fr_small_finetuned | 47.461 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Deustch French", "tags": ["translation Deustch French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "SCHRIFTLICHE ANFRAGE P-0029/06"}]}
SEBIS/legal_t5_small_trans_de_fr_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_de_it model

Model for translating legal text from German to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_de_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from German to Italian.

### How to use

Here is how to use this model to translate legal text from German to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_it"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_de_it",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

de_text = "Zum Zeitpunkt der Schlussabstimmung anwesende Stellvertreter(innen)"

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_trans_de_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_it | 43.3 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Deustch Italian", "tags": ["translation Deustch Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Zum Zeitpunkt der Schlussabstimmung anwesende Stellvertreter(innen)"}]}
SEBIS/legal_t5_small_trans_de_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_de_it_small_finetuned model

Model for translating legal text from German to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_de_it_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from German to Italian.

### How to use

Here is how to use this model to translate legal text from German to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_it_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_de_it",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

de_text = "sicherstellen, dass alle Bürger gemäß der Richtlinie .../.../EG [über den Universaldienst und Nutzerrechte bei elektronischen Kommunikationsnetzen und -diensten[ zu erschwinglichen Preisen Zugang zum Universaldienst erhalten;"

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_trans_de_it_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict portions of a sentence that had been masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_it_small_finetuned | 42.895 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Deustch Italian", "tags": ["translation Deustch Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "sicherstellen, dass alle B\u00fcrger gem\u00e4\u00df der Richtlinie .../.../EG [\u00fcber den Universaldienst und Nutzerrechte bei elektronischen Kommunikationsnetzen und -diensten[ zu erschwinglichen Preisen Zugang zum Universaldienst erhalten;"}]}
SEBIS/legal_t5_small_trans_de_it_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_de_sv model

Model for translating legal text from German to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_de_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from German to Swedish.

### How to use

Here is how to use this model to translate legal text from German to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_sv"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_de_sv",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

de_text = "Betrifft: Leader-Programm"

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_trans_de_sv model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_sv | 41.69 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
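A brief usage note for the snippet above (a sketch, not part of the original card): the pipeline accepts a batch of sentences and returns one dictionary per input with a `translation_text` field, and `device=-1` runs inference on CPU when no GPU is available. The batch sentences below reuse examples from this and the neighbouring cards.

```python
# Hedged usage sketch: batch translation and reading the pipeline output.
# device=-1 runs on CPU; the output is one dict per input with a "translation_text" key.
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_sv"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_de_sv", do_lower_case=False),
    device=-1,
)

batch = [
    "Betrifft: Leader-Programm",
    "Zum Zeitpunkt der Schlussabstimmung anwesende Stellvertreter(innen)",
]
for result in pipeline(batch, max_length=512):
    print(result["translation_text"])
```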
{"language": "Deustch Swedish", "tags": ["translation Deustch Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Betrifft: Leader-Programm"}]}
SEBIS/legal_t5_small_trans_de_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_de_sv_small_finetuned model

Model for translating legal text from German to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_de_sv_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from German to Swedish.

### How to use

Here is how to use this model to translate legal text from German to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_sv_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_de_sv",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

de_text = "Die Finanzkrise hat schonungslos offenbart, wo die Mängel in den Überwachungsverfahren der EU liegen, die eine wirksame Vorbeugung von Verstößen gegen die Haushaltsdisziplin, ausufernden Haushaltsdefiziten der Mitgliedstaaten, Ungleichgewichten im Handel und Unterschieden in der Wettbewerbsfähigkeit gewährleisten sollen."

pipeline([de_text], max_length=512)
```

## Training data

The legal_t5_small_trans_de_sv_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict portions of a sentence that had been masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_sv_small_finetuned | 41.365 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Deustch Swedish", "tags": ["translation Deustch Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Die Finanzkrise hat schonungslos offenbart, wo die M\u00e4ngel in den \u00dcberwachungsverfahren der EU liegen, die eine wirksame Vorbeugung von Verst\u00f6\u00dfen gegen die Haushaltsdisziplin, ausufernden Haushaltsdefiziten der Mitgliedstaaten, Ungleichgewichten im Handel und Unterschieden in der Wettbewerbsf\u00e4higkeit gew\u00e4hrleisten sollen."}]}
SEBIS/legal_t5_small_trans_de_sv_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Deustch Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_en_cs model

Model for translating legal text from English to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_en_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from English to Czech.

### How to use

Here is how to use this model to translate legal text from English to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_cs"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_en_cs",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

en_text = "1 In the countries concerned, this certainly affects the priority assigned to making progress on the issue of final disposal, particularly of highly radioactive waste and irradiated fuel elements."

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_trans_en_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_cs | 50.177 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "English Cszech", "tags": ["translation English Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "1 In the countries concerned, this certainly affects the priority assigned to making progress on the issue of final disposal, particularly of highly radioactive waste and irradiated fuel elements."}]}
SEBIS/legal_t5_small_trans_en_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_en_cs_small_finetuned model

Model for translating legal text from English to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_en_cs_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_en_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from English to Czech.

### How to use

Here is how to use this model to translate legal text from English to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_cs_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_en_cs",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

en_text = "Members present for the final vote"

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_trans_en_cs_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict portions of a sentence that had been masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_cs_small_finetuned | 50.394 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "English Cszech", "tags": ["translation English Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Members present for the final vote"}]}
SEBIS/legal_t5_small_trans_en_cs_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_en_de model

Model for translating legal text from English to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_en_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from English to German.

### How to use

Here is how to use this model to translate legal text from English to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_de"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_en_de",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

en_text = "· the impact of electromagnetic fields on animals, especially birds in cities;"

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_trans_en_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_de | 43.656 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
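A further hedged usage note: keyword arguments such as `num_beams` passed to the pipeline call are generally forwarded to the underlying `generate` method, so beam search can be enabled without changing the setup above. The beam count below is an illustrative choice, not a value reported in the card, and behaviour may differ in very old `transformers` versions.

```python
# Hedged sketch: decoding options such as beam search are forwarded to model.generate().
# num_beams=4 is an illustrative assumption, not a setting used by the authors.
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_de"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_en_de", do_lower_case=False),
    device=0,
)

en_text = "· the impact of electromagnetic fields on animals, especially birds in cities;"
print(pipeline([en_text], max_length=512, num_beams=4))
```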
{"language": "English Deustch", "tags": ["translation English Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "\u00b7 the impact of electromagnetic fields on animals, especially birds in cities;"}]}
SEBIS/legal_t5_small_trans_en_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_en_de_small_finetuned model

Model for translating legal text from English to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_en_de_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_en_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translating legal texts from English to German.

### How to use

Here is how to use this model to translate legal text from English to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_de_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_en_de",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

en_text = "The reference framework for the free movement of workers is laid down in Council Regulation (EEC) No 1612/68 on freedom of movement for workers within the Community and has been revised several times."

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_trans_en_de_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict portions of a sentence that had been masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_de_small_finetuned | 43.636 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "English Deustch", "tags": ["translation English Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "The reference framework for the free movement of workers is laid down in Council Regulation (EEC) No 1612/68 on freedom of movement for workers within the Community and has been revised several times."}]}
SEBIS/legal_t5_small_trans_en_de_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SEBIS/legal_t5_small_trans_en_es
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_en_es_small_finetuned model

Model for translating legal text from English to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_en_es_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. It is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from English to Spanish.

### How to use

Here is how to use this model to translate legal text from English to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_es_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_en_es", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

en_text = "Instructs its President to forward this resolution to the Council and Commission and the Government and Parliament of Uzbekistan."

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_trans_en_es_small_finetuned model (the supervised task involving only the corresponding language pair, plus the unsupervised task for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_es_small_finetuned | 53.692|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "English Spanish", "tags": ["translation English Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Instructs its President to forward this resolution to the Council and Commission and the Government and Parliament of Uzbekistan."}]}
SEBIS/legal_t5_small_trans_en_es_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_trans_en_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_en_fr_small_finetuned model

Model for translating legal text from English to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_en_fr_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. It is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from English to French.

### How to use

Here is how to use this model to translate legal text from English to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_fr_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_en_fr", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

en_text = "recalling the decision by 14 Member States earlier this year to limit their bilateral contacts with another Member State,"

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_trans_en_fr_small_finetuned model (the supervised task involving only the corresponding language pair, plus the unsupervised task for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_fr_small_finetuned | 52.476|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "English French", "tags": ["translation English French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "recalling the decision by 14 Member States earlier this year to limit their bilateral contacts with another Member State,"}]}
SEBIS/legal_t5_small_trans_en_fr_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_en_it model

Model for translating legal text from English to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_en_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from English to Italian.

### How to use

Here is how to use this model to translate legal text from English to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_it"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_en_it", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

en_text = "Answer given by Mrs Benita Ferrero-Waldner on behalf of the Commission"

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_trans_en_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_it | 45.39|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
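The card does not spell out how the BLEU score was computed. Assuming a held-out file of source sentences and a matching file of reference translations (the file names below are placeholders, not part of the release), a corpus-level BLEU along these lines could be reproduced with `sacrebleu`:

```python
import sacrebleu
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_it"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_en_it"),
    device=0,
)

# Hypothetical test files: one source sentence / reference translation per line.
with open("test.en") as f:
    sources = [line.strip() for line in f]
with open("test.it") as f:
    references = [line.strip() for line in f]

# Translate the sources and collect the hypotheses.
hypotheses = [out["translation_text"] for out in pipeline(sources, max_length=512)]

# Corpus-level BLEU; sacrebleu expects a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(bleu.score)
```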
{"language": "English Italian", "tags": ["translation English Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Answer given by Mrs Benita Ferrero-Waldner on behalf of the Commission"}]}
SEBIS/legal_t5_small_trans_en_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_en_it_small_finetuned model

Model for translating legal text from English to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_en_it_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. It is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from English to Italian.

### How to use

Here is how to use this model to translate legal text from English to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_it_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_en_it", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

en_text = "Preventing and combating trafficking in human beings, and protecting victims"

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_trans_en_it_small_finetuned model (the supervised task involving only the corresponding language pair, plus the unsupervised task for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_it_small_finetuned | 46.887|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "English Italian", "tags": ["translation English Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Preventing and combating trafficking in human beings, and protecting victims"}]}
SEBIS/legal_t5_small_trans_en_it_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_trans_en_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_en_sv_small_finetuned model

Model for translating legal text from English to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_en_sv_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. It is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from English to Swedish.

### How to use

Here is how to use this model to translate legal text from English to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_sv_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_en_sv", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

en_text = "any operations cofinanced in the framework of"

pipeline([en_text], max_length=512)
```

## Training data

The legal_t5_small_trans_en_sv_small_finetuned model (the supervised task involving only the corresponding language pair, plus the unsupervised task for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_sv_small_finetuned | 48.126|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "English Swedish", "tags": ["translation English Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "any operations cofinanced in the framework of"}]}
SEBIS/legal_t5_small_trans_en_sv_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_trans_es_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_es_cs_small_finetuned model

Model for translating legal text from Spanish to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_es_cs_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. It is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Spanish to Czech.

### How to use

Here is how to use this model to translate legal text from Spanish to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_cs_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_es_cs", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

es_text = "Comisión (incluidas las réplicas)"

pipeline([es_text], max_length=512)
```

## Training data

The legal_t5_small_trans_es_cs_small_finetuned model (the supervised task involving only the corresponding language pair, plus the unsupervised task for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_cs_small_finetuned | 45.094|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Spanish Cszech", "tags": ["translation Spanish Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Comisi\u00f3n (incluidas las r\u00e9plicas)"}]}
SEBIS/legal_t5_small_trans_es_cs_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Spanish Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_trans_es_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_es_de_small_finetuned model

Model for translating legal text from Spanish to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_es_de_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. It is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Spanish to German.

### How to use

Here is how to use this model to translate legal text from Spanish to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_de_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_es_de", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

es_text = "Manfred Weber , en nombre del Grupo PPE , al Consejo:"

pipeline([es_text], max_length=512)
```

## Training data

The legal_t5_small_trans_es_de_small_finetuned model (the supervised task involving only the corresponding language pair, plus the unsupervised task for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_de_small_finetuned | 42.063|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Spanish Deustch", "tags": ["translation Spanish Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Manfred Weber , en nombre del Grupo PPE , al Consejo:"}]}
SEBIS/legal_t5_small_trans_es_de_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Spanish Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_trans_es_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_es_en_small_finetuned model

Model for translating legal text from Spanish to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_es_en_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. It is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Spanish to English.

### How to use

Here is how to use this model to translate legal text from Spanish to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_en_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_es_en", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

es_text = "de Jonas Sjöstedt (GUE/NGL)"

pipeline([es_text], max_length=512)
```

## Training data

The legal_t5_small_trans_es_en_small_finetuned model (the supervised task involving only the corresponding language pair, plus the unsupervised task for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_en_small_finetuned | 54.481|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Spanish English", "tags": ["translation Spanish English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "de Jonas Sj\u00f6stedt (GUE/NGL)"}]}
SEBIS/legal_t5_small_trans_es_en_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Spanish English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SEBIS/legal_t5_small_trans_es_fr
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_es_fr_small_finetuned model

Model for translating legal text from Spanish to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_es_fr_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. It is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Spanish to French.

### How to use

Here is how to use this model to translate legal text from Spanish to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_fr_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_es_fr", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

es_text = "Pide a las autoridades eritreas que levanten la prohibición de prensa independiente en el país y que liberen de inmediato a los periodistas independientes y a todos los demás encarcelados por el simple hecho de haber ejercido su derecho a la libertad de expresión;"

pipeline([es_text], max_length=512)
```

## Training data

The legal_t5_small_trans_es_fr_small_finetuned model (the supervised task involving only the corresponding language pair, plus the unsupervised task for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_fr_small_finetuned | 52.694|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Spanish French", "tags": ["translation Spanish French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Pide a las autoridades eritreas que levanten la prohibici\u00f3n de prensa independiente en el pa\u00eds y que liberen de inmediato a los periodistas independientes y a todos los dem\u00e1s encarcelados por el simple hecho de haber ejercido su derecho a la libertad de expresi\u00f3n;"}]}
SEBIS/legal_t5_small_trans_es_fr_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Spanish French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_trans_es_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_es_it_small_finetuned model

Model for translating legal text from Spanish to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_es_it_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. It is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Spanish to Italian.

### How to use

Here is how to use this model to translate legal text from Spanish to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_it_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_es_it", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

es_text = "El acceso a las pruebas de densitometría ósea es totalmente inadecuado."

pipeline([es_text], max_length=512)
```

## Training data

The legal_t5_small_trans_es_it_small_finetuned model (the supervised task involving only the corresponding language pair, plus the unsupervised task for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_it_small_finetuned | 46.422|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Spanish Italian", "tags": ["translation Spanish Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "El acceso a las pruebas de densitometr\u00eda \u00f3sea es totalmente inadecuado."}]}
SEBIS/legal_t5_small_trans_es_it_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Spanish Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
SEBIS/legal_t5_small_trans_es_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_es_sv_small_finetuned model

Model for translating legal text from Spanish to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_es_sv_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. It is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Spanish to Swedish.

### How to use

Here is how to use this model to translate legal text from Spanish to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_sv_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_es_sv", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

es_text = "Marie Anne Isler Béguin ,"

pipeline([es_text], max_length=512)
```

## Training data

The legal_t5_small_trans_es_sv_small_finetuned model (the supervised task involving only the corresponding language pair, plus the unsupervised task for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_sv_small_finetuned | 43.838|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Spanish Swedish", "tags": ["translation Spanish Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Marie Anne Isler B\u00e9guin ,"}]}
SEBIS/legal_t5_small_trans_es_sv_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Spanish Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_fr_cs model

Model for translating legal text from French to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_fr_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to Czech.

### How to use

Here is how to use this model to translate legal text from French to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_cs"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_cs", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

fr_text = "Hannes Swoboda , au nom du groupe PSE,"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_cs | 44.34|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French Cszech", "tags": ["translation French Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Hannes Swoboda , au nom du groupe PSE,"}]}
SEBIS/legal_t5_small_trans_fr_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_fr_cs_small_finetuned model

Model for translating legal text from French to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_fr_cs_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. It is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to Czech.

### How to use

Here is how to use this model to translate legal text from French to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_cs_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_cs", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

fr_text = "Compte rendu de la délégation à la Convention-cadre des Nations unies sur le changement climatique (COP17) à Durban (Afrique du Sud)"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_cs_small_finetuned model (the supervised task involving only the corresponding language pair, plus the unsupervised task for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_cs_small_finetuned | 44.410|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
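The cards describe the unsupervised pretraining objective only as predicting randomly masked portions of a sentence. Assuming the standard T5 span-corruption format with sentinel tokens (not confirmed by the card), a pretraining example would look roughly like this:

```python
# Illustrative T5-style span corruption; the exact masking scheme used here is not documented.
original = "Compte rendu de la délégation à la Convention-cadre des Nations unies"

# Input with masked spans replaced by sentinel tokens ...
masked_input = "Compte rendu <extra_id_0> à la Convention-cadre <extra_id_1> unies"
# ... and the target reconstructing exactly the masked spans, closed by a final sentinel.
target = "<extra_id_0> de la délégation <extra_id_1> des Nations <extra_id_2>"
```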
{"language": "French Cszech", "tags": ["translation French Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Compte rendu de la d\u00e9l\u00e9gation \u00e0 la Convention-cadre des Nations unies sur le changement climatique (COP17) \u00e0 Durban (Afrique du Sud)"}]}
SEBIS/legal_t5_small_trans_fr_cs_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_fr_de model

Model for translating legal text from French to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_fr_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to German.

### How to use

Here is how to use this model to translate legal text from French to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_de"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_de", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

fr_text = "Les États membres notifient ces dispositions à la Commission au plus tard à la date mentionnée à l'article 15 et toute modification ultérieure les concernant dans les meilleurs délais."

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_de | 41.33|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French Deustch", "tags": ["translation French Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Les \u00c9tats membres notifient ces dispositions \u00e0 la Commission au plus tard \u00e0 la date mentionn\u00e9e \u00e0 l'article 15 et toute modification ult\u00e9rieure les concernant dans les meilleurs d\u00e9lais."}]}
SEBIS/legal_t5_small_trans_fr_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_fr_de_small_finetuned model Model on translating legal text from French to Deustch. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained all the translation data over some unsupervised task. Then the model is trained on three parallel corpus from jrc-acquis, europarl and dcep. ## Model description legal_t5_small_trans_fr_de_small_finetuned is initially pretrained on unsupervised task with the all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. ## Intended uses & limitations The model could be used for translation of legal texts from French to Deustch. ### How to use Here is how to use this model to translate legal text from French to Deustch in PyTorch: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline pipeline = TranslationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_de_small_finetuned"), tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_de", do_lower_case=False, skip_special_tokens=True), device=0 ) fr_text = "7. demande instamment à la Commission de veiller à ce que l'objectif d'une part de 20% d'énergie renouvelable soit rendue contraignante pour les États membres par des dispositions législatives à cet effet et soit mis en œuvre d'une manière conséquente, et à ce que les États membres qui n'honorent pas leurs engagements soient frappés de lourdes sanctions; souligne la nécessité de plans d'action nationaux dans le cadre desquels chaque État membre se fixe un objectif contraignant pour chaque secteur en fonction de ses possibilités spécifiques météorologiques, géographiques et géologiques et de ses réalisations dans le passé; demande instamment à la Commission de procéder à une évaluation préalable puis intermédiaire de ces plans d'action;" pipeline([fr_text], max_length=512) ``` ## Training data The legal_t5_small_trans_fr_de_small_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 8 Million parallel texts. ## Training procedure The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. 
## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_de_small_finetuned | 41.085 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French Deustch", "tags": ["translation French Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "7. demande instamment \u00e0 la Commission de veiller \u00e0 ce que l'objectif d'une part de 20% d'\u00e9nergie renouvelable soit rendue contraignante pour les \u00c9tats membres par des dispositions l\u00e9gislatives \u00e0 cet effet et soit mis en \u0153uvre d'une mani\u00e8re cons\u00e9quente, et \u00e0 ce que les \u00c9tats membres qui n'honorent pas leurs engagements soient frapp\u00e9s de lourdes sanctions; souligne la n\u00e9cessit\u00e9 de plans d'action nationaux dans le cadre desquels chaque \u00c9tat membre se fixe un objectif contraignant pour chaque secteur en fonction de ses possibilit\u00e9s sp\u00e9cifiques m\u00e9t\u00e9orologiques, g\u00e9ographiques et g\u00e9ologiques et de ses r\u00e9alisations dans le pass\u00e9; demande instamment \u00e0 la Commission de proc\u00e9der \u00e0 une \u00e9valuation pr\u00e9alable puis interm\u00e9diaire de ces plans d'action;"}]}
SEBIS/legal_t5_small_trans_fr_de_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_fr_en model

Model for translating legal text from French to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_fr_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to English.

### How to use

Here is how to use this model to translate legal text from French to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_en"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_en",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

fr_text = "quels montants ont été attribués et quelles sommes ont été effectivement utilisées dans chaque État membre? 4."

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_en model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_en | 51.44 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French English", "tags": ["translation French English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "quels montants ont \u00e9t\u00e9 attribu\u00e9s et quelles sommes ont \u00e9t\u00e9 effectivement utilis\u00e9es dans chaque \u00c9tat membre? 4."}]}
SEBIS/legal_t5_small_trans_fr_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_fr_en_small_finetuned model

Model for translating legal text from French to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_fr_en_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to English.

### How to use

Here is how to use this model to translate legal text from French to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_en_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_en",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

fr_text = "RÉSULTAT DU VOTE FINAL EN COMMISSION"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_en_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence that were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_en_small_finetuned | 51.351 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French English", "tags": ["translation French English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "R\u00c9SULTAT DU VOTE FINAL EN COMMISSION"}]}
SEBIS/legal_t5_small_trans_fr_en_small_finetuned
null
[ "transformers", "pytorch", "t5", "text2text-generation", "translation French English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_fr_es model

Model for translating legal text from French to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_fr_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to Spanish.

### How to use

Here is how to use this model to translate legal text from French to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_es"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_es",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

fr_text = "commission des libertés civiles, de la justice et des affaires intérieures"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_es | 51.16 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French Spanish", "tags": ["translation French Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "commission des libert\u00e9s civiles, de la justice et des affaires int\u00e9rieures"}]}
SEBIS/legal_t5_small_trans_fr_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_fr_es_small_finetuned model

Model for translating legal text from French to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_fr_es_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to Spanish.

### How to use

Here is how to use this model to translate legal text from French to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_es_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_es",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

fr_text = "A‑t‑elle déjà engagé, ou compte-t-elle engager, la réalisation d'une étude visant, comme préconisé ci‑dessus, à recenser les principaux problèmes et les besoins spécifiques des régions ultrapériphériques en matière de transport maritime, compte tenu des caractéristiques et des besoins propres à ce secteur, dans la perspective de la réalisation des projets d'autoroutes de la mer dans lesdites régions? 2."

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_es_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence that were masked randomly.
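The unigram vocabulary described under "Preprocessing" above can be reproduced in outline with the SentencePiece library. The sketch below is a minimal illustration under stated assumptions: the input file name is a placeholder, and the vocabulary size is assumed rather than taken from the card.

```python
# Rough sketch of training a unigram SentencePiece vocabulary on the parallel corpus.
# "parallel_corpus_all_pairs.txt" is a hypothetical file (one sentence per line);
# vocab_size=32000 is an assumption, not the authors' documented setting.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",
    model_prefix="legal_t5_unigram",
    model_type="unigram",
    vocab_size=32000,
    character_coverage=1.0,
)

# Load the trained model and tokenize a sample sentence into subword pieces.
sp = spm.SentencePieceProcessor(model_file="legal_t5_unigram.model")
print(sp.encode("commission des libertés civiles, de la justice et des affaires intérieures", out_type=str))
```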
## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_es_small_finetuned | 51.202 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French Spanish", "tags": ["translation French Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "A\u2011t\u2011elle d\u00e9j\u00e0 engag\u00e9, ou compte-t-elle engager, la r\u00e9alisation d'une \u00e9tude visant, comme pr\u00e9conis\u00e9 ci\u2011dessus, \u00e0 recenser les principaux probl\u00e8mes et les besoins sp\u00e9cifiques des r\u00e9gions ultrap\u00e9riph\u00e9riques en mati\u00e8re de transport maritime, compte tenu des caract\u00e9ristiques et des besoins propres \u00e0 ce secteur, dans la perspective de la r\u00e9alisation des projets d'autoroutes de la mer dans lesdites r\u00e9gions? 2."}]}
SEBIS/legal_t5_small_trans_fr_es_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_fr_it model

Model for translating legal text from French to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_fr_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to Italian.

### How to use

Here is how to use this model to translate legal text from French to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_it"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_it",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

fr_text = "considérant la multiplication des constructions qui ne respectent pas la culture des lieux et leur paysage particulier, dégradations à l'appui,"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_it | 46.45 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French Italian", "tags": ["translation French Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "consid\u00e9rant la multiplication des constructions qui ne respectent pas la culture des lieux et leur paysage particulier, d\u00e9gradations \u00e0 l'appui,"}]}
SEBIS/legal_t5_small_trans_fr_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_fr_it_small_finetuned model

Model for translating legal text from French to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_fr_it_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to Italian.

### How to use

Here is how to use this model to translate legal text from French to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_it_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_it",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

fr_text = "Le vote a lieu dans un délai de deux mois après réception de la proposition, à moins qu'à la demande de la commission compétente, d'un groupe politique ou de quarante députés au moins, le Parlement n'en décide autrement."

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_it_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence that were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_it_small_finetuned | 46.309 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French Italian", "tags": ["translation French Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Le vote a lieu dans un d\u00e9lai de deux mois apr\u00e8s r\u00e9ception de la proposition, \u00e0 moins qu'\u00e0 la demande de la commission comp\u00e9tente, d'un groupe politique ou de quarante d\u00e9put\u00e9s au moins, le Parlement n'en d\u00e9cide autrement."}]}
SEBIS/legal_t5_small_trans_fr_it_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_fr_sv model

Model for translating legal text from French to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_fr_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to Swedish.

### How to use

Here is how to use this model to translate legal text from French to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_sv"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_sv",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

fr_text = "posée conformément à l'article 43 du règlement"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_sv model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_sv | 41.9 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French Swedish", "tags": ["translation French Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "pos\u00e9e conform\u00e9ment \u00e0 l'article 43 du r\u00e8glement"}]}
SEBIS/legal_t5_small_trans_fr_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_fr_sv_small_finetuned model

Model for translating legal text from French to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_fr_sv_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to Swedish.

### How to use

Here is how to use this model to translate legal text from French to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_sv_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_sv",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

fr_text = "Budget 2009: Section III - Commission"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_sv_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence that were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_sv_small_finetuned | 41.768 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French Swedish", "tags": ["translation French Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Budget 2009: Section III - Commission"}]}
SEBIS/legal_t5_small_trans_fr_sv_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_it_cs model

Model for translating legal text from Italian to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_it_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Italian to Czech.

### How to use

Here is how to use this model to translate legal text from Italian to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_cs"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_cs",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

it_text = "sull'aumento dei prezzi dei prodotti alimentari"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_cs | 43.302 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian Cszech", "tags": ["translation Italian Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "sull'aumento dei prezzi dei prodotti alimentari"}]}
SEBIS/legal_t5_small_trans_it_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_it_cs_small_finetuned model

Model for translating legal text from Italian to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_it_cs_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Italian to Czech.

### How to use

Here is how to use this model to translate legal text from Italian to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_cs_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_cs",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

it_text = "Il consiglio di amministrazione è assistito da un comitato esecutivo."

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_cs_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence that were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_cs_small_finetuned | 43.236 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian Cszech", "tags": ["translation Italian Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Il consiglio di amministrazione \u00e8 assistito da un comitato esecutivo."}]}
SEBIS/legal_t5_small_trans_it_cs_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_it_de model

Model for translating legal text from Italian to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_it_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Italian to German.

### How to use

Here is how to use this model to translate legal text from Italian to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_de"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_de",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

it_text = "presentata con richiesta di iscrizione all'ordine del giorno della discussione su problemi di attualità, urgenti e di notevole rilevanza"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_de | 40.615 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian Deustch", "tags": ["translation Italian Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "presentata con richiesta di iscrizione all'ordine del giorno della discussione su problemi di attualit\u00e0, urgenti e di notevole rilevanza"}]}
SEBIS/legal_t5_small_trans_it_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_it_de_small_finetuned model

Model for translating legal text from Italian to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_it_de_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Italian to German.

### How to use

Here is how to use this model to translate legal text from Italian to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_de_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_de",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

it_text = "Interventi sulla votazione:"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_de_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence that were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_de_small_finetuned | 40.524 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian Deustch", "tags": ["translation Italian Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Interventi sulla votazione:"}]}
SEBIS/legal_t5_small_trans_it_de_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_it_en model

Model for translating legal text from Italian to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_it_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Italian to English.

### How to use

Here is how to use this model to translate legal text from Italian to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_en"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_en",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

it_text = "Oggetto: Libertà di culto in Turchia"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_en model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_en | 50.068 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian English", "tags": ["translation Italian English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Oggetto: Libert\u00e0 di culto in Turchia"}]}
SEBIS/legal_t5_small_trans_it_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_it_en_small_finetuned model

Model for translating legal text from Italian to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_it_en_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Italian to English.

### How to use

Here is how to use this model to translate legal text from Italian to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_en_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_en",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

it_text = "Supplenti presenti al momento della votazione finale"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_en_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence that were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_en_small_finetuned | 49.840 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian English", "tags": ["translation Italian English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Supplenti presenti al momento della votazione finale"}]}
SEBIS/legal_t5_small_trans_it_en_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_it_es model

Model for translating legal text from Italian to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_it_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Italian to Spanish.

### How to use

Here is how to use this model to translate legal text from Italian to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_es"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_es",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

it_text = "Risoluzione del Parlamento europeo sulle perquisizioni effettuate ad Ankara nella sede principale dell'Associazione per i diritti dell'uomo in Turchia"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_es | 48.998 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian Spanish", "tags": ["translation Italian Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Risoluzione del Parlamento europeo sulle perquisizioni effettuate ad Ankara nella sede principale dell'Associazione per i diritti dell'uomo in Turchia"}]}
SEBIS/legal_t5_small_trans_it_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_it_es_small_finetuned model

Model for translating legal text from Italian to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_it_es_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Italian to Spanish.

### How to use

Here is how to use this model to translate legal text from Italian to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_es_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_es",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

it_text = "considerando che il 28 marzo 2002 il Consiglio di sicurezza dell'ONU si è dichiarato favorevole all'attuazione integrale del Protocollo di Lusaka e si è detto disposto a cooperare con tutte le parti in conflitto per raggiungere tale obiettivo, nonché ad avviare consultazioni con il governo dell'Angola per ricercare i mezzi con cui modificare le sanzioni imposte all'UNITA attraverso la risoluzione 1127 (1997), e ciò al fine di agevolare i colloqui di pace,"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_es_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for a total of 250K steps, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence that were masked randomly.
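The BLEU figures reported below come with no scoring code in the card. A minimal sketch of how such a score could be reproduced with sacreBLEU is shown here; the test file names, their one-sentence-per-line format, and the BLEU settings are assumptions, since the exact test split and configuration used by the authors are not specified.

```python
# Minimal, illustrative BLEU scoring sketch. "test.it" and "test.es" are
# hypothetical files with one source / reference sentence per line.
import sacrebleu
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_es_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_it_es", do_lower_case=False),
    device=-1,  # CPU; use 0 for the first GPU
)

with open("test.it") as f_src, open("test.es") as f_ref:
    sources = [line.strip() for line in f_src]
    references = [line.strip() for line in f_ref]

# Translate all source sentences and score against the references.
hypotheses = [out["translation_text"] for out in pipeline(sources, max_length=512)]
print(sacrebleu.corpus_bleu(hypotheses, [references]).score)
```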
## Evaluation results When the model is used for translation test dataset, achieves the following results: Test results : | Model | BLEU score | |:-----:|:-----:| | legal_t5_small_trans_it_es_small_finetuned | 49.083| ### BibTeX entry and citation info > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
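A note on the snippet above: `AutoModelWithLMHead` has been deprecated in recent versions of the `transformers` library. A minimal sketch of the same pipeline with `AutoModelForSeq2SeqLM`, the class newer releases expect for T5-style encoder-decoder checkpoints, might look like this (the short input sentence is only illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TranslationPipeline

# Same checkpoints as above, loaded through the non-deprecated seq2seq class.
model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_it_es_small_finetuned")
tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_it_es", do_lower_case=False)

pipeline = TranslationPipeline(model=model, tokenizer=tokenizer, device=0)  # device=-1 for CPU

print(pipeline(["Dichiarazioni del Consiglio e della Commissione"], max_length=512))
```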
{"language": "Italian Spanish", "tags": ["translation Italian Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "considerando che il 28 marzo 2002 il Consiglio di sicurezza dell'ONU si \u00e8 dichiarato favorevole all'attuazione integrale del Protocollo di Lusaka e si \u00e8 detto disposto a cooperare con tutte le parti in conflitto per raggiungere tale obiettivo, nonch\u00e9 ad avviare consultazioni con il governo dell'Angola per ricercare i mezzi con cui modificare le sanzioni imposte all'UNITA attraverso la risoluzione 1127 (1997), e ci\u00f2 al fine di agevolare i colloqui di pace,"}]}
SEBIS/legal_t5_small_trans_it_es_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_it_fr model

Model for translating legal text from Italian to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_it_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Italian to French.

### How to use

Here is how to use this model to translate legal text from Italian to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_fr"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_fr", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

it_text = "Qualora gli emendamenti approvati dal Parlamento abbiano l'effetto di aumentare le spese iscritte nel progetto di bilancio oltre il tasso massimo previsto, la commissione competente per il merito sottopone al Parlamento una proposta intesa a fissare un nuovo tasso massimo in conformità del paragrafo 9, ultimo comma, degli articoli 78 del trattato CECA, 272 del trattato CE e 177 del trattato CEEA."

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_fr model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_fr | 50.559|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
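The BLEU figure above was computed on the authors' held-out test split, which is not shipped with this card. As a rough sketch of how a comparable corpus-level score could be computed on your own parallel sentences (the file names and the use of the `sacrebleu` package are assumptions, not part of the published evaluation setup):

```python
import sacrebleu

# Hypothetical parallel test files: one sentence per line, aligned by line number.
with open("test.it") as f_src, open("test.fr") as f_ref:
    sources = [line.strip() for line in f_src]
    references = [line.strip() for line in f_ref]

# `pipeline` is the TranslationPipeline built in the snippet above.
hypotheses = [out["translation_text"] for out in pipeline(sources, max_length=512)]

# Corpus-level BLEU with a single reference per sentence.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")
```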
{"language": "Italian French", "tags": ["translation Italian French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Qualora gli emendamenti approvati dal Parlamento abbiano l'effetto di aumentare le spese iscritte nel progetto di bilancio oltre il tasso massimo previsto, la commissione competente per il merito sottopone al Parlamento una proposta intesa a fissare un nuovo tasso massimo in conformit\u00e0 del paragrafo 9, ultimo comma, degli articoli 78 del trattato CECA, 272 del trattato CE e 177 del trattato CEEA."}]}
SEBIS/legal_t5_small_trans_it_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_it_fr_small_finetuned model

Model for translating legal text from Italian to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_it_fr_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Italian to French.

### How to use

Here is how to use this model to translate legal text from Italian to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_fr_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_fr", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

it_text = "Dichiarazioni del Consiglio e della Commissione"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_fr_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were randomly masked.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_fr_small_finetuned | 50.557|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
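The preprocessing description above is terse: a single sub-word vocabulary is learned over the pooled parallel text of all language pairs and then shared by every model. A small sketch of how such a unigram vocabulary could be built with the `sentencepiece` library (the file name and vocabulary size are assumptions; the authors' exact settings are not documented in this card):

```python
import sentencepiece as spm

# Hypothetical pooled corpus: one sentence per line, all language pairs mixed together.
spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",  # assumed file name
    model_prefix="legal_t5_unigram",
    vocab_size=32000,                       # assumed; t5-small uses a 32k vocabulary
    model_type="unigram",                   # the unigram segmentation model mentioned above
    character_coverage=1.0,
)

sp = spm.SentencePieceProcessor(model_file="legal_t5_unigram.model")
print(sp.encode("Dichiarazioni del Consiglio e della Commissione", out_type=str))
```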
{"language": "Italian French", "tags": ["translation Italian French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Dichiarazioni del Consiglio e della Commissione"}]}
SEBIS/legal_t5_small_trans_it_fr_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_it_sv model

Model for translating legal text from Italian to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_it_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Italian to Swedish.

### How to use

Here is how to use this model to translate legal text from Italian to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_sv"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_sv", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

it_text = "K. considerando che, come avviene con tutti i sistemi di sanità elettronica, la progettazione, lo sviluppo e l’attuazione di sistemi abilitati alla tecnologia RFID presuppongono il coinvolgimento diretto dei professionisti sanitari, dei pazienti e delle commissioni competenti (per esempio, sulla protezione dei dati e sull’etica),"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_sv model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_sv | 41.508|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
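The training procedure above only names the optimizer. For illustration, this is roughly how AdaFactor with its built-in inverse square root schedule can be instantiated in `transformers`; the hyperparameters shown are the library defaults, not the authors' published values:

```python
from transformers import AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor, AdafactorSchedule

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# relative_step=True together with warmup_init=True gives the inverse-square-root style schedule.
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
    lr=None,
)
lr_scheduler = AdafactorSchedule(optimizer)

# Inside a training loop: loss.backward(); optimizer.step(); lr_scheduler.step(); optimizer.zero_grad()
```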
{"language": "Italian Swedish", "tags": ["translation Italian Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "K. considerando che, come avviene con tutti i sistemi di sanit\u00e0 elettronica, la progettazione, lo sviluppo e l\u2019attuazione di sistemi abilitati alla tecnologia RFID presuppongono il coinvolgimento diretto dei professionisti sanitari, dei pazienti e delle commissioni competenti (per esempio, sulla protezione dei dati e sull\u2019etica),"}]}
SEBIS/legal_t5_small_trans_it_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_it_sv_small_finetuned model

Model for translating legal text from Italian to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_it_sv_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Italian to Swedish.

### How to use

Here is how to use this model to translate legal text from Italian to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_sv_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_sv", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

it_text = "Cooperazione rafforzata Annuncio in Aula"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_sv_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were randomly masked.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_sv_small_finetuned | 41.243|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
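The "masked language modelling" pretraining is only named above, not illustrated. A toy example of what a masked input/target pair looks like for a T5-style model (the exact masking scheme the authors used is not documented in this card):

```python
# Toy illustration of the masked-span objective described above.
source = "Cooperazione rafforzata Annuncio in Aula"

# The encoder sees the sentence with random spans replaced by sentinel tokens...
masked_input = "Cooperazione <extra_id_0> Annuncio in <extra_id_1>"

# ...and the decoder is trained to generate the masked portions, in order.
target = "<extra_id_0> rafforzata <extra_id_1> Aula <extra_id_2>"
```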
{"language": "Italian Swedish", "tags": ["translation Italian Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Cooperazione rafforzata Annuncio in Aula"}]}
SEBIS/legal_t5_small_trans_it_sv_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_sv_cs model

Model for translating legal text from Swedish to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Czech.

### How to use

Here is how to use this model to translate legal text from Swedish to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_cs"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_cs", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "En kvalitetscertifiering av administrativa förfaranden i enlighet med ISO eller motsvarande normer skulle dessutom leda till likvärdiga villkor för sjöfartsadministrationer."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_cs | 45.569|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish Cszech", "tags": ["translation Swedish Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "En kvalitetscertifiering av administrativa f\u00f6rfaranden i enlighet med ISO eller motsvarande normer skulle dessutom leda till likv\u00e4rdiga villkor f\u00f6r sj\u00f6fartsadministrationer."}]}
SEBIS/legal_t5_small_trans_sv_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_sv_cs_small_finetuned model

Model for translating legal text from Swedish to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_cs_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Czech.

### How to use

Here is how to use this model to translate legal text from Swedish to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_cs_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_cs", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "Kommissionens personal och extern personal som bemyndigas av kommissionen måste få tillträde till bidragsmottagarens lokaler och tillgång till all information som behövs för att genomföra sådana revisioner, inbegripet information i elektronisk form."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_cs_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were randomly masked.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_cs_small_finetuned | 45.472|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish Cszech", "tags": ["translation Swedish Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Kommissionens personal och extern personal som bemyndigas av kommissionen m\u00e5ste f\u00e5 tilltr\u00e4de till bidragsmottagarens lokaler och tillg\u00e5ng till all information som beh\u00f6vs f\u00f6r att genomf\u00f6ra s\u00e5dana revisioner, inbegripet information i elektronisk form."}]}
SEBIS/legal_t5_small_trans_sv_cs_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_sv_de model

Model for translating legal text from Swedish to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to German.

### How to use

Here is how to use this model to translate legal text from Swedish to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_de"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_de", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "b) Bekämpning av skadegörare inom skogsbruket."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_de | 40.264|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish Deustch", "tags": ["translation Swedish Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "b) Bek\u00e4mpning av skadeg\u00f6rare inom skogsbruket."}]}
SEBIS/legal_t5_small_trans_sv_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_sv_de_small_finetuned model

Model for translating legal text from Swedish to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_de_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to German.

### How to use

Here is how to use this model to translate legal text from Swedish to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_de_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_de", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "G. Mäns och kvinnors förmåga att delta på lika villkor i det politiska livet och i beslutsfattandet är en grundläggande förutsättning för en verklig demokrati."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_de_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were randomly masked.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_de_small_finetuned | 40.240|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish Deustch", "tags": ["translation Swedish Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "G. M\u00e4ns och kvinnors f\u00f6rm\u00e5ga att delta p\u00e5 lika villkor i det politiska livet och i beslutsfattandet \u00e4r en grundl\u00e4ggande f\u00f6ruts\u00e4ttning f\u00f6r en verklig demokrati."}]}
SEBIS/legal_t5_small_trans_sv_de_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_sv_en model

Model for translating legal text from Swedish to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to English.

### How to use

Here is how to use this model to translate legal text from Swedish to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_en"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_en", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "Om rättsliga förfaranden inleds rörande omständigheter som ombudsmannen utreder skall han avsluta ärendet."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_en model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_en | 52.025|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
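Legal documents are routinely longer than the 512-token limit used above. One simple workaround, sketched here, is to translate sentence by sentence; the naive splitting rule is purely illustrative and not part of the published usage:

```python
import re

document = (
    "Om rättsliga förfaranden inleds rörande omständigheter som ombudsmannen utreder "
    "skall han avsluta ärendet. Kunden måste ha rätt att avsäga sig information i skriftlig form."
)

# Naive sentence splitting on terminal punctuation; real legal text may need a proper segmenter.
sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]

# `pipeline` is the TranslationPipeline built in the snippet above.
translations = [out["translation_text"] for out in pipeline(sentences, max_length=512)]
print(" ".join(translations))
```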
{"language": "Swedish English", "tags": ["translation Swedish English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Om r\u00e4ttsliga f\u00f6rfaranden inleds r\u00f6rande omst\u00e4ndigheter som ombudsmannen utreder skall han avsluta \u00e4rendet."}]}
SEBIS/legal_t5_small_trans_sv_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_sv_en_small_finetuned model

Model for translating legal text from Swedish to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_en_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to English.

### How to use

Here is how to use this model to translate legal text from Swedish to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_en_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_en", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "Alejo Vidal-Quadras : 262 röster"

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_en_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were randomly masked.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_en_small_finetuned | 52.084|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish English", "tags": ["translation Swedish English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Alejo Vidal-Quadras : 262 r\u00f6ster"}]}
SEBIS/legal_t5_small_trans_sv_en_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_sv_es model

Model for translating legal text from Swedish to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Spanish.

### How to use

Here is how to use this model to translate legal text from Swedish to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_es"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_es", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "Monika Flašíková Beňová (S&D)"

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_es | 47.407|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish Spanish", "tags": ["translation Swedish Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Monika Fla\u0161\u00edkov\u00e1 Be\u0148ov\u00e1 (S&D)"}]}
SEBIS/legal_t5_small_trans_sv_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_sv_es_small_finetuned model

Model for translating legal text from Swedish to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_es_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Spanish.

### How to use

Here is how to use this model to translate legal text from Swedish to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_es_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_es", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "– med beaktande av kommissionen vitbok om idrott ( KOM(2007)0391 ),"

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_es_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were randomly masked.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_es_small_finetuned | 47.411|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish Spanish", "tags": ["translation Swedish Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "\u2013 med beaktande av kommissionen vitbok om idrott ( KOM(2007)0391 ),"}]}
SEBIS/legal_t5_small_trans_sv_es_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_sv_fr model

Model for translating legal text from Swedish to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to French.

### How to use

Here is how to use this model to translate legal text from Swedish to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_fr"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_fr", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "Kunden måste ha rätt att avsäga sig information i skriftlig form."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_fr model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_fr | 47.623|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish French", "tags": ["translation Swedish French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Kunden m\u00e5ste ha r\u00e4tt att avs\u00e4ga sig information i skriftlig form."}]}
SEBIS/legal_t5_small_trans_sv_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_sv_fr_small_finetuned model

Model for translating legal text from Swedish to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_fr_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to French.

### How to use

Here is how to use this model to translate legal text from Swedish to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_fr_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_fr", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "Samreglering bör följa samma principer som de formella bestämmelserna, vilket betyder att den bör vara objektiv, välgrundad, proportionell och icke-diskriminerande, och bör möjliggöra insyn."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_fr_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were randomly masked.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_fr_small_finetuned | 47.508|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish French", "tags": ["translation Swedish French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Samreglering b\u00f6r f\u00f6lja samma principer som de formella best\u00e4mmelserna, vilket betyder att den b\u00f6r vara objektiv, v\u00e4lgrundad, proportionell och icke-diskriminerande, och b\u00f6r m\u00f6jligg\u00f6ra insyn."}]}
SEBIS/legal_t5_small_trans_sv_fr_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_sv_it model

Model for translating legal text from Swedish to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Italian.

### How to use

Here is how to use this model to translate legal text from Swedish to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_it"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_it", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "Den 25 juni 2002 lade kommissionen fram ett förslag till förordning om ”kontroller av kontanta medel som förs in i eller ut ur gemenskapen” i syfte att komplettera direktiv 91/308/EEG om penningtvätt."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_it | 42.577|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish Italian", "tags": ["translation Swedish Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Den 25 juni 2002 lade kommissionen fram ett f\u00f6rslag till f\u00f6rordning om \u201dkontroller av kontanta medel som f\u00f6rs in i eller ut ur gemenskapen\u201d i syfte att komplettera direktiv 91/308/EEG om penningtv\u00e4tt."}]}
SEBIS/legal_t5_small_trans_sv_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# legal_t5_small_trans_sv_it_small_finetuned model

Model for translating legal text from Swedish to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_it_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Italian.

### How to use

Here is how to use this model to translate legal text from Swedish to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_it_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_it", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "– med beaktande av rådet beslut om Syrien av den 12 april, 9 och 23 maj, 20 och 25 juni samt den 2 september 2011 och av uttalandena från unionens höga representant av den 9, 23 och 29 april, 9 maj, 6, 9 och 11 juni, 9 och 31 juli, 1, 4, 18 och 30 augusti samt den 2 september 2011 om en utvidgning av de restriktiva åtgärderna mot den syriska regimen,"

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_it_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were randomly masked.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_it_small_finetuned | 42.575|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish Italian", "tags": ["translation Swedish Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "\u2013 med beaktande av r\u00e5det beslut om Syrien av den 12 april, 9 och 23 maj, 20 och 25 juni samt den 2 september 2011 och av uttalandena fr\u00e5n unionens h\u00f6ga representant av den 9, 23 och 29 april, 9 maj, 6, 9 och 11 juni, 9 och 31 juli, 1, 4, 18 och 30 augusti samt den 2 september 2011 om en utvidgning av de restriktiva \u00e5tg\u00e4rderna mot den syriska regimen,"}]}
SEBIS/legal_t5_small_trans_sv_it_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-mnli

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.6560
- Accuracy: 0.8219

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5161        | 1.0   | 24544  | 0.5025          | 0.8037   |
| 0.4176        | 2.0   | 49088  | 0.5274          | 0.8131   |
| 0.3154        | 3.0   | 73632  | 0.5348          | 0.8194   |
| 0.2294        | 4.0   | 98176  | 0.6560          | 0.8219   |
| 0.1827        | 5.0   | 122720 | 0.8190          | 0.8203   |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
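The generated card leaves usage unspecified, so the following is a minimal, hypothetical inference sketch for this checkpoint; the premise/hypothesis pair is invented, and the printed label depends on the mapping stored in the checkpoint's config (which may be the generic `LABEL_0`–`LABEL_2` if no MNLI-specific mapping was saved).

```python
# Hypothetical inference sketch; the example sentence pair is an assumption.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "SEISHIN/distilbert-base-uncased-finetuned-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the premise/hypothesis pair and pick the highest-scoring class.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```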
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mnli"}, "metrics": [{"type": "accuracy", "value": 0.82190524707081, "name": "Accuracy"}]}]}]}
SEISHIN/distilbert-base-uncased-finetuned-mnli
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-ner

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0605
- Precision: 0.9289
- Recall: 0.9387
- F1: 0.9338
- Accuracy: 0.9843

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2388        | 1.0   | 878  | 0.0671          | 0.9162    | 0.9211 | 0.9187 | 0.9813   |
| 0.0504        | 2.0   | 1756 | 0.0602          | 0.9225    | 0.9366 | 0.9295 | 0.9834   |
| 0.0299        | 3.0   | 2634 | 0.0605          | 0.9289    | 0.9387 | 0.9338 | 0.9843   |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
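As the card does not show inference code, here is a small, hypothetical sketch using the standard token-classification pipeline; the example sentence and the aggregation strategy are assumptions for illustration.

```python
# Hypothetical usage sketch; the example sentence and aggregation strategy
# are illustrative assumptions, not part of the generated card.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="SEISHIN/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entity spans
)

# Each result dict carries the entity group, score, and character offsets.
for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 4))
```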
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9289272666888077, "name": "Precision"}, {"type": "recall", "value": 0.9386956035350711, "name": "Recall"}, {"type": "f1", "value": 0.933785889160917, "name": "F1"}, {"type": "accuracy", "value": 0.9842565968195466, "name": "Accuracy"}]}]}]}
SEISHIN/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 1.1605

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2172        | 1.0   | 5533  | 1.1532          |
| 0.9446        | 2.0   | 11066 | 1.1184          |
| 0.7671        | 3.0   | 16599 | 1.1605          |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
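A minimal, hypothetical sketch of extractive question answering with this checkpoint; the context and question below are invented for illustration.

```python
# Hypothetical usage sketch; the question and context are illustrative only.
from transformers import pipeline

qa = pipeline("question-answering", model="SEISHIN/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The checkpoint was fine-tuned on the SQuAD question answering dataset.",
)
# The pipeline returns the extracted answer span plus a confidence score.
print(result["answer"], round(result["score"], 4))
```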
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
SEISHIN/distilbert-base-uncased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SEUN/test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SEUN/test2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
GPT2-first-model
{}
SIC98/GPT2-first-model
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
Github - https://github.com/SIC98/GPT2-python-code-generator
{}
SIC98/GPT2-python-code-generator
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# SikuBERT

## Model description

![SikuBERT](https://raw.githubusercontent.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing/main/appendix/sikubert.png)

Digital humanities research needs the support of large-scale corpora and high-performance natural language processing tools for ancient Chinese. Pre-trained language models have greatly improved the accuracy of text mining in English and modern Chinese texts, but at present there is an urgent need for a pre-trained model dedicated to the automatic processing of ancient texts. Using the verified, high-quality "Siku Quanshu" full-text corpus as the training set and the BERT deep language model architecture as the basis, we constructed the SikuBERT and SikuRoBERTa pre-trained language models for intelligent processing tasks on ancient Chinese.

## How to use

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("SIKU-BERT/sikubert")
model = AutoModel.from_pretrained("SIKU-BERT/sikubert")
```

## About Us

We are from Nanjing Agricultural University.

> Created by SIKU-BERT [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing)
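Building on the loading snippet above, the following is an illustrative fill-mask sketch; the classical Chinese example sentence and the use of the `fill-mask` pipeline are assumptions, not taken from the authors' documentation.

```python
# Illustrative fill-mask sketch; the example sentence is an assumption.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SIKU-BERT/sikubert", tokenizer="SIKU-BERT/sikubert")

# Predict the masked character in a well-known classical Chinese phrase.
for prediction in fill_mask("學而時習之,不亦[MASK]乎。"):
    print(prediction["token_str"], round(prediction["score"], 4))
```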
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "roberta", "pytorch"], "thumbnail": "https://raw.githubusercontent.com/SIKU-BERT/SikuBERT/main/appendix/sikubert.png", "inference": false}
SIKU-BERT/sikubert
null
[ "transformers", "pytorch", "bert", "fill-mask", "chinese", "classical chinese", "literary chinese", "ancient chinese", "roberta", "zh", "license:apache-2.0", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# SikuBERT

## Model description

![SikuBERT](https://raw.githubusercontent.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing/main/appendix/sikubert.png)

Digital humanities research needs the support of large-scale corpora and high-performance natural language processing tools for ancient Chinese. Pre-trained language models have greatly improved the accuracy of text mining in English and modern Chinese texts, but at present there is an urgent need for a pre-trained model dedicated to the automatic processing of ancient texts. Using the verified, high-quality "Siku Quanshu" full-text corpus as the training set and the BERT deep language model architecture as the basis, we constructed the SikuBERT and SikuRoBERTa pre-trained language models for intelligent processing tasks on ancient Chinese.

## How to use

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("SIKU-BERT/sikuroberta")
model = AutoModel.from_pretrained("SIKU-BERT/sikuroberta")
```

## About Us

We are from Nanjing Agricultural University.

> Created by SIKU-BERT [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing)
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "roberta", "pytorch"], "thumbnail": "https://raw.githubusercontent.com/SIKU-BERT/SikuBERT/main/appendix/sikubert.png", "inference": false}
SIKU-BERT/sikuroberta
null
[ "transformers", "pytorch", "bert", "fill-mask", "chinese", "classical chinese", "literary chinese", "ancient chinese", "roberta", "zh", "license:apache-2.0", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
SJSui/AstroBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
SJSui/NekuBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# RickBot
{"tags": ["conversational"]}
SJSui/RickBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
## LiveSafe chatbot response generation model based on DialogGPT
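A minimal sketch of generating one chatbot reply with this checkpoint, assuming the usual single-turn DialoGPT pattern; the user message and generation settings are invented for illustration.

```python
# Hypothetical usage sketch following the standard DialoGPT pattern;
# the user message and generation parameters are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SPGT/LiveSafe-DialoGPT")
model = AutoModelForCausalLM.from_pretrained("SPGT/LiveSafe-DialoGPT")

# Encode a single user turn, terminated by the end-of-sequence token.
user_message = "I feel unsafe walking home, what should I do?"
input_ids = tokenizer.encode(user_message + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and decode only the newly produced tokens.
output_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
)
reply = tokenizer.decode(output_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
print(reply)
```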
{"license": "mit", "tags": ["conversational"]}
SPGT/LiveSafe-DialoGPT
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# test

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "test", "results": []}]}
SS8/test
null
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# test2

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.2510
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Epoch |
|:----------:|:-----:|
| 0.2510     | 0     |

### Framework versions

- Transformers 4.16.0.dev0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "test2", "results": []}]}
SS8/test2
null
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
just a test
{}
SSY/mytest
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
ST/xlm-roberta-base
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SWyer/gpt2-wikitext2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SXB/SXB
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
# huBERT base model (cased)

## Model description

Cased BERT model for Hungarian, trained on the (filtered, deduplicated) Hungarian subset of the Common Crawl and a snapshot of the Hungarian Wikipedia.

## Intended uses & limitations

The model can be used as any other (cased) BERT model. It has been tested on the chunking and named entity recognition tasks and set a new state-of-the-art on the former.

## Training

Details of the training data and procedure can be found in the PhD thesis linked below. (With the caveat that it only contains preliminary results based on the Wikipedia subcorpus. Evaluation of the full model will appear in a future paper.)

## Eval results

When fine-tuned (via `BertForTokenClassification`) on chunking and NER, the model outperforms multilingual BERT and achieves state-of-the-art results on both tasks. The exact scores are

| NER | Minimal NP | Maximal NP |
|-----|------------|------------|
| **97.62%** | **97.14%** | **96.97%** |

### BibTeX entry and citation info

If you use the model, please cite the following papers:

[Nemeskey, Dávid Márk (2020). "Natural Language Processing Methods for Language Modeling." PhD Thesis. Eötvös Loránd University.](https://hlt.bme.hu/en/publ/nemeskey_2020)

Bibtex:

```bibtex
@PhDThesis{ Nemeskey:2020,
  author = {Nemeskey, Dávid Márk},
  title = {Natural Language Processing Methods for Language Modeling},
  year = {2020},
  school = {E\"otv\"os Lor\'and University}
}
```

[Nemeskey, Dávid Márk (2021). "Introducing huBERT." In: XVII. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2021). Szeged, pp. 3-14](https://hlt.bme.hu/en/publ/hubert_2021)

Bibtex:

```bibtex
@InProceedings{ Nemeskey:2021a,
  author = {Nemeskey, Dávid Márk},
  title = {Introducing \texttt{huBERT}},
  booktitle = {{XVII}.\ Magyar Sz{\'a}m{\'i}t{\'o}g{\'e}pes Nyelv{\'e}szeti Konferencia ({MSZNY}2021)},
  year = 2021,
  pages = {TBA},
  address = {Szeged},
}
```
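Since the card points to fine-tuning via `BertForTokenClassification`, here is a minimal setup sketch; the label count and the Hungarian example sentence are assumptions, and the actual training loop is left out.

```python
# Minimal fine-tuning setup sketch; num_labels and the training data are
# assumptions — plug in your own chunking or NER label set and corpus.
from transformers import BertTokenizerFast, BertForTokenClassification

tokenizer = BertTokenizerFast.from_pretrained("SZTAKI-HLT/hubert-base-cc")
model = BertForTokenClassification.from_pretrained(
    "SZTAKI-HLT/hubert-base-cc",
    num_labels=9,  # e.g. a BIO tag set for NER; adjust to your task
)

# Tokenize a Hungarian sentence with word-level alignment for tag assignment.
encoding = tokenizer(
    "Budapest Magyarország fővárosa .".split(),
    is_split_into_words=True,
    return_tensors="pt",
)
outputs = model(**encoding)
print(outputs.logits.shape)  # (1, sequence_length, num_labels)
```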
{"language": "hu", "license": "apache-2.0", "datasets": ["common_crawl", "wikipedia"]}
SZTAKI-HLT/hubert-base-cc
null
[ "transformers", "pytorch", "tf", "jax", "bert", "hu", "dataset:common_crawl", "dataset:wikipedia", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SabadModi/ReccipeReccomender
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SaberXLancer/autoNLP
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Sabokou/squad-qg-gen
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
{}
Sachinkelenjaguri/sa_bioclinical_bert
null
[ "transformers", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
Sadaf/God
null
[ "transformers", "pytorch", "gpt2", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Saeed120/Seed
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Safeai/ksic
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Jett DialoGPT Model
{"tags": ["conversational"]}
SaffronIce/DialoGPT-medium-Jett
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00