Dataset schema:

| Column        | Type                  | Min                 | Max                 |
| ------------- | --------------------- | ------------------- | ------------------- |
| modelId       | string (lengths)      | 5                   | 139                 |
| author        | string (lengths)      | 2                   | 42                  |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-01 12:29:10 |
| downloads     | int64                 | 0                   | 223M                |
| likes         | int64                 | 0                   | 11.7k               |
| library_name  | string (547 classes)  | n/a                 | n/a                 |
| tags          | list (lengths)        | 1                   | 4.05k               |
| pipeline_tag  | string (55 classes)   | n/a                 | n/a                 |
| createdAt     | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-01 12:28:04 |
| card          | string (lengths)      | 11                  | 1.01M               |
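As a quick illustration of how this schema might be used, here is a hypothetical pandas snippet that filters the dump to summarization models and ranks them by downloads. Only the column names come from the schema above; the parquet file name is an assumption.

```python
import pandas as pd

# Hypothetical export of this metadata dump; column names follow the schema above.
df = pd.read_parquet("models.parquet")  # file name is an assumption

top_summarizers = (
    df[df["pipeline_tag"] == "summarization"]
    .sort_values("downloads", ascending=False)
    .loc[:, ["modelId", "author", "downloads", "likes"]]
)
print(top_summarizers.head(10))
```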
modelId: SEBIS/code_trans_t5_large_program_synthese_multitask
author: SEBIS
last_modified: 2021-06-23T08:39:57Z
downloads: 12
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: summarization
createdAt: 2022-03-02T23:29:04Z
---
tags:
- summarization
widget:
- text: "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
---

# CodeTrans model for program synthesis

Pretrained model on a Lisp-inspired DSL programming language, using the T5-large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans).

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model can be used to generate Lisp-inspired DSL code from a natural-language task description.

### How to use

Here is how to use this model to generate Lisp-inspired DSL code with the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```

Run this example in this [Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/program%20synthesis/large_model.ipynb).

## Training data

The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 220,000 steps in total, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. The optimizer used for pre-training was AdaFactor with an inverse square root learning-rate schedule.

## Evaluation results

For the program synthesis task, the different models achieve the following results (BLEU scores):

Test results:

| Language / Model      | LISP      |
| --------------------- | :-------: |
| CodeTrans-ST-Small    | 89.43     |
| CodeTrans-ST-Base     | 89.65     |
| CodeTrans-TF-Small    | 90.30     |
| CodeTrans-TF-Base     | 90.24     |
| CodeTrans-TF-Large    | 90.21     |
| CodeTrans-MT-Small    | 82.88     |
| CodeTrans-MT-Base     | 86.99     |
| CodeTrans-MT-Large    | 90.27     |
| CodeTrans-MT-TF-Small | **90.31** |
| CodeTrans-MT-TF-Base  | 90.30     |
| CodeTrans-MT-TF-Large | 90.17     |
| State of the art      | 85.80     |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
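Note that `AutoModelWithLMHead`, used in the card above, is deprecated in recent versions of Transformers. A minimal sketch of the same pipeline using the current `AutoModelForSeq2SeqLM` loader (untested against this specific checkpoint):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, SummarizationPipeline

# Same checkpoint as in the card; AutoModelForSeq2SeqLM is the current loader
# for encoder-decoder models such as T5.
model_name = "SEBIS/code_trans_t5_large_program_synthese_multitask"
pipeline = SummarizationPipeline(
    model=AutoModelForSeq2SeqLM.from_pretrained(model_name),
    tokenizer=AutoTokenizer.from_pretrained(model_name),
    device=-1,  # -1 runs on CPU; use 0 for the first GPU, as in the card's example
)

print(pipeline(["you are given an array of numbers a and a number b , "
                "compute the difference of elements in a and b"]))
```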
modelId: SEBIS/code_trans_t5_large_commit_generation_transfer_learning_finetune
author: SEBIS
last_modified: 2021-06-23T08:34:21Z
downloads: 11
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: summarization
createdAt: 2022-03-02T23:29:04Z
---
tags:
- summarization
widget:
- text: "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
---

# CodeTrans model for git commit message generation

Pretrained model on git commits, using the T5-large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commits: it works best with tokenized git commits.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It was trained with transfer-learning pre-training on 7 unsupervised datasets in the software development domain, and was then fine-tuned on the git commit message generation task for Java commit changes.

## Intended uses & limitations

The model can be used to generate a git commit message for a set of commit changes, or it can be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the changes are tokenized, the performance should be better.

### How to use

Here is how to use this model to generate a git commit message with the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
pipeline([tokenized_code])
```

Run this example in this [Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/commit%20generation/large_model.ipynb).

## Training data

The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. The optimizer used for pre-training was AdaFactor with an inverse square root learning-rate schedule.

### Fine-tuning

The model was then fine-tuned on a single TPU Pod V2-8 for 4,000 steps in total, using a sequence length of 512 (batch size 256) and only the dataset containing commit changes.

## Evaluation results

For the git commit message generation task, the different models achieve the following results (BLEU scores):

Test results:

| Language / Model      | Java      |
| --------------------- | :-------: |
| CodeTrans-ST-Small    | 39.61     |
| CodeTrans-ST-Base     | 38.67     |
| CodeTrans-TF-Small    | 44.22     |
| CodeTrans-TF-Base     | 44.17     |
| CodeTrans-TF-Large    | **44.41** |
| CodeTrans-MT-Small    | 36.17     |
| CodeTrans-MT-Base     | 39.25     |
| CodeTrans-MT-Large    | 41.18     |
| CodeTrans-MT-TF-Small | 43.96     |
| CodeTrans-MT-TF-Base  | 44.19     |
| CodeTrans-MT-TF-Large | 44.34     |
| State of the art      | 32.81     |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
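The evaluation tables in these cards report BLEU, but the cards do not state which implementation was used to compute it. A minimal scoring sketch with the `sacrebleu` package (an assumption, not necessarily the authors' setup; the example strings are made up):

```python
import sacrebleu  # pip install sacrebleu

# Hypothetical model output and reference commit message.
hypotheses = ["add joscar jar to the gateway plugin"]
references = [["add joscar library to the gateway plugin"]]  # one reference stream

# corpus_bleu takes a list of hypotheses and a list of reference streams,
# each stream aligned with the hypotheses.
score = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {score.score:.2f}")
```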
modelId: SEBIS/code_trans_t5_large_commit_generation_multitask_finetune
author: SEBIS
last_modified: 2021-06-23T08:28:55Z
downloads: 13
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: summarization
createdAt: 2022-03-02T23:29:04Z
---
tags:
- summarization
widget:
- text: "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
---

# CodeTrans model for git commit message generation

Pretrained model on git commits, using the T5-large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commits: it works best with tokenized git commits.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets, and was then fine-tuned on the git commit message generation task for Java commit changes.

## Intended uses & limitations

The model can be used to generate a git commit message for a set of commit changes, or it can be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the changes are tokenized, the performance should be better.

### How to use

Here is how to use this model to generate a git commit message with the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
pipeline([tokenized_code])
```

Run this example in this [Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/commit%20generation/large_model.ipynb).

## Training data

The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. The optimizer used for pre-training was AdaFactor with an inverse square root learning-rate schedule.

### Fine-tuning

The model was then fine-tuned on a single TPU Pod V2-8 for 3,000 steps in total, using a sequence length of 512 (batch size 256) and only the dataset containing commit changes.

## Evaluation results

For the git commit message generation task, the different models achieve the following results (BLEU scores):

Test results:

| Language / Model      | Java      |
| --------------------- | :-------: |
| CodeTrans-ST-Small    | 39.61     |
| CodeTrans-ST-Base     | 38.67     |
| CodeTrans-TF-Small    | 44.22     |
| CodeTrans-TF-Base     | 44.17     |
| CodeTrans-TF-Large    | **44.41** |
| CodeTrans-MT-Small    | 36.17     |
| CodeTrans-MT-Base     | 39.25     |
| CodeTrans-MT-Large    | 41.18     |
| CodeTrans-MT-TF-Small | 43.96     |
| CodeTrans-MT-TF-Base  | 44.19     |
| CodeTrans-MT-TF-Large | 44.34     |
| State of the art      | 32.81     |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
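These cards repeatedly mention AdaFactor with an inverse square root learning-rate schedule but give no warmup or peak-rate details. A sketch of the standard T5-style schedule (the warmup length here is an assumption; the CodeTrans-specific constants are not published in the cards):

```python
def inverse_sqrt_lr(step: int, warmup_steps: int = 10_000) -> float:
    """T5-style inverse square root schedule: the rate is held at
    1/sqrt(warmup_steps) during warmup, then decays as 1/sqrt(step)."""
    return 1.0 / max(step, warmup_steps) ** 0.5

# Learning rate at a few points over a 500,000-step run.
for step in (1, 10_000, 100_000, 500_000):
    print(f"step {step:>7}: lr = {inverse_sqrt_lr(step):.6f}")
```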
modelId: huggingtweets/elonmusk-lateriser12-officialfpl
author: huggingtweets
last_modified: 2021-06-23T08:14:10Z
downloads: 6
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2022-03-02T23:29:05Z
---
language: en
thumbnail: https://www.huggingtweets.com/elonmusk-lateriser12-officialfpl/1624436046730/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1404334078388670466/DgO3WL4S_400x400.jpg')">
    </div>
    <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/935069556929941504/K-1hWYqV_400x400.jpg')">
    </div>
    <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1041289003381604352/feioKyKN_400x400.jpg')">
    </div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Fantasy Premier League & Lateriser12</div>
  <div style="text-align: center; font-size: 14px;">@elonmusk-lateriser12-officialfpl</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Elon Musk & Fantasy Premier League & Lateriser12.

| Data | Elon Musk | Fantasy Premier League | Lateriser12 |
| --- | --- | --- | --- |
| Tweets downloaded | 1600 | 3250 | 3247 |
| Retweets | 97 | 732 | 158 |
| Short tweets | 420 | 112 | 633 |
| Tweets kept | 1083 | 2406 | 2456 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kzv5wz9k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-lateriser12-officialfpl's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31sjj826) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31sjj826/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/elonmusk-lateriser12-officialfpl')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
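The `text-generation` pipeline used in the card above also forwards standard `generate()` keyword arguments, so sampling behaviour can be tuned. A short sketch with illustrative settings (the specific values are not from the card):

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/elonmusk-lateriser12-officialfpl')

# do_sample, top_p and temperature are standard generation kwargs.
outputs = generator(
    "My dream is",
    max_length=60,
    do_sample=True,
    top_p=0.95,       # nucleus sampling
    temperature=0.8,  # lower values are more conservative
    num_return_sequences=3,
)
for out in outputs:
    print(out["generated_text"])
```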
modelId: SEBIS/code_trans_t5_large_commit_generation_multitask
author: SEBIS
last_modified: 2021-06-23T08:09:26Z
downloads: 16
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: summarization
createdAt: 2022-03-02T23:29:04Z
---
tags:
- summarization
widget:
- text: "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
---

# CodeTrans model for git commit message generation

Pretrained model on git commits, using the T5-large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commits: it works best with tokenized git commits.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model can be used to generate a git commit message for a set of commit changes, or it can be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the changes are tokenized, the performance should be better.

### How to use

Here is how to use this model to generate a git commit message with the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
pipeline([tokenized_code])
```

Run this example in this [Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/commit%20generation/large_model.ipynb).

## Training data

The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 220,000 steps in total, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. The optimizer used for pre-training was AdaFactor with an inverse square root learning-rate schedule.

## Evaluation results

For the git commit message generation task, the different models achieve the following results (BLEU scores):

Test results:

| Language / Model      | Java      |
| --------------------- | :-------: |
| CodeTrans-ST-Small    | 39.61     |
| CodeTrans-ST-Base     | 38.67     |
| CodeTrans-TF-Small    | 44.22     |
| CodeTrans-TF-Base     | 44.17     |
| CodeTrans-TF-Large    | **44.41** |
| CodeTrans-MT-Small    | 36.17     |
| CodeTrans-MT-Base     | 39.25     |
| CodeTrans-MT-Large    | 41.18     |
| CodeTrans-MT-TF-Small | 43.96     |
| CodeTrans-MT-TF-Base  | 44.19     |
| CodeTrans-MT-TF-Large | 44.34     |
| State of the art      | 32.81     |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
modelId: SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask_finetune
author: SEBIS
last_modified: 2021-06-23T07:57:33Z
downloads: 10
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: summarization
createdAt: 2022-03-02T23:29:04Z
---
tags:
- summarization
widget:
- text: "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
---

# CodeTrans model for code documentation generation: Ruby

Pretrained model on the Ruby programming language, using the T5-large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized Ruby code functions: it works best with tokenized Ruby functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets, and was then fine-tuned on the code documentation generation task for Ruby functions/methods.

## Intended uses & limitations

The model can be used to generate descriptions for Ruby functions, or it can be fine-tuned on other Ruby code tasks. It can be used on unparsed and untokenized Ruby code. However, if the Ruby code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate Ruby function documentation with the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
pipeline([tokenized_code])
```

Run this example in this [Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/ruby/large_model.ipynb).

## Training data

The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for half a million steps in total, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. The optimizer used for pre-training was AdaFactor with an inverse square root learning-rate schedule.

### Fine-tuning

The model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using a sequence length of 512 (batch size 256) and only the dataset containing Ruby code.

## Evaluation results

For the code documentation tasks, the different models achieve the following results on the different programming languages (BLEU scores):

Test results:

| Language / Model      | Python    | Java      | Go        | Php       | Ruby      | JavaScript |
| --------------------- | :-------: | :-------: | :-------: | :-------: | :-------: | :--------: |
| CodeTrans-ST-Small    | 17.31     | 16.65     | 16.89     | 23.05     | 9.19      | 13.7       |
| CodeTrans-ST-Base     | 16.86     | 17.17     | 17.16     | 22.98     | 8.23      | 13.17      |
| CodeTrans-TF-Small    | 19.93     | 19.48     | 18.88     | 25.35     | 13.15     | 17.23      |
| CodeTrans-TF-Base     | 20.26     | 20.19     | 19.50     | 25.84     | 14.07     | 18.25      |
| CodeTrans-TF-Large    | 20.35     | 20.06     | **19.54** | 26.18     | 14.94     | **18.98**  |
| CodeTrans-MT-Small    | 19.64     | 19.00     | 19.15     | 24.68     | 14.91     | 15.26      |
| CodeTrans-MT-Base     | **20.39** | 21.22     | 19.43     | **26.23** | **15.26** | 16.11      |
| CodeTrans-MT-Large    | 20.18     | **21.87** | 19.38     | 26.08     | 15.00     | 16.23      |
| CodeTrans-MT-TF-Small | 19.77     | 20.04     | 19.36     | 25.55     | 13.70     | 17.24      |
| CodeTrans-MT-TF-Base  | 19.77     | 21.12     | 18.86     | 25.79     | 14.24     | 18.62      |
| CodeTrans-MT-TF-Large | 18.94     | 21.42     | 18.77     | 26.20     | 14.19     | 18.83      |
| State of the art      | 19.06     | 17.65     | 18.07     | 25.16     | 12.16     | 14.90      |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
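The CodeTrans cards stress that the models work best on tokenized input, and the widget examples are space-separated token streams. A rough pre-tokenization sketch based on a naive regex split (the authors' actual tokenization pipeline is not published in the cards):

```python
import re

def rough_tokenize(code: str) -> str:
    """Naively separate identifiers and punctuation with spaces,
    approximating the space-separated style of the widget examples."""
    return " ".join(re.findall(r"\w+|[^\w\s]", code))

print(rough_tokenize("def add(severity, progname, &block)"))
# -> def add ( severity , progname , & block )
```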
modelId: SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask
author: SEBIS
last_modified: 2021-06-23T07:52:40Z
downloads: 8
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: summarization
createdAt: 2022-03-02T23:29:04Z
---
tags:
- summarization
widget:
- text: "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
---

# CodeTrans model for code documentation generation: Ruby

Pretrained model on the Ruby programming language, using the T5-large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized Ruby code functions: it works best with tokenized Ruby functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model can be used to generate descriptions for Ruby functions, or it can be fine-tuned on other Ruby code tasks. It can be used on unparsed and untokenized Ruby code. However, if the Ruby code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate Ruby function documentation with the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
pipeline([tokenized_code])
```

Run this example in this [Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/ruby/base_model.ipynb).

## Training data

The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 80,000 steps in total, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. (We have trained 260,000 steps in total.)

## Evaluation results

For the code documentation tasks, the different models achieve the following results on the different programming languages (BLEU scores):

Test results:

| Language / Model      | Python    | Java      | Go        | Php       | Ruby      | JavaScript |
| --------------------- | :-------: | :-------: | :-------: | :-------: | :-------: | :--------: |
| CodeTrans-ST-Small    | 17.31     | 16.65     | 16.89     | 23.05     | 9.19      | 13.7       |
| CodeTrans-ST-Base     | 16.86     | 17.17     | 17.16     | 22.98     | 8.23      | 13.17      |
| CodeTrans-TF-Small    | 19.93     | 19.48     | 18.88     | 25.35     | 13.15     | 17.23      |
| CodeTrans-TF-Base     | 20.26     | 20.19     | 19.50     | 25.84     | 14.07     | 18.25      |
| CodeTrans-TF-Large    | 20.35     | 20.06     | **19.54** | 26.18     | 14.94     | **18.98**  |
| CodeTrans-MT-Small    | 19.64     | 19.00     | 19.15     | 24.68     | 14.91     | 15.26      |
| CodeTrans-MT-Base     | **20.39** | 21.22     | 19.43     | **26.23** | **15.26** | 16.11      |
| CodeTrans-MT-Large    | 20.18     | **21.87** | 19.38     | 26.08     | 15.00     | 16.23      |
| CodeTrans-MT-TF-Small | 19.77     | 20.04     | 19.36     | 25.55     | 13.70     | 17.24      |
| CodeTrans-MT-TF-Base  | 19.77     | 21.12     | 18.86     | 25.79     | 14.24     | 18.62      |
| CodeTrans-MT-TF-Large | 18.94     | 21.42     | 18.77     | 26.20     | 14.19     | 18.83      |
| State of the art      | 19.06     | 17.65     | 18.07     | 25.16     | 12.16     | 14.90      |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
modelId: SEBIS/code_trans_t5_large_code_documentation_generation_python_transfer_learning_finetune
author: SEBIS
last_modified: 2021-06-23T07:46:32Z
downloads: 14
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: summarization
createdAt: 2022-03-02T23:29:04Z
---
tags:
- summarization
widget:
- text: "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"
---

# CodeTrans model for code documentation generation: Python

Pretrained model on the Python programming language, using the T5-large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized Python code functions: it works best with tokenized Python functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It was trained with transfer-learning pre-training on 7 unsupervised datasets in the software development domain, and was then fine-tuned on the code documentation generation task for Python functions/methods.

## Intended uses & limitations

The model can be used to generate descriptions for Python functions, or it can be fine-tuned on other Python code tasks. It can be used on unparsed and untokenized Python code. However, if the Python code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate Python function documentation with the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_python_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_python_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"
pipeline([tokenized_code])
```

Run this example in this [Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/python/large_model.ipynb).

## Training data

The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. The optimizer used for pre-training was AdaFactor with an inverse square root learning-rate schedule.

### Fine-tuning

The model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using a sequence length of 512 (batch size 256) and only the dataset containing Python code.

## Evaluation results

For the code documentation tasks, the different models achieve the following results on the different programming languages (BLEU scores):

Test results:

| Language / Model      | Python    | Java      | Go        | Php       | Ruby      | JavaScript |
| --------------------- | :-------: | :-------: | :-------: | :-------: | :-------: | :--------: |
| CodeTrans-ST-Small    | 17.31     | 16.65     | 16.89     | 23.05     | 9.19      | 13.7       |
| CodeTrans-ST-Base     | 16.86     | 17.17     | 17.16     | 22.98     | 8.23      | 13.17      |
| CodeTrans-TF-Small    | 19.93     | 19.48     | 18.88     | 25.35     | 13.15     | 17.23      |
| CodeTrans-TF-Base     | 20.26     | 20.19     | 19.50     | 25.84     | 14.07     | 18.25      |
| CodeTrans-TF-Large    | 20.35     | 20.06     | **19.54** | 26.18     | 14.94     | **18.98**  |
| CodeTrans-MT-Small    | 19.64     | 19.00     | 19.15     | 24.68     | 14.91     | 15.26      |
| CodeTrans-MT-Base     | **20.39** | 21.22     | 19.43     | **26.23** | **15.26** | 16.11      |
| CodeTrans-MT-Large    | 20.18     | **21.87** | 19.38     | 26.08     | 15.00     | 16.23      |
| CodeTrans-MT-TF-Small | 19.77     | 20.04     | 19.36     | 25.55     | 13.70     | 17.24      |
| CodeTrans-MT-TF-Base  | 19.77     | 21.12     | 18.86     | 25.79     | 14.24     | 18.62      |
| CodeTrans-MT-TF-Large | 18.94     | 21.42     | 18.77     | 26.20     | 14.19     | 18.83      |
| State of the art      | 19.06     | 17.65     | 18.07     | 25.16     | 12.16     | 14.90      |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
modelId: SEBIS/code_trans_t5_large_code_documentation_generation_python_multitask_finetune
author: SEBIS
last_modified: 2021-06-23T07:39:56Z
downloads: 11
likes: 1
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: summarization
createdAt: 2022-03-02T23:29:04Z
---
tags:
- summarization
widget:
- text: "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"
---

# CodeTrans model for code documentation generation: Python

Pretrained model on the Python programming language, using the T5-large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized Python code functions: it works best with tokenized Python functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets, and was then fine-tuned on the code documentation generation task for Python functions/methods.

## Intended uses & limitations

The model can be used to generate descriptions for Python functions, or it can be fine-tuned on other Python code tasks. It can be used on unparsed and untokenized Python code. However, if the Python code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate Python function documentation with the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_python_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_python_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"
pipeline([tokenized_code])
```

Run this example in this [Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/python/large_model.ipynb).

## Training data

The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for half a million steps in total, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. The optimizer used for pre-training was AdaFactor with an inverse square root learning-rate schedule.

### Fine-tuning

The model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using a sequence length of 512 (batch size 256) and only the dataset containing Python code.

## Evaluation results

For the code documentation tasks, the different models achieve the following results on the different programming languages (BLEU scores):

Test results:

| Language / Model      | Python    | Java      | Go        | Php       | Ruby      | JavaScript |
| --------------------- | :-------: | :-------: | :-------: | :-------: | :-------: | :--------: |
| CodeTrans-ST-Small    | 17.31     | 16.65     | 16.89     | 23.05     | 9.19      | 13.7       |
| CodeTrans-ST-Base     | 16.86     | 17.17     | 17.16     | 22.98     | 8.23      | 13.17      |
| CodeTrans-TF-Small    | 19.93     | 19.48     | 18.88     | 25.35     | 13.15     | 17.23      |
| CodeTrans-TF-Base     | 20.26     | 20.19     | 19.50     | 25.84     | 14.07     | 18.25      |
| CodeTrans-TF-Large    | 20.35     | 20.06     | **19.54** | 26.18     | 14.94     | **18.98**  |
| CodeTrans-MT-Small    | 19.64     | 19.00     | 19.15     | 24.68     | 14.91     | 15.26      |
| CodeTrans-MT-Base     | **20.39** | 21.22     | 19.43     | **26.23** | **15.26** | 16.11      |
| CodeTrans-MT-Large    | 20.18     | **21.87** | 19.38     | 26.08     | 15.00     | 16.23      |
| CodeTrans-MT-TF-Small | 19.77     | 20.04     | 19.36     | 25.55     | 13.70     | 17.24      |
| CodeTrans-MT-TF-Base  | 19.77     | 21.12     | 18.86     | 25.79     | 14.24     | 18.62      |
| CodeTrans-MT-TF-Large | 18.94     | 21.42     | 18.77     | 26.20     | 14.19     | 18.83      |
| State of the art      | 19.06     | 17.65     | 18.07     | 25.16     | 12.16     | 14.90      |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
modelId: SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask_finetune
author: SEBIS
last_modified: 2021-06-23T07:21:51Z
downloads: 10
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: summarization
createdAt: 2022-03-02T23:29:04Z
---
tags:
- summarization
widget:
- text: "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
---

# CodeTrans model for code documentation generation: PHP

Pretrained model on the PHP programming language, using the T5-large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized PHP code functions: it works best with tokenized PHP functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets, and was then fine-tuned on the code documentation generation task for PHP functions/methods.

## Intended uses & limitations

The model can be used to generate descriptions for PHP functions, or it can be fine-tuned on other PHP code tasks. It can be used on unparsed and untokenized PHP code. However, if the PHP code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate PHP function documentation with the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```

Run this example in this [Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/php/large_model.ipynb).

## Training data

The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for half a million steps in total, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. The optimizer used for pre-training was AdaFactor with an inverse square root learning-rate schedule.

### Fine-tuning

The model was then fine-tuned on a single TPU Pod V2-8 for 8,000 steps in total, using a sequence length of 512 (batch size 256) and only the dataset containing PHP code.

## Evaluation results

For the code documentation tasks, the different models achieve the following results on the different programming languages (BLEU scores):

Test results:

| Language / Model      | Python    | Java      | Go        | Php       | Ruby      | JavaScript |
| --------------------- | :-------: | :-------: | :-------: | :-------: | :-------: | :--------: |
| CodeTrans-ST-Small    | 17.31     | 16.65     | 16.89     | 23.05     | 9.19      | 13.7       |
| CodeTrans-ST-Base     | 16.86     | 17.17     | 17.16     | 22.98     | 8.23      | 13.17      |
| CodeTrans-TF-Small    | 19.93     | 19.48     | 18.88     | 25.35     | 13.15     | 17.23      |
| CodeTrans-TF-Base     | 20.26     | 20.19     | 19.50     | 25.84     | 14.07     | 18.25      |
| CodeTrans-TF-Large    | 20.35     | 20.06     | **19.54** | 26.18     | 14.94     | **18.98**  |
| CodeTrans-MT-Small    | 19.64     | 19.00     | 19.15     | 24.68     | 14.91     | 15.26      |
| CodeTrans-MT-Base     | **20.39** | 21.22     | 19.43     | **26.23** | **15.26** | 16.11      |
| CodeTrans-MT-Large    | 20.18     | **21.87** | 19.38     | 26.08     | 15.00     | 16.23      |
| CodeTrans-MT-TF-Small | 19.77     | 20.04     | 19.36     | 25.55     | 13.70     | 17.24      |
| CodeTrans-MT-TF-Base  | 19.77     | 21.12     | 18.86     | 25.79     | 14.24     | 18.62      |
| CodeTrans-MT-TF-Large | 18.94     | 21.42     | 18.77     | 26.20     | 14.19     | 18.83      |
| State of the art      | 19.06     | 17.65     | 18.07     | 25.16     | 12.16     | 14.90      |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
modelId: SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask
author: SEBIS
last_modified: 2021-06-23T07:15:38Z
downloads: 15
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: summarization
createdAt: 2022-03-02T23:29:04Z
---
tags:
- summarization
widget:
- text: "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
---

# CodeTrans model for code documentation generation: PHP

Pretrained model on the PHP programming language, using the T5-large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized PHP code functions: it works best with tokenized PHP functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model can be used to generate descriptions for PHP functions, or it can be fine-tuned on other PHP code tasks. It can be used on unparsed and untokenized PHP code. However, if the PHP code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate PHP function documentation with the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```

Run this example in this [Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/php/large_model.ipynb).

## Training data

The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. The optimizer used for pre-training was AdaFactor with an inverse square root learning-rate schedule.

## Evaluation results

For the code documentation tasks, the different models achieve the following results on the different programming languages (BLEU scores):

Test results:

| Language / Model      | Python    | Java      | Go        | Php       | Ruby      | JavaScript |
| --------------------- | :-------: | :-------: | :-------: | :-------: | :-------: | :--------: |
| CodeTrans-ST-Small    | 17.31     | 16.65     | 16.89     | 23.05     | 9.19      | 13.7       |
| CodeTrans-ST-Base     | 16.86     | 17.17     | 17.16     | 22.98     | 8.23      | 13.17      |
| CodeTrans-TF-Small    | 19.93     | 19.48     | 18.88     | 25.35     | 13.15     | 17.23      |
| CodeTrans-TF-Base     | 20.26     | 20.19     | 19.50     | 25.84     | 14.07     | 18.25      |
| CodeTrans-TF-Large    | 20.35     | 20.06     | **19.54** | 26.18     | 14.94     | **18.98**  |
| CodeTrans-MT-Small    | 19.64     | 19.00     | 19.15     | 24.68     | 14.91     | 15.26      |
| CodeTrans-MT-Base     | **20.39** | 21.22     | 19.43     | **26.23** | **15.26** | 16.11      |
| CodeTrans-MT-Large    | 20.18     | **21.87** | 19.38     | 26.08     | 15.00     | 16.23      |
| CodeTrans-MT-TF-Small | 19.77     | 20.04     | 19.36     | 25.55     | 13.70     | 17.24      |
| CodeTrans-MT-TF-Base  | 19.77     | 21.12     | 18.86     | 25.79     | 14.24     | 18.62      |
| CodeTrans-MT-TF-Large | 18.94     | 21.42     | 18.77     | 26.20     | 14.19     | 18.83      |
| State of the art      | 19.06     | 17.65     | 18.07     | 25.16     | 12.16     | 14.90      |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
modelId: SEBIS/code_trans_t5_large_code_documentation_generation_javascript_transfer_learning_finetune
author: SEBIS
last_modified: 2021-06-23T07:07:09Z
downloads: 10
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: summarization
createdAt: 2022-03-02T23:29:04Z
---
tags:
- summarization
widget:
- text: "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
---

# CodeTrans model for code documentation generation: JavaScript

Pretrained model on the JavaScript programming language, using the T5-large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized JavaScript code functions: it works best with tokenized JavaScript functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It was trained with transfer-learning pre-training on 7 unsupervised datasets in the software development domain, and was then fine-tuned on the code documentation generation task for JavaScript functions/methods.

## Intended uses & limitations

The model can be used to generate descriptions for JavaScript functions, or it can be fine-tuned on other JavaScript code tasks. It can be used on unparsed and untokenized JavaScript code. However, if the JavaScript code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate JavaScript function documentation with the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_javascript_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_javascript_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
pipeline([tokenized_code])
```

Run this example in this [Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/javascript/large_model.ipynb).

## Training data

The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. The optimizer used for pre-training was AdaFactor with an inverse square root learning-rate schedule.

### Fine-tuning

The model was then fine-tuned on a single TPU Pod V3-8 for 4,000 steps in total, using a sequence length of 512 (batch size 256) and only the dataset containing JavaScript code.

## Evaluation results

For the code documentation tasks, the different models achieve the following results on the different programming languages (BLEU scores):

Test results:

| Language / Model      | Python    | Java      | Go        | Php       | Ruby      | JavaScript |
| --------------------- | :-------: | :-------: | :-------: | :-------: | :-------: | :--------: |
| CodeTrans-ST-Small    | 17.31     | 16.65     | 16.89     | 23.05     | 9.19      | 13.7       |
| CodeTrans-ST-Base     | 16.86     | 17.17     | 17.16     | 22.98     | 8.23      | 13.17      |
| CodeTrans-TF-Small    | 19.93     | 19.48     | 18.88     | 25.35     | 13.15     | 17.23      |
| CodeTrans-TF-Base     | 20.26     | 20.19     | 19.50     | 25.84     | 14.07     | 18.25      |
| CodeTrans-TF-Large    | 20.35     | 20.06     | **19.54** | 26.18     | 14.94     | **18.98**  |
| CodeTrans-MT-Small    | 19.64     | 19.00     | 19.15     | 24.68     | 14.91     | 15.26      |
| CodeTrans-MT-Base     | **20.39** | 21.22     | 19.43     | **26.23** | **15.26** | 16.11      |
| CodeTrans-MT-Large    | 20.18     | **21.87** | 19.38     | 26.08     | 15.00     | 16.23      |
| CodeTrans-MT-TF-Small | 19.77     | 20.04     | 19.36     | 25.55     | 13.70     | 17.24      |
| CodeTrans-MT-TF-Base  | 19.77     | 21.12     | 18.86     | 25.79     | 14.24     | 18.62      |
| CodeTrans-MT-TF-Large | 18.94     | 21.42     | 18.77     | 26.20     | 14.19     | 18.83      |
| State of the art      | 19.06     | 17.65     | 18.07     | 25.16     | 12.16     | 14.90      |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
modelId: SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask
author: SEBIS
last_modified: 2021-06-23T06:56:03Z
downloads: 13
likes: 2
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: summarization
createdAt: 2022-03-02T23:29:04Z
---
tags:
- summarization
widget:
- text: "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
---

# CodeTrans model for code documentation generation: JavaScript

Pretrained model on the JavaScript programming language, using the T5-large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized JavaScript code functions: it works best with tokenized JavaScript functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model can be used to generate descriptions for JavaScript functions, or it can be fine-tuned on other JavaScript code tasks. It can be used on unparsed and untokenized JavaScript code. However, if the JavaScript code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate JavaScript function documentation with the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
pipeline([tokenized_code])
```

Run this example in this [Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/javascript/large_model.ipynb).

## Training data

The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. The optimizer used for pre-training was AdaFactor with an inverse square root learning-rate schedule.

## Evaluation results

For the code documentation tasks, the different models achieve the following results on the different programming languages (BLEU scores):

Test results:

| Language / Model      | Python    | Java      | Go        | Php       | Ruby      | JavaScript |
| --------------------- | :-------: | :-------: | :-------: | :-------: | :-------: | :--------: |
| CodeTrans-ST-Small    | 17.31     | 16.65     | 16.89     | 23.05     | 9.19      | 13.7       |
| CodeTrans-ST-Base     | 16.86     | 17.17     | 17.16     | 22.98     | 8.23      | 13.17      |
| CodeTrans-TF-Small    | 19.93     | 19.48     | 18.88     | 25.35     | 13.15     | 17.23      |
| CodeTrans-TF-Base     | 20.26     | 20.19     | 19.50     | 25.84     | 14.07     | 18.25      |
| CodeTrans-TF-Large    | 20.35     | 20.06     | **19.54** | 26.18     | 14.94     | **18.98**  |
| CodeTrans-MT-Small    | 19.64     | 19.00     | 19.15     | 24.68     | 14.91     | 15.26      |
| CodeTrans-MT-Base     | **20.39** | 21.22     | 19.43     | **26.23** | **15.26** | 16.11      |
| CodeTrans-MT-Large    | 20.18     | **21.87** | 19.38     | 26.08     | 15.00     | 16.23      |
| CodeTrans-MT-TF-Small | 19.77     | 20.04     | 19.36     | 25.55     | 13.70     | 17.24      |
| CodeTrans-MT-TF-Base  | 19.77     | 21.12     | 18.86     | 25.79     | 14.24     | 18.62      |
| CodeTrans-MT-TF-Large | 18.94     | 21.42     | 18.77     | 26.20     | 14.19     | 18.83      |
| State of the art      | 19.06     | 17.65     | 18.07     | 25.16     | 12.16     | 14.90      |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
modelId: SEBIS/code_trans_t5_large_code_documentation_generation_java_transfer_learning_finetune
author: SEBIS
last_modified: 2021-06-23T06:50:46Z
downloads: 12
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: summarization
createdAt: 2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" --- # CodeTrans model for code documentation generation java Pretrained model on programming language java using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the java function/method. ## Intended uses & limitations The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_java_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_java_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/java/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code. 
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results: | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.70 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_large_code_documentation_generation_java_multitask_finetune
SEBIS
2021-06-23T06:45:18Z
14
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" --- # CodeTrans model for code documentation generation java Pretrained model on programming language java using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the java function/method. ## Intended uses & limitations The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_java_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_java_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/java/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code. 
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results: | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.70 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask_finetune
SEBIS
2021-06-23T06:27:30Z
9
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }" --- # CodeTrans model for code documentation generation go Pretrained model on programming language go using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the go function/method. ## Intended uses & limitations The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/go/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 4500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing go code. 
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results: | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.70 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask
SEBIS
2021-06-23T06:19:18Z
11
1
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }" --- # CodeTrans model for code documentation generation go Pretrained model on programming language go using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/go/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. 
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results: | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.70 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_large_api_generation_multitask_finetune
SEBIS
2021-06-23T05:45:56Z
13
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "parse the uses licence node of this package , if any , and returns the license definition if theres" --- # CodeTrans model for api recommendation generation Pretrained model for api recommendation generation using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the api recommendation generation task for the java apis. ## Intended uses & limitations The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_api_generation_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_api_generation_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/api%20generation/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 130,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing api recommendation generation data. ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 68.71 | | CodeTrans-ST-Base | 70.45 | | CodeTrans-TF-Small | 68.90 | | CodeTrans-TF-Base | 72.11 | | CodeTrans-TF-Large | 73.26 | | CodeTrans-MT-Small | 58.43 | | CodeTrans-MT-Base | 67.97 | | CodeTrans-MT-Large | 72.29 | | CodeTrans-MT-TF-Small | 69.29 | | CodeTrans-MT-TF-Base | 72.89 | | CodeTrans-MT-TF-Large | **73.39** | | State of the art | 54.42 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_large_api_generation_multitask
SEBIS
2021-06-23T05:40:22Z
10
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "parse the uses licence node of this package , if any , and returns the license definition if theres" --- # CodeTrans model for api recommendation generation Pretrained model for api recommendation generation using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_api_generation_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_api_generation_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/api%20generation/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 68.71 | | CodeTrans-ST-Base | 70.45 | | CodeTrans-TF-Small | 68.90 | | CodeTrans-TF-Base | 72.11 | | CodeTrans-TF-Large | 73.26 | | CodeTrans-MT-Small | 58.43 | | CodeTrans-MT-Base | 67.97 | | CodeTrans-MT-Large | 72.29 | | CodeTrans-MT-TF-Small | 69.29 | | CodeTrans-MT-TF-Base | 72.89 | | CodeTrans-MT-TF-Large | **73.39** | | State of the art | 54.42 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_source_code_summarization_sql_transfer_learning_finetune
SEBIS
2021-06-23T05:34:07Z
15
1
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "select time ( col0 ) from tab0" --- # CodeTrans model for source code summarization sql Pretrained model on programming language sql using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the sql code snippets. ## Intended uses & limitations The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "select time ( col0 ) from tab0" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/sql/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code. 
## Evaluation results For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results: | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask_finetune
SEBIS
2021-06-23T05:32:32Z
13
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "select time ( col0 ) from tab0" --- # CodeTrans model for source code summarization sql Pretrained model on programming language sql using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the sql code snippets. ## Intended uses & limitations The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "select time ( col0 ) from tab0" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/sql/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code. 
## Evaluation results For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results: | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_source_code_summarization_sql
SEBIS
2021-06-23T05:27:39Z
23
1
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "select time ( col0 ) from tab0" --- # CodeTrans model for source code summarization sql Pretrained model on programming language sql using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on source code summarization sql dataset. ## Intended uses & limitations The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql", skip_special_tokens=True), device=0 ) tokenized_code = "select time ( col0 ) from tab0" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/sql/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_source_code_summarization_python_transfer_learning_finetune
SEBIS
2021-06-23T05:25:27Z
12
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) ''' --- # CodeTrans model for source code summarization python Pretrained model on programming language python using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the python code snippets. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) ''' pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/python/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code.
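Since the model was trained with sequence length 512, very long functions will be truncated. A quick sketch for checking input length before summarization (reusing `tokenized_code` from the example above):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_transfer_learning_finetune")
n_tokens = len(tok(tokenized_code)["input_ids"])
print(n_tokens)  # inputs beyond 512 tokens exceed the training sequence length
```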
## Evaluation results For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results: | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_source_code_summarization_python_multitask_finetune
SEBIS
2021-06-23T05:23:42Z
9
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) ''' --- # CodeTrans model for source code summarization python Pretrained model on programming language python using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the python code snippets. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) ''' pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/python/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code.
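The inverse square root schedule mentioned above can be written down directly. A sketch with assumed warmup and peak values (the card does not state the originals):

```python
def inverse_sqrt_lr(step: int, warmup: int = 10_000, peak: float = 1e-2) -> float:
    # Linear warmup to `peak`, then decay proportional to 1/sqrt(step).
    step = max(step, 1)
    if step < warmup:
        return peak * step / warmup
    return peak * (warmup / step) ** 0.5
```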
## Evaluation results For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results: | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_source_code_summarization_csharp_transfer_learning_finetune
SEBIS
2021-06-23T05:17:39Z
11
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }" --- # CodeTrans model for source code summarization csharp Pretrained model on programming language csharp using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the csharp code snippets. ## Intended uses & limitations The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/csharp/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing csharp code.
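AdaFactor is also available in transformers itself. A minimal fine-tuning-style sketch (the hyperparameters are assumptions; the card only states that AdaFactor with an inverse square root schedule was used, on TPUs rather than with this code):

```python
from transformers import AutoModelWithLMHead
from transformers.optimization import Adafactor, AdafactorSchedule

model = AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp_transfer_learning_finetune")
# relative_step + warmup_init lets Adafactor manage its own schedule internally.
optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
lr_scheduler = AdafactorSchedule(optimizer)  # exposes the internal learning rate for logging
```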
## Evaluation results For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results: | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_program_synthese_multitask_finetune
SEBIS
2021-06-23T05:09:05Z
12
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "you are given an array of numbers a and a number b , compute the difference of elements in a and b" --- # CodeTrans model for program synthesis Pretrained model on programming language lisp inspired DSL using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the program synthesis task for the lisp inspired DSL code. ## Intended uses & limitations The model could be used to generate lisp inspired DSL code from human language task descriptions. ### How to use Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_program_synthese_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_program_synthese_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/program%20synthesis/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 30,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing lisp inspired DSL data. ## Evaluation results For the program synthesis task, different models achieve the following results (in BLEU score): Test results: | Language / Model | LISP | | -------------------- | :------------: | | CodeTrans-ST-Small | 89.43 | | CodeTrans-ST-Base | 89.65 | | CodeTrans-TF-Small | 90.30 | | CodeTrans-TF-Base | 90.24 | | CodeTrans-TF-Large | 90.21 | | CodeTrans-MT-Small | 82.88 | | CodeTrans-MT-Base | 86.99 | | CodeTrans-MT-Large | 90.27 | | CodeTrans-MT-TF-Small | **90.31** | | CodeTrans-MT-TF-Base | 90.30 | | CodeTrans-MT-TF-Large | 90.17 | | State of the art | 85.80 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_program_synthese
SEBIS
2021-06-23T05:03:36Z
30
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "you are given an array of numbers a and a number b , compute the difference of elements in a and b" --- # CodeTrans model for program synthesis Pretrained model on programming language lisp inspired DSL using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on the Program Synthesis dataset. ## Intended uses & limitations The model could be used to generate lisp inspired DSL code from human language task descriptions. ### How to use Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_program_synthese"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_program_synthese", skip_special_tokens=True), device=0 ) tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/program%20synthesis/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the program synthesis task, different models achieve the following results (in BLEU score): Test results: | Language / Model | LISP | | -------------------- | :------------: | | CodeTrans-ST-Small | 89.43 | | CodeTrans-ST-Base | 89.65 | | CodeTrans-TF-Small | 90.30 | | CodeTrans-TF-Base | 90.24 | | CodeTrans-TF-Large | 90.21 | | CodeTrans-MT-Small | 82.88 | | CodeTrans-MT-Base | 86.99 | | CodeTrans-MT-Large | 90.27 | | CodeTrans-MT-TF-Small | **90.31** | | CodeTrans-MT-TF-Base | 90.30 | | CodeTrans-MT-TF-Large | 90.17 | | State of the art | 85.80 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_commit_generation_multitask_finetune
SEBIS
2021-06-23T05:00:29Z
14
1
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ" --- # CodeTrans model for git commit message generation Pretrained model on git commits using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commits: it works best with tokenized git commits. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the git commit message generation task for the java commit changes. ## Intended uses & limitations The model could be used to generate the git commit message for git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better. ### How to use Here is how to use this model to generate git commit message using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_commit_generation_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_commit_generation_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/commit%20generation/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 16,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing commit changes. ## Evaluation results For the git commit message generation task, different models achieve the following results (in BLEU score): Test results: | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 39.61 | | CodeTrans-ST-Base | 38.67 | | CodeTrans-TF-Small | 44.22 | | CodeTrans-TF-Base | 44.17 | | CodeTrans-TF-Large | **44.41** | | CodeTrans-MT-Small | 36.17 | | CodeTrans-MT-Base | 39.25 | | CodeTrans-MT-Large | 41.18 | | CodeTrans-MT-TF-Small | 43.96 | | CodeTrans-MT-TF-Base | 44.19 | | CodeTrans-MT-TF-Large | 44.34 | | State of the art | 32.81 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_code_documentation_generation_ruby_transfer_learning_finetune
SEBIS
2021-06-23T04:55:29Z
9
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end" --- # CodeTrans model for code documentation generation ruby Pretrained model on programming language ruby using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the ruby function/method. ## Intended uses & limitations The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_ruby_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_ruby_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/ruby/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing ruby code.
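The parameter count quoted above can be verified directly from the checkpoint:

```python
from transformers import AutoModelWithLMHead

model = AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_ruby_transfer_learning_finetune")
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")  # roughly 220M for this t5-base variant
```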
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
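The cards report corpus-level BLEU but do not name the exact scoring script. As a rough, hedged illustration of how such a number can be reproduced, the sketch below uses `sacrebleu` with placeholder hypothesis/reference lists; treat it as an approximation of the paper's evaluation, not the original tooling:

```python
# Hedged sketch: corpus-level BLEU over generated documentation with sacrebleu.
# The exact BLEU implementation used for the tables above is not specified in
# the card, and the strings here are placeholders.
import sacrebleu

hypotheses = ["adds a log message for the given severity"]          # model outputs
references = [["add a message to the log for the given severity"]]  # one list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```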
SEBIS/code_trans_t5_base_code_documentation_generation_ruby_multitask
SEBIS
2021-06-23T04:52:10Z
11
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end" --- # CodeTrans model for code documentation generation ruby Pretrained model on programming language ruby using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_ruby_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_ruby_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/ruby/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 160,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. 
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_code_documentation_generation_python_multitask
SEBIS
2021-06-23T04:45:11Z
17
1
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )" --- # CodeTrans model for code documentation generation python Pretrained model on programming language python using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_python_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_python_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/python/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 420,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. 
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_code_documentation_generation_php_transfer_learning_finetune
SEBIS
2021-06-23T04:40:47Z
12
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" --- # CodeTrans model for code documentation generation php Pretrained model on programming language php using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the php function/method. ## Intended uses & limitations The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_php_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_php_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/php/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 65,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code. 
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_code_documentation_generation_php_multitask_finetune
SEBIS
2021-06-23T04:39:03Z
15
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" --- # CodeTrans model for code documentation generation php Pretrained model on programming language php using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the php function/method. ## Intended uses & limitations The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_php_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_php_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/php/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code. 
Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_code_documentation_generation_javascript
SEBIS
2021-06-23T04:28:07Z
14
1
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }" --- # CodeTrans model for code documentation generation javascript Pretrained model on programming language javascript using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus javascript dataset. ## Intended uses & limitations The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_javascript"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_javascript", skip_special_tokens=True), device=0 ) tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/javascript/base_model.ipynb). 
## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
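All of the usage snippets in these cards pass `device=0`, which requires a CUDA GPU. On a CPU-only machine, the standard `transformers` convention is `device=-1`; a minimal variant of the pipeline above, shown here as a sketch rather than part of the original card:

```python
# CPU-only variant of the pipeline above: device=-1 selects the CPU in
# transformers pipelines (device=0 would require a CUDA GPU).
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_javascript"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_javascript", skip_special_tokens=True),
    device=-1,
)
```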
SEBIS/code_trans_t5_base_code_documentation_generation_java_multitask_finetune
SEBIS
2021-06-23T04:24:36Z
23
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" --- # CodeTrans model for code documentation generation java Pretrained model on programming language java using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the java function/method. ## Intended uses & limitations The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_java_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_java_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/java/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code. 
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_code_documentation_generation_java
SEBIS
2021-06-23T04:20:17Z
28
1
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" --- # CodeTrans model for code documentation generation java Pretrained model on programming language java using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus java dataset. ## Intended uses & limitations The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_java"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_java", skip_special_tokens=True), device=0 ) tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/java/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_code_comment_generation_java_multitask_finetune
SEBIS
2021-06-23T04:08:25Z
12
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }" --- # CodeTrans model for code comment generation java Pretrained model on programming language java using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code comment generation task for the java function/method. ## Intended uses & limitations The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_comment_generation_java_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_comment_generation_java_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/code%20comment%20generation/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 60,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code. ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 37.98 | | CodeTrans-ST-Base | 38.07 | | CodeTrans-TF-Small | 38.56 | | CodeTrans-TF-Base | 39.06 | | CodeTrans-TF-Large | **39.50** | | CodeTrans-MT-Small | 20.15 | | CodeTrans-MT-Base | 27.44 | | CodeTrans-MT-Large | 34.69 | | CodeTrans-MT-TF-Small | 38.37 | | CodeTrans-MT-TF-Base | 38.90 | | CodeTrans-MT-TF-Large | 39.25 | | State of the art | 38.17 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_code_comment_generation_java
SEBIS
2021-06-23T04:05:04Z
18
1
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }" --- # CodeTrans model for code comment generation java Pretrained model on programming language java using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on Code Comment Generation dataset. ## Intended uses & limitations The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_comment_generation_java"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_comment_generation_java", skip_special_tokens=True), device=0 ) tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/code%20comment%20generation/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 37.98 | | CodeTrans-ST-Base | 38.07 | | CodeTrans-TF-Small | 38.56 | | CodeTrans-TF-Base | 39.06 | | CodeTrans-TF-Large | **39.50** | | CodeTrans-MT-Small | 20.15 | | CodeTrans-MT-Base | 27.44 | | CodeTrans-MT-Large | 34.69 | | CodeTrans-MT-TF-Small | 38.37 | | CodeTrans-MT-TF-Base | 38.90 | | CodeTrans-MT-TF-Large | 39.25 | | State of the art | 38.17 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_api_generation_multitask_finetune
SEBIS
2021-06-23T04:01:50Z
13
0
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "parse the uses licence node of this package , if any , and returns the license definition if theres" --- # CodeTrans model for api recommendation generation Pretrained model for api recommendation generation using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the api recommendation generation task for the java apis. ## Intended uses & limitations The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_api_generation_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_api_generation_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/api%20generation/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 320,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing api recommendation generation data. ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 68.71 | | CodeTrans-ST-Base | 70.45 | | CodeTrans-TF-Small | 68.90 | | CodeTrans-TF-Base | 72.11 | | CodeTrans-TF-Large | 73.26 | | CodeTrans-MT-Small | 58.43 | | CodeTrans-MT-Base | 67.97 | | CodeTrans-MT-Large | 72.29 | | CodeTrans-MT-TF-Small | 69.29 | | CodeTrans-MT-TF-Base | 72.89 | | CodeTrans-MT-TF-Large | **73.39** | | State of the art | 54.42 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_api_generation_multitask
SEBIS
2021-06-23T03:59:20Z
12
1
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "parse the uses licence node of this package , if any , and returns the license definition if theres" --- # CodeTrans model for api recommendation generation Pretrained model for api recommendation generation using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_api_generation_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_api_generation_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/api%20generation/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 480,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 68.71 | | CodeTrans-ST-Base | 70.45 | | CodeTrans-TF-Small | 68.90 | | CodeTrans-TF-Base | 72.11 | | CodeTrans-TF-Large | 73.26 | | CodeTrans-MT-Small | 58.43 | | CodeTrans-MT-Base | 67.97 | | CodeTrans-MT-Large | 72.29 | | CodeTrans-MT-TF-Small | 69.29 | | CodeTrans-MT-TF-Base | 72.89 | | CodeTrans-MT-TF-Large | **73.39** | | State of the art | 54.42 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SEBIS/code_trans_t5_base_api_generation
SEBIS
2021-06-23T03:57:33Z
31
2
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "parse the uses licence node of this package , if any , and returns the license definition if theres" --- # CodeTrans model for api recommendation generation Pretrained model for api recommendation generation using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on Api Recommendation Generation dataset. ## Intended uses & limitations The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_api_generation"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_api_generation", skip_special_tokens=True), device=0 ) tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/api%20generation/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 68.71 | | CodeTrans-ST-Base | 70.45 | | CodeTrans-TF-Small | 68.90 | | CodeTrans-TF-Base | 72.11 | | CodeTrans-TF-Large | 73.26 | | CodeTrans-MT-Small | 58.43 | | CodeTrans-MT-Base | 67.97 | | CodeTrans-MT-Large | 72.29 | | CodeTrans-MT-TF-Small | 69.29 | | CodeTrans-MT-TF-Base | 72.89 | | CodeTrans-MT-TF-Large | **73.39** | | State of the art | 54.42 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
Rostlab/prot_t5_base_mt_uniref50
Rostlab
2021-06-23T03:55:50Z
232
0
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "summarization", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- tags: - summarization widget: - text: "predict protein ms : Met Gly Leu Pro Val Ser Trp Ala Pro Pro Ala Leu" ---
NlpHUST/t5-vi-en-small
NlpHUST
2021-06-23T03:45:23Z
8
1
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
---
language:
- vi
tags:
- t5
- seq2seq
---

# Machine translation for Vietnamese

## Model Description

T5-vi-en-small is a transformer model for Vietnamese machine translation based on the T5 architecture.

## Training data

T5-vi-en-small was trained on 4M sentence pairs (English, Vietnamese).

### How to use

```py
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-vi-en-small")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-vi-en-small")
model.to(device)

src = "Indonesia phỏng đoán nguyên nhân tàu ngầm chở 53 người mất tích bí ẩn"
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
    tokenized_text,
    max_length=256,
    num_beams=5,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
# Output: Indonesia anticipates the cause of the submarine transporting 53 mysterious missing persons
```
NlpHUST/t5-en-vi-small
NlpHUST
2021-06-23T03:33:59Z
41
2
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "arxiv:1706.05565", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
# T5-EN-VI-SMALL: Pretraining Text-To-Text Transfer Transformer for English Vietnamese Translation

# Dataset

The *IWSLT'15 English-Vietnamese* data is used from [Stanford NLP group](https://nlp.stanford.edu/projects/nmt/).

For all experiments the corpus was split into training, development and test set:

| Data set    | Sentences | Download
| ----------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------
| Training    | 133,317   | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/train-en-vi.tgz) or located in `data/train-en-vi.tgz`
| Development | 1,553     | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/dev-2012-en-vi.tgz) or located in `data/dev-2012-en-vi.tgz`
| Test        | 1,268     | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/test-2013-en-vi.tgz) or located in `data/test-2013-en-vi.tgz`

## Results

The results on the test set:

| Model | BLEU (Beam Search)
| ----------------------------------------------------------------------------------------------------- | ------------------
| [Luong & Manning (2015)](https://nlp.stanford.edu/pubs/luong-manning-iwslt15.pdf) | 23.30
| Sequence-to-sequence model with attention | 26.10
| Neural Phrase-based Machine Translation [Huang et. al. (2017)](https://arxiv.org/abs/1706.05565) | 27.69
| Neural Phrase-based Machine Translation + LM [Huang et. al. (2017)](https://arxiv.org/abs/1706.05565) | 28.07
| t5-en-vi-small (pretraining, without training data) | **28.46** (cased) / **29.23** (uncased)
| t5-en-vi-small (fine-tuning with training data) | **32.38** (cased) / **33.19** (uncased)

#### Example Usage

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-en-vi-small")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-en-vi-small")
model.to(device)

src = "In school , we spent a lot of time studying the history of Kim Il-Sung , but we never learned much about the outside world , except that America , South Korea , Japan are the enemies ."
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
    tokenized_text,
    max_length=128,
    num_beams=5,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
```

#### Output

```
Ở trường, chúng tôi dành nhiều thời gian để nghiên cứu về lịch sử Kim Il-Sung, nhưng chúng tôi chưa bao giờ học được nhiều về thế giới bên ngoài, ngoại trừ Mỹ, Hàn Quốc, Nhật Bản là kẻ thù.
```

### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
NlpHUST/t5-en-vi-base
NlpHUST
2021-06-23T03:30:20Z
9
1
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "arxiv:1706.05565", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
# T5-EN-VI-BASE: Pretraining Text-To-Text Transfer Transformer for English Vietnamese Translation

# Dataset

The *IWSLT'15 English-Vietnamese* data is used from [Stanford NLP group](https://nlp.stanford.edu/projects/nmt/).

For all experiments the corpus was split into training, development and test set:

| Data set    | Sentences | Download
| ----------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------
| Training    | 133,317   | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/train-en-vi.tgz) or located in `data/train-en-vi.tgz`
| Development | 1,553     | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/dev-2012-en-vi.tgz) or located in `data/dev-2012-en-vi.tgz`
| Test        | 1,268     | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/test-2013-en-vi.tgz) or located in `data/test-2013-en-vi.tgz`

## Results

The results on the test set:

| Model | BLEU (Beam Search)
| ----------------------------------------------------------------------------------------------------- | ------------------
| [Luong & Manning (2015)](https://nlp.stanford.edu/pubs/luong-manning-iwslt15.pdf) | 23.30
| Sequence-to-sequence model with attention | 26.10
| Neural Phrase-based Machine Translation [Huang et. al. (2017)](https://arxiv.org/abs/1706.05565) | 27.69
| Neural Phrase-based Machine Translation + LM [Huang et. al. (2017)](https://arxiv.org/abs/1706.05565) | 28.07
| t5-en-vi-small (pretraining, without training data) | **28.46** (cased) / **29.23** (uncased)
| t5-en-vi-small (fine-tuning with training data) | **32.38** (cased) / **33.19** (uncased)
| t5-en-vi-base (pretraining, without training data) | **29.66** (cased) / **30.37** (uncased)

#### Example Usage

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

# Note: the original card loaded "NlpHUST/t5-en-vi-small" here, which appears
# to be a copy-paste slip; this card describes the base checkpoint.
model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-en-vi-base")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-en-vi-base")
model.to(device)

src = "In school , we spent a lot of time studying the history of Kim Il-Sung , but we never learned much about the outside world , except that America , South Korea , Japan are the enemies ."
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
    tokenized_text,
    max_length=128,
    num_beams=5,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
```

#### Output

```
Ở trường, chúng tôi dành nhiều thời gian để nghiên cứu về lịch sử Kim Il-Sung, nhưng chúng tôi chưa bao giờ học được nhiều về thế giới bên ngoài, ngoại trừ Mỹ, Hàn Quốc, Nhật Bản là kẻ thù.
```

### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
Michau/t5-base-en-generate-headline
Michau
2021-06-23T03:17:34Z
23,131
52
transformers
[ "transformers", "pytorch", "tf", "jax", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
## About the model

The model has been trained on a collection of 500k articles with headings. Its purpose is to create a one-line heading suitable for the given article.

Sample code with a WikiNews article:

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = T5ForConditionalGeneration.from_pretrained("Michau/t5-base-en-generate-headline")
tokenizer = T5Tokenizer.from_pretrained("Michau/t5-base-en-generate-headline")
model = model.to(device)

article = '''
Very early yesterday morning, the United States President Donald Trump reported he and his wife First Lady Melania Trump tested positive for COVID-19. Officials said the Trumps' 14-year-old son Barron tested negative as did First Family and Senior Advisors Jared Kushner and Ivanka Trump.
Trump took to social media, posting at 12:54 am local time (0454 UTC) on Twitter, "Tonight, [Melania] and I tested positive for COVID-19. We will begin our quarantine and recovery process immediately. We will get through this TOGETHER!" Yesterday afternoon Marine One landed on the White House's South Lawn flying Trump to Walter Reed National Military Medical Center (WRNMMC) in Bethesda, Maryland.
Reports said both were showing "mild symptoms". Senior administration officials were tested as people were informed of the positive test. Senior advisor Hope Hicks had tested positive on Thursday.
Presidential physician Sean Conley issued a statement saying Trump has been given zinc, vitamin D, Pepcid and a daily Aspirin. Conley also gave a single dose of the experimental polyclonal antibodies drug from Regeneron Pharmaceuticals.
According to official statements, Trump, now operating from the WRNMMC, is to continue performing his duties as president during a 14-day quarantine. In the event of Trump becoming incapacitated, Vice President Mike Pence could take over the duties of president via the 25th Amendment of the US Constitution. The Pence family all tested negative as of yesterday and there were no changes regarding Pence's campaign events.
'''

text = "headline: " + article

max_len = 256

encoding = tokenizer.encode_plus(text, return_tensors="pt")
input_ids = encoding["input_ids"].to(device)
attention_masks = encoding["attention_mask"].to(device)

beam_outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    max_length=64,
    num_beams=3,
    early_stopping=True,
)

result = tokenizer.decode(beam_outputs[0])
print(result)
```

Result:

```
Trump and First Lady Melania Test Positive for COVID-19
```
JDBN/t5-base-fr-qg-fquad
JDBN
2021-06-23T02:26:52Z
272
4
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "question-generation", "seq2seq", "fr", "dataset:fquad", "dataset:piaf", "arxiv:1910.10683", "arxiv:2002.06071", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
---
language: fr
widget:
- text: "generate question: Barack Hussein Obama, né le 4 aout 1961, est un homme politique américain et avocat. Il a été élu <hl> en 2009 <hl> pour devenir le 44ème président des Etats-Unis d'Amérique. </s>"
- text: "question: Quand Barack Obama a-t-il été élu président? context: Barack Hussein Obama, né le 4 aout 1961, est un homme politique américain et avocat. Il a été élu en 2009 pour devenir le 44ème président des Etats-Unis d'Amérique. </s>"
tags:
- pytorch
- t5
- question-generation
- seq2seq
datasets:
- fquad
- piaf
---

# T5 Question Generation and Question Answering

## Model description

This model is a T5 Transformers model (airklizz/t5-base-multi-fr-wiki-news) that was fine-tuned in French on 3 different tasks:

* question generation
* question answering
* answer extraction

It obtains quite good results on the FQuAD validation dataset.

## Intended uses & limitations

This model works for the 3 tasks mentioned earlier and was not tested on other tasks.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("JDBN/t5-base-fr-qg-fquad")
tokenizer = T5Tokenizer.from_pretrained("JDBN/t5-base-fr-qg-fquad")
```

## Training data

The initial model used was https://huggingface.co/airKlizz/t5-base-multi-fr-wiki-news. This model was fine-tuned on a dataset composed of FQuAD and PIAF for the 3 tasks mentioned previously.

The data were preprocessed like this:

* question generation: "generate question: Barack Hussein Obama, né le 4 aout 1961, est un homme politique américain et avocat. Il a été élu <hl> en 2009 <hl> pour devenir le 44ème président des Etats-Unis d'Amérique."
* question answering: "question: Quand Barack Hussein Obama a-t-il été élu président des Etats-Unis d'Amérique? context: Barack Hussein Obama, né le 4 aout 1961, est un homme politique américain et avocat. Il a été élu en 2009 pour devenir le 44ème président des Etats-Unis d'Amérique."
* answer extraction: "extract_answers: Barack Hussein Obama, né le 4 aout 1961, est un homme politique américain et avocat. <hl> Il a été élu en 2009 pour devenir le 44ème président des Etats-Unis d'Amérique <hl>."

The preprocessing we used was implemented in https://github.com/patil-suraj/question_generation

## Eval results

#### On FQuAD validation set

| BLEU_1 | BLEU_2 | BLEU_3 | BLEU_4 | METEOR | ROUGE_L | CIDEr |
|--------|--------|--------|--------|--------|---------|-------|
| 0.290  | 0.203  | 0.149  | 0.111  | 0.197  | 0.284   | 1.038 |

#### Question Answering metrics

For these metrics, the performance of this question answering model (https://huggingface.co/illuin/camembert-base-fquad) on the original FQuAD questions and on the T5-generated questions are compared.

| Questions      | Exact Match | F1 Score |
|----------------|-------------|----------|
| Original FQuAD | 54.015      | 77.466   |
| Generated      | 45.765      | 67.306   |

### BibTeX entry and citation info

```bibtex
@misc{githubPatil,
  author = {Patil Suraj},
  title = {question generation GitHub repository},
  year = {2020},
  howpublished={\url{https://github.com/patil-suraj/question_generation}}
}

@article{T5,
  title={Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
  author={Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
  year={2019},
  eprint={1910.10683},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

@misc{dhoffschmidt2020fquad,
  title={FQuAD: French Question Answering Dataset},
  author={Martin d'Hoffschmidt and Wacim Belblidia and Tom Brendlé and Quentin Heinrich and Maxime Vidal},
  year={2020},
  eprint={2002.06071},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
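As a complement to the loading snippet above, here is a minimal question-generation sketch following the training format described earlier. The decoding parameters (`max_length`, `num_beams`) are illustrative assumptions, not values from the authors, and the sketch assumes the checkpoint's tokenizer knows the `<hl>` tokens used during training:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("JDBN/t5-base-fr-qg-fquad")
tokenizer = T5Tokenizer.from_pretrained("JDBN/t5-base-fr-qg-fquad")

# The answer span is highlighted with <hl> tokens, as in the training format above
text = ("generate question: Barack Hussein Obama, né le 4 aout 1961, est un homme "
        "politique américain et avocat. Il a été élu <hl> en 2009 <hl> pour devenir "
        "le 44ème président des Etats-Unis d'Amérique.")

input_ids = tokenizer.encode(text, return_tensors="pt")
output_ids = model.generate(input_ids, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```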
BeIR/query-gen-msmarco-t5-large-v1
BeIR
2021-06-23T02:12:04Z
4,409
16
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
# Query Generation

This model is the t5-large version of the model from [docTTTTTquery](https://github.com/castorini/docTTTTTquery) (the original card said "t5-base", apparently a copy-paste slip given this repository's name).

The T5-large model was trained on the [MS MARCO Passage Dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking), which consists of about 500k real search queries from Bing together with the relevant passage.

The model can be used for query generation to learn semantic search models without requiring annotated training data: [Synthetic Query Generation](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/query_generation).

## Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('BeIR/query-gen-msmarco-t5-large-v1')
model = T5ForConditionalGeneration.from_pretrained('BeIR/query-gen-msmarco-t5-large-v1')

para = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."

input_ids = tokenizer.encode(para, return_tensors='pt')
outputs = model.generate(
    input_ids=input_ids,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=3)

print("Paragraph:")
print(para)

print("\nGenerated Queries:")
for i in range(len(outputs)):
    query = tokenizer.decode(outputs[i], skip_special_tokens=True)
    print(f'{i + 1}: {query}')
```
hyunwoongko/reddit-9B
hyunwoongko
2021-06-22T16:09:14Z
7
7
transformers
[ "transformers", "pytorch", "blenderbot", "text2text-generation", "convAI", "conversational", "facebook", "en", "dataset:blended_skill_talk", "arxiv:1907.06616", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
---

## Model description

+ Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616)
+ [Original PARLAI Code](https://parl.ai/projects/recipes/)

### Abstract

Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
hyunwoongko/reddit-3B
hyunwoongko
2021-06-22T15:53:45Z
8
10
transformers
[ "transformers", "pytorch", "blenderbot", "text2text-generation", "convAI", "conversational", "facebook", "en", "dataset:blended_skill_talk", "arxiv:1907.06616", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
---

## Model description

+ Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616)
+ [Original PARLAI Code](https://parl.ai/projects/recipes/)

### Abstract

Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
huggingtweets/solarmonke
huggingtweets
2021-06-22T13:03:31Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: en
thumbnail: https://www.huggingtweets.com/solarmonke/1624367006881/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1380728043761700865/ORlB55uo_400x400.jpg&#39;)"></div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"></div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"></div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🌞 𝕊𝕠𝕝𝕒𝕣 𝕄𝕠𝕟𝕜𝕖 🐵</div>
<div style="text-align: center; font-size: 14px;">@solarmonke</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from 🌞 𝕊𝕠𝕝𝕒𝕣 𝕄𝕠𝕟𝕜𝕖 🐵.

| Data | 🌞 𝕊𝕠𝕝𝕒𝕣 𝕄𝕠𝕟𝕜𝕖 🐵 |
| --- | --- |
| Tweets downloaded | 1280 |
| Retweets | 255 |
| Short tweets | 211 |
| Tweets kept | 814 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/237my0cu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @solarmonke's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1est0um6) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1est0um6/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/solarmonke')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
JuliusAlphonso/dear-jarvis-monolith-xed-en
JuliusAlphonso
2021-06-22T09:48:03Z
8
1
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
## Model description

This model was trained on the XED dataset and achieved:

- validation loss: 0.5995
- validation acc: 84.28% (ROC-AUC)

Labels are based on Plutchik's model of emotions and may be combined:

![image](https://user-images.githubusercontent.com/12978899/122398897-f60d2500-cf97-11eb-8991-61e68f4ea1fc.png)

### Framework versions

- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.8.0
- Tokenizers 0.10.3
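The card ships no usage snippet; a minimal sketch with the standard `transformers` text-classification pipeline follows. The example sentence is ours, and the returned label names depend on the fine-tuned head's config rather than anything documented here:

```python
from transformers import pipeline

# Minimal usage sketch for the fine-tuned DistilBERT emotion classifier;
# the label set follows Plutchik's emotions per the card, but the exact
# label names come from the model's config.
classifier = pipeline("text-classification",
                      model="JuliusAlphonso/dear-jarvis-monolith-xed-en")
print(classifier("I can't believe you did that to me!"))
```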
Narrativa/mbart-large-50-finetuned-opus-pt-en-translation
Narrativa
2021-06-21T11:16:19Z
511
5
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "translation", "pt", "en", "dataset:opus100", "dataset:opusbook", "arxiv:2008.00401", "arxiv:2004.11867", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
---
language:
- pt
- en
datasets:
- opus100
- opusbook
tags:
- translation
metrics:
- bleu
---

# mBART-large-50 fine-tuned on opus100 and opusbook for Portuguese to English translation

[mBART-50](https://huggingface.co/facebook/mbart-large-50/) large fine-tuned on the [opus100](https://huggingface.co/datasets/viewer/?dataset=opus100) dataset for the **NMT** downstream task.

# Details of mBART-50 🧠

mBART-50 is a multilingual Sequence-to-Sequence model pre-trained using the "Multilingual Denoising Pretraining" objective. It was introduced in the [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) paper.

mBART-50 is a multilingual Sequence-to-Sequence model. It was created to show that multilingual translation models can be created through multilingual fine-tuning. Instead of fine-tuning in one direction, a pre-trained model is fine-tuned in many directions simultaneously. mBART-50 is created using the original mBART model, extended to add an extra 25 languages to support multilingual machine translation models for 50 languages. The pre-training objective is explained below.

**Multilingual Denoising Pretraining**: The model incorporates N languages by concatenating data: `D = {D1, ..., DN }` where each Di is a collection of monolingual documents in language `i`. The source documents are noised using two schemes: first, randomly shuffling the original sentences' order, and second, a novel in-filling scheme where spans of text are replaced with a single mask token. The model is then tasked to reconstruct the original text. 35% of each instance's words are masked by randomly sampling a span length according to a Poisson distribution `(λ = 3.5)`. The decoder input is the original text with one position offset. A language id symbol `LID` is used as the initial token to predict the sentence.

## Details of the downstream task (NMT) - Dataset 📚

- **Homepage:** [Link](http://opus.nlpl.eu/opus-100.php)
- **Repository:** [GitHub](https://github.com/EdinburghNLP/opus-100-corpus)
- **Paper:** [ARXIV](https://arxiv.org/abs/2004.11867)

### Dataset Summary

OPUS-100 is English-centric, meaning that all training pairs include English on either the source or target side. The corpus covers 100 languages (including English). Languages were selected based on the volume of parallel data available in OPUS.

### Languages

OPUS-100 contains approximately 55M sentence pairs. Of the 99 language pairs, 44 have 1M sentence pairs of training data, 73 have at least 100k, and 95 have at least 10k.

## Dataset Structure

### Data Fields

- `src_tag`: `string` text in source language
- `tgt_tag`: `string` translation of source language in target language

### Data Splits

The dataset is split into training, development, and test portions. Data was prepared by randomly sampling up to 1M sentence pairs per language pair for training and up to 2000 each for development and test. To ensure that there was no overlap (at the monolingual sentence level) between the training and development/test data, they applied a filter during sampling to exclude sentences that had already been sampled. Note that this was done cross-lingually so that, for instance, an English sentence in the Portuguese-English portion of the training data could not occur in the Hindi-English test set.

## Test set metrics 🧾

We got a **BLEU score of 26.12**.

## Model in Action 🚀

```sh
git clone https://github.com/huggingface/transformers.git
pip install -q ./transformers
```

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

ckpt = 'Narrativa/mbart-large-50-finetuned-opus-pt-en-translation'

tokenizer = MBart50TokenizerFast.from_pretrained(ckpt)
model = MBartForConditionalGeneration.from_pretrained(ckpt).to("cuda")

tokenizer.src_lang = 'pt_XX'

def translate(text):
    inputs = tokenizer(text, return_tensors='pt')
    input_ids = inputs.input_ids.to('cuda')
    attention_mask = inputs.attention_mask.to('cuda')
    output = model.generate(input_ids, attention_mask=attention_mask, forced_bos_token_id=tokenizer.lang_code_to_id['en_XX'])
    return tokenizer.decode(output[0], skip_special_tokens=True)

translate('here your Portuguese text to be translated to English...')
```

Created by: [Narrativa](https://www.narrativa.com/)

About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
dragonStyle/bert-303-step35000
dragonStyle
2021-06-21T03:01:59Z
4
0
transformers
[ "transformers", "pytorch", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This is a git lfs project.

Model performance before the data was reworked:

- knowledge points: max length is 1566, min length is 3, average length is 87.96, 95% quantile is 490.
- question and answer: max length is 303, min length is 8, average length is 47.09, 95% quantile is 119.

Accuracy at 303: 2562/5232 = 48.97%
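No loading code is given; since the tags expose only a plain BERT checkpoint, a minimal feature-extraction sketch follows. It assumes the repository also ships a tokenizer, which the card does not guarantee:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Plain BERT encoder; no task head is documented for this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("dragonStyle/bert-303-step35000")
model = AutoModel.from_pretrained("dragonStyle/bert-303-step35000")

inputs = tokenizer("一个简单的例子", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```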
nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large
nreimers
2021-06-20T19:03:16Z
31,114
17
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# Multilingual MiniLMv2

This is a MiniLMv2 model from: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm/tree/master/minilm)
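A minimal fill-mask sketch follows; the checkpoint is tagged `fill-mask` and uses XLM-R-style tokenization, so the mask token is `<mask>`. The example sentence is ours, and whether the distilled checkpoint retains a meaningful LM head is not documented, so treat this purely as a mechanics demo. The sibling MiniLMv2 checkpoints below can be used the same way:

```python
from transformers import pipeline

# XLM-RoBERTa-style checkpoints use <mask> as the mask token
unmasker = pipeline("fill-mask",
                    model="nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large")
print(unmasker("Paris is the <mask> of France."))
```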
nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large
nreimers
2021-06-20T19:02:55Z
340
5
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# MiniLMv2

This is a MiniLMv2 model from: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm/tree/master/minilm)
nreimers/MiniLMv2-L6-H768-distilled-from-RoBERTa-Large
nreimers
2021-06-20T19:02:48Z
26
3
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# MiniLMv2

This is a MiniLMv2 model from: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm/tree/master/minilm)
nreimers/MiniLMv2-L6-H768-distilled-from-BERT-Large
nreimers
2021-06-20T19:02:40Z
9
3
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# MiniLMv2

This is a MiniLMv2 model from: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm/tree/master/minilm)
nreimers/MiniLMv2-L6-H768-distilled-from-BERT-Base
nreimers
2021-06-20T19:02:31Z
7
2
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# MiniLMv2

This is a MiniLMv2 model from: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm/tree/master/minilm)
nreimers/MiniLMv2-L6-H384-distilled-from-BERT-Large
nreimers
2021-06-20T19:02:12Z
112,012
1
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# MiniLMv2

This is a MiniLMv2 model from: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm/tree/master/minilm)
Huffon/klue-roberta-base-nli
Huffon
2021-06-20T17:32:53Z
50
5
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "nli", "ko", "dataset:klue", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
---
language: ko
tags:
- roberta
- nli
datasets:
- klue
---
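The card carries only metadata; a minimal NLI usage sketch follows. The Korean premise/hypothesis pair is illustrative, and the reliance on `config.id2label` for label names is our assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Huffon/klue-roberta-base-nli")
model = AutoModelForSequenceClassification.from_pretrained("Huffon/klue-roberta-base-nli")

premise = "나는 오늘 아침에 커피를 마셨다."
hypothesis = "나는 오늘 아무것도 마시지 않았다."

# NLI models take the premise and hypothesis as a sentence pair
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```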
JuliusAlphonso/dear-jarvis-v5
JuliusAlphonso
2021-06-20T06:59:43Z
4
3
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
---
license: apache-2.0
datasets:
- null
model_index:
- name: dear-jarvis-v5
  results:
  - task:
      name: Text Classification
      type: text-classification
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# dear-jarvis-v5

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.

It achieves the following results on the evaluation set:
- Loss: 0.3148

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 470  | 0.3106          |
| 0.3452        | 2.0   | 940  | 0.3064          |
| 0.2692        | 3.0   | 1410 | 0.3148          |

### Framework versions

- Transformers 4.7.0
- Pytorch 1.9.0+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
JuliusAlphonso/distilbert-plutchik
JuliusAlphonso
2021-06-19T22:06:23Z
421
7
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
Labels are based on Plutchik's model of emotions and may be combined:

![image](https://user-images.githubusercontent.com/12978899/122398897-f60d2500-cf97-11eb-8991-61e68f4ea1fc.png)
HeyLucasLeao/gpt-neo-small-portuguese
HeyLucasLeao
2021-06-19T20:51:57Z
267
7
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
## GPT-Neo Small Portuguese

#### Model Description

This is a fine-tuned version of GPT-Neo 125M by EleutherAI for the Portuguese language.

#### Training data

It was trained on 227,382 texts selected from a PTWiki dump. You can find all the data here: https://archive.org/details/ptwiki-dump-20210520

#### Training Procedure

Every text was passed through a GPT2-Tokenizer with bos and eos tokens to separate them, using the maximum sequence length that GPT-Neo supports. It was fine-tuned using the default metrics of the Trainer class, available in the Hugging Face library.

##### Learning Rate: **2e-4**

##### Epochs: **1**

#### Goals

The intention is purely educational: making a Portuguese version of this model available.

#### How to use

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("HeyLucasLeao/gpt-neo-small-portuguese")
model = AutoModelForCausalLM.from_pretrained("HeyLucasLeao/gpt-neo-small-portuguese")

text = 'eu amo o brasil.'

generated = tokenizer(f'<|startoftext|> {text}', return_tensors='pt').input_ids.cuda()

# Generating texts
sample_outputs = model.generate(
    generated,
    # Use sampling instead of greedy decoding
    do_sample=True,
    # Keep only the top 3 tokens with the highest probability
    top_k=3,
    # Maximum sequence length
    max_length=200,
    # Keep only the most probable tokens with cumulative probability of 95%
    top_p=0.95,
    # Changes randomness of generated sequences
    temperature=1.9,
    # Number of sequences to generate
    num_return_sequences=3)

# Decoding and printing sequences
for i, sample_output in enumerate(sample_outputs):
    print(">> Generated text {}\n\n{}".format(i + 1, tokenizer.decode(sample_output.tolist())))

# >> Generated text
# Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
# >> Generated text 1
# <|startoftext|> eu amo o brasil. O termo foi usado por alguns autores como uma forma de designar a formação do poder político do Brasil. A partir da década de 1960, o termo passou a ser usado para designar a formação política do Brasil. A partir de meados da década de 1970 e até o inicio dos anos 2000, o termo foi aplicado à formação político-administrativo do país, sendo utilizado por alguns autores como uma expressão de "política de direita". História Antecedentes O termo "político-administrário" foi usado pela primeira vez em 1891 por um gru
# >> Generated text 2
# <|startoftext|> eu amo o brasil. É uma das muitas pessoas do mundo, ao contrário da maioria das pessoas, que são chamados de "pessoas do Brasil", que são chamados de "brincos do país" e que têm uma carreira de mais de um século. O termo "brincal de ouro" é usado em referências às pessoas que vivem no Brasil, e que são chamados "brincos do país", que são "cidade" e que vivem na cidade de Nova York e que vive em um país onde a maior parte das pessoas são chamados de "cidades". Hist
# >> Generated text 3
# <|startoftext|> eu amo o brasil. É uma expressão que se refere ao uso de um instrumento musical em particular para se referir à qualidade musical, o que é uma expressão da qualidade da qualidade musical de uma pessoa. A expressão "amor" (em inglês, amo), é a expressão que pode ser usada com o intuito empregado em qualquer situação em que a vontade de uma pessoa de se sentir amado ou amoroso é mais do que um desejo de uma vontade. Em geral, a expressão "amoro" (do inglês, amo) pode também se referir tanto a uma pessoa como um instrumento de cordas ou de uma
```
prithivida/informal_to_formal_styletransfer
prithivida
2021-06-19T08:30:19Z
444
8
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
## This model belongs to the Styleformer project

[Please refer to the GitHub page](https://github.com/PrithivirajDamodaran/Styleformer)
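For quick experimentation outside the Styleformer wrapper, a minimal sketch with plain `transformers` follows. The `transfer Casual to Formal:` task prefix mirrors what the Styleformer project code uses; treat it as an assumption and verify against the repo, as are the decoding parameters:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("prithivida/informal_to_formal_styletransfer")
model = AutoModelForSeq2SeqLM.from_pretrained("prithivida/informal_to_formal_styletransfer")

# Task prefix as used in the Styleformer project code (check the repo if it changes)
text = "transfer Casual to Formal: gotta go, ttyl"
input_ids = tokenizer.encode(text, return_tensors="pt")
output_ids = model.generate(input_ids, max_length=32, num_beams=5, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```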
Dev-DGT/food-dbert-multiling
Dev-DGT
2021-06-18T21:55:58Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
---
widget:
- text: "El paciente se alimenta de pan, sopa de calabaza y coca-cola"
---

# Token classification for FOODs.

Detects foods in sentences. Currently, only Spanish is supported. Multi-word foods are detected as one entity.

## To-do

- English support.
- Negation support.
- Quantity tags.
- Psychosocial tags.
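A minimal sketch using the token-classification pipeline and the widget sentence above; the `aggregation_strategy` choice is ours, picked because it groups sub-tokens so multi-word foods come back as single entities:

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="Dev-DGT/food-dbert-multiling",
               aggregation_strategy="simple")
print(ner("El paciente se alimenta de pan, sopa de calabaza y coca-cola"))
```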
ethanyt/guwen-sent
ethanyt
2021-06-18T04:51:54Z
8
4
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "sentiment classificatio", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language:
- "zh"
thumbnail: "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "pytorch"
- "sentiment classificatio"
license: "apache-2.0"
pipeline_tag: "text-classification"
widget:
- text: "滚滚长江东逝水,浪花淘尽英雄"
- text: "寻寻觅觅,冷冷清清,凄凄惨惨戚戚"
- text: "执手相看泪眼,竟无语凝噎,念去去,千里烟波,暮霭沉沉楚天阔。"
- text: "忽如一夜春风来,干树万树梨花开"
---

# Guwen Sent

A Classical Chinese Poem Sentiment Classifier.

See also:

<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
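A minimal classification sketch using one of the widget examples above; the label names returned come from the model's config and are not documented in the card:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ethanyt/guwen-sent")
print(classifier("滚滚长江东逝水,浪花淘尽英雄"))
```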
danyaljj/gpt2_question_answering_squad2
danyaljj
2021-06-17T17:49:44Z
199
2
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
Sample usage:

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("danyaljj/gpt2_question_answering_squad2")

input_ids = tokenizer.encode("There are two apples on the counter. Q: How many apples? A:", return_tensors="pt")
outputs = model.generate(input_ids)
print("Generated:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Which should produce this:

```
Generated: There are two apples on the counter. Q: How many apples? A: two
```
Kalindu/SinBerto
Kalindu
2021-06-17T16:37:19Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "SinBERTo", "Sinhala", "si", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
---
language: si
tags:
- SinBERTo
- Sinhala
- roberta
---

### Overview

SinBerto is a small language model trained on a small news corpus. SinBerto is trained on the Sinhala language, which is a low-resource language compared to other languages.

### Model Specifications

model: [RoBERTa](https://arxiv.org/abs/1907.11692)

vocab_size=52_000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1

### How to use from the Transformers Library

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Kalindu/SinBerto")
model = AutoModelForMaskedLM.from_pretrained("Kalindu/SinBerto")
```

### OR Clone the model repo

```bash
git lfs install
git clone https://huggingface.co/Kalindu/SinBerto
```
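A minimal fill-mask sketch follows; RoBERTa-style checkpoints use `<mask>` as the mask token, and the Sinhala example sentence is illustrative:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Kalindu/SinBerto")
print(unmasker("මම අද <mask> ගියා."))
```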
huggingtweets/ponkichi_book
huggingtweets
2021-06-17T14:47:52Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: en
thumbnail: https://www.huggingtweets.com/ponkichi_book/1623941267798/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1108044599795175424/Ce8JrQsX_400x400.png&#39;)"></div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"></div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"></div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ぽんきち📚Web本屋さん店主</div>
<div style="text-align: center; font-size: 14px;">@ponkichi_book</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from ぽんきち📚Web本屋さん店主.

| Data | ぽんきち📚Web本屋さん店主 |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 471 |
| Tweets kept | 2779 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xfkfwq2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ponkichi_book's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3nufawaq) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3nufawaq/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/ponkichi_book')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ScottaStrong/DialogGPT-small-Scott
ScottaStrong
2021-06-17T04:11:39Z
7
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---

# DialoGPT Trained on the Speech of a Game Character

This is an instance of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).

I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot)

Chat with the model:

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("scottastrong/DialogGPT-small-Scott")
model = AutoModelWithLMHead.from_pretrained("scottastrong/DialogGPT-small-Scott")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
hyunwoongko/blenderbot-9B
hyunwoongko
2021-06-17T01:26:34Z
44
22
transformers
[ "transformers", "pytorch", "blenderbot", "text2text-generation", "convAI", "conversational", "facebook", "en", "dataset:blended_skill_talk", "arxiv:1907.06616", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
---

## Model description

+ Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616)
+ [Original PARLAI Code](https://parl.ai/projects/recipes/)

### Abstract

Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
ScottaStrong/DialogGPT-medium-joshua
ScottaStrong
2021-06-17T00:25:33Z
8
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---

# DialoGPT Trained on the Speech of a Game Character

This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).

I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot)

Chat with the model:

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("scottastrong/DialogGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("scottastrong/DialogGPT-medium-joshua")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
ScottaStrong/DialogGPT-small-joshua
ScottaStrong
2021-06-16T21:40:45Z
7
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---

# DialoGPT Trained on the Speech of a Game Character

This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).

I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot)

Chat with the model:

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("scottastrong/DialogGPT-small-joshua")
model = AutoModelWithLMHead.from_pretrained("scottastrong/DialogGPT-small-joshua")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
danyaljj/opengpt2_pytorch_forward
danyaljj
2021-06-16T20:30:01Z
4
1
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
West et al.'s model from their "Reflective Decoding" paper.

Sample usage:

```python
import torch
from modeling_opengpt2 import OpenGPT2LMHeadModel
from padded_encoder import Encoder

path_to_forward = 'danyaljj/opengpt2_pytorch_forward'

encoder = Encoder()
# Renamed from the original card's `model_backward`: this repository holds the forward model
model_forward = OpenGPT2LMHeadModel.from_pretrained(path_to_forward)

input = "She tried to win but"
input_ids = encoder.encode(input)
input_ids = torch.tensor([input_ids], dtype=torch.int)
print(input_ids)

output = model_forward.generate(input_ids)
output_text = encoder.decode(output.tolist()[0])

print(output_text)
```

Download the additional files from here: https://github.com/peterwestuw/GPT2ForwardBackward
danyaljj/opengpt2_pytorch_backward
danyaljj
2021-06-16T20:29:52Z
31
1
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
West et al.'s model from their "Reflective Decoding" paper.

Sample usage:

```python
import torch
from modeling_opengpt2 import OpenGPT2LMHeadModel
from padded_encoder import Encoder

path_to_backward = 'danyaljj/opengpt2_pytorch_backward'

encoder = Encoder()
model_backward = OpenGPT2LMHeadModel.from_pretrained(path_to_backward)

input = "until she finally won."
input_ids = encoder.encode(input)
# The backward model consumes and produces reversed token sequences
input_ids = torch.tensor([input_ids[::-1]], dtype=torch.int)
print(input_ids)

output = model_backward.generate(input_ids)
output_text = encoder.decode(output.tolist()[0][::-1])

print(output_text)
```

Download the additional files from here: https://github.com/peterwestuw/GPT2ForwardBackward
madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1
madlag
2021-06-16T17:10:27Z
77
0
transformers
[ "transformers", "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad_v2", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language: en
thumbnail:
license: mit
tags:
- question-answering
datasets:
- squad_v2
metrics:
- squad_v2
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## bert-large-uncased-whole-word-masking model fine-tuned on SQuAD v2

This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the **linear layers contain 25.0%** of the original weights.

The model contains **32.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).

With a simple resizing of the linear matrices it ran **2.15x as fast as bert-large-uncased-whole-word-masking** on the evaluation. This is possible because the pruning method led to structured matrices: to visualize them, hover below on the plot to see the non-zero/zero parts of each matrix.

<div class="graph"><script src="/madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1/raw/main/model_card/density_info.js" id="d55f6096-07eb-4cc1-b284-90ec6ced516c"></script></div>

In terms of accuracy, its **F1 is 83.22**, compared with 85.85 for bert-large-uncased-whole-word-masking, an **F1 drop of 2.63**.

## Fine-Pruning details

This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-large-uncased-whole-word-masking) checkpoint on [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2](https://huggingface.co/madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2).

This model is case-insensitive: it does not make a difference between english and English.

A side-effect of the block pruning is that some of the attention heads are completely removed: 155 heads were removed out of a total of 384 (40.4%). Here is a detailed view of how the remaining heads are distributed in the network after pruning.

<div class="graph"><script src="/madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1/raw/main/model_card/pruning_info.js" id="a474f11e-7e05-495e-bb21-4af0edfb6661"></script></div>

## Details of the SQuAD2.0 dataset

| Dataset   | Split | # samples |
| --------- | ----- | --------- |
| SQuAD 2.0 | train | 130.0K    |
| SQuAD 2.0 | eval  | 11.9k     |

### Fine-tuning

- Python: `3.8.5`
- Machine specs:

```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```

### Results

**Pytorch model file size**: `1119MB` (original BERT: `1228.0MB`)

| Metric | # Value   | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf)) | Variation |
| ------ | --------- | --------- | --------- |
| **EM** | **80.19** | **82.83** | **-3.64** |
| **F1** | **83.22** | **85.85** | **-2.63** |

```
{
  "HasAns_exact": 76.48448043184885,
  "HasAns_f1": 82.55514100819374,
  "HasAns_total": 5928,
  "NoAns_exact": 83.8856181665265,
  "NoAns_f1": 83.8856181665265,
  "NoAns_total": 5945,
  "best_exact": 80.19034784805862,
  "best_exact_thresh": 0.0,
  "best_f1": 83.22133208932635,
  "best_f1_thresh": 0.0,
  "exact": 80.19034784805862,
  "f1": 83.22133208932645,
  "total": 11873
}
```

## Example Usage

Install nn_pruning: it contains the optimization script, which simply packs the linear layers into smaller ones by removing empty rows/columns.

`pip install nn_pruning`

Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` when the pipeline has loaded.

```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1",
    tokenizer="madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1"
)

print("bert-large-uncased-whole-word-masking parameters: 497.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")

qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")

print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})

print("Predictions", predictions)
```
madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1
madlag
2021-06-16T15:02:14Z
77
0
transformers
[ "transformers", "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language: en
thumbnail:
license: mit
tags:
- question-answering
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## BERT-base uncased model fine-tuned on SQuAD v1

This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the **linear layers contain 30.0%** of the original weights.

This model **CANNOT be used without the nn_pruning `optimize_model`** function, as it uses NoNorms instead of LayerNorms and this is not currently supported by the Transformers library.

It uses ReLUs instead of GeLUs as in the initial BERT network, to speed up inference. This does not need special handling, as it is supported by the Transformers library, and flagged in the model config by the `"hidden_act": "relu"` entry.

The model contains **45.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).

With a simple resizing of the linear matrices it ran **2.01x as fast as bert-base-uncased** on the evaluation. This is possible because the pruning method led to structured matrices: to visualize them, hover below on the plot to see the non-zero/zero parts of each matrix.

<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1/raw/main/model_card/density_info.js" id="c3b978cc-6d18-4fd0-a24b-e4369569d64d"></script></div>

In terms of accuracy, its **F1 is 89.19**, compared with 88.5 for bert-base-uncased, an **F1 gain of 0.69**.

## Fine-Pruning details

This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad).

This model is case-insensitive: it does not make a difference between english and English.

A side-effect of the block pruning is that some of the attention heads are completely removed: 55 heads were removed out of a total of 144 (38.2%). Here is a detailed view of how the remaining heads are distributed in the network after pruning.

<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1/raw/main/model_card/pruning_info.js" id="7de38b6d-774c-4313-a5a4-8e32f554d9ec"></script></div>

## Details of the SQuAD1.1 dataset

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K     |
| SQuAD1.1 | eval  | 11.1k     |

### Fine-tuning

- Python: `3.8.5`
- Machine specs:

```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```

### Results

**Pytorch model file size**: `374MB` (original BERT: `420MB`)

| Metric | # Value   | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf)) | Variation |
| ------ | --------- | --------- | --------- |
| **EM** | **82.21** | **80.8**  | **+1.41** |
| **F1** | **89.19** | **88.5**  | **+0.69** |

## Example Usage

Install nn_pruning: it contains the optimization script, which simply packs the linear layers into smaller ones by removing empty rows/columns.

`pip install nn_pruning`

Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` when the pipeline has loaded.

```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1",
    tokenizer="madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1"
)

print("bert-base-uncased parameters: 200.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")

qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")

print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})

print("Predictions", predictions)
```
AgentPublic/dpr-ctx_encoder-fr_qa-camembert
AgentPublic
2021-06-16T11:22:59Z
17
5
transformers
[ "transformers", "pytorch", "camembert", "fr", "dataset:piaf", "dataset:FQuAD", "dataset:SQuAD-FR", "arxiv:2004.04906", "arxiv:1911.03894", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: fr
datasets:
- piaf
- FQuAD
- SQuAD-FR
---

# dpr-ctx_encoder-fr_qa-camembert

## Description

French [DPR model](https://arxiv.org/abs/2004.04906) using [CamemBERT](https://arxiv.org/abs/1911.03894) as base, then fine-tuned on a combination of three French Q&A datasets.

## Data

### French Q&A
We use a combination of three French Q&A datasets:

1. [PIAFv1.1](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)
2. [FQuADv1.0](https://fquad.illuin.tech/)
3. [SQuAD-FR (SQuAD automatically translated to French)](https://github.com/Alikabbadj/French-SQuAD)

### Training
We use 90 562 random questions for `train` and 22 391 for `dev`. No question in `train` exists in `dev`. For each question, we have a single `positive_context` (the paragraph where the answer to this question is found) and around 30 `hard_negative_contexts`. Hard negative contexts are found by querying an ES instance (via bm25 retrieval) and getting the top-k candidates **that do not contain the answer**.

The files are available [here](https://drive.google.com/file/d/1W5Jm3sqqWlsWsx2sFpA39Ewn33PaLQ7U/view?usp=sharing).

### Evaluation
We use the FQuADv1.0 and French-SQuAD evaluation sets.

## Training Script
We use the official [Facebook DPR implementation](https://github.com/facebookresearch/DPR) with a slight modification: by default, the code works with RoBERTa models, but we changed a single line to make it easier to work with CamemBERT. This modification can be found [here](https://github.com/psorianom/DPR).

### Hyperparameters

```shell
python -m torch.distributed.launch --nproc_per_node=8 train_dense_encoder.py \
--max_grad_norm 2.0 \
--encoder_model_type fairseq_roberta \
--pretrained_file data/camembert-base \
--seed 12345 \
--sequence_length 256 \
--warmup_steps 1237 \
--batch_size 16 \
--do_lower_case \
--train_file ./data/DPR_FR_train.json \
--dev_file ./data/DPR_FR_dev.json \
--output_dir ./output/ \
--learning_rate 2e-05 \
--num_train_epochs 35 \
--dev_batch_size 16 \
--val_av_rank_start_epoch 30 \
--pretrained_model_cfg ./data/camembert-base/
```

## Evaluation results
We obtain the following results on the FQuAD and SQuAD-FR evaluation (or validation) sets. To obtain these results, we use [haystack's evaluation script](https://github.com/deepset-ai/haystack/blob/db4151bbc026f27c6d709fefef1088cd3f1e18b9/tutorials/Tutorial5_Evaluation.py) (**we report retrieval results only**).

### DPR

#### FQuAD v1.0 Evaluation
```shell
For 2764 out of 3184 questions (86.81%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.87
Retriever Mean Avg Precision: 0.57
```

#### SQuAD-FR Evaluation
```shell
For 8945 out of 10018 questions (89.29%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.89
Retriever Mean Avg Precision: 0.63
```

### BM25
For reference, BM25 gets the results shown below. As in the original paper, on SQuAD-like datasets the results of DPR are consistently outperformed by BM25.

#### FQuAD v1.0 Evaluation
```shell
For 2966 out of 3184 questions (93.15%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.93
Retriever Mean Avg Precision: 0.74
```

#### SQuAD-FR Evaluation
```shell
For 9353 out of 10018 questions (93.36%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.93
Retriever Mean Avg Precision: 0.77
```

## Usage

The results reported here are obtained with the `haystack` library.
To get to similar embeddings using exclusively HF `transformers` library, you can do the following: ```python from transformers import AutoTokenizer, AutoModel query = "Salut, mon chien est-il mignon ?" tokenizer = AutoTokenizer.from_pretrained("etalab-ia/dpr-ctx_encoder-fr_qa-camembert", do_lower_case=True) input_ids = tokenizer(query, return_tensors='pt')["input_ids"] model = AutoModel.from_pretrained("etalab-ia/dpr-ctx_encoder-fr_qa-camembert", return_dict=True) embeddings = model.forward(input_ids).pooler_output print(embeddings) ``` And with `haystack`, we use it as a retriever: ``` retriever = DensePassageRetriever( document_store=document_store, query_embedding_model="etalab-ia/dpr-question_encoder-fr_qa-camembert", passage_embedding_model="etalab-ia/dpr-ctx_encoder-fr_qa-camembert", model_version=dpr_model_tag, infer_tokenizer_classes=True, ) ``` ## Acknowledgments This work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224). ## Citations ### Datasets #### PIAF ``` @inproceedings{KeraronLBAMSSS20, author = {Rachel Keraron and Guillaume Lancrenon and Mathilde Bras and Fr{\'{e}}d{\'{e}}ric Allary and Gilles Moyse and Thomas Scialom and Edmundo{-}Pavel Soriano{-}Morales and Jacopo Staiano}, title = {Project {PIAF:} Building a Native French Question-Answering Dataset}, booktitle = {{LREC}}, pages = {5481--5490}, publisher = {European Language Resources Association}, year = {2020} } ``` #### FQuAD ``` @article{dHoffschmidt2020FQuADFQ, title={FQuAD: French Question Answering Dataset}, author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich}, journal={ArXiv}, year={2020}, volume={abs/2002.06071} } ``` #### SQuAD-FR ``` @MISC{kabbadj2018, author = "Kabbadj, Ali", title = "Something new in French Text Mining and Information Extraction (Universal Chatbot): Largest Q&A French training dataset (110 000+) ", editor = "linkedin.com", month = "November", year = "2018", url = "\url{https://www.linkedin.com/pulse/something-new-french-text-mining-information-chatbot-largest-kabbadj/}", note = "[Online; posted 11-November-2018]", } ``` ### Models #### CamemBERT HF model card : [https://huggingface.co/camembert-base](https://huggingface.co/camembert-base) ``` @inproceedings{martin2020camembert, title={CamemBERT: a Tasty French Language Model}, author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} } ``` #### DPR ``` @misc{karpukhin2020dense, title={Dense Passage Retrieval for Open-Domain Question Answering}, author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih}, year={2020}, eprint={2004.04906}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
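To illustrate how the two encoders referenced in this card fit together, here is a minimal retrieval-scoring sketch that ranks passages by the dot product of question and context embeddings, as in the DPR paper. The French question and passages are made-up examples, not taken from the training data.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Question and context encoders from this model family (both referenced in this card).
q_tok = AutoTokenizer.from_pretrained("etalab-ia/dpr-question_encoder-fr_qa-camembert", do_lower_case=True)
q_enc = AutoModel.from_pretrained("etalab-ia/dpr-question_encoder-fr_qa-camembert", return_dict=True)
c_tok = AutoTokenizer.from_pretrained("etalab-ia/dpr-ctx_encoder-fr_qa-camembert", do_lower_case=True)
c_enc = AutoModel.from_pretrained("etalab-ia/dpr-ctx_encoder-fr_qa-camembert", return_dict=True)

question = "Où se trouve la tour Eiffel ?"
passages = [
    "La tour Eiffel est située à Paris, sur le Champ-de-Mars.",
    "Frédéric Chopin était un compositeur polonais.",
]

with torch.no_grad():
    # (1, hidden) question embedding and (n_passages, hidden) context embeddings
    q_emb = q_enc(**q_tok(question, return_tensors="pt")).pooler_output
    c_emb = torch.cat([c_enc(**c_tok(p, return_tensors="pt")).pooler_output for p in passages])

scores = (q_emb @ c_emb.T).squeeze(0)  # dot-product similarity, as in the DPR paper
for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:8.2f}  {passage}")
```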
AgentPublic/dpr-question_encoder-fr_qa-camembert
AgentPublic
2021-06-16T10:10:09Z
37
8
transformers
[ "transformers", "pytorch", "camembert", "feature-extraction", "fr", "dataset:piaf", "dataset:FQuAD", "dataset:SQuAD-FR", "arxiv:2004.04906", "arxiv:1911.03894", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
---
language: fr
datasets:
- piaf
- FQuAD
- SQuAD-FR
---

# dpr-question_encoder-fr_qa-camembert

## Description

French [DPR model](https://arxiv.org/abs/2004.04906) using [CamemBERT](https://arxiv.org/abs/1911.03894) as base, then fine-tuned on a combination of three French Q&A datasets.

## Data

### French Q&A
We use a combination of three French Q&A datasets:

1. [PIAFv1.1](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)
2. [FQuADv1.0](https://fquad.illuin.tech/)
3. [SQuAD-FR (SQuAD automatically translated to French)](https://github.com/Alikabbadj/French-SQuAD)

### Training
We use 90 562 random questions for `train` and 22 391 for `dev`. No question in `train` exists in `dev`. For each question, we have a single `positive_context` (the paragraph where the answer to this question is found) and around 30 `hard_negative_contexts`. Hard negative contexts are found by querying an ES instance (via bm25 retrieval) and getting the top-k candidates **that do not contain the answer**.

The files are available [here](https://drive.google.com/file/d/1W5Jm3sqqWlsWsx2sFpA39Ewn33PaLQ7U/view?usp=sharing).

### Evaluation
We use the FQuADv1.0 and French-SQuAD evaluation sets.

## Training Script
We use the official [Facebook DPR implementation](https://github.com/facebookresearch/DPR) with a slight modification: by default, the code works with RoBERTa models, but we changed a single line to make it easier to work with CamemBERT. This modification can be found [here](https://github.com/psorianom/DPR).

### Hyperparameters

```shell
python -m torch.distributed.launch --nproc_per_node=8 train_dense_encoder.py \
--max_grad_norm 2.0 --encoder_model_type hf_bert --pretrained_file data/bert-base-multilingual-uncased \
--seed 12345 --sequence_length 256 --warmup_steps 1237 --batch_size 16 --do_lower_case \
--train_file DPR_FR_train.json \
--dev_file ./data/100_hard_neg_ctxs/DPR_FR_dev.json \
--output_dir ./output/bert --learning_rate 2e-05 --num_train_epochs 35 \
--dev_batch_size 16 --val_av_rank_start_epoch 25 \
--pretrained_model_cfg ./data/bert-base-multilingual-uncased
```

## Evaluation results
We obtain the following results on the FQuAD and SQuAD-FR evaluation (or validation) sets. To obtain these results, we use [haystack's evaluation script](https://github.com/deepset-ai/haystack/blob/db4151bbc026f27c6d709fefef1088cd3f1e18b9/tutorials/Tutorial5_Evaluation.py) (**we report retrieval results only**).

### DPR

#### FQuAD v1.0 Evaluation
```shell
For 2764 out of 3184 questions (86.81%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.87
Retriever Mean Avg Precision: 0.57
```

#### SQuAD-FR Evaluation
```shell
For 8945 out of 10018 questions (89.29%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.89
Retriever Mean Avg Precision: 0.63
```

### BM25
For reference, BM25 gets the results shown below. As in the original paper, on SQuAD-like datasets the results of DPR are consistently outperformed by BM25.

#### FQuAD v1.0 Evaluation
```shell
For 2966 out of 3184 questions (93.15%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.93
Retriever Mean Avg Precision: 0.74
```

#### SQuAD-FR Evaluation
```shell
For 9353 out of 10018 questions (93.36%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.93 Retriever Mean Avg Precision: 0.77 ``` ## Usage The results reported here are obtained with the `haystack` library. To get to similar embeddings using exclusively HF `transformers` library, you can do the following: ```python from transformers import AutoTokenizer, AutoModel query = "Salut, mon chien est-il mignon ?" tokenizer = AutoTokenizer.from_pretrained("etalab-ia/dpr-question_encoder-fr_qa-camembert", do_lower_case=True) input_ids = tokenizer(query, return_tensors='pt')["input_ids"] model = AutoModel.from_pretrained("etalab-ia/dpr-question_encoder-fr_qa-camembert", return_dict=True) embeddings = model.forward(input_ids).pooler_output print(embeddings) ``` And with `haystack`, we use it as a retriever: ``` retriever = DensePassageRetriever( document_store=document_store, query_embedding_model="etalab-ia/dpr-question_encoder-fr_qa-camembert", passage_embedding_model="etalab-ia/dpr-ctx_encoder-fr_qa-camembert", model_version=dpr_model_tag, infer_tokenizer_classes=True, ) ``` ## Acknowledgments This work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224). ## Citations ### Datasets #### PIAF ``` @inproceedings{KeraronLBAMSSS20, author = {Rachel Keraron and Guillaume Lancrenon and Mathilde Bras and Fr{\'{e}}d{\'{e}}ric Allary and Gilles Moyse and Thomas Scialom and Edmundo{-}Pavel Soriano{-}Morales and Jacopo Staiano}, title = {Project {PIAF:} Building a Native French Question-Answering Dataset}, booktitle = {{LREC}}, pages = {5481--5490}, publisher = {European Language Resources Association}, year = {2020} } ``` #### FQuAD ``` @article{dHoffschmidt2020FQuADFQ, title={FQuAD: French Question Answering Dataset}, author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich}, journal={ArXiv}, year={2020}, volume={abs/2002.06071} } ``` #### SQuAD-FR ``` @MISC{kabbadj2018, author = "Kabbadj, Ali", title = "Something new in French Text Mining and Information Extraction (Universal Chatbot): Largest Q&A French training dataset (110 000+) ", editor = "linkedin.com", month = "November", year = "2018", url = "\url{https://www.linkedin.com/pulse/something-new-french-text-mining-information-chatbot-largest-kabbadj/}", note = "[Online; posted 11-November-2018]", } ``` ### Models #### CamemBERT HF model card : [https://huggingface.co/camembert-base](https://huggingface.co/camembert-base) ``` @inproceedings{martin2020camembert, title={CamemBERT: a Tasty French Language Model}, author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} } ``` #### DPR ``` @misc{karpukhin2020dense, title={Dense Passage Retrieval for Open-Domain Question Answering}, author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih}, year={2020}, eprint={2004.04906}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
huggingtweets/onlinepete-sematarygravemn-superpiss
huggingtweets
2021-06-16T09:25:28Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/onlinepete-sematarygravemn-superpiss/1623835524372/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1077258168315473920/b8-3h6l4_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/456958582731603969/QZKpv6eI_400x400.jpeg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1364495290833637382/kqjmEF5A_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">xbox 720 & im pete online & SEMATARY GRAVE MAN ✟ ✟ ✟</div> <div style="text-align: center; font-size: 14px;">@onlinepete-sematarygravemn-superpiss</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from xbox 720 & im pete online & SEMATARY GRAVE MAN ✟ ✟ ✟. | Data | xbox 720 | im pete online | SEMATARY GRAVE MAN ✟ ✟ ✟ | | --- | --- | --- | --- | | Tweets downloaded | 199 | 3188 | 517 | | Retweets | 0 | 92 | 67 | | Short tweets | 7 | 1003 | 101 | | Tweets kept | 192 | 2093 | 349 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nem3gj6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @onlinepete-sematarygravemn-superpiss's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/341d9x8y) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/341d9x8y/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/onlinepete-sematarygravemn-superpiss') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/biocrimed-bladeecity-w3bcam
huggingtweets
2021-06-16T09:00:55Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/biocrimed-bladeecity-w3bcam/1623834051692/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1398220397049434117/3i7JMNiF_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1399230370109825024/FypJacJv_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1404352885815664642/BEvtg0q4_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">bladee & Nothing person 2 & headaches</div> <div style="text-align: center; font-size: 14px;">@biocrimed-bladeecity-w3bcam</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from bladee & Nothing person 2 & headaches. | Data | bladee | Nothing person 2 | headaches | | --- | --- | --- | --- | | Tweets downloaded | 1599 | 1863 | 3231 | | Retweets | 313 | 117 | 62 | | Short tweets | 486 | 714 | 1451 | | Tweets kept | 800 | 1032 | 1718 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/37jgy6z4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @biocrimed-bladeecity-w3bcam's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1xg0n2ib) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1xg0n2ib/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/biocrimed-bladeecity-w3bcam') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/gresham2x
huggingtweets
2021-06-16T01:23:49Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/gresham2x/1623806625441/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1286445572795305990/SPf0WZuT_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">BON</div> <div style="text-align: center; font-size: 14px;">@gresham2x</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from BON. | Data | BON | | --- | --- | | Tweets downloaded | 3235 | | Retweets | 172 | | Short tweets | 708 | | Tweets kept | 2355 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1mb1dknt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gresham2x's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/kgizc73h) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/kgizc73h/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/gresham2x') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Davlan/bert-base-multilingual-cased-finetuned-naija
Davlan
2021-06-15T20:39:28Z
13
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
---
language: pcm
---

# bert-base-multilingual-cased-finetuned-naija

## Model description
**bert-base-multilingual-cased-finetuned-naija** is a **Nigerian-Pidgin BERT** model obtained by fine-tuning the **bert-base-multilingual-cased** model on Nigerian-Pidgin language texts. It provides **better performance** than multilingual BERT on named entity recognition datasets.

Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on a Nigerian-Pidgin corpus.

## Intended uses & limitations

#### How to use
You can use this model with the Transformers *pipeline* for masked token prediction.

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-naija')
>>> unmasker("Another attack on ambulance happen for Koforidua in March [MASK] year where robbers kill Ambulance driver")
```

#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.

## Training data
This model was fine-tuned on JW300 + [BBC Pidgin](https://www.bbc.com/pidgin)

## Training procedure
This model was trained on a single NVIDIA V100 GPU

## Eval results on Test set (F-score, average over 5 runs)

Dataset| mBERT F1 | pcm_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.23 | 89.95

### BibTeX entry and citation info
By David Adelani
mukund/privbert
mukund
2021-06-15T19:36:42Z
222
1
transformers
[ "transformers", "pytorch", "tf", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# PrivBERT PrivBERT is a privacy policy language model. We pre-trained PrivBERT on ~1 million privacy policies starting with the pretrained Roberta model. The data is available at [https://privaseer.ist.psu.edu/data](https://privaseer.ist.psu.edu/data) ## Usage ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("mukund/privbert") model = AutoModel.from_pretrained("mukund/privbert") ``` ## License If you use this dataset in research, you must cite the below paper. ``` Mukund Srinath, Shomir Wilson and C. Lee Giles. Privacy at Scale: Introducing the PrivaSeer Corpus of Web Privacy Policies. In Proc. ACL 2021. ``` For research, teaching, and scholarship purposes, the model is available under a CC BY-NC-SA license. Please contact us for any requests regarding commercial use.
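Since PrivBERT is a RoBERTa-style masked language model, it can also be queried through the standard Transformers fill-mask pipeline. The example sentence below is an invented, privacy-policy-like prompt, not text from the PrivaSeer corpus.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="mukund/privbert", tokenizer="mukund/privbert")

# RoBERTa-style models use <mask> as the mask token.
for pred in fill("We may share your personal <mask> with third parties."):
    print(f"{pred['score']:.3f}  {pred['sequence']}")
```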
huggingtweets/ai_hexcrawl-gods_txt
huggingtweets
2021-06-15T16:52:45Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/ai_hexcrawl-gods_txt/1623775960967/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1391882949650440200/lmEKl2ZQ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1288860183515607041/uHoTEsFz_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">AI Hexcrawl & GPT-2 Religion AI</div> <div style="text-align: center; font-size: 14px;">@ai_hexcrawl-gods_txt</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from AI Hexcrawl & GPT-2 Religion AI. | Data | AI Hexcrawl | GPT-2 Religion AI | | --- | --- | --- | | Tweets downloaded | 245 | 3249 | | Retweets | 8 | 68 | | Short tweets | 0 | 9 | | Tweets kept | 237 | 3172 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/37knqj1s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ai_hexcrawl-gods_txt's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/acyab0oh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/acyab0oh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ai_hexcrawl-gods_txt') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
neuropark/sahajBERT-NCC
neuropark
2021-06-15T12:40:08Z
11
2
transformers
[ "transformers", "pytorch", "albert", "text-classification", "collaborative", "bengali", "SequenceClassification", "bn", "dataset:IndicGlue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: bn
tags:
- collaborative
- bengali
- SequenceClassification
license: apache-2.0
datasets: IndicGlue
metrics:
- Loss
- Accuracy
- Precision
- Recall
widget:
- text: "এশিয়ায় প্রথম দৃষ্টিহীন ব্যক্তির মাউন্ট এভারেস্ট জয়|"
---

# sahajBERT News Article Classification

## Model description

[sahajBERT](https://huggingface.co/neuropark/sahajBERT) fine-tuned for news article classification using the `sna.bn` split of [IndicGlue](https://huggingface.co/datasets/indic_glue).

The model is trained for classifying articles into 6 different classes:

| Label id | Label |
|:--------:|:----:|
|0 | kolkata|
|1 | state|
|2 | national|
|3 | sports|
|4 | entertainment|
|5 | international|

## Intended uses & limitations

#### How to use

You can use this model directly with a pipeline for sequence classification:
```python
from transformers import AlbertForSequenceClassification, TextClassificationPipeline, PreTrainedTokenizerFast

# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NCC")

# Initialize model
model = AlbertForSequenceClassification.from_pretrained("neuropark/sahajBERT-NCC")

# Initialize pipeline
pipeline = TextClassificationPipeline(tokenizer=tokenizer, model=model)

raw_text = "এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।" # Change me
output = pipeline(raw_text)
```

#### Limitations and bias
<!-- Provide examples of latent issues and potential remediations. -->
WIP

## Training data

The model was initialized with pre-trained weights of [sahajBERT](https://huggingface.co/neuropark/sahajBERT) at step 19519 and trained on the `sna.bn` split of [IndicGlue](https://huggingface.co/datasets/indic_glue).

## Training procedure
Coming soon!

## Eval results

- Loss: 0.2477145493030548
- Accuracy: 0.926293408929837
- Macro F1: 0.9079785326650756
- Recall: 0.926293408929837
- Weighted F1: 0.9266428029354202
- Macro Precision: 0.9109938492260489
- Micro Precision: 0.926293408929837
- Weighted Precision: 0.9288535478995414
- Macro Recall: 0.9069095007692186
- Micro Recall: 0.926293408929837
- Weighted Recall: 0.926293408929837

### BibTeX entry and citation info
Coming soon!
huggingtweets/rantspakistani
huggingtweets
2021-06-15T09:17:31Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/rantspakistani/1623748645565/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1272527278744973315/PVkL9_v-_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Rants</div> <div style="text-align: center; font-size: 14px;">@rantspakistani</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Rants. | Data | Rants | | --- | --- | | Tweets downloaded | 3221 | | Retweets | 573 | | Short tweets | 142 | | Tweets kept | 2506 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2wyl63o2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rantspakistani's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/d2h287dr) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/d2h287dr/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/rantspakistani') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
neuropark/sahajBERT-NER
neuropark
2021-06-15T08:12:18Z
18
2
transformers
[ "transformers", "pytorch", "albert", "token-classification", "collaborative", "bengali", "NER", "bn", "dataset:xtreme", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
language: bn
tags:
- collaborative
- bengali
- NER
license: apache-2.0
datasets: xtreme
metrics:
- Loss
- Accuracy
- Precision
- Recall
---

# sahajBERT Named Entity Recognition

## Model description

[sahajBERT](https://huggingface.co/neuropark/sahajBERT) fine-tuned for NER using the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann).

Named entities predicted by the model:

| Label id | Label |
|:--------:|:----:|
|0 |O|
|1 |B-PER|
|2 |I-PER|
|3 |B-ORG|
|4 |I-ORG|
|5 |B-LOC|
|6 |I-LOC|

## Intended uses & limitations

#### How to use

You can use this model directly with a pipeline for token classification:
```python
from transformers import AlbertForTokenClassification, TokenClassificationPipeline, PreTrainedTokenizerFast

# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NER")

# Initialize model
model = AlbertForTokenClassification.from_pretrained("neuropark/sahajBERT-NER")

# Initialize pipeline
pipeline = TokenClassificationPipeline(tokenizer=tokenizer, model=model)

raw_text = "এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।" # Change me
output = pipeline(raw_text)
```

#### Limitations and bias
<!-- Provide examples of latent issues and potential remediations. -->
WIP

## Training data

The model was initialized with pre-trained weights of [sahajBERT](https://huggingface.co/neuropark/sahajBERT) at step 19519 and trained on the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann).

## Training procedure
Coming soon!

## Eval results

- Loss: 0.11714419722557068
- Accuracy: 0.9772286821705426
- Precision: 0.9585365853658536
- Recall: 0.9651277013752456
- F1: 0.9618208516886931

### BibTeX entry and citation info
Coming soon!
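The pipeline above emits one prediction per token. If you need entity spans rather than token tags, here is a rough post-processing sketch; it assumes the standard token-classification output fields (`entity`, `word`) of the installed Transformers version, and the naive whitespace joining may render SentencePiece subwords imperfectly.

```python
def group_entities(token_preds):
    """Merge consecutive B-/I- token predictions into (entity_type, text) spans."""
    spans, current_type, current_tokens = [], None, []
    for pred in token_preds:
        label = pred["entity"]  # e.g. "B-PER", "I-PER", "O"
        if label.startswith("B-") or (label.startswith("I-") and current_type != label[2:]):
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = label[2:], [pred["word"]]
        elif label.startswith("I-"):
            current_tokens.append(pred["word"])
        else:  # "O" closes any open span (the pipeline may already filter these out)
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_tokens:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

print(group_entities(output))  # `output` from the pipeline call above
```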
Aero/Tsubomi-Haruno
Aero
2021-06-14T22:21:24Z
16
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---

# DialoGPT Trained on the Speech of a Game Character

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("Aero/Tsubomi-Haruno")
model = AutoModelWithLMHead.from_pretrained("Aero/Tsubomi-Haruno")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids,
        max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("Tsubomi: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
assemblyai/bert-large-uncased-sst2
assemblyai
2021-06-14T22:04:39Z
380
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# BERT-Large-Uncased for Sentiment Analysis This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) originally released in ["BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"](https://arxiv.org/abs/1810.04805) and trained on the [Stanford Sentiment Treebank v2 (SST2)](https://nlp.stanford.edu/sentiment/); part of the [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com) benchmark. This model was fine-tuned by the team at [AssemblyAI](https://www.assemblyai.com) and is released with the [corresponding blog post](). ## Usage To download and utilize this model for sentiment analysis please execute the following: ```python import torch.nn.functional as F from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("assemblyai/bert-large-uncased-sst2") model = AutoModelForSequenceClassification.from_pretrained("assemblyai/bert-large-uncased-sst2") tokenized_segments = tokenizer(["AssemblyAI is the best speech-to-text API for modern developers with performance being second to none!"], return_tensors="pt", padding=True, truncation=True) tokenized_segments_input_ids, tokenized_segments_attention_mask = tokenized_segments.input_ids, tokenized_segments.attention_mask model_predictions = F.softmax(model(input_ids=tokenized_segments_input_ids, attention_mask=tokenized_segments_attention_mask)['logits'], dim=1) print("Positive probability: "+str(model_predictions[0][1].item()*100)+"%") print("Negative probability: "+str(model_predictions[0][0].item()*100)+"%") ``` For questions about how to use this model feel free to contact the team at [AssemblyAI](https://www.assemblyai.com)!
huggingtweets/fardeg1-jaypomeister-shortdaggerdick
huggingtweets
2021-06-14T21:56:30Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/fardeg1-jaypomeister-shortdaggerdick/1623707785167/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1324348950145544192/_NgUzqaJ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1394641363740905478/eNKpHxUd_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1379154157098110977/lajO-om1_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">jaypo & Fardeg & Nial</div> <div style="text-align: center; font-size: 14px;">@fardeg1-jaypomeister-shortdaggerdick</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from jaypo & Fardeg & Nial. | Data | jaypo | Fardeg | Nial | | --- | --- | --- | --- | | Tweets downloaded | 399 | 3130 | 441 | | Retweets | 31 | 392 | 46 | | Short tweets | 168 | 785 | 202 | | Tweets kept | 200 | 1953 | 193 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/f0npandx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fardeg1-jaypomeister-shortdaggerdick's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/v3bv5lt7) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/v3bv5lt7/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/fardeg1-jaypomeister-shortdaggerdick') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/ubtiviv
huggingtweets
2021-06-14T14:48:42Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/ubtiviv/1623682118645/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/883722377661730817/YvEUxO80_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">transmission creeper</div> <div style="text-align: center; font-size: 14px;">@ubtiviv</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from transmission creeper. | Data | transmission creeper | | --- | --- | | Tweets downloaded | 924 | | Retweets | 6 | | Short tweets | 39 | | Tweets kept | 879 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xh2gevj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ubtiviv's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1zp8oiej) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1zp8oiej/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ubtiviv') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/eptun2
huggingtweets
2021-06-14T14:00:23Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/eptun2/1623679218637/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1365330152155197440/u6okFTrC_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">patrik</div> <div style="text-align: center; font-size: 14px;">@eptun2</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from patrik. | Data | patrik | | --- | --- | | Tweets downloaded | 1197 | | Retweets | 110 | | Short tweets | 261 | | Tweets kept | 826 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/12mtydd9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eptun2's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/isk9pksq) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/isk9pksq/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/eptun2') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
begimayk/try1
begimayk
2021-06-14T13:09:54Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
```python
from transformers import pipeline
import json
import requests

API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B"
headers = {"Authorization": "Bearer api_hwKbAMoHAzOVDdCxgfpPxMjjcrdKHMakhg"}

def query(payload):
    data = json.dumps(payload)
    response = requests.request("POST", API_URL, headers=headers, data=data)
    return json.loads(response.content.decode("utf-8"))

data = query("Can you please let us know more details about your ")
```
valhalla/distilbart-mnli-12-3
valhalla
2021-06-14T10:29:48Z
8,317
19
transformers
[ "transformers", "pytorch", "jax", "bart", "text-classification", "distilbart", "distilbart-mnli", "zero-shot-classification", "dataset:mnli", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2022-03-02T23:29:05Z
---
datasets:
- mnli
tags:
- distilbart
- distilbart-mnli
pipeline_tag: zero-shot-classification
---

# DistilBart-MNLI

distilbart-mnli is the distilled version of bart-large-mnli created using the **No Teacher Distillation** technique proposed for BART summarisation by Huggingface, [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart).

We just copy alternating layers from `bart-large-mnli` and finetune more on the same data.

| | matched acc | mismatched acc |
| ------------------------------------------------------------------------------------ | ----------- | -------------- |
| [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) (baseline, 12-12) | 89.9 | 90.01 |
| [distilbart-mnli-12-1](https://huggingface.co/valhalla/distilbart-mnli-12-1) | 87.08 | 87.5 |
| [distilbart-mnli-12-3](https://huggingface.co/valhalla/distilbart-mnli-12-3) | 88.1 | 88.19 |
| [distilbart-mnli-12-6](https://huggingface.co/valhalla/distilbart-mnli-12-6) | 89.19 | 89.01 |
| [distilbart-mnli-12-9](https://huggingface.co/valhalla/distilbart-mnli-12-9) | 89.56 | 89.52 |

This is a very simple and effective technique, as we can see the performance drop is very small. Detailed performance trade-offs will be posted in this [sheet](https://docs.google.com/spreadsheets/d/1dQeUvAKpScLuhDV1afaPJRRAE55s2LpIzDVA5xfqxvk/edit?usp=sharing).

## Fine-tuning

If you want to train these models yourself, clone the [distillbart-mnli repo](https://github.com/patil-suraj/distillbart-mnli) and follow the steps below.

Clone and install transformers from source
```bash
git clone https://github.com/huggingface/transformers.git
pip install -qqq -U ./transformers
```

Download MNLI data
```bash
python transformers/utils/download_glue_data.py --data_dir glue_data --tasks MNLI
```

Create student model
```bash
python create_student.py \
  --teacher_model_name_or_path facebook/bart-large-mnli \
  --student_encoder_layers 12 \
  --student_decoder_layers 6 \
  --save_path student-bart-mnli-12-6
```

Start fine-tuning
```bash
python run_glue.py args.json
```

You can find the logs of these trained models in this [wandb project](https://wandb.ai/psuraj/distilbart-mnli).
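For completeness, since this card's pipeline tag is `zero-shot-classification` but no inference example is given, here is a minimal usage sketch with the standard Transformers zero-shot pipeline; the input sentence and candidate labels are arbitrary examples.

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="valhalla/distilbart-mnli-12-3")

result = classifier(
    "The new GPU delivers twice the throughput at the same power budget.",
    candidate_labels=["technology", "sports", "politics"],
)
print(result["labels"][0], result["scores"][0])  # highest-scoring label first
```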
valhalla/distilbart-mnli-12-1
valhalla
2021-06-14T10:27:55Z
109,694
51
transformers
[ "transformers", "pytorch", "jax", "bart", "text-classification", "distilbart", "distilbart-mnli", "zero-shot-classification", "dataset:mnli", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2022-03-02T23:29:05Z
---
datasets:
- mnli
tags:
- distilbart
- distilbart-mnli
pipeline_tag: zero-shot-classification
---

# DistilBart-MNLI

distilbart-mnli is the distilled version of bart-large-mnli created using the **No Teacher Distillation** technique proposed for BART summarisation by Huggingface, [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart). We simply copy alternating layers from `bart-large-mnli` and continue fine-tuning on the same data.

| | matched acc | mismatched acc |
| ------------------------------------------------------------------------------------ | ----------- | -------------- |
| [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) (baseline, 12-12) | 89.9 | 90.01 |
| [distilbart-mnli-12-1](https://huggingface.co/valhalla/distilbart-mnli-12-1) | 87.08 | 87.5 |
| [distilbart-mnli-12-3](https://huggingface.co/valhalla/distilbart-mnli-12-3) | 88.1 | 88.19 |
| [distilbart-mnli-12-6](https://huggingface.co/valhalla/distilbart-mnli-12-6) | 89.19 | 89.01 |
| [distilbart-mnli-12-9](https://huggingface.co/valhalla/distilbart-mnli-12-9) | 89.56 | 89.52 |

This is a simple yet effective technique; as the table shows, the performance drop is small. Detailed performance trade-offs will be posted in this [sheet](https://docs.google.com/spreadsheets/d/1dQeUvAKpScLuhDV1afaPJRRAE55s2LpIzDVA5xfqxvk/edit?usp=sharing).

## Fine-tuning

If you want to train these models yourself, clone the [distillbart-mnli repo](https://github.com/patil-suraj/distillbart-mnli) and follow the steps below.

Clone and install transformers from source:

```bash
git clone https://github.com/huggingface/transformers.git
pip install -qqq -U ./transformers
```

Download MNLI data:

```bash
python transformers/utils/download_glue_data.py --data_dir glue_data --tasks MNLI
```

Create the student model:

```bash
python create_student.py \
  --teacher_model_name_or_path facebook/bart-large-mnli \
  --student_encoder_layers 12 \
  --student_decoder_layers 6 \
  --save_path student-bart-mnli-12-6
```

Start fine-tuning:

```bash
python run_glue.py args.json
```

You can find the logs of these trained models in this [wandb project](https://wandb.ai/psuraj/distilbart-mnli).
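The layer-copying step described above ("copy alternating layers from `bart-large-mnli`") can be illustrated with a short sketch. This is *not* the repo's `create_student.py`; it is a simplified illustration of the idea for the "12-1" layout, and the choice of which teacher layer to keep is an assumption:

```python
import copy

from transformers import BartForSequenceClassification

# Load the 12-12 teacher.
teacher = BartForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

# Build a student config with a single decoder layer (the "12-1" layout).
student_config = copy.deepcopy(teacher.config)
student_config.decoder_layers = 1
student = BartForSequenceClassification(student_config)

# Copy the full encoder and the classification head from the teacher.
student.model.encoder.load_state_dict(teacher.model.encoder.state_dict())
student.classification_head.load_state_dict(teacher.classification_head.state_dict())

# Copy the non-layer decoder parameters (embeddings and embedding layer norm).
student.model.decoder.embed_tokens.load_state_dict(teacher.model.decoder.embed_tokens.state_dict())
student.model.decoder.embed_positions.load_state_dict(teacher.model.decoder.embed_positions.state_dict())
student.model.decoder.layernorm_embedding.load_state_dict(teacher.model.decoder.layernorm_embedding.state_dict())

# Initialize the student decoder layer from one teacher decoder layer
# (index 0 here; which layer(s) to keep is a design choice).
student.model.decoder.layers[0].load_state_dict(
    teacher.model.decoder.layers[0].state_dict()
)
```

After this initialization, the student is fine-tuned further on MNLI, which is what recovers most of the teacher's accuracy.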
huggingtweets/nilsmedzkills
huggingtweets
2021-06-14T10:24:42Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: en
thumbnail: https://www.huggingtweets.com/nilsmedzkills/1623666278497/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1039164884305551367/6byB20dK_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">NilsMedZkills</div>
<div style="text-align: center; font-size: 14px;">@nilsmedzkills</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from NilsMedZkills.

| Data | NilsMedZkills |
| --- | --- |
| Tweets downloaded | 341 |
| Retweets | 25 |
| Short tweets | 89 |
| Tweets kept | 227 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3lzg64xn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nilsmedzkills's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1bzoss63) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1bzoss63/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/nilsmedzkills')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
valhalla/bart-large-finetuned-squadv1
valhalla
2021-06-14T10:20:35Z
592
7
transformers
[ "transformers", "pytorch", "jax", "bart", "question-answering", "dataset:squad", "arxiv:1910.13461", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
datasets:
- squad
---

# BART-LARGE finetuned on SQuADv1

This is the bart-large model finetuned on the SQuADv1 dataset for the question answering task.

## Model details

BART was proposed in the [paper](https://arxiv.org/abs/1910.13461) **BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension**. BART is a seq2seq model intended for both NLG and NLU tasks.

To use BART for question answering tasks, we feed the complete document into the encoder and decoder, and use the top hidden state of the decoder as a representation for each word. This representation is used to classify the token. As reported in the paper, bart-large achieves results comparable to RoBERTa on SQuAD. Another notable property of BART is that it can handle sequences with up to 1024 tokens.

| Param               | #Value |
|---------------------|--------|
| encoder layers      | 12     |
| decoder layers      | 12     |
| feed-forward size   | 4096   |
| num attention heads | 16     |
| on disk size        | 1.63GB |

## Model training

This model was trained on a Google Colab V100 GPU. You can find the fine-tuning colab here [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1I5cK1M_0dLaf5xoewh6swcm5nAInfwHy?usp=sharing).

## Results

The results are slightly worse than those given in the paper, where the authors report that bart-large achieves 88.8 EM and 94.6 F1.

| Metric | #Value  |
|--------|---------|
| EM     | 86.8022 |
| F1     | 92.7342 |

## Model in Action 🚀

```python
from transformers import BartTokenizer, BartForQuestionAnswering
import torch

tokenizer = BartTokenizer.from_pretrained('valhalla/bart-large-finetuned-squadv1')
model = BartForQuestionAnswering.from_pretrained('valhalla/bart-large-finetuned-squadv1')

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
encoding = tokenizer(question, text, return_tensors='pt')
input_ids = encoding['input_ids']
attention_mask = encoding['attention_mask']

# The model returns per-token start and end logits for the answer span.
start_scores, end_scores = model(input_ids, attention_mask=attention_mask, output_attentions=False)[:2]

# Take the highest-scoring start/end positions and decode the span between them.
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
answer = ' '.join(all_tokens[torch.argmax(start_scores): torch.argmax(end_scores) + 1])
answer = tokenizer.convert_tokens_to_ids(answer.split())
answer = tokenizer.decode(answer)
# answer => 'a nice puppet'
```

> Created with ❤️ by Suraj Patil [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/patil-suraj/) [![Twitter icon](https://cdn0.iconfinder.com/data/icons/shift-logotypes/32/Twitter-32.png)](https://twitter.com/psuraj28)
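As an alternative to the manual span decoding above, the same checkpoint can also be loaded through the `question-answering` pipeline, which handles tokenization and span extraction internally. This is an addition to the card, not part of the original; a minimal sketch:

```python
from transformers import pipeline

# The pipeline loads the checkpoint via AutoModelForQuestionAnswering.
qa = pipeline("question-answering", model="valhalla/bart-large-finetuned-squadv1")

result = qa(question="Who was Jim Henson?", context="Jim Henson was a nice puppet")
print(result["answer"])  # expected to be something like "a nice puppet"
```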
sshleifer/distilbart-xsum-6-6
sshleifer
2021-06-14T08:25:26Z
43
0
transformers
[ "transformers", "pytorch", "jax", "bart", "text2text-generation", "summarization", "en", "dataset:cnn_dailymail", "dataset:xsum", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
---
language: en
tags:
- summarization
license: apache-2.0
datasets:
- cnn_dailymail
- xsum
thumbnail: https://huggingface.co/front/thumbnails/distilbart_medium.png
---

### Usage

This checkpoint should be loaded into `BartForConditionalGeneration.from_pretrained`. See the [BART docs](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration) for more information.

### Metrics for DistilBART models

| Model Name                 | MM Params | Inference Time (MS) | Speedup | Rouge 2 | Rouge-L |
|:---------------------------|----------:|--------------------:|--------:|--------:|--------:|
| distilbart-xsum-12-1       |       222 |                  90 |    2.54 |   18.31 |   33.37 |
| distilbart-xsum-6-6        |       230 |                 132 |    1.73 |   20.92 |   35.73 |
| distilbart-xsum-12-3       |       255 |                 106 |    2.16 |   21.37 |   36.39 |
| distilbart-xsum-9-6        |       268 |                 136 |    1.68 |   21.72 |   36.61 |
| bart-large-xsum (baseline) |       406 |                 229 |       1 |   21.85 |   36.50 |
| distilbart-xsum-12-6       |       306 |                 137 |    1.68 |   22.12 |   36.99 |
| bart-large-cnn (baseline)  |       406 |                 381 |       1 |   21.06 |   30.63 |
| distilbart-12-3-cnn        |       255 |                 214 |    1.78 |   20.57 |   30.00 |
| distilbart-12-6-cnn        |       306 |                 307 |    1.24 |   21.26 |   30.59 |
| distilbart-6-6-cnn         |       230 |                 182 |    2.09 |   20.17 |   29.70 |
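Since the card only notes which class to load the checkpoint into, the snippet below is a minimal, illustrative generation sketch (the article text and generation settings are placeholders, not the model's tuned defaults):

```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("sshleifer/distilbart-xsum-6-6")
model = BartForConditionalGeneration.from_pretrained("sshleifer/distilbart-xsum-6-6")

article = "..."  # placeholder: the document to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)

# Beam search settings here are illustrative.
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=62)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```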