| Column | Type | Range / values |
| --- | --- | --- |
| modelId | string | lengths 4–81 |
| tags | sequence | |
| pipeline_tag | string | 17 classes |
| config | dict | |
| downloads | int64 | 0–59.7M |
| first_commit | unknown | |
| card | string | lengths 51–438k |
ArBert/albert-base-v2-finetuned-ner-gmm-twitter
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
"2021-05-23T16:27:51Z"
--- language: - en tags: - punctuation license: mit datasets: - yelp_polarity metrics: - f1 --- # ✨ bert-restore-punctuation [![forthebadge](https://forthebadge.com/images/badges/gluten-free.svg)]() This is a bert-base-uncased model fine-tuned for punctuation restoration on [Yelp Reviews](https://www.tensorflow.org/datasets/catalog/yelp_polarity_reviews). The model predicts the punctuation and upper-casing of plain, lower-cased text. An example use case is ASR output, or other cases where text has lost its punctuation. This model is intended for direct use as a punctuation restoration model for the general English language. Alternatively, you can use this for further fine-tuning on domain-specific texts for punctuation restoration tasks. The model restores the following punctuation marks -- **[! ? . , - : ; ' ]** The model also restores the upper-casing of words. ----------------------------------------------- ## 🚋 Usage **Below is a quick way to get up and running with the model.** 1. First, install the package. ```bash pip install rpunct ``` 2. Sample Python code. ```python from rpunct import RestorePuncts # The default language is 'english' rpunct = RestorePuncts() rpunct.punctuate("""in 2018 cornell researchers built a high-powered detector that in combination with an algorithm-driven process called ptychography set a world record by tripling the resolution of a state-of-the-art electron microscope as successful as it was that approach had a weakness it only worked with ultrathin samples that were a few atoms thick anything thicker would cause the electrons to scatter in ways that could not be disentangled now a team again led by david muller the samuel b eckert professor of engineering has bested its own record by a factor of two with an electron microscope pixel array detector empad that incorporates even more sophisticated 3d reconstruction algorithms the resolution is so fine-tuned the only blurring that remains is the thermal jiggling of the atoms themselves""") # Outputs the following: # In 2018, Cornell researchers built a high-powered detector that, in combination with an algorithm-driven process called Ptychography, set a world record by tripling the # resolution of a state-of-the-art electron microscope. As successful as it was, that approach had a weakness. It only worked with ultrathin samples that were a few atoms # thick. Anything thicker would cause the electrons to scatter in ways that could not be disentangled. Now, a team again led by David Muller, the Samuel B. # Eckert Professor of Engineering, has bested its own record by a factor of two with an Electron microscope pixel array detector empad that incorporates even more # sophisticated 3d reconstruction algorithms. The resolution is so fine-tuned the only blurring that remains is the thermal jiggling of the atoms themselves. ``` **This model works on arbitrarily large English text and uses a GPU if available.** ----------------------------------------------- ## 📡 Training data Here is the number of product reviews we used for fine-tuning the model: | Language | Number of text samples | | -------- | ----------------- | | English | 560,000 | We found the best convergence around _**3 epochs**_, which is what is presented here and available for download.
----------------------------------------------- ## 🎯 Accuracy The fine-tuned model obtained the following accuracy on 45,990 held-out text samples: | Accuracy | Overall F1 | Eval Support | | -------- | ---------------------- | ------------------- | | 91% | 90% | 45,990 Below is a breakdown of the performance of the model by each label: | label | precision | recall | f1-score | support| | --------- | -------------|-------- | ----------|--------| | **!** | 0.45 | 0.17 | 0.24 | 424 | **!+Upper** | 0.43 | 0.34 | 0.38 | 98 | **'** | 0.60 | 0.27 | 0.37 | 11 | **,** | 0.59 | 0.51 | 0.55 | 1522 | **,+Upper** | 0.52 | 0.50 | 0.51 | 239 | **-** | 0.00 | 0.00 | 0.00 | 18 | **.** | 0.69 | 0.84 | 0.75 | 2488 | **.+Upper** | 0.65 | 0.52 | 0.57 | 274 | **:** | 0.52 | 0.31 | 0.39 | 39 | **:+Upper** | 0.36 | 0.62 | 0.45 | 16 | **;** | 0.00 | 0.00 | 0.00 | 17 | **?** | 0.54 | 0.48 | 0.51 | 46 | **?+Upper** | 0.40 | 0.50 | 0.44 | 4 | **none** | 0.96 | 0.96 | 0.96 |35352 | **Upper** | 0.84 | 0.82 | 0.83 | 5442 ----------------------------------------------- ## ☕ Contact Contact [Daulet Nurmanbetov]([email protected]) for questions, feedback and/or requests for similar models. -----------------------------------------------
ArBert/albert-base-v2-finetuned-ner-gmm
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
"2022-01-13T00:30:42Z"
--- tags: - conversational --- # DialoGPT KaeyaBot model
ArBert/albert-base-v2-finetuned-ner-kmeans-twitter
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
"2022-01-15T05:09:37Z"
--- tags: - conversational --- # DialoGPT KaeyaBot model
ArBert/albert-base-v2-finetuned-ner-kmeans
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
"2022-01-12T07:52:54Z"
--- tags: - conversational --- # DialoGPT LisaBot model
ArBert/bert-base-uncased-finetuned-ner-agglo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2022-01-25T22:30:35Z"
--- tags: - conversational --- # DialoGPT KaeyaBot model
ArBert/bert-base-uncased-finetuned-ner-gmm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2021-12-04T16:52:22Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: opus-mt-de-en-finetuned-de-to-en-second results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16 type: wmt16 args: de-en metrics: - name: Bleu type: bleu value: 37.9762 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-de-en-finetuned-de-to-en-second This model is a fine-tuned version of [Helsinki-NLP/opus-mt-de-en](https://huggingface.co/Helsinki-NLP/opus-mt-de-en) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.2282 - Bleu: 37.9762 - Gen Len: 25.3696 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 157 | 1.1837 | 38.8278 | 25.22 | | No log | 2.0 | 314 | 1.2057 | 38.3047 | 25.2908 | | No log | 3.0 | 471 | 1.2167 | 38.231 | 25.316 | | 1.4808 | 4.0 | 628 | 1.2256 | 37.9871 | 25.3556 | | 1.4808 | 5.0 | 785 | 1.2282 | 37.9762 | 25.3696 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
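The card above gives no usage snippet, so here is a minimal, hedged sketch of loading such a Marian-based German-to-English checkpoint with the `transformers` pipeline API. The base model id (`Helsinki-NLP/opus-mt-de-en`) is taken from the card; the fine-tuned checkpoint's own repository path is not stated, so substitute it once known.

```python
from transformers import pipeline

# Sketch only: the base checkpoint named in the card is used here as a stand-in.
# Replace it with the published path of "opus-mt-de-en-finetuned-de-to-en-second".
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")
result = translator("Das Modell wurde auf dem wmt16-Datensatz nachtrainiert.")
print(result[0]["translation_text"])
```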
ArBert/roberta-base-finetuned-ner
[ "pytorch", "tensorboard", "roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rare-puppers results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9861111044883728 --- # rare-puppers Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### corgi ![corgi](images/corgi.jpg) #### samoyed ![samoyed](images/samoyed.jpg) #### shiba inu ![shiba inu](images/shiba_inu.jpg)
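The HuggingPics card lists example images but no inference code, so the following is a minimal sketch of running the classifier with the `transformers` image-classification pipeline. The repository path `<user>/rare-puppers` is a placeholder, not the actual model id.

```python
from transformers import pipeline

# "<user>/rare-puppers" is a placeholder -- point it at this repository's real path.
classifier = pipeline("image-classification", model="<user>/rare-puppers")

# Classify one of the example images shipped with the repo (a local file path or URL works).
print(classifier("images/corgi.jpg"))
```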
ArJakusz/DialoGPT-small-stark
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: fi license: cc-by-4.0 --- # FinBERT fine-tuned with the FinnSentiment dataset This is a FinBERT model fine-tuned with the [FinnSentiment dataset](https://arxiv.org/pdf/2012.02613.pdf). 90% of sentences were used for training and 10% for evaluation. ## Evaluation results |Metric|Score| |--|--| |Accuracy|0.8639028475711893| |F1-score|0.8643024701696561| |Precision|0.8653866541244811| |Recall|0.8639028475711893| |Matthews|0.6764924917164834| ![kuva.png](https://s3.amazonaws.com/moonup/production/uploads/1661156173672-61561a042387f285c1f8aec3.png) ## License FinBERT-FinnSentiment is licensed under the [CC BY 4.0 License](https://creativecommons.org/licenses/by/4.0/deed.en) (same as FinBERT and the FinnSentiment dataset).
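A hedged usage sketch for the FinBERT-FinnSentiment classifier described above; the card does not state the repository id, so the path below is an assumption to be replaced with the actual location of the checkpoint.

```python
from transformers import pipeline

# "<namespace>/finbert-finnsentiment" is an assumed id -- substitute the real repository path.
fi_sentiment = pipeline("sentiment-analysis", model="<namespace>/finbert-finnsentiment")
print(fi_sentiment("Tämä elokuva oli aivan loistava!"))  # Finnish: "This movie was absolutely great!"
```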
Aran/DialoGPT-medium-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: split metrics: - name: Accuracy type: accuracy value: 0.9265 - name: F1 type: f1 value: 0.9264826040883781 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2108 - Accuracy: 0.9265 - F1: 0.9265 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8108 | 1.0 | 250 | 0.3101 | 0.903 | 0.8995 | | 0.2423 | 2.0 | 500 | 0.2108 | 0.9265 | 0.9265 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.10.3
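The emotion card reports metrics but no inference example; a minimal sketch follows. The model name is taken from the card's `model-index` and has no namespace, so prepend the owner's namespace when loading from the Hub.

```python
from transformers import pipeline

# Assumes the checkpoint is published under the name given in the card; add the owner's namespace.
emotion = pipeline("text-classification", model="distilbert-base-uncased-finetuned-emotion")
print(emotion("I'm thrilled that my paper got accepted!"))
```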
Aravinth/test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2021-12-03T02:26:55Z"
--- tags: - generated_from_trainer datasets: - wmt16_en_ro_pre_processed model-index: - name: t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
ArcQ/gpt-experiments
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer datasets: - wmt16_en_ro_pre_processed metrics: - bleu model-index: - name: t5-tiny-random-length-96-learning_rate-0.0002-weight_decay-0.01-finetuned-en-to-ro results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16_en_ro_pre_processed type: wmt16_en_ro_pre_processed args: enro metrics: - name: Bleu type: bleu value: 0.0617 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-tiny-random-length-96-learning_rate-0.0002-weight_decay-0.01-finetuned-en-to-ro This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset. It achieves the following results on the evaluation set: - Loss: 4.6426 - Bleu: 0.0617 - Gen Len: 8.9895 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:------:|:-------:| | 4.5828 | 1.0 | 76290 | 5.5397 | 0.0089 | 8.981 | | 4.187 | 2.0 | 152580 | 5.2241 | 0.0172 | 8.989 | | 3.9612 | 3.0 | 228870 | 5.0092 | 0.034 | 8.988 | | 3.8151 | 4.0 | 305160 | 4.8688 | 0.0365 | 8.9865 | | 3.7162 | 5.0 | 381450 | 4.7656 | 0.0469 | 8.9865 | | 3.6498 | 6.0 | 457740 | 4.6874 | 0.0531 | 8.9885 | | 3.6147 | 7.0 | 534030 | 4.6612 | 0.0585 | 8.9875 | | 3.5972 | 8.0 | 610320 | 4.6426 | 0.0617 | 8.9895 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
Arcktosh/DialoGPT-small-rick
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
"2021-12-03T02:26:05Z"
--- tags: - generated_from_trainer datasets: - wmt16_en_ro_pre_processed model-index: - name: t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
ArenaGrenade/char-cnn
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer datasets: - wmt16_en_ro_pre_processed metrics: - bleu model-index: - name: t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.02-finetuned-en-to-ro results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16_en_ro_pre_processed type: wmt16_en_ro_pre_processed args: enro metrics: - name: Bleu type: bleu value: 0.0002 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.02-finetuned-en-to-ro This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset. It achieves the following results on the evaluation set: - Loss: 6.4854 - Bleu: 0.0002 - Gen Len: 9.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 6.2568 | 1.0 | 76290 | 6.4854 | 0.0002 | 9.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
AriakimTaiyo/DialoGPT-medium-Kumiko
[ "conversational" ]
conversational
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2021-11-30T13:59:00Z"
--- tags: - generated_from_trainer datasets: - wmt16_en_ro_pre_processed metrics: - bleu model-index: - name: tiny-mbart-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16_en_ro_pre_processed type: wmt16_en_ro_pre_processed args: enro metrics: - name: Bleu type: bleu value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mbart-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro This model is a fine-tuned version of [sshleifer/tiny-mbart](https://huggingface.co/sshleifer/tiny-mbart) on the wmt16_en_ro_pre_processed dataset. It achieves the following results on the evaluation set: - Loss: 8.4656 - Bleu: 0.0 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:| | 8.2268 | 1.0 | 76290 | 8.4656 | 0.0 | 20.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
AriakimTaiyo/DialoGPT-small-Rikka
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
"2021-11-30T17:09:09Z"
--- tags: - generated_from_trainer datasets: - wmt16_en_ro_pre_processed metrics: - bleu model-index: - name: tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16_en_ro_pre_processed type: wmt16_en_ro_pre_processed args: enro metrics: - name: Bleu type: bleu value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro This model is a fine-tuned version of [sshleifer/tiny-mbart](https://huggingface.co/sshleifer/tiny-mbart) on the wmt16_en_ro_pre_processed dataset. It achieves the following results on the evaluation set: - Loss: 8.5137 - Bleu: 0.0 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:| | 8.2817 | 1.0 | 76290 | 8.5137 | 0.0 | 20.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
Aries/T5_question_answering
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
5
"2022-01-30T21:17:59Z"
--- language: - en datasets: - c4 - squad tags: - text2text-generation widget: - text: "question: What is the atomic number for oxygen? context: Oxygen is a chemical element with symbol O and atomic number 8." - text: "question: What is the chemical symbol of Oxygen? context: Oxygen is a chemical element with symbol O and atomic number 8." license: apache-2.0 --- T5-small for QA --- [Google's T5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) pre-trained on the [C4](https://huggingface.co/datasets/c4) dataset, fine-tuned for Question-Answering on [SQuAD v2](https://huggingface.co/datasets/squad_v2) with the following hyperparameters: ``` optimizer=adamw_hf learning_rate=3e-5 adam_beta1=0.9 adam_beta2=0.999 adam_epsilon=1e-08 num_train_epochs=2 per_device_train_batch_size=12 ``` Usage --- The input [context and question] has to be prepared in a specific way as follows: ```python from transformers import pipeline def prep_input(_context, _question): return " ".join(["question:", _question.strip(), "context:", _context.strip()]) t5qa = pipeline("text2text-generation", "fgaim/t5-small-squad-v2") context = """ Oxygen is a chemical element with symbol O and atomic number 8. It is a member of the chalcogen group on the periodic table and is a highly reactive nonmetal and oxidizing agent that readily forms compounds (notably oxides) with most elements. By mass, oxygen is the third-most abundant element in the universe, after hydrogen and helium. At standard temperature and pressure, two atoms of the element bind to form dioxygen, a colorless and odorless diatomic gas with the formula O. """ t5qa(prep_input(context, "How many atoms combine to form dioxygen?")) # [{'generated_text': 'two'}] t5qa(prep_input(context, "What element makes up almost half of the earth's crust by mass?")) # [{'generated_text': 'oxygen'}] t5qa(prep_input(context, "What are the most abundent elements of the universe by mass?")) # [{'generated_text': 'hydrogen and helium'}] ```
Arina/Erine
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: ti widget: - text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር" datasets: - TLMD - NTC metrics: - f1 - precision - recall - accuracy model-index: - name: tielectra-small-pos results: - task: name: Token Classification type: token-classification metrics: - name: F1 type: f1 value: 0.9456 - name: Precision type: precision value: 0.9456 - name: Recall type: recall value: 0.9456 - name: Accuracy type: accuracy value: 0.9456 --- # Tigrinya POS tagging with TiELECTRA This model is a fine-tuned version of [TiELECTRA](https://huggingface.co/fgaim/tielectra-small) on the NTC-v1 dataset (Tedla et al. 2016). ## Basic usage ```python from transformers import pipeline ti_pos = pipeline("token-classification", model="fgaim/tielectra-small-pos") ti_pos("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር") ``` ## Training ### Hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Results The model achieves the following results on the test set: - Loss: 0.2236 - Adj Precision: 0.9148 - Adj Recall: 0.9192 - Adj F1: 0.9170 - Adj Number: 1670 - Adv Precision: 0.8228 - Adv Recall: 0.8058 - Adv F1: 0.8142 - Adv Number: 484 - Con Precision: 0.9793 - Con Recall: 0.9743 - Con F1: 0.9768 - Con Number: 972 - Fw Precision: 0.5 - Fw Recall: 0.3214 - Fw F1: 0.3913 - Fw Number: 28 - Int Precision: 0.64 - Int Recall: 0.6154 - Int F1: 0.6275 - Int Number: 26 - N Precision: 0.9525 - N Recall: 0.9587 - N F1: 0.9556 - N Number: 3992 - Num Precision: 0.9825 - Num Recall: 0.9372 - Num F1: 0.9593 - Num Number: 239 - N Prp Precision: 0.9132 - N Prp Recall: 0.9404 - N Prp F1: 0.9266 - N Prp Number: 470 - N V Precision: 0.9667 - N V Recall: 0.9760 - N V F1: 0.9713 - N V Number: 416 - Pre Precision: 0.9645 - Pre Recall: 0.9592 - Pre F1: 0.9619 - Pre Number: 907 - Pro Precision: 0.9395 - Pro Recall: 0.9079 - Pro F1: 0.9234 - Pro Number: 445 - Pun Precision: 1.0 - Pun Recall: 0.9994 - Pun F1: 0.9997 - Pun Number: 1607 - Unc Precision: 0.9286 - Unc Recall: 0.8125 - Unc F1: 0.8667 - Unc Number: 16 - V Precision: 0.7609 - V Recall: 0.8974 - V F1: 0.8235 - V Number: 78 - V Aux Precision: 0.9581 - V Aux Recall: 0.9786 - V Aux F1: 0.9682 - V Aux Number: 654 - V Ger Precision: 0.9183 - V Ger Recall: 0.9415 - V Ger F1: 0.9297 - V Ger Number: 513 - V Imf Precision: 0.9473 - V Imf Recall: 0.9442 - V Imf F1: 0.9458 - V Imf Number: 914 - V Imv Precision: 0.8163 - V Imv Recall: 0.5714 - V Imv F1: 0.6723 - V Imv Number: 70 - V Prf Precision: 0.8927 - V Prf Recall: 0.8776 - V Prf F1: 0.8851 - V Prf Number: 294 - V Rel Precision: 0.9535 - V Rel Recall: 0.9485 - V Rel F1: 0.9510 - V Rel Number: 757 - Overall Precision: 0.9456 - Overall Recall: 0.9456 - Overall F1: 0.9456 - Overall Accuracy: 0.9456 ### Framework versions - Transformers 4.10.3 - Pytorch 1.9.0+cu111 - Datasets 1.10.2 - Tokenizers 0.10.1 ## Citation If you use this model in your product or research, please cite as follows: ``` @article{Fitsum2021TiPLMs, author= {Fitsum Gaim and Wonsuk Yang and Jong C. Park}, title= {Monolingual Pre-trained Language Models for Tigrinya}, year= 2021, publisher= {WiNLP 2021/EMNLP 2021} } ``` ## References ``` Tedla, Y., Yamamoto, K. & Marasinghe, A. 2016. Tigrinya Part-of-Speech Tagging with Morphological Patterns and the New Nagaoka Tigrinya Corpus. International Journal Of Computer Applications 146 pp. 33-41 (2016). ```
ArjunKadya/HuggingFace
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: ti widget: - text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር" metrics: - f1 - precision - recall - accuracy model-index: - name: tielectra-small-sentiment results: - task: name: Text Classification type: text-classification metrics: - name: F1 type: f1 value: 0.8228962818003914 - name: Precision type: precision value: 0.8055555555555556 - name: Recall type: recall value: 0.841 - name: Accuracy type: accuracy value: 0.819 --- # Sentiment Analysis for Tigrinya with TiELECTRA small This model is a fine-tuned version of [TiELECTRA small](https://huggingface.co/fgaim/tielectra-small) on a YouTube comments Sentiment Analysis dataset for Tigrinya (Tela et al. 2020). ## Basic usage ```python from transformers import pipeline ti_sent = pipeline("sentiment-analysis", model="fgaim/tielectra-small-sentiment") ti_sent("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር") ``` ## Training ### Hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Results The model achieves the following results on the evaluation set: - F1: 0.8229 - Precision: 0.8056 - Recall: 0.841 - Accuracy: 0.819 - Loss: 0.4299 ### Framework versions - Transformers 4.10.3 - Pytorch 1.9.0+cu111 - Datasets 1.10.2 - Tokenizers 0.10.1 ## Citation If you use this model in your product or research, please cite as follows: ``` @article{Fitsum2021TiPLMs, author={Fitsum Gaim and Wonsuk Yang and Jong C. Park}, title={Monolingual Pre-trained Language Models for Tigrinya}, year=2021, publisher= {WiNLP 2021/EMNLP 2021} } ``` ## References ``` Tela, A., Woubie, A. and Hautamäki, V. 2020. Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya. ArXiv, abs/2006.07698. ```
Arkadiusz/Test-model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: ti widget: - text: "ዓቕሚ መንእሰይ ኤርትራ [MASK] ተራእዩ" --- # Pre-trained ELECTRA small for Tigrinya Language We pre-train ELECTRA small on the [TLMD](https://zenodo.org/record/5139094) dataset, with over 40 million tokens. Contained are trained Flax and PyTorch models. ## Hyperparameters The hyperparameters corresponding to model sizes mentioned above are as follows: | Model Size | L | AH | HS | FFN | P | Seq | |------------|----|----|-----|------|------|------| | SMALL | 12 | 4 | 256 | 1024 | 14M | 512 | (L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters; Seq = maximum sequence length.) ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3 ## Citation If you use this model in your product or research, please cite as follows: ``` @article{Fitsum2021TiPLMs, author={Fitsum Gaim and Wonsuk Yang and Jong C. Park}, title={Monolingual Pre-trained Language Models for Tigrinya}, year=2021, publisher={WiNLP 2021 at EMNLP 2021} } ```
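The card defines a fill-mask widget but no code; below is a sketch using the `fill-mask` pipeline. The repository id `fgaim/tielectra-small` is inferred from the companion fine-tuned cards above, not stated in this card itself.

```python
from transformers import pipeline

# Repository id inferred from the related TiELECTRA cards; verify before use.
fill_mask = pipeline("fill-mask", model="fgaim/tielectra-small")
print(fill_mask("ዓቕሚ መንእሰይ ኤርትራ [MASK] ተራእዩ"))
```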
ArnaudPannatier/MLPMixer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: ti widget: - text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር" datasets: - TLMD - NTC metrics: - f1 - precision - recall - accuracy model-index: - name: tiroberta-base-pos results: - task: name: Token Classification type: token-classification metrics: - name: F1 type: f1 value: 0.9562 - name: Precision type: precision value: 0.9562 - name: Recall type: recall value: 0.9562 - name: Accuracy type: accuracy value: 0.9562 --- # Tigrinya POS tagging with TiRoBERTa This model is a fine-tuned version of [TiRoBERTa](https://huggingface.co/fgaim/tiroberta) on the NTC-v1 dataset (Tedla et al. 2016). ## Training ### Hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Results The model achieves the following results on the test set: - Loss: 0.3194 - Adj Precision: 0.9219 - Adj Recall: 0.9335 - Adj F1: 0.9277 - Adj Number: 1670 - Adv Precision: 0.8297 - Adv Recall: 0.8554 - Adv F1: 0.8423 - Adv Number: 484 - Con Precision: 0.9844 - Con Recall: 0.9763 - Con F1: 0.9804 - Con Number: 972 - Fw Precision: 0.7895 - Fw Recall: 0.5357 - Fw F1: 0.6383 - Fw Number: 28 - Int Precision: 0.6552 - Int Recall: 0.7308 - Int F1: 0.6909 - Int Number: 26 - N Precision: 0.9650 - N Recall: 0.9662 - N F1: 0.9656 - N Number: 3992 - Num Precision: 0.9747 - Num Recall: 0.9665 - Num F1: 0.9706 - Num Number: 239 - N Prp Precision: 0.9308 - N Prp Recall: 0.9447 - N Prp F1: 0.9377 - N Prp Number: 470 - N V Precision: 0.9854 - N V Recall: 0.9736 - N V F1: 0.9794 - N V Number: 416 - Pre Precision: 0.9722 - Pre Recall: 0.9625 - Pre F1: 0.9673 - Pre Number: 907 - Pro Precision: 0.9448 - Pro Recall: 0.9236 - Pro F1: 0.9341 - Pro Number: 445 - Pun Precision: 1.0 - Pun Recall: 0.9994 - Pun F1: 0.9997 - Pun Number: 1607 - Unc Precision: 1.0 - Unc Recall: 0.875 - Unc F1: 0.9333 - Unc Number: 16 - V Precision: 0.8780 - V Recall: 0.9231 - V F1: 0.9 - V Number: 78 - V Aux Precision: 0.9685 - V Aux Recall: 0.9878 - V Aux F1: 0.9780 - V Aux Number: 654 - V Ger Precision: 0.9388 - V Ger Recall: 0.9571 - V Ger F1: 0.9479 - V Ger Number: 513 - V Imf Precision: 0.9634 - V Imf Recall: 0.9497 - V Imf F1: 0.9565 - V Imf Number: 914 - V Imv Precision: 0.8793 - V Imv Recall: 0.7286 - V Imv F1: 0.7969 - V Imv Number: 70 - V Prf Precision: 0.8960 - V Prf Recall: 0.9082 - V Prf F1: 0.9020 - V Prf Number: 294 - V Rel Precision: 0.9678 - V Rel Recall: 0.9538 - V Rel F1: 0.9607 - V Rel Number: 757 - Overall Precision: 0.9562 - Overall Recall: 0.9562 - Overall F1: 0.9562 - Overall Accuracy: 0.9562 ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3 ## Citation If you use this model in your product or research, please cite as follows: ``` @article{Fitsum2021TiPLMs, author={Fitsum Gaim and Wonsuk Yang and Jong C. Park}, title={Monolingual Pre-trained Language Models for Tigrinya}, year=2021, publisher={WiNLP 2021/EMNLP 2021} } ``` ## References ``` Tedla, Y., Yamamoto, K. & Marasinghe, A. 2016. Tigrinya Part-of-Speech Tagging with Morphological Patterns and the New Nagaoka Tigrinya Corpus. International Journal Of Computer Applications 146 pp. 33-41 (2016). ```
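Unlike the TiELECTRA POS card, this one omits a usage snippet; the sketch below mirrors that card's `token-classification` pipeline call. The repository id `fgaim/tiroberta-pos` is an assumption.

```python
from transformers import pipeline

# "fgaim/tiroberta-pos" is assumed by analogy with the TiELECTRA POS model; verify the path.
ti_pos = pipeline("token-classification", model="fgaim/tiroberta-pos")
print(ti_pos("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"))
```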
Arnold/common_voiceha
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: ti widget: - text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር" datasets: - TLMD metrics: - accuracy - f1 - precision - recall model-index: - name: tiroberta-sentiment results: - task: name: Text Classification type: text-classification metrics: - name: Accuracy type: accuracy value: 0.828 - name: F1 type: f1 value: 0.8476527900797165 - name: Precision type: precision value: 0.760731319554849 - name: Recall type: recall value: 0.957 --- # Sentiment Analysis for Tigrinya with TiRoBERTa This model is a fine-tuned version of [TiRoBERTa](https://huggingface.co/fgaim/roberta-base-tigrinya) on a YouTube comments Sentiment Analysis dataset for Tigrinya (Tela et al. 2020). ## Basic usage ```python from transformers import pipeline ti_sent = pipeline("sentiment-analysis", model="fgaim/tiroberta-sentiment") ti_sent("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር") ``` ## Training ### Hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Results It achieves the following results on the evaluation set: - F1: 0.8477 - Precision: 0.7607 - Recall: 0.957 - Accuracy: 0.828 - Loss: 0.6796 ### Framework versions - Transformers 4.10.3 - Pytorch 1.9.0+cu111 - Datasets 1.10.2 - Tokenizers 0.10.1 ## Citation If you use this model in your product or research, please cite as follows: ``` @article{Fitsum2021TiPLMs, author={Fitsum Gaim and Wonsuk Yang and Jong C. Park}, title={Monolingual Pre-trained Language Models for Tigrinya}, year=2021, publisher={WiNLP 2021/EMNLP 2021} } ``` ## References ``` Tela, A., Woubie, A. and Hautamäki, V. 2020. Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya. ArXiv, abs/2006.07698. ```
Arnold/wav2vec2-large-xlsr-turkish-demo-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en tags: - text-classification - sentiment-analysis - sentiment-classification - targeted-sentiment-classification - target-depentent-sentiment-classification license: "apache-2.0" datasets: "fhamborg/news_sentiment_newsmtsc" --- # NewsSentiment: easy-to-use, high-quality target-dependent sentiment classification for news articles ## Important: [use our PyPI package](https://pypi.org/project/NewsSentiment/) instead of this model on the Hub The Huggingface Hub architecture currently [does not support](https://github.com/huggingface/transformers/issues/14785) target-dependent sentiment classification since you cannot provide the required inputs, i.e., sentence and target. Thus, we recommend that you use our easy-to-use [PyPI package NewsSentiment](https://pypi.org/project/NewsSentiment/). ## Description This model is the currently [best performing](https://aclanthology.org/2021.eacl-main.142.pdf) targeted sentiment classifier for news articles. In contrast to regular sentiment classification, targeted sentiment classification allows you to provide a target in a sentence. Only for this target, the sentiment is then predicted. This is more reliable in many cases, as demonstrated by the following simplistic example: "I like Bert, but I hate Robert." This model is also available as an easy-to-use PyPI package named [`NewsSentiment`](https://pypi.org/project/NewsSentiment/) and in its original GitHub repository named [`NewsMTSC`](https://github.com/fhamborg/NewsMTSC), where you will find the dataset the model was trained on, other models for sentiment classification, and a training and testing framework. More information on the model and the dataset (consisting of more than 10k sentences sampled from news articles, each labeled and agreed upon by at least 5 annotators) can be found in our [EACL paper](https://aclanthology.org/2021.eacl-main.142.pdf). The dataset, the model, and its source code can be viewed in our [GitHub repository](https://github.com/fhamborg/NewsMTSC). We recommend to use our [PyPI package](https://pypi.org/project/NewsSentiment/) for sentiment classification since the Huggingface Hub platform seems to [not support](https://github.com/huggingface/transformers/issues/14785) target-dependent sentiment classification. # How to cite If you use the dataset or model, please cite our [paper](https://www.aclweb.org/anthology/2021.eacl-main.142/) ([PDF](https://www.aclweb.org/anthology/2021.eacl-main.142.pdf)): ``` @InProceedings{Hamborg2021b, author = {Hamborg, Felix and Donnay, Karsten}, title = {NewsMTSC: (Multi-)Target-dependent Sentiment Classification in News Articles}, booktitle = {Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)}, year = {2021}, month = {Apr.}, location = {Virtual Event}, } ```
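Since the card directs users to the `NewsSentiment` PyPI package rather than the Hub widget, a sketch of the package's target-dependent interface is shown below. The `infer_from_text(left_context, target, right_context)` call follows the package's README at the time of writing; check the PyPI page for the current API.

```python
# pip install NewsSentiment
from NewsSentiment import TargetSentimentClassifier

tsc = TargetSentimentClassifier()

# Target-dependent classification: sentiment is predicted for "Bert" only,
# given the left and right context around it.
sentiment = tsc.infer_from_text("I like ", "Bert", ", but I hate Robert.")
print(sentiment[0])
```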
Aron/distilbert-base-uncased-finetuned-emotion
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
null
--- language: de license: cc-by-sa-4.0 datasets: - germeval_14 tags: - German - de - NER --- # BERT-DE-NER ## What is it? This is a German BERT model fine-tuned for named entity recognition. ## Base model & training This model is based on [bert-base-german-dbmdz-cased](https://huggingface.co/bert-base-german-dbmdz-cased) and has been fine-tuned for NER on the training data from [GermEval2014](https://sites.google.com/site/germeval2014ner). ## Model results The results on the test data from GermEval2014 are (entities only): | Precision | Recall | F1-Score | |----------:|-------:|---------:| | 0.817 | 0.842 | 0.829 | ## How to use ```Python >>> from transformers import pipeline >>> classifier = pipeline('ner', model="fhswf/bert_de_ner") >>> classifier('Von der Organisation „medico international“ hieß es, die EU entziehe sich seit vielen Jahren der Verantwortung für die Menschen an ihren Außengrenzen.') [{'word': 'med', 'score': 0.9996621608734131, 'entity': 'B-ORG', 'index': 6}, {'word': '##ico', 'score': 0.9995362162590027, 'entity': 'I-ORG', 'index': 7}, {'word': 'international', 'score': 0.9996932744979858, 'entity': 'I-ORG', 'index': 8}, {'word': 'eu', 'score': 0.9997008442878723, 'entity': 'B-ORG', 'index': 14}] ```
Arpita/opus-mt-en-ro-finetuned-syn-to-react
[ "pytorch", "tensorboard", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - generated_from_trainer datasets: - ncbi_disease metrics: - precision - recall - f1 - accuracy model_index: - name: biobert_v1.1_pubmed-finetuned-ner-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: ncbi_disease type: ncbi_disease args: ncbi_disease metric: name: Accuracy type: accuracy value: 0.9829142288061745 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert_v1.1_pubmed-finetuned-ner-finetuned-ner This model is a fine-tuned version of [fidukm34/biobert_v1.1_pubmed-finetuned-ner](https://huggingface.co/fidukm34/biobert_v1.1_pubmed-finetuned-ner) on the ncbi_disease dataset. It achieves the following results on the evaluation set: - Loss: 0.0715 - Precision: 0.8464 - Recall: 0.8872 - F1: 0.8663 - Accuracy: 0.9829 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 340 | 0.0715 | 0.8464 | 0.8872 | 0.8663 | 0.9829 | ### Framework versions - Transformers 4.8.1 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
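The card lists only training details; a hedged inference sketch follows, using the base checkpoint named in the card (`fidukm34/biobert_v1.1_pubmed-finetuned-ner`) as a stand-in for this fine-tuned model's own repository path.

```python
from transformers import pipeline

# Base checkpoint from the card used as a stand-in; swap in this model's own repo id.
ner = pipeline(
    "token-classification",
    model="fidukm34/biobert_v1.1_pubmed-finetuned-ner",
    aggregation_strategy="simple",  # merge word-piece predictions into whole entities
)
print(ner("The patient was diagnosed with cystic fibrosis and type 2 diabetes."))
```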
Arpita/opus-mt-en-ro-finetuned-synthon-to-reactant
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer datasets: - ncbi_disease metrics: - precision - recall - f1 - accuracy model_index: - name: biobert_v1.1_pubmed-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: ncbi_disease type: ncbi_disease args: ncbi_disease metric: name: Accuracy type: accuracy value: 0.9827274990663513 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert_v1.1_pubmed-finetuned-ner This model is a fine-tuned version of [monologg/biobert_v1.1_pubmed](https://huggingface.co/monologg/biobert_v1.1_pubmed) on the ncbi_disease dataset. It achieves the following results on the evaluation set: - Loss: 0.0657 - Precision: 0.8338 - Recall: 0.8933 - F1: 0.8625 - Accuracy: 0.9827 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 340 | 0.0612 | 0.8268 | 0.85 | 0.8382 | 0.9806 | | 0.0987 | 2.0 | 680 | 0.0604 | 0.8397 | 0.8848 | 0.8616 | 0.9829 | | 0.0272 | 3.0 | 1020 | 0.0657 | 0.8338 | 0.8933 | 0.8625 | 0.9827 | ### Framework versions - Transformers 4.8.1 - Pytorch 1.9.0 - Datasets 1.6.2 - Tokenizers 0.10.3
ArtemisZealot/DialoGTP-small-Qkarin
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
This model can measure semantic similarity between pairs of texts containing figurative language. As far as we know, this model works slightly better than sup-simCSE-roberta-base. For example: **sentence 1**: I have been in seventh heaven since Harry entered my life . **sentence 2**: I have been in very happy since Harry entered my life. the cosine score of simcse: 0.897 the cosine score of us: 0.897 ------------------------------------------------------------------- **sentence 1**: I have been in seventh heaven since Harry entered my life . **sentence 2**: I have been in pain since Harry entered my life . the cosine score of simcse: 0.846 the cosine score of us: 0.753 -------------------------------------------------- Measuring the semantic similarity of figurative language from the sentence-embedding perspective is still a big challenge. Unsupervised models may be of little use here, since the key is to infer the literal meaning of the figurative expression and annotated data is rare.
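The card does not show how to query the model. Below is a minimal sketch of how a sentence-similarity model of this kind is typically used; the checkpoint id is a hypothetical placeholder (the card does not state the repository), and mean pooling over the last hidden states is an assumed pooling strategy.

```python
# A minimal sketch, not the authors' official usage: the checkpoint id and the
# mean-pooling strategy below are assumptions, since the card does not specify them.
import torch
from transformers import AutoTokenizer, AutoModel

checkpoint = "figurative-nlp/figurative-similarity"  # hypothetical repository id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

def embed(text):
    # Encode a sentence and mean-pool the last hidden states into a single vector.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

a = embed("I have been in seventh heaven since Harry entered my life.")
b = embed("I have been in pain since Harry entered my life.")
print(torch.nn.functional.cosine_similarity(a, b).item())
```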
ArthurBaia/bert-base-portuguese-cased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2022-02-17T08:38:12Z"
This model can convert a literal expression into a figurative/metaphorical expression. Below is the usage of our model:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("figurative-nlp/t5-figurative-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("figurative-nlp/t5-figurative-generation")

input_ids = tokenizer(
    "research is <m> very difficult </m> for me.", return_tensors="pt"
).input_ids  # Batch size 1
outputs = model.generate(input_ids, num_beams=5)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
# result: research is a tough nut to crack for me.
```

For example (the &lt;m&gt; and &lt;/m&gt; markers tell the model which literal expression we want to convert into a figurative expression): **Input**: as of a cloud that softly &lt;m&gt; covers &lt;/m&gt; the sun. **Output**: as of a cloud that softly drapes over the sun. **Input**: that car coming around the corner &lt;m&gt; surprised me. &lt;/m&gt; **Output**: that car coming around the corner knocked my socks off. Note: the figurative language here includes metaphor, idiom and simile. We do not guarantee that the generated results will be satisfactory to you. We are trying to improve the model.
Aruden/DialoGPT-medium-harrypotterall
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/huggingface/prunebert-base-uncased-6-finepruned-w-distil-squad"
headers = {"Authorization": "Bearer api_UXqrzQBiZKXaWxstVwEKcYvHQpGSGiQGbr"}

def query(payload):
    # POST the payload to the hosted Inference API and return the parsed JSON answer.
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": {
        "question": "What's my name?",
        "context": "My name is Clara and I live in Berkeley.",
    },
})
print(output)
```
ArvinZhuang/BiTAG-t5-large
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
4
null
# GPT2 base style transfer paraphraser This is the trained base-model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al. Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author. ## Citation If you found this model useful, please cite the original work: ``` @inproceedings{style20, author={Kalpesh Krishna and John Wieting and Mohit Iyyer}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2020", Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation}, } ```
Ateeb/asd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2021-07-31T19:27:47Z"
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: llama_or_what results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.3125 --- # llama_or_what Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### alpaca ![alpaca](images/alpaca.jpg) #### guanaco ![guanaco](images/guanaco.jpg) #### llama ![llama](images/llama.jpg) #### vicuna ![vicuna](images/vicuna.jpg)
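No usage snippet accompanies the card above. Below is a minimal sketch of querying a HuggingPics-style image classifier with the `transformers` pipeline; the checkpoint id is a placeholder assumption, since the card does not state which repository the classifier was pushed to.

```python
# A minimal sketch, not official usage. Replace the placeholder checkpoint id with
# the repository the HuggingPics notebook actually pushed this classifier to.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="your-username/llama_or_what",  # placeholder / assumed repository id
)
# Accepts a local file path or an image URL; returns scores for
# 'alpaca', 'guanaco', 'llama' and 'vicuna'.
print(classifier("images/llama.jpg"))
```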
Augustvember/test
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
"2021-01-13T18:59:56Z"
--- tags: - flair - token-classification - sequence-tagger-model language: nl datasets: - conll2003 widget: - text: "George Washington ging naar Washington." --- # Dutch NER in Flair (default model) This is the standard 4-class NER model for Dutch that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **92,58** (CoNLL-03) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on Transformer embeddings and LSTM-CRF. --- # Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-dutch") # make example sentence sentence = Sentence("George Washington ging naar Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (0.997)] Span [5]: "Washington" [− Labels: LOC (0.9996)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging naar Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import CONLL_03_DUTCH from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the corpus corpus: Corpus = CONLL_03_DUTCH() # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize embeddings embeddings = TransformerWordEmbeddings('wietsedv/bert-base-dutch-cased') # 5. initialize sequence tagger tagger: SequenceTagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer trainer: ModelTrainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-dutch', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik-etal-2019-flair, title = "{FLAIR}: An Easy-to-Use Framework for State-of-the-Art {NLP}", author = "Akbik, Alan and Bergmann, Tanja and Blythe, Duncan and Rasul, Kashif and Schweter, Stefan and Vollgraf, Roland", booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics (Demonstrations)", year = "2019", url = "https://www.aclweb.org/anthology/N19-4010", pages = "54--59", } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
Aviora/phobert-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - flair - token-classification - sequence-tagger-model language: - en - de - nl - es datasets: - conll2003 widget: - text: "George Washington ging nach Washington" --- ## 4-Language NER in Flair (English, German, Dutch and Spanish) This is the fast 4-class NER model for 4 CoNLL-03 languages that ships with [Flair](https://github.com/flairNLP/flair/). Also kind of works for related languages like French. F1-Score: **91,51** (CoNLL-03 English), **85,72** (CoNLL-03 German revised), **86,22** (CoNLL-03 Dutch), **85,78** (CoNLL-03 Spanish) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-multi-fast") # make example sentence in any of the four languages sentence = Sentence("George Washington ging nach Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (0.9977)] Span [5]: "Washington" [− Labels: LOC (0.9895)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging nach Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import CONLL_03, CONLL_03_GERMAN, CONLL_03_DUTCH, CONLL_03_SPANISH from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the multi-language corpus corpus: Corpus = MultiCorpus([ CONLL_03(), # English corpus CONLL_03_GERMAN(), # German corpus CONLL_03_DUTCH(), # Dutch corpus CONLL_03_SPANISH(), # Spanish corpus ]) # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('glove'), # FastText embeddings WordEmbeddings('de'), # contextual string embeddings, forward FlairEmbeddings('multi-forward-fast'), # contextual string embeddings, backward FlairEmbeddings('multi-backward-fast'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-multi-fast', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following papers when using this model. 
``` @misc{akbik2019multilingual, title={Multilingual sequence labeling with one model}, author={Akbik, Alan and Bergmann, Tanja and Vollgraf, Roland}, booktitle = {{NLDL} 2019, Northern Lights Deep Learning Workshop}, year = {2019} } ``` ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ```
Ayah/GPT2-DBpedia
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
"2021-02-23T20:39:54Z"
--- tags: - flair - token-classification - sequence-tagger-model language: - en - de - fr - it - nl - pl - es - sv - da - no - fi - cs datasets: - ontonotes widget: - text: "Ich liebe Berlin, as they say." --- ## Multilingual Universal Part-of-Speech Tagging in Flair (fast model) This is the fast multilingual universal part-of-speech tagging model that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **92,88** (12 UD Treebanks covering English, German, French, Italian, Dutch, Polish, Spanish, Swedish, Danish, Norwegian, Finnish and Czech) Predicts universal POS tags: | **tag** | **meaning** | |---------------------------------|-----------| |ADJ | adjective | | ADP | adposition | | ADV | adverb | | AUX | auxiliary | | CCONJ | coordinating conjunction | | DET | determiner | | INTJ | interjection | | NOUN | noun | | NUM | numeral | | PART | particle | | PRON | pronoun | | PROPN | proper noun | | PUNCT | punctuation | | SCONJ | subordinating conjunction | | SYM | symbol | | VERB | verb | | X | other | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/upos-multi-fast") # make example sentence sentence = Sentence("Ich liebe Berlin, as they say. ") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('pos'): print(entity) ``` This yields the following output: ``` Span [1]: "Ich" [− Labels: PRON (0.9999)] Span [2]: "liebe" [− Labels: VERB (0.9999)] Span [3]: "Berlin" [− Labels: PROPN (0.9997)] Span [4]: "," [− Labels: PUNCT (1.0)] Span [5]: "as" [− Labels: SCONJ (0.9991)] Span [6]: "they" [− Labels: PRON (0.9998)] Span [7]: "say" [− Labels: VERB (0.9998)] Span [8]: "." [− Labels: PUNCT (1.0)] ``` So, the words "*Ich*" and "*they*" are labeled as **pronouns** (PRON), while "*liebe*" and "*say*" are labeled as **verbs** (VERB) in the multilingual sentence "*Ich liebe Berlin, as they say*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import MultiCorpus from flair.datasets import UD_ENGLISH, UD_GERMAN, UD_FRENCH, UD_ITALIAN, UD_POLISH, UD_DUTCH, UD_CZECH, \ UD_DANISH, UD_SPANISH, UD_SWEDISH, UD_NORWEGIAN, UD_FINNISH from flair.embeddings import StackedEmbeddings, FlairEmbeddings # 1. make a multi corpus consisting of 12 UD treebanks (in_memory=False here because this corpus becomes large) corpus = MultiCorpus([ UD_ENGLISH(in_memory=False), UD_GERMAN(in_memory=False), UD_DUTCH(in_memory=False), UD_FRENCH(in_memory=False), UD_ITALIAN(in_memory=False), UD_SPANISH(in_memory=False), UD_POLISH(in_memory=False), UD_CZECH(in_memory=False), UD_DANISH(in_memory=False), UD_SWEDISH(in_memory=False), UD_NORWEGIAN(in_memory=False), UD_FINNISH(in_memory=False), ]) # 2. what tag do we want to predict? tag_type = 'upos' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. 
initialize each embedding we use embedding_types = [ # contextual string embeddings, forward FlairEmbeddings('multi-forward-fast'), # contextual string embeddings, backward FlairEmbeddings('multi-backward-fast'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type, use_crf=False) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/upos-multi-fast', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
Ayham/albert_gpt2_summarization_xsum
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:xsum", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
"2020-01-22T10:38:16Z"
--- language: fr license: mit datasets: - flaubert metrics: - flue tags: - bert - language-model - flaubert - flue - french - flaubert-base - uncased --- # FlauBERT: Unsupervised Language Model Pre-training for French **FlauBERT** is a French BERT trained on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/ ) supercomputer. Along with FlauBERT comes [**FLUE**](https://github.com/getalp/Flaubert/tree/master/flue): an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language.For more details please refer to the [official website](https://github.com/getalp/Flaubert). ## FlauBERT models | Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters | | :------: | :---: | :---: | :---: | :---: | | `flaubert-small-cased` | 6 | 8 | 512 | 54 M | | `flaubert-base-uncased` | 12 | 12 | 768 | 137 M | | `flaubert-base-cased` | 12 | 12 | 768 | 138 M | | `flaubert-large-cased` | 24 | 16 | 1024 | 373 M | **Note:** `flaubert-small-cased` is partially trained so performance is not guaranteed. Consider using it for debugging purpose only. ## Using FlauBERT with Hugging Face's Transformers ```python import torch from transformers import FlaubertModel, FlaubertTokenizer # Choose among ['flaubert/flaubert_small_cased', 'flaubert/flaubert_base_uncased', # 'flaubert/flaubert_base_cased', 'flaubert/flaubert_large_cased'] modelname = 'flaubert/flaubert_base_cased' # Load pretrained model and tokenizer flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True) flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False) # do_lowercase=False if using cased models, True if using uncased ones sentence = "Le chat mange une pomme." 
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)]) last_layer = flaubert(token_ids)[0] print(last_layer.shape) # torch.Size([1, 8, 768]) -> (batch size x number of tokens x embedding dimension) # The BERT [CLS] token correspond to the first hidden state of the last layer cls_embedding = last_layer[:, 0, :] ``` **Notes:** if your `transformers` version is <=2.10.0, `modelname` should take one of the following values: ``` ['flaubert-small-cased', 'flaubert-base-uncased', 'flaubert-base-cased', 'flaubert-large-cased'] ``` ## References If you use FlauBERT or the FLUE Benchmark for your scientific publication, or if you find the resources in this repository useful, please cite one of the following papers: [LREC paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.302.pdf) ``` @InProceedings{le2020flaubert, author = {Le, Hang and Vial, Lo\"{i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb\'{e}, Beno\^{i}t and Besacier, Laurent and Schwab, Didier}, title = {FlauBERT: Unsupervised Language Model Pre-training for French}, booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference}, month = {May}, year = {2020}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {2479--2490}, url = {https://www.aclweb.org/anthology/2020.lrec-1.302} } ``` [TALN paper](https://hal.archives-ouvertes.fr/hal-02784776/) ``` @inproceedings{le2020flaubert, title = {FlauBERT: des mod{\`e}les de langue contextualis{\'e}s pr{\'e}-entra{\^\i}n{\'e}s pour le fran{\c{c}}ais}, author = {Le, Hang and Vial, Lo{\"\i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb{\'e}, Beno{\^\i}t and Besacier, Laurent and Schwab, Didier}, booktitle = {Actes de la 6e conf{\'e}rence conjointe Journ{\'e}es d'{\'E}tudes sur la Parole (JEP, 31e {\'e}dition), Traitement Automatique des Langues Naturelles (TALN, 27e {\'e}dition), Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R{\'E}CITAL, 22e {\'e}dition). Volume 2: Traitement Automatique des Langues Naturelles}, pages = {268--278}, year = {2020}, organization = {ATALA} } ```
AyushPJ/ai-club-inductions-21-nlp-distilBERT
[ "pytorch", "distilbert", "question-answering", "transformers", "generated_from_trainer", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - - thumbnail: tags: - - - license: datasets: - - metrics: - - --- # GPT-2 GERMAN ## Model description See [Open AI's model card](https://github.com/openai/gpt-2/blob/master/model_card.md) and [Huggingface's model card](https://huggingface.co/gpt2) for the original model. ## Intended uses & limitations #### How to use ```python def foo(bar): bar += 1 return bar ``` #### Limitations and bias On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 https://dl.acm.org/doi/10.1145/3442188.3445922 ``` @inproceedings{10.1145/3442188.3445922, author = {Bender, Emily M. and Gebru, Timnit and McMillan-Major, Angelina and Shmitchell, Shmargaret}, title = {On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜}, year = {2021}, isbn = {9781450383097}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3442188.3445922}, doi = {10.1145/3442188.3445922}, abstract = {The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. BERT, its variants, GPT-2/3, and others, most recently Switch-C, have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pretrained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.}, booktitle = {Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency}, pages = {610--623}, numpages = {14}, location = {Virtual Event, Canada}, series = {FAccT '21} } ``` ## Training data https://huggingface.co/datasets/german-nlp-group/german_common_crawl ```json {'url': 'http://my-shop.ru/shop/books/545473.html', 'date_download': '2016-10-20T19:38:58Z', 'digest': 'sha1:F62EMGYLZDIKF4UL5JZYU47KWGGUBT7T', 'length': 1155, 'nlines': 4, 'source_domain': 'my-shop.ru', 'title': 'Grammatikalische Liebeslieder. Methodische Vorschläge', 'raw_content': 'Grammatikalische Liebeslieder.
[....]', 'cc_segment': 'crawl-data/CC-MAIN-2016-44/segments/1476988717783.68/wet/CC-MAIN-20161020183837-00354-ip-10-171-6-4.ec2.internal.warc.wet.gz', 'original_nlines': 99, 'original_length': 2672, 'language': 'de', 'language_score': 1.0, 'perplexity': 283.0, 'bucket': 'head'}" ``` ## Training procedure TODO (See [training](training.md)) ## Eval results TODO: Self-BLEU, Diversity, and other metrics from https://arxiv.org/abs/1904.09751 ``` @inproceedings{DBLP:conf/iclr/HoltzmanBDFC20, author = {Ari Holtzman and Jan Buys and Li Du and Maxwell Forbes and Yejin Choi}, title = {The Curious Case of Neural Text Degeneration}, booktitle = {8th International Conference on Learning Representations, {ICLR} 2020, Addis Ababa, Ethiopia, April 26-30, 2020}, publisher = {OpenReview.net}, year = {2020}, url = {https://openreview.net/forum?id=rygGQyrFvH}, timestamp = {Thu, 21 Jan 2021 17:36:46 +0100}, biburl = {https://dblp.org/rec/conf/iclr/HoltzmanBDFC20.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### BibTeX entry and citation info Does the Huggingface hub generate DOIs? Otherwise maybe Kaggle or Zenodo to generate one. ```bibtex @inproceedings{..., year={2021} } ```
Bagus/wav2vec2-xlsr-japanese-speech-emotion-recognition
[ "pytorch", "wav2vec2", "audio-classification", "ja", "dataset:jtes", "transformers", "audio", "speech", "speech-emotion-recognition", "has_space" ]
audio-classification
{ "architectures": [ "HubertForSequenceClassification" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- language: "mn" thumbnail: "https://avatars.githubusercontent.com/u/43239645?s=60&v=4" tags: - gpt2 datasets: - oscar --- # Mongolian GPT2 Goal is to create a strong language generation model for Mongolian Since initial code and data is pretty much written by @patrickvonplaten and other huggingface members, it should not be so hard to get the first sense. ## Model Randomly initialized GPT2 model ## Datasets We can use OSCAR which is available through datasets ## Datasets A causal language modeling script for Flax is available here 1. It can be used pretty much without any required code changes. If there is time left, I’d love to try some private crawling and integrate it datasets. ## Expected Outcome Understandable Mongolian text generation model ## Challenges Lack of data → OSCAR Mongolian is just 2.2G. Maybe we need to research ways to acquire more data with this.
Bakkes/BakkesModWiki
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
# IndicNLP Marathi News Classifier This model was fine-tuned using [Marathi RoBERTa](https://huggingface.co/flax-community/roberta-base-mr) on the [IndicNLP Marathi News Dataset](https://github.com/AI4Bharat/indicnlp_corpus#indicnlp-news-article-classification-dataset). ## Dataset The IndicNLP Marathi news dataset consists of 3 classes - `['lifestyle', 'entertainment', 'sports']` - with the following document distribution across classes: | train | eval | test | | ----- | ---- | ---- | | 9672 | 477 | 478 | 💯 Our **`mr-indicnlp-classifier`** model, fine-tuned from the pretrained Marathi RoBERTa model **roberta-base-mr**, outperformed both classifiers mentioned in [Arora, G. (2020). iNLTK](https://www.semanticscholar.org/paper/iNLTK%3A-Natural-Language-Toolkit-for-Indic-Languages-Arora/5039ed9e100d3a1cbbc25a02c82f6ee181609e83/figure/3) and [Kunchukuttan, Anoop et al. AI4Bharat-IndicNLP.](https://www.semanticscholar.org/paper/AI4Bharat-IndicNLP-Corpus%3A-Monolingual-Corpora-and-Kunchukuttan-Kakwani/7997d432925aff0ba05497d2893c09918298ca55/figure/4) | Dataset | FT-W | FT-WC | INLP | iNLTK | **roberta-base-mr 🏆** | | --------------- | ----- | ----- | ----- | ----- | --------------------- | | iNLTK Headlines | 83.06 | 81.65 | 89.92 | 92.4 | **97.48** |
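The card does not include an inference snippet. Below is a minimal, hedged sketch of querying a fine-tuned news classifier like this one with the `transformers` pipeline; the repository id `flax-community/mr-indicnlp-classifier` is an assumption inferred from the model name in the card and is not stated there explicitly.

```python
# A minimal sketch, not official usage. The checkpoint id below is an assumption
# based on the model name mentioned in the card.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="flax-community/mr-indicnlp-classifier",  # assumed repository id
)
# A Marathi news headline about cricket; the classifier should return one of
# 'lifestyle', 'entertainment' or 'sports' with a confidence score.
print(classifier("भारताने ऑस्ट्रेलियाविरुद्धची मालिका जिंकली"))
```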
Barleysack/AERoberta
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- language: sv widget: - text: "Det var en gång" --- # Nordic GPT2--wikipedia A Nordic GPT2 style model trained using Flax CLM pipeline on the Nordic parts part of the wiki40b dataset. https://huggingface.co/datasets/wiki40b ## Model series This model is part of a series of models training on TPU with Flax Jax during Huggingface Flax/Jax challenge. ## Gpt models ## Swedish Gpt https://huggingface.co/birgermoell/swedish-gpt/ ## Swedish gpt wiki https://huggingface.co/flax-community/swe-gpt-wiki # Nordic gpt wiki https://huggingface.co/flax-community/nordic-gpt-wiki ## Dansk gpt wiki https://huggingface.co/flax-community/dansk-gpt-wiki ## Norsk gpt wiki https://huggingface.co/flax-community/norsk-gpt-wiki ## Roberta models ## Nordic Roberta Wiki https://huggingface.co/flax-community/nordic-roberta-wiki ## Swe Roberta Wiki Oscar https://huggingface.co/flax-community/swe-roberta-wiki-oscar ## Roberta Swedish Scandi https://huggingface.co/birgermoell/roberta-swedish-scandi ## Roberta Swedish https://huggingface.co/birgermoell/roberta-swedish ## Swedish T5 model https://huggingface.co/birgermoell/t5-base-swedish ## Data cleaning and preprocessing The data was cleaned and preprocessed using the following script. Make sure to install depencies for beam_runner to make the dataset work. ```python from datasets import load_dataset def load_and_clean_wiki(): dataset = load_dataset('wiki40b', 'da', beam_runner='DirectRunner', split="train") #dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner') dataset = dataset.remove_columns(['wikidata_id', 'version_id']) filtered_dataset = dataset.map(filter_wikipedia) # filtered_dataset[:3] # print(filtered_dataset[:3]) return filtered_dataset def filter_wikipedia(batch): batch["text"] = " ".join(batch["text"].split("\ _START_SECTION_\ ")) batch["text"] = " ".join(batch["text"].split("\ _START_ARTICLE_\ ")) batch["text"] = " ".join(batch["text"].split("\ _START_ARTICLE_\ ")) batch["text"] = " ".join(batch["text"].split("\ _START_PARAGRAPH_\ ")) batch["text"] = " ".join(batch["text"].split("_NEWLINE_")) batch["text"] = " ".join(batch["text"].split("\xa0")) return batch ``` ## Training script The following training script was used to train the model. ```bash ./run_clm_flax.py --output_dir="${MODEL_DIR}" --model_type="gpt2" --config_name="${MODEL_DIR}" --tokenizer_name="${MODEL_DIR}" --dataset_name="wiki40b" --dataset_config_name="da" --do_train --do_eval --block_size="512" --per_device_train_batch_size="64" --per_device_eval_batch_size="64" --learning_rate="5e-3" --warmup_steps="1000" --adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" --overwrite_output_dir --num_train_epochs="20" --logging_steps="500" --save_steps="1000" --eval_steps="2500" --push_to_hub ```
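A brief inference sketch for the model series described above; the checkpoint id `flax-community/nordic-gpt-wiki` is taken from the links in the card, the prompt reuses the card's widget example, and the sampling settings are illustrative assumptions.

```python
# Minimal text-generation sketch using one of the checkpoints linked in the card above.
from transformers import pipeline

generator = pipeline("text-generation", model="flax-community/nordic-gpt-wiki")
# Prompt from the card's widget example ("Once upon a time" in Swedish);
# sampling parameters here are illustrative, not tuned values from the card.
outputs = generator("Det var en gång", max_length=50, do_sample=True, top_p=0.95)
print(outputs[0]["generated_text"])
```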
Barleysack/AERoberta2
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- language: sv license: cc-by-4.0 tags: - swedish - roberta pipeline_tag: fill-mask widget: - text: Meninged med livet är <mask>. --- # Nordic Roberta Wikipedia ## Description Nord roberta model trainined on the swedish danish and norwegian wikipedia. ## Evaluation Evaluation on Named Entity recognition in Danish. I finetuned each model on 3 epochs on DaNE, repeated it 5 times for each model, and calculated 95% confidence intervals for the means. Here are the results: xlm-roberta-base : 88.01 +- 0.43 flax-community/nordic-roberta-wiki: 85.75 +- 0.69 (this model) Maltehb/danish-bert-botxo: 85.38 +- 0.55 flax-community/roberta-base-danish: 80.14 +- 1.47 flax-community/roberta-base-scandinavian : 78.03 +- 3.02 Maltehb/-l-ctra-danish-electra-small-cased: 57.87 +- 3.19 NbAiLab/nb-bert-base : 30.24 +- 1.21 Randomly initialised RoBERTa model: 19.79 +- 2.00 Evaluation on Sentiment analysis in Dansish Here are the results on test set, where each model has been trained 5 times, and the “+-” refers to a 95% confidence interval of the mean score: Maltehb/danish-bert-botxo: 65.19 +- 0.53 NbAiLab/nb-bert-base : 63.80 +- 0.77 xlm-roberta-base : 63.55 +- 1.59 flax-community/nordic-roberta-wiki : 56.46 +- 1.77 flax-community/roberta-base-danish : 54.73 +- 8.96 flax-community/roberta-base-scandinavian : 44.28 +- 9.21 Maltehb/-l-ctra-danish-electra-small-cased : 47.78 +- 12.65 Randomly initialised RoBERTa model: 36.96 +- 1.02 Maltehb/roberta-base-scandinavian : 33.65 +- 8.32 ## Model series This model is part of a series of models training on TPU with Flax Jax during Huggingface Flax/Jax challenge. ## Gpt models ## Swedish Gpt https://huggingface.co/birgermoell/swedish-gpt/ ## Swedish gpt wiki https://huggingface.co/flax-community/swe-gpt-wiki # Nordic gpt wiki https://huggingface.co/flax-community/nordic-gpt-wiki ## Dansk gpt wiki https://huggingface.co/flax-community/dansk-gpt-wiki ## Norsk gpt wiki https://huggingface.co/flax-community/norsk-gpt-wiki ## Roberta models ## Nordic Roberta Wiki https://huggingface.co/flax-community/nordic-roberta-wiki ## Swe Roberta Wiki Oscar https://huggingface.co/flax-community/swe-roberta-wiki-oscar ## Roberta Swedish Scandi https://huggingface.co/birgermoell/roberta-swedish-scandi ## Roberta Swedish https://huggingface.co/birgermoell/roberta-swedish ## Swedish T5 model https://huggingface.co/birgermoell/t5-base-swedish
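A short usage sketch for the masked-language model described above; the checkpoint id `flax-community/nordic-roberta-wiki` is taken from the links in the card, and the example sentence is an assumption based on the card's widget text.

```python
# Minimal fill-mask sketch for the Nordic RoBERTa model linked in the card above.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="flax-community/nordic-roberta-wiki")
# RoBERTa-style models use the <mask> token; top predictions are returned with scores.
for prediction in fill_mask("Meningen med livet är <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```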
Batsy24/DialoGPT-medium-Twilight_BellaBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: pl tags: - text-generation widget: - text: "Najsmaczniejszy polski owoc to" --- # papuGaPT2 - Polish GPT2 language model [GPT2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) was released in 2019 and surprised many with its text generation capability. However, up until very recently, we have not had a strong text generation model in Polish language, which limited the research opportunities for Polish NLP practitioners. With the release of this model, we hope to enable such research. Our model follows the standard GPT2 architecture and training approach. We are using a causal language modeling (CLM) objective, which means that the model is trained to predict the next word (token) in a sequence of words (tokens). ## Datasets We used the Polish subset of the [multilingual Oscar corpus](https://www.aclweb.org/anthology/2020.acl-main.156) to train the model in a self-supervised fashion. ``` from datasets import load_dataset dataset = load_dataset('oscar', 'unshuffled_deduplicated_pl') ``` ## Intended uses & limitations The raw model can be used for text generation or fine-tuned for a downstream task. The model has been trained on data scraped from the web, and can generate text containing intense violence, sexual situations, coarse language and drug use. It also reflects the biases from the dataset (see below for more details). These limitations are likely to transfer to the fine-tuned models as well. At this stage, we do not recommend using the model beyond research. ## Bias Analysis There are many sources of bias embedded in the model and we caution to be mindful of this while exploring the capabilities of this model. We have started a very basic analysis of bias that you can see in [this notebook](https://huggingface.co/flax-community/papuGaPT2/blob/main/papuGaPT2_bias_analysis.ipynb). ### Gender Bias As an example, we generated 50 texts starting with prompts "She/He works as". The image below presents the resulting word clouds of female/male professions. The most salient terms for male professions are: teacher, sales representative, programmer. The most salient terms for female professions are: model, caregiver, receptionist, waitress. ![gender bias](https://huggingface.co/flax-community/papuGaPT2/raw/main/gender_bias.jpeg) ### Ethnicity/Nationality/Gender Bias We generated 1000 texts to assess bias across ethnicity, nationality and gender vectors. We created prompts with the following scheme: * Person - in Polish this is a single word that differentiates both nationality/ethnicity and gender. We assessed the following 5 nationalities/ethnicities: German, Romani, Jewish, Ukrainian, Neutral. The neutral group used generic pronounts ("He/She"). * Topic - we used 5 different topics: * random act: *entered home* * said: *said* * works as: *works as* * intent: Polish *niech* which combined with *he* would roughly translate to *let him ...* * define: *is* Each combination of 5 nationalities x 2 genders x 5 topics had 20 generated texts. We used a model trained on [Polish Hate Speech corpus](https://huggingface.co/datasets/hate_speech_pl) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the nationality/ethnicity and gender from the generated text before running the hate speech detector. The following tables and charts demonstrate the intensity of hate speech associated with the generated texts. 
There is a very clear effect where each of the ethnicities/nationalities score higher than the neutral baseline. ![hate score by ethnicity](https://huggingface.co/flax-community/papuGaPT2/raw/main/hate_by_ethnicity.png) Looking at the gender dimension we see higher hate score associated with males vs. females. ![hate score by gender](https://huggingface.co/flax-community/papuGaPT2/raw/main/hate_by_gender.png) We don't recommend using the GPT2 model beyond research unless a clear mitigation for the biases is provided. ## Training procedure ### Training scripts We used the [causal language modeling script for Flax](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_clm_flax.py). We would like to thank the authors of that script as it allowed us to complete this training in a very short time! ### Preprocessing and Training Details The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens. We have trained the model on a single TPUv3 VM, and due to unforeseen events the training run was split in 3 parts, each time resetting from the final checkpoint with a new optimizer state: 1. LR 1e-3, bs 64, linear schedule with warmup for 1000 steps, 10 epochs, stopped after 70,000 steps at eval loss 3.206 and perplexity 24.68 2. LR 3e-4, bs 64, linear schedule with warmup for 5000 steps, 7 epochs, stopped after 77,000 steps at eval loss 3.116 and perplexity 22.55 3. LR 2e-4, bs 64, linear schedule with warmup for 5000 steps, 3 epochs, stopped after 91,000 steps at eval loss 3.082 and perplexity 21.79 ## Evaluation results We trained the model on 95% of the dataset and evaluated both loss and perplexity on 5% of the dataset. The final checkpoint evaluation resulted in: * Evaluation loss: 3.082 * Perplexity: 21.79 ## How to use You can use the model either directly for text generation (see example below), by extracting features, or for further fine-tuning. We have prepared a notebook with text generation examples [here](https://huggingface.co/flax-community/papuGaPT2/blob/main/papuGaPT2_text_generation.ipynb) including different decoding methods, bad words suppression, few- and zero-shot learning demonstrations. ### Text generation Let's first start with the text-generation pipeline. When prompting for the best Polish poet, it comes up with a pretty reasonable text, highlighting one of the most famous Polish poets, Adam Mickiewicz. ```python from transformers import pipeline, set_seed generator = pipeline('text-generation', model='flax-community/papuGaPT2') set_seed(42) generator('Największym polskim poetą był') >>> [{'generated_text': 'Największym polskim poetą był Adam Mickiewicz - uważany za jednego z dwóch geniuszów języka polskiego. "Pan Tadeusz" był jednym z najpopularniejszych dzieł w historii Polski. W 1801 został wystawiony publicznie w Teatrze Wilama Horzycy. Pod jego'}] ``` The pipeline uses `model.generate()` method in the background. In [our notebook](https://huggingface.co/flax-community/papuGaPT2/blob/main/papuGaPT2_text_generation.ipynb) we demonstrate different decoding methods we can use with this method, including greedy search, beam search, sampling, temperature scaling, top-k and top-p sampling. As an example, the below snippet uses sampling among the 50 most probable tokens at each stage (top-k) and among the tokens that jointly represent 95% of the probability distribution (top-p). It also returns 3 output sequences. 
```python from transformers import AutoTokenizer, AutoModelWithLMHead model = AutoModelWithLMHead.from_pretrained('flax-community/papuGaPT2') tokenizer = AutoTokenizer.from_pretrained('flax-community/papuGaPT2') set_seed(42) # reproducibility input_ids = tokenizer.encode('Największym polskim poetą był', return_tensors='pt') sample_outputs = model.generate( input_ids, do_sample=True, max_length=50, top_k=50, top_p=0.95, num_return_sequences=3 ) print("Output:\ " + 100 * '-') for i, sample_output in enumerate(sample_outputs): print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True))) >>> Output: >>> ---------------------------------------------------------------------------------------------------- >>> 0: Największym polskim poetą był Roman Ingarden. Na jego wiersze i piosenki oddziaływały jego zamiłowanie do przyrody i przyrody. Dlatego też jako poeta w czasie pracy nad utworami i wierszami z tych wierszy, a następnie z poezji własnej - pisał >>> 1: Największym polskim poetą był Julian Przyboś, którego poematem „Wierszyki dla dzieci”. >>> W okresie międzywojennym, pod hasłem „Papież i nie tylko” Polska, jak większość krajów europejskich, była państwem faszystowskim. >>> Prócz >>> 2: Największym polskim poetą był Bolesław Leśmian, który był jego tłumaczem, a jego poezja tłumaczyła na kilkanaście języków. >>> W 1895 roku nakładem krakowskiego wydania "Scientio" ukazała się w języku polskim powieść W krainie kangurów ``` ### Avoiding Bad Words You may want to prevent certain words from occurring in the generated text. To avoid displaying really bad words in the notebook, let's pretend that we don't like certain types of music to be advertised by our model. The prompt says: *my favorite type of music is*. ```python input_ids = tokenizer.encode('Mój ulubiony gatunek muzyki to', return_tensors='pt') bad_words = [' disco', ' rock', ' pop', ' soul', ' reggae', ' hip-hop'] bad_word_ids = [] for bad_word in bad_words: ids = tokenizer(bad_word).input_ids bad_word_ids.append(ids) sample_outputs = model.generate( input_ids, do_sample=True, max_length=20, top_k=50, top_p=0.95, num_return_sequences=5, bad_words_ids=bad_word_ids ) print("Output:\ " + 100 * '-') for i, sample_output in enumerate(sample_outputs): print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True))) >>> Output: >>> ---------------------------------------------------------------------------------------------------- >>> 0: Mój ulubiony gatunek muzyki to muzyka klasyczna. Nie wiem, czy to kwestia sposobu, w jaki gramy, >>> 1: Mój ulubiony gatunek muzyki to reggea. Zachwycają mnie piosenki i piosenki muzyczne o ducho >>> 2: Mój ulubiony gatunek muzyki to rockabilly, ale nie lubię też punka. Moim ulubionym gatunkiem >>> 3: Mój ulubiony gatunek muzyki to rap, ale to raczej się nie zdarza w miejscach, gdzie nie chodzi >>> 4: Mój ulubiony gatunek muzyki to metal aranżeje nie mam pojęcia co mam robić. Co roku, ``` Ok, it seems this worked: we can see *classical music, rap, metal* among the outputs. Interestingly, *reggae* found a way through via a misspelling *reggea*. Take it as a caution to be careful with curating your bad word lists! ### Few Shot Learning Let's see now if our model is able to pick up training signal directly from a prompt, without any finetuning. This approach was made really popular with GPT3, and while our model is definitely less powerful, maybe it can still show some skills! 
If you'd like to explore this topic in more depth, check out [the following article](https://huggingface.co/blog/few-shot-learning-gpt-neo-and-inference-api) which we used as reference. ```python prompt = """Tekst: "Nienawidzę smerfów!" Sentyment: Negatywny ### Tekst: "Jaki piękny dzień 👍" Sentyment: Pozytywny ### Tekst: "Jutro idę do kina" Sentyment: Neutralny ### Tekst: "Ten przepis jest świetny!" Sentyment:""" res = generator(prompt, max_length=85, temperature=0.5, end_sequence='###', return_full_text=False, num_return_sequences=5,) for x in res: print(res[i]['generated_text'].split(' ')[1]) >>> Pozytywny >>> Pozytywny >>> Pozytywny >>> Pozytywny >>> Pozytywny ``` It looks like our model is able to pick up some signal from the prompt. Be careful though, this capability is definitely not mature and may result in spurious or biased responses. ### Zero-Shot Inference Large language models are known to store a lot of knowledge in its parameters. In the example below, we can see that our model has learned the date of an important event in Polish history, the battle of Grunwald. ```python prompt = "Bitwa pod Grunwaldem miała miejsce w roku" input_ids = tokenizer.encode(prompt, return_tensors='pt') # activate beam search and early_stopping beam_outputs = model.generate( input_ids, max_length=20, num_beams=5, early_stopping=True, num_return_sequences=3 ) print("Output:\ " + 100 * '-') for i, sample_output in enumerate(beam_outputs): print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True))) >>> Output: >>> ---------------------------------------------------------------------------------------------------- >>> 0: Bitwa pod Grunwaldem miała miejsce w roku 1410, kiedy to wojska polsko-litewskie pod >>> 1: Bitwa pod Grunwaldem miała miejsce w roku 1410, kiedy to wojska polsko-litewskie pokona >>> 2: Bitwa pod Grunwaldem miała miejsce w roku 1410, kiedy to wojska polsko-litewskie, ``` ## BibTeX entry and citation info ```bibtex @misc{papuGaPT2, title={papuGaPT2 - Polish GPT2 language model}, url={https://huggingface.co/flax-community/papuGaPT2}, author={Wojczulis, Michał and Kłeczek, Dariusz}, year={2021} } ```
Batsy24/DialoGPT-small-Twilight_EdBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
---
language: nl
datasets:
- mC4
- Dutch_news
---

# Pino (Dutch BigBird) base model

Created by [Dat Nguyen](https://www.linkedin.com/in/dat-nguyen-49a641138/) & [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/) during the [Hugging Face community week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) (Not finished yet)

BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.

It is pretrained on Dutch text using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).

## Model description

BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long document summarization and question-answering with long contexts.

## How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BigBirdModel

# by default its in `block_sparse` mode with num_random_blocks=3, block_size=64
model = BigBirdModel.from_pretrained("flax-community/pino-bigbird-roberta-base")

# you can change `attention_type` to full attention like this:
model = BigBirdModel.from_pretrained("flax-community/pino-bigbird-roberta-base", attention_type="original_full")

# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdModel.from_pretrained("flax-community/pino-bigbird-roberta-base", block_size=16, num_random_blocks=2)
```

## Training Data

This model is pre-trained on publicly available datasets: **mC4**, and scraped **Dutch news** from NRC and Nu.nl. It uses the fast universal Byte-level BPE (BBPE) tokenizer and vocabulary of RoBERTa (which is in turn borrowed from GPT-2), in contrast to a SentencePiece tokenizer.

## Training Procedure

The data is cleaned as follows:

- Remove texts containing HTML code / javascript code / loremipsum / policies
- Remove lines without an end mark
- Remove too short texts and words
- Remove too long texts and words
- Remove bad words

## BibTeX entry and citation info

```tex
@misc{zaheer2021big,
      title={Big Bird: Transformers for Longer Sequences},
      author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
      year={2021},
      eprint={2007.14062},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
BatuhanYilmaz/bert-finetuned-mrpc
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2021-07-19T03:39:16Z"
# Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis Implementation [![Open in Streamlit](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://huggingface.co/spaces/flax-community/DietNerf-Demo) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1etYeMTntw5mh3FvJv4Ubb7XUoTtt5J9G?usp=sharing) <p align="center"><img width="450" alt="스크린샷 2021-07-04 오후 4 11 51" src="https://user-images.githubusercontent.com/77657524/126361638-4aad58e8-4efb-4fc5-bf78-f53d03799e1e.png"></p> This project attempted to implement the paper **[Putting NeRF on a Diet](https://arxiv.org/abs/2104.00677)** (DietNeRF) in JAX/Flax. DietNeRF is designed for rendering quality novel views in few-shot learning scheme, a task that vanilla NeRF (Neural Radiance Field) struggles. To achieve this, the author coins **Semantic Consistency Loss** to supervise DietNeRF by prior knowledge from CLIP Vision Transformer. Such supervision enables DietNeRF to learn 3D scene reconstruction with CLIP's prior knowledge on 2D views. Besides this repo, you can check our write-up and demo here: - ✍️ **[Write-up in Notion](https://steep-cycle-f6b.notion.site/DietNeRF-Putting-NeRF-on-a-Diet-4aeddae95d054f1d91686f02bdb74745)**: more details of DietNeRF and our experiments - ✨ **[Demo in Hugging Face Space](https://huggingface.co/spaces/flax-community/DietNerf-Demo)**: showcase our trained DietNeRFs by Streamlit ## 🤩 Demo 1. You can check out [our demo in Hugging Face Space](https://huggingface.co/spaces/flax-community/DietNerf-Demo) 2. Or you can set up our Streamlit demo locally (model checkpoints will be fetched automatically upon startup) ```shell pip install -r requirements_demo.txt streamlit run app.py ``` <p align="center"><img width="600" height="400" alt="Streamlit Demo" src="assets/space_demo.png"></p> ## ✨ Implementation Our code is written in JAX/ Flax and mainly based upon [jaxnerf](https://github.com/google-research/google-research/tree/master/jaxnerf) from Google Research. The base code is highly optimized in GPU & TPU. For semantic consistency loss, we utilize pretrained CLIP Vision Transformer from [transformers](https://github.com/huggingface/transformers) library. To learn more about DietNeRF, our experiments and implementation, you are highly recommended to check out our very detailed **[Notion write-up](https://www.notion.so/DietNeRF-Putting-NeRF-on-a-Diet-4aeddae95d054f1d91686f02bdb74745)**! <p align="center"><img width="500" height="600" alt="스크린샷 2021-07-04 오후 4 11 51" src="assets/report_thumbnail.png"></p> ## 🤗 Hugging Face Model Hub Repo You can also find our project on the [Hugging Face Model Hub Repository](https://huggingface.co/flax-community/putting-nerf-on-a-diet/). 
Our JAX/Flax implementation currently supports: <table class="tg"> <thead> <tr> <th class="tg-0lax"><span style="font-weight:bold">Platform</span></th> <th class="tg-0lax" colspan="2"><span style="font-weight:bold">Single-Host GPU</span></th> <th class="tg-0lax" colspan="2"><span style="font-weight:bold">Multi-Device TPU</span></th> </tr> </thead> <tbody> <tr> <td class="tg-0lax"><span style="font-weight:bold">Type</span></td> <td class="tg-0lax">Single-Device</td> <td class="tg-0lax">Multi-Device</td> <td class="tg-0lax">Single-Host</td> <td class="tg-0lax">Multi-Host</td> </tr> <tr> <td class="tg-0lax"><span style="font-weight:bold">Training</span></td> <td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td> <td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td> <td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td> <td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td> </tr> <tr> <td class="tg-0lax"><span style="font-weight:bold">Evaluation</span></td> <td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td> <td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td> <td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td> <td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td> </tr> </tbody> </table> ## 💻 Installation ```bash # Clone the repo git clone https://github.com/codestella/putting-nerf-on-a-diet # Create a conda environment, note you can use python 3.6-3.8 as # one of the dependencies (TensorFlow) hasn't supported python 3.9 yet. conda create --name jaxnerf python=3.6.12; conda activate jaxnerf # Prepare pip conda install pip; pip install --upgrade pip # Install requirements pip install -r requirements.txt # [Optional] Install GPU and TPU support for Jax # Remember to change cuda101 to your CUDA version, e.g. cuda110 for CUDA 11.0. !pip install --upgrade jax "jax[cuda110]" -f https://storage.googleapis.com/jax-releases/jax_releases.html # install flax and flax-transformer pip install flax transformers[flax] ``` ## ⚽ Dataset Download the datasets from the [NeRF official Google Drive](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1). Please download the `nerf_synthetic.zip` and unzip them in the place you like. Let's assume they are placed under `/tmp/jaxnerf/data/`. ## 💖 Methods * 👉👉 You can check VEEEERY detailed explanation about our project on [Notion Report](https://www.notion.so/DietNeRF-Putting-NeRF-on-a-Diet-4aeddae95d054f1d91686f02bdb74745) <p align="center"><img width="400" alt="스크린샷 2021-07-04 오후 4 11 51" src="https://user-images.githubusercontent.com/77657524/124376591-b312b780-dce2-11eb-80ad-9129d6f5eedb.png"></p> Based on the principle that “a bulldozer is a bulldozer from any perspective”, Our proposed DietNeRF supervises the radiance field from arbitrary poses (DietNeRF cameras). This is possible because we compute a semantic consistency loss in a feature space capturing high-level scene attributes, not in pixel space. 
We extract semantic representations of renderings using the CLIP Vision Transformer, then maximize similarity with representations of ground-truth views. In effect, we use prior knowledge about scene semantics learned by single-view 2D image encoders to constrain a 3D representation. You can check detail information on the author's paper. Also, you can check the CLIP based semantic loss structure on the following image. <p align="center"><img width="600" alt="스크린샷 2021-07-04 오후 4 11 51" src="https://user-images.githubusercontent.com/77657524/126386709-a4ce7ff8-2a68-442f-b4ed-26971fb90e51.png"></p> Our code used JAX/FLAX framework for implementation. So that it can achieve much speed up than other NeRF codes. At last, our code used hugging face, transformer, CLIP model library. ## 🤟 How to use ``` python -m train \ --data_dir=/PATH/TO/YOUR/SCENE/DATA \ % e.g., nerf_synthetic/lego --train_dir=/PATH/TO/THE/PLACE/YOU/WANT/TO/SAVE/CHECKPOINTS \ --config=configs/CONFIG_YOU_LIKE ``` You can toggle the semantic loss by “use_semantic_loss” in configuration files. ## 💎 Experimental Results ### ❗ Rendered Rendering images by 8-shot learned Diet-NeRF DietNeRF has a strong capacity to generalise on novel and challenging views with EXTREMELY SMALL TRAINING SAMPLES! ### HOTDOG / DRUM / SHIP / CHAIR / LEGO / MIC <img alt="" src="https://user-images.githubusercontent.com/77657524/126976706-caec6d6c-6126-45d0-8680-4c883f71f5bb.png" width="250"/></td><td><img alt="" src="https://user-images.githubusercontent.com/77657524/126976868-183af09a-47b3-4c76-ba20-90e9fef17bcc.png" width="250"/><td><img alt="" src="https://user-images.githubusercontent.com/77657524/126977843-18b4b077-1db0-4287-8e5c-baa10c46e647.png" width="250"/> <img alt="" src="https://user-images.githubusercontent.com/77657524/126977066-9c99a882-7a46-4a1d-921f-cdb0eee60f39.gif" width="250"/><img alt="" src="https://user-images.githubusercontent.com/77657524/126913553-19ebd2f2-c5f1-4332-a253-950e41cb5229.gif" width="300"/><img alt="" src="https://user-images.githubusercontent.com/77657524/126913559-dfce4b88-84a8-4a0a-91eb-ed12716ab328.gif" width="300"/> ### ❗ Rendered GIF by occluded 14-shot learned NeRF and Diet-NeRF We made artificial occlusion on the right side of image (Only picked left side training poses). The reconstruction quality can be compared with this experiment. DietNeRF shows better quality than Original NeRF when It is occluded. 
#### Training poses <img width="1400" src="https://user-images.githubusercontent.com/26036843/126111980-4f332c87-a7f0-42e0-a355-8e77621bbca4.png"> #### LEGO [DietNeRF] <img alt="" src="https://user-images.githubusercontent.com/77657524/126913404-800777f8-8f88-451a-92de-3dda25075206.gif" width="300"/> [NeRF] <img alt="" src="https://user-images.githubusercontent.com/77657524/126913412-f10dfb3e-e918-4ff4-aa2c-63529fec91d8.gif" width="300"/> #### SHIP [DietNeRF] <img alt="" src="https://user-images.githubusercontent.com/77657524/126913430-0014a904-6ca1-4a7b-9cd6-6f73b36552fb.gif" width="300"/> [NeRF] <img alt="" src="https://user-images.githubusercontent.com/77657524/126913439-2e3128ef-c7ef-4c21-8261-6e3b8fe51f86.gif" width="300"/> ## 👨‍👧‍👦 Our Teams | Teams | Members | |------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------| | Project Managing | [Stella Yang](https://github.com/codestella) To Watch Our Project Progress, Please Check [Our Project Notion](https://www.notion.so/Putting-NeRF-on-a-Diet-e0caecea0c2b40c3996c83205baf870d) | | NeRF Team | [Stella Yang](https://github.com/codestella), [Alex Lau](https://github.com/riven314), [Seunghyun Lee](https://github.com/sseung0703), [Hyunkyu Kim](https://github.com/minus31), [Haswanth Aekula](https://github.com/hassiahk), [JaeYoung Chung](https://github.com/robot0321) | | CLIP Team | [Seunghyun Lee](https://github.com/sseung0703), [Sasikanth Kotti](https://github.com/ksasi), [Khali Sifullah](https://github.com/khalidsaifullaah) , [Sunghyun Kim](https://github.com/MrBananaHuman) | | Cloud TPU Team | [Alex Lau](https://github.com/riven314), [Aswin Pyakurel](https://github.com/masapasa), [JaeYoung Chung](https://github.com/robot0321), [Sunghyun Kim](https://github.com/MrBananaHuman) | * Extremely Don't Sleep Contributors 🤣: [Seunghyun Lee](https://github.com/sseung0703), [Alex Lau](https://github.com/riven314), [Stella Yang](https://github.com/codestella), [Haswanth Aekula](https://github.com/hassiahk) ## 😎 What we improved from original JAX-NeRF : Innovation - Neural rendering with fewshot images - Hugging face CLIP based semantic loss loop - You can choose coarse mlp / coarse + fine mlp training (coarse + fine is on the `main` branch / coarse is on the `coarse_only` branch) * coarse + fine : shows good geometric reconstruction * coarse : shows good PSNR/SSIM result - Make Video/GIF rendering result, `--generate_gif_only` arg can run fast rendering GIF. - Cleaning / refactoring the code - Made multiple models / colab / space for Nice demo ## 💞 Social Impact - Game Industry - Augmented Reality Industry - Virtual Reality Industry - Graphics Industry - Online shopping - Metaverse - Digital Twin - Mapping / SLAM ## 🌱 References This project is based on “JAX-NeRF”. ``` @software{jaxnerf2020github, author = {Boyang Deng and Jonathan T. Barron and Pratul P. Srinivasan}, title = {{JaxNeRF}: an efficient {JAX} implementation of {NeRF}}, url = {https://github.com/google-research/google-research/tree/master/jaxnerf}, version = {0.0}, year = {2020}, } ``` This project is based on “Putting NeRF on a Diet”. 
``` @misc{jain2021putting, title={Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis}, author={Ajay Jain and Matthew Tancik and Pieter Abbeel}, year={2021}, eprint={2104.00677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ## 🔑 License [Apache License 2.0](https://github.com/codestella/putting-nerf-on-a-diet/blob/main/LICENSE) ## ❤️ Special Thanks Our Project is started in the [HuggingFace X GoogleAI (JAX) Community Week Event](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104). Thank you for our mentor Suraj and organizers in JAX/Flax Community Week! Our team grows up with this community learning experience. It was wonderful time! <img width="250" alt="스크린샷 2021-07-04 오후 4 11 51" src="https://user-images.githubusercontent.com/77657524/126369170-5664076c-ac99-4157-bc53-b91dfb7ed7e1.jpeg"> [Common Computer AI](https://comcom.ai/en/) sponsored multiple V100 GPUs for our project! Thank you so much for your support! <img width="250" alt="스크린샷" src="https://user-images.githubusercontent.com/77657524/126914984-d959be06-19f4-4228-8d3a-a855396b2c3f.jpeg">
Beelow/model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
language: sv
license: cc-by-4.0
tags:
- swedish
- roberta
pipeline_tag: fill-mask
widget:
- text: Meningen med livet är <mask>.
---

# Swe Roberta Wiki Oscar

## Description
This RoBERTa model was trained on the Swedish Wikipedia and OSCAR datasets.

## Model series
This model is part of a series of models trained on TPU with Flax/JAX during the Hugging Face Flax/JAX challenge.

## Gpt models

## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/

## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki

## Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki

## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki

## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki

## Roberta models

## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki

## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar

## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi

## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish

## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
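
## Usage

The snippet below is a usage sketch (not part of the original card) showing how the checkpoint can be queried for fill-mask predictions with the 🤗 `pipeline` API, assuming the weights can be loaded by the pipeline; the example sentence mirrors the widget above.

```python
from transformers import pipeline

# Fill-mask sketch; model id taken from the link list above.
unmasker = pipeline("fill-mask", model="flax-community/swe-roberta-wiki-oscar")

# "The meaning of life is <mask>." (same prompt as the widget)
for prediction in unmasker("Meningen med livet är <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```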
Beelow/wav2vec2-ukrainian-model-large
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en license: apache-2.0 tags: - summarization datasets: - cnn_dailymail model-index: - name: flax-community/t5-base-cnn-dm results: - task: type: summarization name: Summarization dataset: name: cnn_dailymail type: cnn_dailymail config: 3.0.0 split: test metrics: - type: rouge value: 24.2906 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzZkZmUwOTU5ZGJhYjk4MGVjMjk2ZjEzNDUwMDYyOGIxMDc0ZTI5ZTFkZDUxNTEwNDJkZGRhZTkyNzRjZDJjOSIsInZlcnNpb24iOjF9.6fiYZdYKwgNM5KXeDtEOP2m3OGsip3792OvKnMiSVJk0Pn-CbQQwKH7oZ76YaIxyvOUtCmggiQSDXZc9UKKKCw - type: rouge value: 11.1405 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDE0ZmZkMGQxYzQ2YjhiM2Y4NWE0NzY3M2I1OWYzYWE2NWI1MWNiYjBlOGRiNmJmMDJhYzU3MjkwYmM3Zjg4NyIsInZlcnNpb24iOjF9.kWdKlRb7j47d3RtAaqnQmY2-08RihES0UkyVXR32rDzpgy2hxmVcXdiaYE1sPuO6n_6Mx4xxFgxHyB9MNqBlBw - type: rouge value: 19.8442 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTkwNjkwZTk0NzEyNzQyYTE0OTFlNzQxMmJjYjU2YzkxZjUzZGRiODVmOTY0ZDk2Y2JiMzUzZWRiMGJiZWY5OSIsInZlcnNpb24iOjF9.1psy2C5BTqlfaBzM7VzxKA5TZ3zBxr3YFLzx4ClODILytwszfN8FTW0RTCIDCkdqh2QgXnrdC_K1Nyk82p2DDg - type: rouge value: 22.7556 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjYyNTcyNGNlYjcwYjBhMjgxZjkwMjU5YWRmNTQ1NmI3YzFiZGU0MjNkYzcyMDc1N2Q2YzY3YTE4NDNjNWVlNyIsInZlcnNpb24iOjF9.K9WzEm2ookzDPvl-LEknZPHc6g8OZ_uPGf07XzeIHoAwxBjgFpZ-Q2GMiS-UUseLP1Y3F0zE8t0QyNgMyUr4BA - type: loss value: 2.1426470279693604 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjk1YTU1ODljNzhmYmFjYmFjMGY5NGRmNmEwNzM0NTFkYWIwZTljZTExYTY5YjcxZTFhMWY1MTU0OWU4YzQzZiIsInZlcnNpb24iOjF9.3RiDEYMR4sJb8I12FZMuRaSrwHtvutDeRqMbk3QlK7Q5JzDUP1u4ZGjHIzyuTJl-S99EEobX0Dg0xw2yUcgRCg - type: gen_len value: 18.9993 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGY5YTYxZmZiYmY4NTZjNmMzMjllNWE1M2M2ZjA0MWM1MzBhZjc0MDM5ZGFiYTAzNjFiZjg5ZjMxYzlmOGYwMyIsInZlcnNpb24iOjF9.eXiPrQ-CeB3BWzlQzkTIA1q0xYP1GtFGIK9XyIneEmh5ajN5pCATxNDvn6n09d84OEr5432SoPJfdpNCd_UyCA --- # Model This model is fine-tuned from https://huggingface.co/flax-community/t5-base-openwebtext, fine-tuned on cnn_dailymail.
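
As a quick, illustrative sketch (not part of the original card), the checkpoint can be exercised through the 🤗 summarization `pipeline`; the article text below is a placeholder you would replace with your own input.

```python
from transformers import pipeline

# Summarization sketch for the CNN/DailyMail fine-tuned checkpoint named in the model-index above.
summarizer = pipeline("summarization", model="flax-community/t5-base-cnn-dm")

article = "Replace this placeholder with the news article you want to summarize."
summary = summarizer(article, max_length=64, min_length=20, do_sample=False)
print(summary[0]["summary_text"])
```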
Belin/T5-Terms-and-Conditions
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2021-07-09T22:56:20Z"
---
language:
- dutch
tags:
- seq2seq
- lm-head
datasets:
- yhavinga/mc4_nl_cleaned
license: apache-2.0
inference: false
---

# t5-base-dutch

Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/) & [Dat Nguyen](https://www.linkedin.com/in/dat-nguyen-49a641138/) during the [Hugging Face community week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google, for the project [Pre-train T5 from scratch in Dutch](https://discuss.huggingface.co/t/pretrain-t5-from-scratch-in-dutch/8109).

See also the fine-tuned [t5-base-dutch-demo](https://huggingface.co/flax-community/t5-base-dutch-demo) model, and the demo application **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)**, that are based on this model.

**5 jan 2022: Model updated. Evaluation accuracy increased from 0.64 to 0.70.**

**11 jan 2022: See also [yhavinga/t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) with eval acc 0.78**

## Model

* Configuration based on `google/t5-base`
* 12 layers, 12 heads
* Dropout set to 0.1

## Dataset

This model was trained on the `full` configuration of [cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned), which is the original mC4, except

* Documents that contained words from a selection of the Dutch and English [List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed
* Sentences with less than 3 words are removed
* Sentences with a word of more than 1000 characters are removed
* Documents with less than 5 sentences are removed
* Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies", "use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed.

## Tokenization

A SentencePiece tokenizer was trained from scratch on this dataset. The total number of tokens in the `full` configuration is 34B.

## Training

The model was trained on the `full` mc4_nl_cleaned dataset configuration for 1 epoch, consisting of 34B tokens, for 528,482 steps with a batch size of 128, and took 57 hours. A triangle learning rate schedule was used, with peak learning rate 0.005.

## Evaluation

* Loss: 1.38
* Accuracy: 0.70
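
As a small illustration (not from the original card), the pre-trained checkpoint can be loaded for downstream fine-tuning roughly as sketched below. The model id is assumed to be `flax-community/t5-base-dutch`; since this is a span-corruption pre-trained model, it is intended as a starting point for fine-tuning rather than for direct generation.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Assumed model id; pass from_flax=True if only Flax weights are available.
tokenizer = AutoTokenizer.from_pretrained("flax-community/t5-base-dutch")
model = T5ForConditionalGeneration.from_pretrained("flax-community/t5-base-dutch")

# Text-to-text format typically used when fine-tuning, e.g. Dutch summarization
# with a freely chosen task prefix ("vat samen:" is only an example).
inputs = tokenizer("vat samen: Dit is een voorbeeldtekst.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```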
BenGeorge/MyModel
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2021-07-01T21:17:02Z"
# Covid19 Related Question Answering (Closed book question answering)

In 2020, COVID-19, which is caused by a coronavirus called SARS-CoV-2, took over the world. It touched the lives of many people and caused a lot of hardship for humanity. There are still many questions in regard to COVID-19 and it is often difficult to get the right answers. The aim of this project is to finetune models for closed book question answering. In closed-book QA, we feed the model a question *without any context or access to external knowledge* and train it to predict the answer. Since the model doesn't receive any context, the primary way it can learn to answer these questions is based on the "knowledge" it obtained during pre-training [[1]](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/master/notebooks/t5-trivia.ipynb#scrollTo=zSeyoqE7WMwu) [[2]](https://arxiv.org/abs/2002.08910).

The main goals of this project are:

1. Train a model for question answering in regard to COVID-19
2. Release the top performing models for further research and enhancement
3. Release all of the preprocessing and postprocessing scripts and findings for future research.

## TO DO LIST:

- [x] Team members met and the following was discussed:
  - Data preparation script is prepared that mixes CORD-19 and Pubmed.
  - Agreed to finalize the training scripts by 9pm PDT 7/9/2021.
  - Tokenizer is now trained.
- [ ] Setup the pretraining script
- [ ] Prepare the finetuning tasks inspired from [T5 Trivia Colab](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/master/notebooks/t5-trivia.ipynb)
  - What datasets we want to go with?
    - [Covid-QA](https://huggingface.co/datasets/covid_qa_deepset) (Maybe as test set?)
    - [Trivia-QA](https://huggingface.co/datasets/trivia_qa)
    - [CDC-QA](https://www.cdc.gov/coronavirus/2019-ncov/faq.html) (We can scrape quickly using beautiful soup or something)
    - [More Medical Datasets](https://aclanthology.org/2020.findings-emnlp.289.pdf) (See the dataset section for inspiration)

## 1. Model

We will be using the T5 model.

## 2. Datasets

The following datasets would be used for finetuning the model. Note that the last dataset is optional and the model is evaluated only using Covid-QA.

For **Intermediate Pre-Training**:

1. [CORD-19](https://allenai.org/data/cord-19)

For **Fine-Tuning**:

1. [Covid-QA](https://huggingface.co/datasets/covid_qa_deepset)
2. [CDC-QA](https://www.cdc.gov/coronavirus/2019-ncov/faq.html)
3. Optional - [Trivia-QA](https://nlp.cs.washington.edu/triviaqa/)

## 3. Training Scripts

We can make use of:

1. [For preprocessing and mixing datasets](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/master/notebooks/t5-trivia.ipynb)
2. [For T5 training](https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_flax_t5.py)

## 4. Additional Reading

- [How Much Knowledge Can You Pack Into the Parameters of a Language Model?](https://arxiv.org/pdf/2002.08910.pdf)
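
## 5. Closed-Book QA Format (illustrative sketch)

To make the closed-book setup described above concrete, the snippet below is our own illustrative sketch (not a project deliverable) of the text-to-text format typically used when fine-tuning or querying T5 for closed-book QA. The vanilla `t5-base` checkpoint is used only as a placeholder until the project's fine-tuned models are released.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Placeholder checkpoint; swap in the project's fine-tuned model once it is released.
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Closed-book QA: the question alone is the input -- no supporting context is provided.
question = "question: What virus causes COVID-19?"
inputs = tokenizer(question, return_tensors="pt")
answer_ids = model.generate(**inputs, max_length=16)
print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))
```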
BenWitter/DialoGPT-small-Tyrion
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
"2021-07-09T02:42:37Z"
--- language: en tags: - seq2seq - t5 - text-generation - recipe-generation pipeline_tag: text2text-generation widget: - text: "provolone cheese, bacon, bread, ginger" - text: "sugar, crunchy jif peanut butter, cornflakes" - text: "sweet butter, confectioners sugar, flaked coconut, condensed milk, nuts, vanilla, dipping chocolate" - text: "macaroni, butter, salt, bacon, milk, flour, pepper, cream corn" - text: "hamburger, sausage, onion, regular, american cheese, colby cheese" - text: "chicken breasts, onion, garlic, great northern beans, black beans, green chilies, broccoli, garlic oil, butter, cajun seasoning, salt, oregano, thyme, black pepper, basil, worcestershire sauce, chicken broth, sour cream, chardonnay wine" - text: "serrano peppers, garlic, celery, oregano, canola oil, vinegar, water, kosher salt, salt, black pepper" --- ![avatar](chef-transformer.png) # Chef Transformer (T5) > This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/recipe-generation-model/7475), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google. Want to give it a try? Then what's the wait, head over to Hugging Face Spaces [here](https://huggingface.co/spaces/flax-community/chef-transformer). ## Team Members - Mehrdad Farahani ([m3hrdadfi](https://huggingface.co/m3hrdadfi)) - Kartik Godawat ([dk-crazydiv](https://huggingface.co/dk-crazydiv)) - Haswanth Aekula ([hassiahk](https://huggingface.co/hassiahk)) - Deepak Pandian ([rays2pix](https://huggingface.co/rays2pix)) - Nicholas Broad ([nbroad](https://huggingface.co/nbroad)) ## Dataset [RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation](https://recipenlg.cs.put.poznan.pl/). This dataset contains **2,231,142** cooking recipes (>2 millions) with size of **2.14 GB**. It's processed in more careful way. ### Example ```json { "NER": [ "oyster crackers", "salad dressing", "lemon pepper", "dill weed", "garlic powder", "salad oil" ], "directions": [ "Combine salad dressing mix and oil.", "Add dill weed, garlic powder and lemon pepper.", "Pour over crackers; stir to coat.", "Place in warm oven.", "Use very low temperature for 15 to 20 minutes." ], "ingredients": [ "12 to 16 oz. plain oyster crackers", "1 pkg. Hidden Valley Ranch salad dressing mix", "1/4 tsp. lemon pepper", "1/2 to 1 tsp. dill weed", "1/4 tsp. garlic powder", "3/4 to 1 c. 
salad oil" ], "link": "www.cookbooks.com/Recipe-Details.aspx?id=648947", "source": "Gathered", "title": "Hidden Valley Ranch Oyster Crackers" } ``` ## How To Use ```bash # Installing requirements pip install transformers ``` ```python from transformers import FlaxAutoModelForSeq2SeqLM from transformers import AutoTokenizer MODEL_NAME_OR_PATH = "flax-community/t5-recipe-generation" tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME_OR_PATH, use_fast=True) model = FlaxAutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME_OR_PATH) prefix = "items: " # generation_kwargs = { # "max_length": 512, # "min_length": 64, # "no_repeat_ngram_size": 3, # "early_stopping": True, # "num_beams": 5, # "length_penalty": 1.5, # } generation_kwargs = { "max_length": 512, "min_length": 64, "no_repeat_ngram_size": 3, "do_sample": True, "top_k": 60, "top_p": 0.95 } special_tokens = tokenizer.all_special_tokens tokens_map = { "<sep>": "--", "<section>": "\n" } def skip_special_tokens(text, special_tokens): for token in special_tokens: text = text.replace(token, "") return text def target_postprocessing(texts, special_tokens): if not isinstance(texts, list): texts = [texts] new_texts = [] for text in texts: text = skip_special_tokens(text, special_tokens) for k, v in tokens_map.items(): text = text.replace(k, v) new_texts.append(text) return new_texts def generation_function(texts): _inputs = texts if isinstance(texts, list) else [texts] inputs = [prefix + inp for inp in _inputs] inputs = tokenizer( inputs, max_length=256, padding="max_length", truncation=True, return_tensors="jax" ) input_ids = inputs.input_ids attention_mask = inputs.attention_mask output_ids = model.generate( input_ids=input_ids, attention_mask=attention_mask, **generation_kwargs ) generated = output_ids.sequences generated_recipe = target_postprocessing( tokenizer.batch_decode(generated, skip_special_tokens=False), special_tokens ) return generated_recipe ``` ```python items = [ "macaroni, butter, salt, bacon, milk, flour, pepper, cream corn", "provolone cheese, bacon, bread, ginger" ] generated = generation_function(items) for text in generated: sections = text.split("\n") for section in sections: section = section.strip() if section.startswith("title:"): section = section.replace("title:", "") headline = "TITLE" elif section.startswith("ingredients:"): section = section.replace("ingredients:", "") headline = "INGREDIENTS" elif section.startswith("directions:"): section = section.replace("directions:", "") headline = "DIRECTIONS" if headline == "TITLE": print(f"[{headline}]: {section.strip().capitalize()}") else: section_info = [f" - {i+1}: {info.strip().capitalize()}" for i, info in enumerate(section.split("--"))] print(f"[{headline}]:") print("\n".join(section_info)) print("-" * 130) ``` Output: ```text [TITLE]: Macaroni and corn [INGREDIENTS]: - 1: 2 c. macaroni - 2: 2 tbsp. butter - 3: 1 tsp. salt - 4: 4 slices bacon - 5: 2 c. milk - 6: 2 tbsp. flour - 7: 1/4 tsp. pepper - 8: 1 can cream corn [DIRECTIONS]: - 1: Cook macaroni in boiling salted water until tender. - 2: Drain. - 3: Melt butter in saucepan. - 4: Blend in flour, salt and pepper. - 5: Add milk all at once. - 6: Cook and stir until thickened and bubbly. - 7: Stir in corn and bacon. - 8: Pour over macaroni and mix well. 
---------------------------------------------------------------------------------------------------------------------------------- [TITLE]: Grilled provolone and bacon sandwich [INGREDIENTS]: - 1: 2 slices provolone cheese - 2: 2 slices bacon - 3: 2 slices sourdough bread - 4: 2 slices pickled ginger [DIRECTIONS]: - 1: Place a slice of provolone cheese on one slice of bread. - 2: Top with a slice of bacon. - 3: Top with a slice of pickled ginger. - 4: Top with the other slice of bread. - 5: Heat a skillet over medium heat. - 6: Place the sandwich in the skillet and cook until the cheese is melted and the bread is golden brown. ---------------------------------------------------------------------------------------------------------------------------------- ``` ## Evaluation Since the test set is not available, we will evaluate the model based on a shared test set. This test set consists of 5% of the whole test (*= 5,000 records*), and we will generate five recipes for each input(*= 25,000 records*). The following table summarizes the scores obtained by the **Chef Transformer** and **RecipeNLG** as our baseline. | Model | COSIM | WER | ROUGE-2 | BLEU | GLEU | METEOR | |:------------------------------------------------------------------------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:| | [RecipeNLG](https://huggingface.co/mbien/recipenlg) | 0.5723 | 1.2125 | 0.1354 | 0.1164 | 0.1503 | 0.2309 | | [Chef Transformer](huggingface.co/flax-community/t5-recipe-generation) * | **0.7282** | **0.7613** | **0.2470** | **0.3245** | **0.2624** | **0.4150** | *From the 5 generated recipes corresponding to each NER (food items), only the highest score was taken into account in the WER, COSIM, and ROUGE metrics. At the same time, BLEU, GLEU, Meteor were designed to have many possible references.* ## Copyright Special thanks to those who provided these fantastic materials. - [Anatomy](https://www.flaticon.com/free-icon) - [Chef Hat](https://www.vecteezy.com/members/jellyfishwater) - [Moira Nazzari](https://pixabay.com/photos/food-dessert-cake-eggs-butter-3048440/) - [Instagram Post](https://www.freepik.com/free-psd/recipes-ad-social-media-post-template_11520617.htm)
Benicio/t5-small-finetuned-en-to-ru
[ "pytorch", "tensorboard", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
50
null
---
language: python
tags: vae
license: apache-2.0
datasets: Fraser/python-lines
---

# T5-VAE-Python (flax)

A Transformer-VAE made using flax.

Try the [demo](https://huggingface.co/spaces/flax-community/t5-vae)!

It has been trained to interpolate on lines of Python code from the [python-lines dataset](https://huggingface.co/datasets/Fraser/python-lines).

Done as part of Huggingface community training ([see forum post](https://discuss.huggingface.co/t/train-a-vae-to-interpolate-on-english-sentences/7548)).

Builds on T5, using an autoencoder to convert it into an MMD-VAE ([more info](http://fras.uk/ml/large%20prior-free%20models/transformer-vae/2020/08/13/Transformers-as-Variational-Autoencoders.html)).

## How to use from the 🤗/transformers library

Add model repo as a submodule:

```bash
git submodule add https://github.com/Fraser-Greenlee/t5-vae-flax.git t5_vae_flax
```

```python
from transformers import AutoTokenizer
from t5_vae_flax.src.t5_vae import FlaxT5VaeForAutoencoding

tokenizer = AutoTokenizer.from_pretrained("t5-base")

model = FlaxT5VaeForAutoencoding.from_pretrained("flax-community/t5-vae-python")
```

## Setup

Run `setup_tpu_vm_venv.sh` to set up a virtual environment on a TPU VM for training.
BertChristiaens/EmojiPredictor
[ "pytorch", "distilbert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "DistilBertForTokenClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
# Transformer-VAE (flax) (WIP) A Transformer-VAE made using flax. Done as part of Huggingface community training ([see forum post](https://discuss.huggingface.co/t/train-a-vae-to-interpolate-on-english-sentences/7548)). Builds on T5, using an autoencoder to convert it into an MMD-VAE. [See training logs.](https://wandb.ai/fraser/flax-vae) ## ToDo - [ ] Basic training script working. (Fraser + Theo) - [ ] Add MMD loss (Theo) - [ ] Save a wikipedia sentences dataset to Huggingface (see original https://github.com/ChunyuanLI/Optimus/blob/master/data/download_datasets.md) (Mina) - [ ] Make a tokenizer using the OPTIMUS tokenized dataset. - [ ] Train on the OPTIMUS wikipedia sentences dataset. - [ ] Make Huggingface widget interpolating sentences! (???) https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#how-to-build-a-demo Optional ToDos: - [ ] Add Funnel transformer encoder to FLAX (don't need weights). - [ ] Train a Funnel-encoder + T5-decoder transformer VAE. - [ ] Additional datasets: - [ ] Poetry (https://www.gwern.net/GPT-2#data-the-project-gutenberg-poetry-corpus) - [ ] 8-bit music (https://github.com/chrisdonahue/LakhNES) ## Setup Follow all steps to install dependencies from https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm - [ ] Find dataset storage site. - [ ] Ask JAX team for dataset storage.
Betaniaolivo/Foto
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
## VQGAN-f16-16384

### Model Description

This is a Flax/JAX implementation of VQGAN, which learns a codebook of context-rich visual parts by leveraging both convolutional methods and transformers. It was introduced in [Taming Transformers for High-Resolution Image Synthesis](https://compvis.github.io/taming-transformers/) ([CVPR paper](https://openaccess.thecvf.com/content/CVPR2021/html/Esser_Taming_Transformers_for_High-Resolution_Image_Synthesis_CVPR_2021_paper.html)).

The model allows the encoding of images as a fixed-length sequence of tokens taken from the codebook.

This version of the model uses a reduction factor `f=16` and a vocabulary of `16,384` tokens.

As an example of how the reduction factor works, images of size `256x256` are encoded to sequences of `256` tokens: `256/16 * 256/16`. Images of `512x512` would result in sequences of `1024` tokens.

### Datasets Used for Training

* ImageNet. We didn't train this model from scratch. Instead, we started from [a checkpoint pre-trained on ImageNet](https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/).
* [Conceptual Captions 3M](https://ai.google.com/research/ConceptualCaptions/) (CC3M).
* [OpenAI subset of YFCC100M](https://github.com/openai/CLIP/blob/main/data/yfcc100m.md).

We fine-tuned on CC3M and YFCC100M to improve the encoding quality of people and faces, which are not very well represented in ImageNet. We used a subset of 2,268,720 images from CC3M and YFCC100M for this purpose.

### Training Process

Finetuning was performed in PyTorch using [taming-transformers](https://github.com/CompVis/taming-transformers). The full training process and model preparation includes these steps:

* Pre-training on ImageNet. Previously performed. We used [this checkpoint](https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887).
* Fine-tuning, [Part 1](https://wandb.ai/wandb/hf-flax-dalle-mini/runs/2021-07-09T15-33-11_dalle_vqgan?workspace=user-borisd13).
* Fine-tuning, [Part 2](https://wandb.ai/wandb/hf-flax-dalle-mini/runs/2021-07-09T21-42-07_dalle_vqgan?workspace=user-borisd13) – continuation from Part 1. The final checkpoint was uploaded to [boris/vqgan_f16_16384](https://huggingface.co/boris/vqgan_f16_16384).
* Conversion to JAX, which is the model described in this card.

### How to Use

The checkpoint can be loaded using [Suraj Patil's implementation](https://github.com/patil-suraj/vqgan-jax) of `VQModel`.
* Example notebook, heavily based in work by [Suraj](https://huggingface.co/valhalla): [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/borisdayma/dalle-mini/blob/main/dev/vqgan/JAX_VQGAN_f16_16384_Reconstruction.ipynb) * Batch encoding using JAX `pmap`, complete example including data loading with PyTorch: ```python # VQGAN-JAX - pmap encoding HowTo import numpy as np # For data loading import torch import torchvision.transforms.functional as TF from torch.utils.data import Dataset, DataLoader from torchvision.datasets.folder import default_loader from torchvision.transforms import InterpolationMode # For data saving from pathlib import Path import pandas as pd from tqdm import tqdm import jax from jax import pmap from vqgan_jax.modeling_flax_vqgan import VQModel ## Params and arguments # List of paths containing images to encode image_list = '/sddata/dalle-mini/CC12M/10k.tsv' output_tsv = 'output.tsv' # Encoded results batch_size = 64 num_workers = 4 # TPU v3-8s have 96 cores, so feel free to increase this number when necessary # Load model model = VQModel.from_pretrained("flax-community/vqgan_f16_16384") ## Data Loading. # Simple torch Dataset to load images from paths. # You can use your own pipeline instead. class ImageDataset(Dataset): def __init__(self, image_list_path: str, image_size: int, max_items=None): """ :param image_list_path: Path to a file containing a list of all images. We assume absolute paths for now. :param image_size: Image size. Source images will be resized and center-cropped. :max_items: Limit dataset size for debugging """ self.image_list = pd.read_csv(image_list_path, sep='\t', header=None) if max_items is not None: self.image_list = self.image_list[:max_items] self.image_size = image_size def __len__(self): return len(self.image_list) def _get_raw_image(self, i): image_path = Path(self.image_list.iloc[i][0]) return default_loader(image_path) def resize_image(self, image): s = min(image.size) r = self.image_size / s s = (round(r * image.size[1]), round(r * image.size[0])) image = TF.resize(image, s, interpolation=InterpolationMode.LANCZOS) image = TF.center_crop(image, output_size = 2 * [self.image_size]) image = np.expand_dims(np.array(image), axis=0) return image def __getitem__(self, i): image = self._get_raw_image(i) return self.resize_image(image) ## Encoding # Encoding function to be parallelized with `pmap` # Note: images have to be square def encode(model, batch): _, indices = model.encode(batch) return indices # Alternative: create a batch with num_tpus*batch_size and use `shard` to distribute. 
def superbatch_generator(dataloader, num_tpus):
    iter_loader = iter(dataloader)
    for batch in iter_loader:
        superbatch = [batch.squeeze(1)]
        try:
            for _ in range(num_tpus-1):
                batch = next(iter_loader)
                if batch is None:
                    break
                # Skip incomplete last batch
                if batch.shape[0] == dataloader.batch_size:
                    superbatch.append(batch.squeeze(1))
        except StopIteration:
            pass
        superbatch = torch.stack(superbatch, axis=0)
        yield superbatch

def encode_dataset(dataset, batch_size=32):
    # Build one superbatch per step: `num_tpus` batches stacked together.
    num_tpus = jax.device_count()
    dataloader = DataLoader(dataset, batch_size=batch_size, num_workers=num_workers)
    superbatches = superbatch_generator(dataloader, num_tpus=num_tpus)

    p_encoder = pmap(lambda batch: encode(model, batch))

    # Save each superbatch to avoid reallocation of buffers as we process them.
    # Keep the file open to prevent excessive file seeks.
    with open(output_tsv, "w") as file:
        iterations = len(dataset) // (batch_size * num_tpus)
        for n in tqdm(range(iterations)):
            superbatch = next(superbatches)
            encoded = p_encoder(superbatch.numpy())
            encoded = encoded.reshape(-1, encoded.shape[-1])

            # Extract paths from the dataset, save paths and encodings (as string)
            start_index = n * batch_size * num_tpus
            end_index = (n+1) * batch_size * num_tpus
            paths = dataset.image_list[start_index:end_index][0].values
            encoded_as_string = list(map(lambda item: np.array2string(item, separator=',', max_line_width=50000, formatter={'int':lambda x: str(x)}), encoded))
            batch_df = pd.DataFrame.from_dict({"image_file": paths, "encoding": encoded_as_string})
            batch_df.to_csv(file, sep='\t', header=(n==0), index=None)

dataset = ImageDataset(image_list, image_size=256)
encoded_dataset = encode_dataset(dataset, batch_size=batch_size)
```

### Related Models in the Hub

* PyTorch version of VQGAN, trained on the same datasets described here: [boris/vqgan_f16_16384](https://huggingface.co/boris/vqgan_f16_16384).
* [DALL·E mini](https://huggingface.co/flax-community/dalle-mini), a Flax/JAX simplified implementation of OpenAI's DALL·E.

### Other

This model was successfully used as part of the implementation of [DALL·E mini](https://github.com/borisdayma/dalle-mini). Our [report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini--Vmlldzo4NjIxODA) contains more details on how to leverage it in an image encoding / generation pipeline.
Bharathdamu/wav2vec2-large-xls-r-300m-hindi-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: fa datasets: - common_voice tags: - speech license: apache-2.0 --- # Wav2Vec2 4 Persian > This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/pretrain-wav2vec2-in-persian/8180), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google. ## Team Members - Mehrdad Farahani ([m3hrdadfi](https://huggingface.co/m3hrdadfi)) ## Dataset TODO: Update ## How To Use TODO: Update ## Demo TODO: Update ## Evaluation TODO: Update
Bharathdamu/wav2vec2-large-xls-r-300m-hindi2-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
language: dv
tags:
- automatic-speech-recognition
datasets:
- common_voice
---

# Wav2Vec2 Dhivehi

Wav2Vec2 pre-trained from scratch on the Common Voice Dhivehi dataset. The model was trained with Flax during the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organised by HuggingFace.

## Model description

The model used for training is [Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) by FacebookAI. It was introduced in the paper "wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations" by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli (https://arxiv.org/abs/2006.11477).

This model is available in the 🤗 [Model Hub](https://huggingface.co/facebook/wav2vec2-base-960h).

## Training data

Dhivehi data from [Common Voice](https://commonvoice.mozilla.org/en/datasets). The dataset is also available in the 🤗 [Datasets](https://huggingface.co/datasets/common_voice) library.

## Team members

- Shahu Kareem ([@shahukareem](https://huggingface.co/shahukareem))
- Eyna ([@eyna](https://huggingface.co/eyna))
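
## Example usage (sketch)

Since this checkpoint is only pre-trained (not fine-tuned for ASR), a typical use is extracting speech representations for downstream fine-tuning. The snippet below is a sketch, not from the original card; the model id is a placeholder to be replaced with the actual repository id of this checkpoint.

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "<repo-id-of-this-checkpoint>"  # placeholder
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

# One second of silence at 16 kHz as dummy input; load a real waveform with soundfile/librosa instead.
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: (batch, frames, hidden_size)
print(hidden_states.shape)
```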
Bharathdamu/wav2vec2-model-hindi-stt
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
language: es
tags:
- audio
- automatic-speech-recognition
datasets:
- common_voice
---

# Wav2Vec2 Spanish

Wav2Vec2 model pre-trained using the Spanish portion of the Common Voice dataset. The model was trained with Flax, using TPUs sponsored by Google, as part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organised by HuggingFace.

## Model description

The model used for training is [Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) by FacebookAI. It was introduced in the paper "wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations" by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli (https://arxiv.org/abs/2006.11477).

This model is available in the 🤗 [Model Hub](https://huggingface.co/facebook/wav2vec2-base-960h).

## Training data

Spanish portion of [Common Voice](https://commonvoice.mozilla.org/en/datasets). Common Voice is an open source, multi-language dataset of voices, part of Mozilla's initiative to help teach machines how real people speak.

The dataset is also available in the 🤗 [Datasets](https://huggingface.co/datasets/common_voice) library.

## Team members

- María Grandury ([@mariagrandury](https://github.com/mariagrandury))
- Manuel Romero ([@mrm8488](https://huggingface.co/mrm8488))
- Eduardo González Ponferrada ([@edugp](https://huggingface.co/edugp))
- pcuenq ([@pcuenq](https://huggingface.co/pcuenq))
Bhumika/roberta-base-finetuned-sst2
[ "pytorch", "tensorboard", "roberta", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "model-index" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
85
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en --- # Model description The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained ['MiniLM-L6-H384-uncased'](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as assistance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector that captures the semantic information of the sentence. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. ## How to use Here is how to use this model to get the features of a given text using the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library: ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_MiniLM-L6') text = "Replace me by any text you'd like." text_embedding = model.encode(text) # array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106, # -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...], # dtype=float32) ``` # Training procedure ## Pre-training We use the pretrained ['MiniLM-L6-H384-uncased'](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased), which is a 6-layer version of ['microsoft/MiniLM-L12-H384-uncased'](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) obtained by keeping only every second layer. Please refer to the model card for more detailed information about the pre-training procedure. ## Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between all possible sentence pairs in the batch. We then apply the cross-entropy loss, treating the true pairs as the positive labels. A minimal code sketch of this objective is shown after the dataset table below. ### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 540k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository. ### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. Each dataset was sampled with a weighted probability; the configuration is detailed in the `data_config.json` file. 
| Dataset | Paper | Number of training tuples | |:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:| | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [COCO 2020](COCO 2020) | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [SPECTER](https://github.com/allenai/specter) | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [S2ORC](https://github.com/allenai/s2orc) Title/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [S2ORC](https://github.com/allenai/s2orc) Citation/Citation | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) Citation/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | SearchQA | - | 582,261 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Question | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [MS 
MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [Reddit conversationnal](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | total | | 1,097,953,922 |
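The contrastive objective described in the fine-tuning section above can be summarised in a few lines. The sketch below is illustrative only and is not the project's training script (which lives in the repository); it assumes PyTorch, and the scaling factor of 20 is a common choice rather than a documented hyper-parameter.

```python
# Illustrative in-batch contrastive loss: every sentence is scored against every
# candidate in the batch and the aligned pair is treated as the correct class.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    anchor = F.normalize(anchor_emb, p=2, dim=1)
    positive = F.normalize(positive_emb, p=2, dim=1)
    scores = anchor @ positive.T * scale       # cosine similarities, (batch, batch)
    labels = torch.arange(scores.size(0))      # true pairs lie on the diagonal
    return F.cross_entropy(scores, labels)

# Toy usage with random 384-dimensional embeddings:
loss = contrastive_loss(torch.randn(8, 384), torch.randn(8, 384))
print(loss.item())
```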
Bia18/Beatriz
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en license: apache-2.0 --- # all-mpnet-base-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-mpnet-base-v1') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v1') model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v1') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v1) ------ ## Background The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. 
We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as assistance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector that captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 128 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base). Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between all possible sentence pairs in the batch. We then apply the cross-entropy loss, treating the true pairs as the positive labels. #### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 920k steps using a batch size of 512 (64 per TPU core). We used a learning rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. Each dataset was sampled with a weighted probability; the configuration is detailed in the `data_config.json` file. 
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence 
Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,124,818,467** |
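To make the semantic-search use case mentioned at the top of this card concrete, here is a small, self-contained example that ranks a few candidate passages against a query with cosine similarity. The passages are made up for illustration.

```python
# Ranking candidate passages for a query with all-mpnet-base-v1.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v1")

query = "How do I install the library?"
passages = [
    "Run pip install -U sentence-transformers to install the package.",
    "The model maps sentences to a 768 dimensional dense vector space.",
    "Paris is the capital of France.",
]

query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

scores = util.cos_sim(query_emb, passage_embs)[0]
best = int(scores.argmax())
print(passages[best], float(scores[best]))
```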
Biasface/DDDC
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en --- # Model description The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`roberta-large`](https://huggingface.co/roberta-large) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as assistance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector that captures the semantic information of the sentence. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. ## How to use Here is how to use this model to get the features of a given text using the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library: ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_roberta-large') text = "Replace me by any text you'd like." text_embedding = model.encode(text) # array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106, # -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...], # dtype=float32) ``` # Training procedure ## Pre-training We use the pretrained [`roberta-large`](https://huggingface.co/roberta-large). Please refer to the model card for more detailed information about the pre-training procedure. ## Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between all possible sentence pairs in the batch. We then apply the cross-entropy loss, treating the true pairs as the positive labels. ### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 540k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. A short sketch of this optimisation set-up is given after the dataset table below. The full training script is accessible in this current repository. ### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. Each dataset was sampled with a weighted probability; the configuration is detailed in the `data_config.json` file. 
| Dataset | Paper | Number of training tuples | |:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:| | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [COCO 2020](COCO 2020) | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [SPECTER](https://github.com/allenai/specter) | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [S2ORC](https://github.com/allenai/s2orc) Title/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [S2ORC](https://github.com/allenai/s2orc) Citation/Citation | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) Citation/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | SearchQA | - | 582,261 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Question | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [MS 
MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [Reddit conversationnal](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | total | | 1,097,953,922 |
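For readers who want to reproduce the optimisation set-up stated in the hyper-parameters section (AdamW, 2e-5 learning rate, 500 warm-up steps, 540k training steps), the sketch below shows one way to express it. It is written with PyTorch and `transformers` for illustration; the original run used TPUs and its exact schedule may differ, so treat this as an assumption-laden outline rather than the project's script.

```python
# Hedged sketch of the stated optimiser and learning-rate schedule.
import torch
from transformers import AutoModel, get_linear_schedule_with_warmup

model = AutoModel.from_pretrained("roberta-large")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=500,
    num_training_steps=540_000,
)

# Inside the training loop (per step):
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```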
Biasface/DDDC2
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en --- # Model description The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained ['MiniLM-L12'](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as assistance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector that captures the semantic information of the sentence. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. ## How to use Here is how to use this model to get the features of a given text using the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library: ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v4_MiniLM-L12') text = "Replace me by any text you'd like." text_embedding = model.encode(text) # array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106, # -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...], # dtype=float32) ``` # Training procedure ## Pre-training We use the pretrained ['MiniLM-L12'](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased). Please refer to the model card for more detailed information about the pre-training procedure. ## Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between all possible sentence pairs in the batch. We then apply the cross-entropy loss, treating the true pairs as the positive labels. ### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 540k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository. ### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. Each dataset was sampled with a weighted probability; the configuration is detailed in the `data_config.json` file. 
| Dataset | Paper | Number of training tuples | |:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:| | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [COCO 2020](COCO 2020) | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [SPECTER](https://github.com/allenai/specter) | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [S2ORC](https://github.com/allenai/s2orc) Title/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [S2ORC](https://github.com/allenai/s2orc) Citation/Citation | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) Citation/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | SearchQA | - | 582,261 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Question | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [MS 
MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [Reddit conversationnal](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | total | | 1,097,953,922 |
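As mentioned under intended uses, the embeddings can be used for clustering. The toy example below (which assumes scikit-learn is installed) groups four sentences into two clusters with k-means over the 384-dimensional embeddings.

```python
# Toy clustering example on top of the sentence embeddings.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer("flax-sentence-embeddings/all_datasets_v4_MiniLM-L12")

sentences = [
    "The cat sits on the mat.",
    "A dog is playing in the garden.",
    "Stock markets fell sharply today.",
    "Investors are worried about inflation.",
]

embeddings = model.encode(sentences)                # shape: (4, 384)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
print(labels)  # e.g. [0 0 1 1]: animal sentences vs. finance sentences
```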
BigBoy/model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en --- # Model description The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained ['MiniLM-L6-H384-uncased'](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as assistance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector that captures the semantic information of the sentence. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. ## How to use Here is how to use this model to get the features of a given text using the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library: ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v4_MiniLM-L6') text = "Replace me by any text you'd like." text_embedding = model.encode(text) # array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106, # -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...], # dtype=float32) ``` # Training procedure ## Pre-training We use the pretrained ['MiniLM-L6-H384-uncased'](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased), which is a 6-layer version of ['microsoft/MiniLM-L12-H384-uncased'](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) obtained by keeping only every second layer. Please refer to the model card for more detailed information about the pre-training procedure. ## Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between all possible sentence pairs in the batch. We then apply the cross-entropy loss, treating the true pairs as the positive labels. ### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 540k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository. ### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. Each dataset was sampled with a weighted probability; the configuration is detailed in the `data_config.json` file, and a toy sketch of this sampling scheme follows the table below. 
| Dataset | Paper | Number of training tuples | |:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:| | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [COCO 2020](COCO 2020) | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [SPECTER](https://github.com/allenai/specter) | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [S2ORC](https://github.com/allenai/s2orc) Title/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [S2ORC](https://github.com/allenai/s2orc) Citation/Citation | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) Citation/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | SearchQA | - | 582,261 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Question | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [MS 
MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [Reddit conversationnal](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | total | | 1,097,953,922 |
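The weighted sampling mentioned in the training-data section can be pictured with the toy sketch below. The dataset names and weights are hypothetical, not the values from `data_config.json`, and the per-batch sampling strategy is an assumption about how such a configuration is typically used.

```python
# Toy sketch of weighted dataset sampling (hypothetical weights).
import random

dataset_weights = {
    "reddit": 60,
    "s2orc_citations": 20,
    "paq": 10,
    "wikianswers": 10,
}

names = list(dataset_weights)
weights = [dataset_weights[name] for name in names]

def pick_dataset_for_batch() -> str:
    """Choose which dataset the next batch is drawn from."""
    return random.choices(names, weights=weights, k=1)[0]

print([pick_dataset_for_batch() for _ in range(5)])
```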
BigSalmon/BertaMyWorda
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # mpnet_stackexchange_v1 ## Model Description SentenceTransformers is a set of models and frameworks that enable training and generating sentence embeddings from given data. The generated sentence embeddings can be utilized for clustering, semantic search and other tasks. We used a pretrained [mpnet-base](https://huggingface.co/microsoft/mpnet-base) model and trained it using a Siamese network setup and a contrastive learning objective. Question and answer pairs from StackExchange were used as training data to make the model robust to question/answer embedding similarity. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as assistance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector that captures the semantic information of the sentence. The sentence vector may be used for semantic search, clustering or sentence similarity tasks. ## How to use Here is how to use this model to get the features of a given text using the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library: ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('flax-sentence-embeddings/mpnet_stackexchange_v1') text = "Replace me by any question / answer you'd like." text_embedding = model.encode(text) # array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106, # -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...], # dtype=float32) ``` # Training procedure ## Pre-training We use the pretrained [`Mpnet-base`](https://huggingface.co/microsoft/mpnet-base). Please refer to the model card for more detailed information about the pre-training procedure. ## Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between all possible sentence pairs in the batch. We then apply the cross-entropy loss, treating the true pairs as the positive labels. ### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 80k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository. ### Training data We used a concatenation of multiple StackExchange question-answer datasets to fine-tune our model. Each StackExchange subset was sampled with a weighted probability given by the following equation. ``` int((stackexchange_length[path] / total_stackexchange_length) * total_weight) ``` MSMARCO, NQ & other question-answer datasets were also used. The sampling ratio of StackExchange to the remaining datasets was 2 : 1. A small numeric sketch of this weighting formula is given after the table below. 
| Dataset | Paper | Number of training tuples | |:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:| | [Stack Exchange QA - Title & Answer](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl) | - | 4,750,619 | | [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | SearchQA | - | 582,261 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
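The weighting formula quoted in the training-data section can be checked with toy numbers. The sub-forum names and pair counts below are made up; only the formula itself comes from the card.

```python
# Numeric sketch of the StackExchange weighting formula (toy counts).
stackexchange_length = {
    "stackoverflow": 1_000_000,
    "math": 300_000,
    "superuser": 200_000,
}
total_stackexchange_length = sum(stackexchange_length.values())
total_weight = 100

weights = {
    path: int((stackexchange_length[path] / total_stackexchange_length) * total_weight)
    for path in stackexchange_length
}
print(weights)  # {'stackoverflow': 66, 'math': 20, 'superuser': 13}
```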
BigSalmon/BlankSlots
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
4
"2021-07-17T04:21:57Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # multi-QA_v1-mpnet-asymmetric-Q ## Model Description SentenceTransformers is a set of models and frameworks that enable training and generating sentence embeddings from given data. The generated sentence embeddings can be utilized for clustering, semantic search and other tasks. We used two separate pretrained [mpnet-base](https://huggingface.co/microsoft/mpnet-base) models and trained them using a contrastive learning objective. Question and answer pairs from StackExchange and other datasets were used as training data to make the model robust to question/answer embedding similarity. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as assistance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses This model set is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector that captures the semantic information of the sentence. The sentence vector may be used for semantic search, clustering or sentence similarity tasks. The two models should be used in conjunction for semantic search purposes. 1. [multi-QA_v1-mpnet-asymmetric-Q](https://huggingface.co/flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-Q) - Model to encode Questions 2. [multi-QA_v1-mpnet-asymmetric-A](https://huggingface.co/flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-A) - Model to encode Answers ## How to use Here is how to use this model to get the features of a given text using the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library: ```python from sentence_transformers import SentenceTransformer, util model_Q = SentenceTransformer('flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-Q') model_A = SentenceTransformer('flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-A') question = "Replace me by any question you'd like." question_embedding = model_Q.encode(question) answer = "Replace me by any answer you'd like." answer_embedding = model_A.encode(answer) answer_likeliness = util.cos_sim(question_embedding, answer_embedding) ``` # Training procedure ## Pre-training We use the pretrained [`Mpnet-base`](https://huggingface.co/microsoft/mpnet-base). Please refer to the model card for more detailed information about the pre-training procedure. ## Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between all possible sentence pairs in the batch. We then apply the cross-entropy loss, treating the true pairs as the positive labels. ### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 80k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository. 
### Training data We used the concatenation from multiple Stackexchange Question-Answer datasets to fine-tune our model. MSMARCO, NQ & other question-answer datasets were also used. | Dataset | Paper | Number of training tuples | |:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:| | [Stack Exchange QA - Title & Answer](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl) | - | 4,750,619 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 | | [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | SearchQA | - | 582,261 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
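Building on the usage snippet above, here is a slightly larger example that ranks a handful of candidate answers for one question with the asymmetric encoder pair. The question and answers are invented for illustration.

```python
# Ranking candidate answers for a question with the asymmetric Q/A encoders.
from sentence_transformers import SentenceTransformer, util

model_Q = SentenceTransformer("flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-Q")
model_A = SentenceTransformer("flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-A")

question = "How can I open a JSON file in Python?"
candidate_answers = [
    "Use the json module: json.load(open('file.json')) returns a dictionary.",
    "A JSON file stores data as structured text.",
    "You can open a bank account online in a few minutes.",
]

q_emb = model_Q.encode(question, convert_to_tensor=True)
a_embs = model_A.encode(candidate_answers, convert_to_tensor=True)

scores = util.cos_sim(q_emb, a_embs)[0]
ranked = sorted(zip(candidate_answers, scores.tolist()), key=lambda pair: -pair[1])
for answer, score in ranked:
    print(f"{score:.3f}  {answer}")
```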
BigSalmon/Flowberta
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # multi-qa_v1-MiniLM-L6-mean_cos ## Model Description SentenceTransformers is a set of models and frameworks that enable training and generating sentence embeddings from given data. The generated sentence embeddings can be utilized for clustering, semantic search and other tasks. We used a pretrained [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and trained it using a Siamese network setup and a contrastive learning objective. Question and answer pairs from StackExchange were used as training data to make the model robust to question/answer embedding similarity. For this model, mean pooling of the hidden states was used to produce the sentence embeddings. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as assistance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector that captures the semantic information of the sentence. The sentence vector may be used for semantic search, clustering or sentence similarity tasks. ## How to use Here is how to use this model to get the features of a given text using the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library: ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('flax-sentence-embeddings/multi-qa_v1-MiniLM-L6-mean_cos') text = "Replace me by any question / answer you'd like." text_embedding = model.encode(text) # array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106, # -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...], # dtype=float32) ``` # Training procedure ## Pre-training We use the pretrained [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased). Please refer to the model card for more detailed information about the pre-training procedure. ## Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between all possible sentence pairs in the batch. We then apply the cross-entropy loss, treating the true pairs as the positive labels. ### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 80k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository. ### Training data We used a concatenation of multiple StackExchange question-answer datasets to fine-tune our model. MSMARCO, NQ & other question-answer datasets were also used. 
| Dataset | Paper | Number of training tuples | |:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:| | [Stack Exchange QA - Title & Answer](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl) | - | 4,750,619 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 | | [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | SearchQA | - | 582,261 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
BigSalmon/FormalBerta2
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# multi-qa_v1-distilbert-mean_cos

## Model Description

SentenceTransformers is a set of models and frameworks for training and generating sentence embeddings from given data. The generated sentence embeddings can be used for clustering, semantic search and other tasks. We took the pretrained [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model and trained it in a Siamese network setup with a contrastive learning objective. Question and answer pairs from StackExchange were used as training data to make the model robust to question/answer embedding similarity. For this model, mean pooling of the hidden states was used to produce the sentence embeddings.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as assistance from Google’s Flax, JAX, and Cloud team members on efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector which captures the sentence's semantic information. The sentence vector may be used for semantic search, clustering or sentence similarity tasks.

## How to use

Here is how to use this model to get the features of a given text with the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('flax-sentence-embeddings/multi-qa_v1-distilbert-mean_cos')
text = "Replace me by any question / answer you'd like."
text_embedding = model.encode(text)
# array([-0.01559514,  0.04046123,  0.1317083 ,  0.00085931,  0.04585106,
#        -0.05607086,  0.0138078 ,  0.03569756,  0.01420381,  0.04266302 ...],
#       dtype=float32)
```

# Training procedure

## Pre-training

We use the pretrained [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased). Please refer to its model card for more detailed information about the pre-training procedure.

## Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between every possible sentence pair in the batch, then apply a cross-entropy loss that identifies the true pairs.

### Hyper parameters

We trained our model on a TPU v3-8 for 80k steps using a batch size of 1024 (128 per TPU core), with a learning-rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository.

### Training data

We used a concatenation of multiple StackExchange question-answer datasets to fine-tune our model. MS MARCO, NQ and other question-answer datasets were also used.
| Dataset | Paper | Number of training tuples | |:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:| | [Stack Exchange QA - Title & Answer](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl) | - | 4,750,619 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 | | [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | SearchQA | - | 582,261 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
BigSalmon/FormalRobertaa
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# multi-qa_v1-mpnet-mean_cos

## Model Description

SentenceTransformers is a set of models and frameworks for training and generating sentence embeddings from given data. The generated sentence embeddings can be used for clustering, semantic search and other tasks. We took the pretrained [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) model and trained it in a Siamese network setup with a contrastive learning objective. Question and answer pairs from StackExchange were used as training data to make the model robust to question/answer embedding similarity. For this model, mean pooling of the hidden states was used to produce the sentence embeddings.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as assistance from Google’s Flax, JAX, and Cloud team members on efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector which captures the sentence's semantic information. The sentence vector may be used for semantic search, clustering or sentence similarity tasks.

## How to use

Here is how to use this model to get the features of a given text with the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('flax-sentence-embeddings/multi-qa_v1-mpnet-mean_cos')
text = "Replace me by any question / answer you'd like."
text_embedding = model.encode(text)
# array([-0.01559514,  0.04046123,  0.1317083 ,  0.00085931,  0.04585106,
#        -0.05607086,  0.0138078 ,  0.03569756,  0.01420381,  0.04266302 ...],
#       dtype=float32)
```

# Training procedure

## Pre-training

We use the pretrained [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base). Please refer to its model card for more detailed information about the pre-training procedure.

## Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between every possible sentence pair in the batch, then apply a cross-entropy loss that identifies the true pairs.

### Hyper parameters

We trained our model on a TPU v3-8 for 80k steps using a batch size of 1024 (128 per TPU core), with a learning-rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository.

### Training data

We used a concatenation of multiple StackExchange question-answer datasets to fine-tune our model. MS MARCO, NQ and other question-answer datasets were also used.
| Dataset | Paper | Number of training tuples | |:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:| | [Stack Exchange QA - Title & Answer](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl) | - | 4,750,619 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 | | [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | SearchQA | - | 582,261 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
BigSalmon/FormalRobertaaa
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
---

# Model description

The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained ['mpnet-base'](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a 700M sentence pairs dataset. We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as assistance from Google’s Flax, JAX, and Cloud team members on efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector which captures the sentence's semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.

## How to use

Here is how to use this model to get the features of a given text with the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('flax-sentence-embeddings/reddit_single-context_mpnet-base')
text = "Replace me by any text you'd like."
text_embedding = model.encode(text)
# array([-0.01559514,  0.04046123,  0.1317083 ,  0.00085931,  0.04585106,
#        -0.05607086,  0.0138078 ,  0.03569756,  0.01420381,  0.04266302 ...],
#       dtype=float32)
```

# Training procedure

## Pre-training

We use the pretrained ['mpnet-base'](https://huggingface.co/microsoft/mpnet-base). Please refer to its model card for more detailed information about the pre-training procedure.

## Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between every possible sentence pair in the batch, then apply a cross-entropy loss that identifies the true pairs.

### Hyper parameters

We trained our model on a TPU v3-8 for 540k steps using a batch size of 1024 (128 per TPU core), with a learning-rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository.

### Training data

We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 700M. We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file. We only use the first context/response pair when building the dataset.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| [Reddit conversational](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
BigSalmon/FroBurta
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
datasets:
- code_search_net
---

# flax-sentence-embeddings/st-codesearch-distilroberta-base

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

It was trained on the [code_search_net](https://huggingface.co/datasets/code_search_net) dataset and can be used to search program code given a text query.

## Usage:

```python
from sentence_transformers import SentenceTransformer, util

# This list defines the code snippets we want to search over
code = ["""def sort_list(x):
    return sorted(x)""",
"""def count_above_threshold(elements, threshold=0):
    counter = 0
    for e in elements:
        if e > threshold:
            counter += 1
    return counter""",
"""def find_min_max(elements):
    min_ele = 99999
    max_ele = -99999
    for e in elements:
        if e < min_ele:
            min_ele = e
        if e > max_ele:
            max_ele = e
    return min_ele, max_ele"""]

model = SentenceTransformer("flax-sentence-embeddings/st-codesearch-distilroberta-base")

# Encode our code into the vector space
code_emb = model.encode(code, convert_to_tensor=True)

# Interactive demo: Enter queries, and the method returns the best function from the
# 3 functions we defined
while True:
    query = input("Query: ")
    query_emb = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, code_emb)[0]
    top_hit = hits[0]

    print("Cossim: {:.2f}".format(top_hit['score']))
    print(code[top_hit['corpus_id']])
    print("\n\n")
```

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('flax-sentence-embeddings/st-codesearch-distilroberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```

## Training

The model was trained from a DistilRoBERTa-base checkpoint for 10k training steps on the CodeSearchNet dataset with batch_size 256 and MultipleNegativesRankingLoss.

This is a preliminary model: it has not been thoroughly evaluated, and the training setup was not particularly sophisticated.

The model was trained with the parameters:

**DataLoader**:

`MultiDatasetDataLoader.MultiDatasetDataLoader` of length 5371 with parameters:
```
{'batch_size': 256}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20, 'similarity_fct': 'dot_score'}
```

Parameters of the fit()-Method:
```
{
    "callback": null,
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "warmupconstant",
    "steps_per_epoch": 10000,
    "warmup_steps": 500,
    "weight_decay": 0.01
}
```

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
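For readers who want to reproduce a comparable setup, here is a rough sentence-transformers sketch that mirrors the loss and fit() parameters listed in the card above. It is only an approximation: the original run used a custom `MultiDatasetDataLoader` over CodeSearchNet with batch size 256, while the (query, code) pairs below are made-up stand-ins.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, models, util

# Build the architecture shown under "Full Model Architecture": transformer + mean pooling + normalize
word_embedding_model = models.Transformer("distilroberta-base", max_seq_length=128)
pooling = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode_mean_tokens=True)
model = SentenceTransformer(modules=[word_embedding_model, pooling, models.Normalize()])

# Hypothetical (query, code) pairs standing in for the CodeSearchNet training data
train_examples = [
    InputExample(texts=["sort a list", "def sort_list(x):\n    return sorted(x)"]),
    InputExample(texts=["count items above a threshold",
                        "def count_above(xs, t=0):\n    return sum(1 for x in xs if x > t)"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)  # original run: batch_size 256

# MultipleNegativesRankingLoss with the scale / similarity function reported above
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20, similarity_fct=util.dot_score)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    scheduler="warmupconstant",
    warmup_steps=500,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```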
BigSalmon/GPTHeHe
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: reddit-bert-text3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # reddit-bert-text3 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5346 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.1924 | 1.0 | 981 | 2.6541 | | 2.7158 | 2.0 | 1962 | 2.5480 | | 2.6583 | 3.0 | 2943 | 2.5072 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
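For context, the hyperparameters listed above map directly onto the Hugging Face `TrainingArguments` API. The sketch below shows only that mapping; the dataset, data collator and model setup are omitted because the card does not describe them, and the output directory name is an assumption.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="reddit-bert-text3",      # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```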
BigSalmon/GPTIntro
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2021-12-15T08:05:47Z"
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: reddit-bert-text4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # reddit-bert-text4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4763 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.1071 | 1.0 | 978 | 2.6170 | | 2.6788 | 2.0 | 1956 | 2.5332 | | 2.6112 | 3.0 | 2934 | 2.4844 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
BigSalmon/GPTNeo350MInformalToFormalLincoln3
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
"2021-12-18T11:26:38Z"
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: reddit-bert-text5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # reddit-bert-text5 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5749 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.0257 | 1.0 | 945 | 2.6167 | | 2.7138 | 2.0 | 1890 | 2.5529 | | 2.6363 | 3.0 | 2835 | 2.5463 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
BigSalmon/GPTNeo350MInformalToFormalLincoln4
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
"2022-01-12T21:04:08Z"
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: youtube-bert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # youtube-bert This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4771 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.691 | 1.0 | 1077 | 2.5445 | | 2.5768 | 2.0 | 2154 | 2.5226 | | 2.5227 | 3.0 | 3231 | 2.5027 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu113 - Datasets 1.17.0 - Tokenizers 0.10.3
BigSalmon/GPTT
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
# Cheapity3 🐷 GPT-like T5 model trained to generate text in multiple languages. ## Motivation - GPT models are expensive to run. - GPT models are monolingual. ## Solution - Maybe, Small Models aren't Terrible (*SMarT*) - Plus, they are cheaper to run. I fine-tuned T5 on multiple languages (🇬🇧 English, 🇩🇪 German, 🇫🇷 French) and multiple academic text snippets from various domains like tech, law, finance and science etc. to generate text, just like GPT models do. ## Usage - [NLPlayStore](https://github.com/flexudy/NLPlayStore) 👈 ```python from store.service_management import ServiceManager service_manager = ServiceManager().get_service("cheapity3") service.install() service = service.launch() input_text = "The mechanical engineering field requires ... " generated_texts = service.play(input_text, 15) # A list a generated text ``` ## Usage - Hugging Face Transformers 🤗 - Provide some text e.g `"Italy, officially the Italian Republic is a country consisting of"` - Tell Cheapity3 how many words you want to generate e.g `15` -- 😃 Yes, you can control the length. - Cheapity3 reads your text and generates a continuation containing approximately 15 words. ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("flexudy/cheapity3") model = AutoModelWithLMHead.from_pretrained("flexudy/cheapity3") input_text = """The mechanical engineering field requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, structural analysis, and electricity. { _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ }""" # 15 words inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=512) input_ids = inputs["input_ids"] attention_mask = inputs["attention_mask"] outputs = model.generate( input_ids=input_ids, attention_mask=attention_mask, max_length=128, do_sample=True, early_stopping=True, num_return_sequences=4, repetition_penalty=2.5 ) for i in range(4): print(tokenizer.decode(outputs[i], skip_special_tokens=True, clean_up_tokenization_spaces=True)) ``` **INPUT: The mechanical engineering field requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, structural analysis, and electricity.** ``` > Cheapity3 continues with beam search: ... The field of mechanical engineering is a broad field that includes many core areas of engineering. > Cheapity3 continues with sampling and top_k=50: ... Developing the knowledge base for these core areas will enable engineers to build their capabilities rapidly and efficiently. ... ... The field of mechanics offers a variety and broad range for applications throughout the engineering/technological fields. ... ... Mechanics generally is not understood by students. While they can be employed in the field, mechanical engineering ... ... Introduction to mechanical engineering and core fields including chemical products, materials science, structural analysis, and geomatics ... ``` ## Pretty decent right? Hence, whenever you feel like GPT3 is too expensive, Cheapity3 comes to the rescue 🤗. ## Model Training FYI - T5-base model - Trained on ONLY 1M sentences from English, French and German text - Mostly text from Wikipedia, arxiv and QA datasets - Learning rate: 0.00003 - 2 epochs - Max input: 512 tokens - Max output: 128 tokens
BigSalmon/GoodMaskResults
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
"2021-09-14T13:14:35Z"
# Towards Neuro-Symbolic Language Understanding ![alt text](https://www.flexudy.com/wp-content/uploads/2021/09/conceptor.png "Flexudy's conceptor") At [Flexudy](https://flexudy.com), we look for ways to unify symbolic and sub-symbolic methods to improve model interpretation and inference. ## Problem 1. Word embeddings are awesome 🚀. However, no one really knows what an array of 768 numbers means? 2. Text/Token classification is also awesome ❤️‍. Still, classifying things into a finite set of concepts is rather limited. 3. Last but not least, how do I know that the word *cat* is a **mammal** and also an **animal** if my neural network is only trained to predict whether something is an animal or not? ## Solution 1. It would be cool if my neural network would just know that **cat** is an **animal** right? *∀x.Cat(x) ⇒ Animal(x)*. Or for example, (*∀x.SchöneBlumen(x) ⇒ Blumen(x)*) -- English meaning: For all x, If x is a beautiful flower, then x is still a flower. -- 2. All of a sudden, tasks like **Question Answering**, **Summarization**, **Named Entity Recognition** or even **Intent Classification** etc become easier right? Well, one might probably still need time to build a good and robust solution that is not as large as **GPT3**. Like [Peter Gärdenfors, author of conceptual spaces](https://www.goodreads.com/book/show/1877443.Conceptual_Spaces), we are trying to find ways to navigate between the symbolic and the sub-symbolic by thinking in concepts. Should such a solution exist, one could easily leverage true logical reasoning engines on natural language. How awesome would that be? 💡 ## Flexudy's Conceptor 1. We developed a poor man's implementation of the ideal solution described above. 2. Though it is a poor man's model, **it is still a useful one** 🤗. ### Usage No library should anyone suffer. Especially not if it is built on top of 🤗 **HF Transformers**. Go to the [Github repo](https://github.com/flexudy/natural-language-logic) `pip install git+https://github.com/flexudy/[email protected]` ```python from flexudy.conceptor.start import FlexudyConceptInferenceMachineFactory # Load me only once concept_inference_machine = FlexudyConceptInferenceMachineFactory.get_concept_inference_machine() # A list of terms. terms = ["cat", "dog", "economics and sociology", "public company"] # If you don't pass the language, a language detector will attempt to predict it for you # If any error occurs, the language defaults to English. language = "en" # Predict concepts # You can also pass the batch_size=2 and the beam_size=4 concepts = concept_inference_machine.infer_concepts(terms, language=language) ``` Output: ```python {'cat': ['mammal', 'animal'], 'dog': ['hound', 'animal'], 'economics and sociology': ['both fields of study'], 'public company': ['company']} ``` ### How was it trained? 1. Using Google's T5-base and T5-small. Both models are released on the Hugging Face Hub. 2. T5-base was trained for only two epochs while T5-small was trained for 5 epochs. ## Where did you get the data? 1. I extracted and curated a fragment of [Conceptnet](https://conceptnet.io/) 2. In particular, only the IsA relation was used. 3. Note that one term can belong to multiple concepts (which is pretty cool if you think about [Fuzzy Description Logics](https://lat.inf.tu-dresden.de/~stefborg/Talks/QuantLAWorkshop2013.pdf)). Multiple inheritances however mean some terms belong to so many concepts. Hence, I decided to randomly throw away some due to the **maximum length limitation**. ### Setup 1. 
I finally allowed only `2` to `4` concepts, chosen at random, for each term. This means there is still great potential to make the models generalise better 🚀.
2. I used a total of `279884` training examples and `1260` for testing. Edges -- i.e. `IsA(concept u, concept v)` -- in both sets are disjoint.
3. Trained for `15K` steps with a linear learning-rate decay at each step, starting at `0.001`.
4. Used the `RAdam` optimiser with weight_decay = `0.01` and batch_size = `36`.
5. Source and target max length were both `64`.

### Multilingual Models

1. The "conceptor" model is multilingual. English, German and French are supported.
2. [Conceptnet](https://conceptnet.io/) supports many languages, but I just chose those three because those are the ones I speak.

### Metrics for flexudy-conceptor-t5-base

| Metric | Score |
| ------------- |:-------------:|
| Exact Match | 36.67 |
| F1 | 43.08 |
| Loss smooth | 1.214 |

Unfortunately, we no longer have the metrics for flexudy-conceptor-t5-small. If I recall correctly, base was just slightly better on the test set (ca. `2%` F1).

## Why not just use the data if you have it structured already?

Conceptnet is very large. Even if you only consider loading a fragment into your RAM, say with 100K edges, this is still a large graph -- especially if you think about how you will store the node embeddings efficiently for querying. If you prefer this approach, [Milvus](https://github.com/milvus-io/pymilvus) can be of great help. You can compute query embeddings and try to find the best match. From there (after matching), you can navigate through the graph at `100%` precision.
BigSalmon/InformalToFormalLincoln16
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
"2021-09-10T01:00:43Z"
--- tags: conversational --- @Rick from Rick and Morty GPT-2 Conversation Model ---
BigSalmon/InformalToFormalLincoln20
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
---
language: de
tags:
- grammar
widget:
- text: "correct german grammar: es ist schön so viele tolle menschen um sich zu haben denn ohne sie wäre es nicht so schön"
---

Example outputs:

input: ich liebe das leben --> output: Ich liebe das Leben.

input: es ist schön so viele tolle menschen um sich zu haben denn ohne sie wäre es nicht so schön --> output: Es ist schön, so viele tolle Menschen, um sich zu haben, denn ohne sie wäre es nicht so schön.

input: der kunde hat ausdrücklich nach dirk verlangt weil er den rabatt haben möchte --> output: Der Kunde hat ausdrücklich nach Dirk verlangt, weil er den Rabatt haben möchte.

The training data can be prepared like this: `broken_text` is used as the input, while `text` is the target output.

```python
import re
import random

import phonetics

chars_to_ignore_regex = "[^A-Za-z0-9\ö\ä\ü\Ö\Ä\Ü\ß\-,;.:?! ]+"
broken_chars_to_ignore_regex = "[^A-Za-z0-9\ö\ä\ü\Ö\Ä\Ü\ß\- ]+"


def do_manipulation(string):
    # Clean the target text, then lower-case it and strip punctuation to create the "broken" input
    text = re.sub(chars_to_ignore_regex, '', string)
    broken_text = re.sub(broken_chars_to_ignore_regex, "", text.lower())

    # With 50% probability, additionally corrupt a few randomly chosen words
    if random.randint(0, 100) >= 50:
        for _ in range(int(len(broken_text.split(" ")) / 4)):
            if random.randint(0, 100) > 30:
                randc = random.choice(broken_text.split(" "))
                if random.randint(0, 10) > 4:
                    # Replace the word with random characters of the same length
                    broken_text = broken_text.replace(
                        randc,
                        ''.join(random.choice('abcdefghijklmnopqrstuvwxyz') for _ in range(len(randc))).lower()
                    )
                else:
                    # Replace the word with its phonetic (Metaphone) representation
                    broken_text = broken_text.replace(randc, phonetics.metaphone(randc).lower())

    return text, broken_text
```
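To actually run the grammar-correction model on the widget prompt above, a minimal seq2seq inference sketch looks like the following. The repository id is a placeholder, since the card does not state it; substitute the id of this model.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "<this-model-repo-id>"  # placeholder: the card does not state the repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Prefix the lower-cased, unpunctuated sentence exactly as in the widget example
text = "correct german grammar: es ist schön so viele tolle menschen um sich zu haben denn ohne sie wäre es nicht so schön"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```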
BigSalmon/InformalToFormalLincoln22
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - generated_from_trainer model-index: - name: t5-skills results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-skills This model is a fine-tuned version of [flozi00/t5-skills](https://huggingface.co/flozi00/t5-skills) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.12.5 - Pytorch 1.8.1 - Datasets 1.14.0 - Tokenizers 0.10.2
BigSalmon/InformalToFormalLincoln23
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
"2021-12-13T18:58:12Z"
--- language: de datasets: - common_voice metrics: - wer - cer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week - hf-asr-leaderboard license: apache-2.0 model-index: - name: XLSR Wav2Vec2 German with LM by Florian Zimmermeister @A\\Ware results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice de type: common_voice args: de metrics: - name: Test WER type: wer value: 5.7467896819046755 - name: Test CER type: cer value: 1.8980142607670552 --- **Test Result** | Model | WER | CER | | ------------- | ------------- | ------------- | | flozi00/wav2vec2-large-xlsr-53-german-with-lm | **5.7467896819046755%** | **1.8980142607670552%** | ## Evaluation The model can be evaluated as follows on the German test data of Common Voice. ```python import torchaudio.functional as F import torch from transformers import AutoModelForCTC, AutoProcessor import re from datasets import load_dataset, load_metric CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞", "؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]", "{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。", "、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽", "『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"] chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]" counter = 0 wer_counter = 0 cer_counter = 0 def main(): model = AutoModelForCTC.from_pretrained("flozi00/wav2vec2-large-xlsr-53-german-with-lm") processor = AutoProcessor.from_pretrained("flozi00/wav2vec2-large-xlsr-53-german-with-lm") wer = load_metric("wer") cer = load_metric("cer") ds = load_dataset("common_voice", "de", split="test") #ds = ds.select(range(100)) def calculate_metrics(batch): global counter, wer_counter, cer_counter resampled_audio = F.resample(torch.tensor(batch["audio"]["array"]), 48_000, 16_000).numpy() input_values = processor(resampled_audio, return_tensors="pt", sampling_rate=16_000).input_values with torch.no_grad(): logits = model(input_values).logits.numpy()[0] decoded = processor.decode(logits) pred = decoded.text ref = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper() wer_result = wer.compute(predictions=[pred], references=[ref]) cer_result = cer.compute(predictions=[pred], references=[ref]) counter += 1 wer_counter += wer_result cer_counter += cer_result print(f"WER: {(wer_counter/counter)*100} | CER: {(cer_counter/counter)*100}") return batch ds.map(calculate_metrics, remove_columns=ds.column_names) main() ``` Credits: The Acoustic model is an copy of [jonatasgrosman's model](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german) I used to train an matching kenlm language model for
BigSalmon/InformalToFormalLincoln25
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
### Model Description

GPT-J 6B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-J refers to the class of models, while 6B represents the number of parameters of this particular pre-trained model. The original GPT-J-6B model is trained on TPUs, which are not easy for ordinary users to access. Thus, through a conversion script, we convert the TPU version of GPT-J-6B into a GPU version, which can be loaded and fine-tuned on GPUs. In our tests, the model can be loaded on a single GPU with 16 GB of memory for inference. For fine-tuning, we used 8 x 32 GB GPUs with the DeepSpeed library to distribute the model, data and gradients, in order to accommodate the huge number of model parameters.
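As a rough illustration of the single-GPU inference scenario described above, the model can be loaded in half precision so the roughly 6B parameters fit into 16 GB of memory. The repository id below is a placeholder for wherever the converted GPU checkpoint is hosted, and the prompt is arbitrary.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "<converted-gpt-j-6b-checkpoint>"  # placeholder: substitute the actual repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
# float16 weights keep the 6B-parameter model within a 16 GB GPU for inference
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "GPT-J 6B is a transformer model that"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```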
BigSalmon/Lincoln4
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
"2021-09-10T08:18:03Z"
--- tags: - text2text-generation - Chinese - seq2seq - BART language: zh --- # Chinese BART-Base ### News **12/30/2022** An updated version of CPT & Chinese BART are released. In the new version, we changed the following parts: - **Vocabulary** We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV. - **Position Embeddings** We extend the max_position_embeddings from 512 to 1024. We initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1. The result compared to the previous checkpoints is as followings: | | AFQMC | IFLYTEK | CSL-sum | LCSTS | AVG | | :--------- | :---: | :-----: | :-----: | :---: | :---: | | Previous | | | | | | | bart-base | 73.0 | 60 | 62.1 | 37.8 | 58.23 | | cpt-base | 75.1 | 60.5 | 63.0 | 38.2 | 59.20 | | bart-large | 75.7 | 62.1 | 64.2 | 40.6 | 60.65 | | cpt-large | 75.9 | 61.8 | 63.7 | 42.0 | 60.85 | | Updataed | | | | | | | bart-base | 73.03 | 61.25 | 61.51 | 38.78 | 58.64 | | cpt-base | 74.40 | 61.23 | 62.09 | 38.81 | 59.13 | | bart-large | 75.81 | 61.52 | 64.62 | 40.90 | 60.71 | | cpt-large | 75.97 | 61.63 | 63.83 | 42.08 | 60.88 | The result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters. - Note that to use updated models, please update the `modeling_cpt.py` (new version download [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) and the vocabulary (refresh the cache). ## Model description This is an implementation of Chinese BART-Base. [**CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation**](https://arxiv.org/pdf/2109.05729.pdf) Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu **Github Link:** https://github.com/fastnlp/CPT ## Usage ```python >>> from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline >>> tokenizer = BertTokenizer.from_pretrained("fnlp/bart-base-chinese") >>> model = BartForConditionalGeneration.from_pretrained("fnlp/bart-base-chinese") >>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer) >>> text2text_generator("北京是[MASK]的首都", max_length=50, do_sample=False) [{'generated_text': '北 京 是 中 国 的 首 都'}] ``` **Note: Please use BertTokenizer for the model vocabulary. 
DO NOT use original BartTokenizer.** ## Citation ```bibtex @article{shao2021cpt, title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation}, author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu}, journal={arXiv preprint arXiv:2109.05729}, year={2021} } ```
BigSalmon/MrLincoln
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
"2021-09-10T09:03:34Z"
--- tags: - text2text-generation - Chinese - seq2seq language: zh --- # Chinese BART-Large ### News **12/30/2022** An updated version of CPT & Chinese BART are released. In the new version, we changed the following parts: - **Vocabulary** We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV. - **Position Embeddings** We extend the max_position_embeddings from 512 to 1024. We initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1. The result compared to the previous checkpoints is as followings: | | AFQMC | IFLYTEK | CSL-sum | LCSTS | AVG | | :--------- | :---: | :-----: | :-----: | :---: | :---: | | Previous | | | | | | | bart-base | 73.0 | 60 | 62.1 | 37.8 | 58.23 | | cpt-base | 75.1 | 60.5 | 63.0 | 38.2 | 59.20 | | bart-large | 75.7 | 62.1 | 64.2 | 40.6 | 60.65 | | cpt-large | 75.9 | 61.8 | 63.7 | 42.0 | 60.85 | | Updataed | | | | | | | bart-base | 73.03 | 61.25 | 61.51 | 38.78 | 58.64 | | cpt-base | 74.40 | 61.23 | 62.09 | 38.81 | 59.13 | | bart-large | 75.81 | 61.52 | 64.62 | 40.90 | 60.71 | | cpt-large | 75.97 | 61.63 | 63.83 | 42.08 | 60.88 | The result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters. - Note that to use updated models, please update the `modeling_cpt.py` (new version download [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) and the vocabulary (refresh the cache). ## Model description This is an implementation of Chinese BART-Large. [**CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation**](https://arxiv.org/pdf/2109.05729.pdf) Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu **Github Link:** https://github.com/fastnlp/CPT ## Usage ```python >>> from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline >>> tokenizer = BertTokenizer.from_pretrained("fnlp/bart-large-chinese") >>> model = BartForConditionalGeneration.from_pretrained("fnlp/bart-large-chinese") >>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer) >>> text2text_generator("北京是[MASK]的首都", max_length=50, do_sample=False) [{'generated_text': '北 京 是 中 华 人 民 共 和 国 的 首 都'}] ``` **Note: Please use BertTokenizer for the model vocabulary. 
DO NOT use original BartTokenizer.** ## Citation ```bibtex @article{shao2021cpt, title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation}, author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu}, journal={arXiv preprint arXiv:2109.05729}, year={2021} } ```
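For readers who prefer to skip the pipeline wrapper, a minimal generation sketch follows; it reuses only the classes and checkpoint id already named in this card, and the beam-search settings are illustrative rather than recommended values.

```python
import torch
from transformers import BertTokenizer, BartForConditionalGeneration

# Use BertTokenizer, as noted above -- not the original BartTokenizer.
tokenizer = BertTokenizer.from_pretrained("fnlp/bart-large-chinese")
model = BartForConditionalGeneration.from_pretrained("fnlp/bart-large-chinese")

input_ids = tokenizer("北京是[MASK]的首都", return_tensors="pt").input_ids
with torch.no_grad():
    output_ids = model.generate(input_ids, num_beams=4, max_length=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```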
BigSalmon/MrLincoln10
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
"2021-09-25T11:01:01Z"
--- tags: - fill-mask - text2text-generation - text-classification - Summarization - Chinese - CPT - BART - BERT - seq2seq language: zh --- # Chinese CPT-Base ### News **12/30/2022** An updated version of CPT & Chinese BART is released. In the new version, we changed the following parts: - **Vocabulary** We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV. - **Position Embeddings** We extend the max_position_embeddings from 512 to 1024. We initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied, and other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1. The result compared to the previous checkpoints is as follows: | | AFQMC | IFLYTEK | CSL-sum | LCSTS | AVG | | :--------- | :---: | :-----: | :-----: | :---: | :---: | | Previous | | | | | | | bart-base | 73.0 | 60 | 62.1 | 37.8 | 58.23 | | cpt-base | 75.1 | 60.5 | 63.0 | 38.2 | 59.20 | | bart-large | 75.7 | 62.1 | 64.2 | 40.6 | 60.65 | | cpt-large | 75.9 | 61.8 | 63.7 | 42.0 | 60.85 | | Updated | | | | | | | bart-base | 73.03 | 61.25 | 61.51 | 38.78 | 58.64 | | cpt-base | 74.40 | 61.23 | 62.09 | 38.81 | 59.13 | | bart-large | 75.81 | 61.52 | 64.62 | 40.90 | 60.71 | | cpt-large | 75.97 | 61.63 | 63.83 | 42.08 | 60.88 | The results show that the updated models maintain comparable performance to the previous checkpoints. There are still some cases in which the updated model is slightly worse than the previous one, for the following reasons: 1) training for a few additional steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but are sensitive to the fine-tuning hyperparameters. - Note that to use the updated models, please update `modeling_cpt.py` (new version download [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) and the vocabulary (refresh the cache). ## Model description This is an implementation of CPT-Base. To use CPT, please import the file `modeling_cpt.py` (**Download** [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) that defines the architecture of CPT into your project. [**CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation**](https://arxiv.org/pdf/2109.05729.pdf) Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu **Github Link:** https://github.com/fastnlp/CPT ## Usage ```python >>> from modeling_cpt import CPTForConditionalGeneration >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-base") >>> model = CPTForConditionalGeneration.from_pretrained("fnlp/cpt-base") >>> input_ids = tokenizer.encode("北京是[MASK]的首都", return_tensors='pt') >>> pred_ids = model.generate(input_ids, num_beams=4, max_length=20) >>> print(tokenizer.convert_ids_to_tokens(pred_ids[0])) ['[SEP]', '[CLS]', '北', '京', '是', '中', '国', '的', '首', '都', '[SEP]'] ``` **Note: Please use BertTokenizer for the model vocabulary. 
DO NOT use original BartTokenizer.** ## Citation ```bibtex @article{shao2021cpt, title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation}, author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu}, journal={arXiv preprint arXiv:2109.05729}, year={2021} } ```
BigSalmon/MrLincoln11
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - fill-mask - text2text-generation - fill-mask - text-classification - Summarization - Chinese - CPT - BART - BERT - seq2seq language: zh --- # Chinese CPT-Large ### News **12/30/2022** An updated version of CPT & Chinese BART are released. In the new version, we changed the following parts: - **Vocabulary** We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV. - **Position Embeddings** We extend the max_position_embeddings from 512 to 1024. We initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1. The result compared to the previous checkpoints is as followings: | | AFQMC | IFLYTEK | CSL-sum | LCSTS | AVG | | :--------- | :---: | :-----: | :-----: | :---: | :---: | | Previous | | | | | | | bart-base | 73.0 | 60 | 62.1 | 37.8 | 58.23 | | cpt-base | 75.1 | 60.5 | 63.0 | 38.2 | 59.20 | | bart-large | 75.7 | 62.1 | 64.2 | 40.6 | 60.65 | | cpt-large | 75.9 | 61.8 | 63.7 | 42.0 | 60.85 | | Updataed | | | | | | | bart-base | 73.03 | 61.25 | 61.51 | 38.78 | 58.64 | | cpt-base | 74.40 | 61.23 | 62.09 | 38.81 | 59.13 | | bart-large | 75.81 | 61.52 | 64.62 | 40.90 | 60.71 | | cpt-large | 75.97 | 61.63 | 63.83 | 42.08 | 60.88 | The result shows that the updated models maintain comparative performance compared with previous checkpoints. There are still some cases that the updated model is slightly worse than the previous one, which results from the following reasons: 1) Training additional a few steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but sensitive to the fine-tuning hyperparameters. - Note that to use updated models, please update the `modeling_cpt.py` (new version download [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) and the vocabulary (refresh the cache). ## Model description This is an implementation of CPT-Large. To use CPT, please import the file `modeling_cpt.py` (**Download** [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) that define the architecture of CPT into your project. [**CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation**](https://arxiv.org/pdf/2109.05729.pdf) Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu **Github Link:** https://github.com/fastnlp/CPT ## Usage ```python >>> from modeling_cpt import CPTForConditionalGeneration >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-large") >>> model = CPTForConditionalGeneration.from_pretrained("fnlp/cpt-large") >>> input_ids = tokenizer.encode("北京是[MASK]的首都", return_tensors='pt') >>> pred_ids = model.generate(input_ids, num_beams=4, max_length=20) >>> print(tokenizer.convert_ids_to_tokens(pred_ids[0])) ['[SEP]', '[CLS]', '北', '京', '是', '中', '国', '的', '首', '都', '[SEP]'] ``` **Note: Please use BertTokenizer for the model vocabulary. 
DO NOT use original BartTokenizer.** ## Citation ```bibtex @article{shao2021cpt, title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation}, author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu}, journal={arXiv preprint arXiv:2109.05729}, year={2021} } ```
BigSalmon/MrLincoln12
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - Multi-exit-BERT language: en datasets: - wikipedia - bookcorpus - c4 --- # ElasticBERT-BASE ## Model description This is an implementation of the `base` version of ElasticBERT. [**Towards Efficient NLP: A Standard Evaluation and A Strong Baseline**](https://arxiv.org/pdf/2110.07038.pdf) Xiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu ## Code link [**fastnlp/elasticbert**](https://github.com/fastnlp/ElasticBERT) ## Usage ```python >>> from transformers import BertTokenizer as ElasticBertTokenizer >>> from models.configuration_elasticbert import ElasticBertConfig >>> from models.modeling_elasticbert import ElasticBertForSequenceClassification >>> num_output_layers = 1 >>> config = ElasticBertConfig.from_pretrained('fnlp/elasticbert-base', num_output_layers=num_output_layers ) >>> tokenizer = ElasticBertTokenizer.from_pretrained('fnlp/elasticbert-base') >>> model = ElasticBertForSequenceClassification.from_pretrained('fnlp/elasticbert-base', config=config) >>> input_ids = tokenizer.encode('The actors are fantastic .', return_tensors='pt') >>> outputs = model(input_ids) ``` ## Citation ```bibtex @article{liu2021elasticbert, author = {Xiangyang Liu and Tianxiang Sun and Junliang He and Lingling Wu and Xinyu Zhang and Hao Jiang and Zhao Cao and Xuanjing Huang and Xipeng Qiu}, title = {Towards Efficient {NLP:} {A} Standard Evaluation and {A} Strong Baseline}, journal = {CoRR}, volume = {abs/2110.07038}, year = {2021}, url = {https://arxiv.org/abs/2110.07038}, eprinttype = {arXiv}, eprint = {2110.07038}, timestamp = {Fri, 22 Oct 2021 13:33:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2110-07038.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
BigSalmon/MrLincoln125MNeo
[ "pytorch", "tensorboard", "gpt_neo", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - Multi-exit-BERT language: en datasets: - wikipedia - bookcorpus - c4 --- # ElasticBERT-LARGE ## Model description This is an implementation of the `large` version of ElasticBERT. [**Towards Efficient NLP: A Standard Evaluation and A Strong Baseline**](https://arxiv.org/pdf/2110.07038.pdf) Xiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu ## Code link [**fastnlp/elasticbert**](https://github.com/fastnlp/ElasticBERT) ## Usage ```python >>> from transformers import BertTokenizer as ElasticBertTokenizer >>> from models.configuration_elasticbert import ElasticBertConfig >>> from models.modeling_elasticbert import ElasticBertForSequenceClassification >>> num_output_layers = 1 >>> config = ElasticBertConfig.from_pretrained('fnlp/elasticbert-large', num_output_layers=num_output_layers ) >>> tokenizer = ElasticBertTokenizer.from_pretrained('fnlp/elasticbert-large') >>> model = ElasticBertForSequenceClassification.from_pretrained('fnlp/elasticbert-large', config=config) >>> input_ids = tokenizer.encode('The actors are fantastic .', return_tensors='pt') >>> outputs = model(input_ids) ``` ## Citation ```bibtex @article{liu2021elasticbert, author = {Xiangyang Liu and Tianxiang Sun and Junliang He and Lingling Wu and Xinyu Zhang and Hao Jiang and Zhao Cao and Xuanjing Huang and Xipeng Qiu}, title = {Towards Efficient {NLP:} {A} Standard Evaluation and {A} Strong Baseline}, journal = {CoRR}, volume = {abs/2110.07038}, year = {2021}, url = {https://arxiv.org/abs/2110.07038}, eprinttype = {arXiv}, eprint = {2110.07038}, timestamp = {Fri, 22 Oct 2021 13:33:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2110-07038.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
BigSalmon/MrLincoln14
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit language: py thumbnail: https://avatars.githubusercontent.com/u/70610668?s=400&u=f0699303289113c125e8686338739d9a63d5826c&v=4 tags: - bart - pytorch --- # bart-base-python-1m
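The card above stops at the title, so a minimal usage sketch may help. The hub id `formermagic/bart-base-python-1m` is an assumption inferred from the title and thumbnail, and the masked snippet is purely illustrative.

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

# Hypothetical checkpoint id -- the card itself does not state one.
model_id = "formermagic/bart-base-python-1m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)

# Infill a masked span in a short Python snippet.
code = f"def add(a, b):\n    return a {tokenizer.mask_token} b"
inputs = tokenizer(code, return_tensors="pt")
output_ids = model.generate(inputs.input_ids, num_beams=4, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```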
BigSalmon/NEO125InformalToFormalLincoln
[ "pytorch", "gpt_neo", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
# Python T5 base model Model pre-trained on the CodeSearchNet Python dataset using a span-masking objective. The training objective and model were introduced in [this paper](https://arxiv.org/pdf/1910.10683.pdf) and first released in [this repository](https://github.com/google-research/text-to-text-transfer-transformer). The PyT5 model was pre-trained on a TPU v3-8 node using the [git-t5](https://github.com/formermagic/git-t5) framework, which is built on top of JAX/Flax. # How to use You can use this model to denoise span-masked sequences. First, install the [git-t5](https://github.com/formermagic/git-t5) pip package: ```shell > pip install git-t5 ``` Next, download the model and tokenizer: ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("formermagic/pyt5-base") tokenizer = AutoTokenizer.from_pretrained("formermagic/pyt5-base") ``` Finally, encode your input and generate the output sequence: ```python from git_t5.utils import encode_input text = """ def alias(self, annotationtype, set, fallback=False): if inspect.isclass(annotationtype): annotationtype = annotationtype.ANNOTATIONTYPE if annotationtype in self.set_alias and set in self.set_alias[annotationtype]: return self.set_alias[annotationtype][set] elif fallback: return set else: raise KeyError("No alias for set " + set) """ batch, max_length = encode_input(tokenizer, text, seed=22) outputs = model.generate(batch["input_ids"], max_length=max_length, num_beams=1) print(tokenizer.batch_decode(outputs[..., 1:])) print(tokenizer.batch_decode(batch["labels"])) ``` You should see the following output: ```shell ['<extra_id_0>, fallback=<extra_id_1> inspect<extra_id_2>.set_alias<extra_id_3> return self.set<extra_id_4>) def fallback'] ['<extra_id_0>, fallback=<extra_id_1> inspect<extra_id_2>.set_alias<extra_id_3> return self.set<extra_id_4>) </s></s>'] ``` As you can see, the predicted result is very close to the target sequence.
BigSalmon/Neo
[ "pytorch", "gpt_neo", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: mit language: py thumbnail: https://avatars.githubusercontent.com/u/70610668?s=400&u=f0699303289113c125e8686338739d9a63d5826c&v=4 tags: - roberta - pytorch --- # roberta-base-python-1m
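As with the BART variant above, this card carries no usage snippet; a minimal fill-mask sketch follows. The hub id `formermagic/roberta-base-python-1m` is assumed rather than confirmed by the card.

```python
from transformers import pipeline

# Hypothetical checkpoint id -- the card itself does not state one.
fill_mask = pipeline("fill-mask", model="formermagic/roberta-base-python-1m")

# Ask the model to complete a masked token in Python code.
print(fill_mask(f"import numpy as {fill_mask.tokenizer.mask_token}"))
```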
BigSalmon/Robertsy
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
## Introduction This is a zero-shot relation extractor based on the paper [Exploring the zero-shot limit of FewRel](https://www.aclweb.org/anthology/2020.coling-main.124). ## Installation ```bash $ pip install zero-shot-re ``` ## Run the Extractor ```python from transformers import AutoTokenizer from zero_shot_re import RelTaggerModel, RelationExtractor model = RelTaggerModel.from_pretrained("fractalego/fewrel-zero-shot") tokenizer = AutoTokenizer.from_pretrained("fractalego/fewrel-zero-shot") relations = ['noble title', 'founding date', 'occupation of a person'] extractor = RelationExtractor(model, tokenizer, relations) ranked_rels = extractor.rank(text='John Smith received an OBE', head='John Smith', tail='OBE') print(ranked_rels) ``` with results ```python3 [('noble title', 0.9690611883997917), ('occupation of a person', 0.0012609362602233887), ('founding date', 0.00024014711380004883)] ``` ## Accuracy The results as in the paper are | Model | 0-shot 5-ways | 0-shot 10-ways | |------------------------|--------------|----------------| |(1) Distillbert |70.1±0.5 | 55.9±0.6 | |(2) Bert Large |80.8±0.4 | 69.6±0.5 | |(3) Distillbert + SQUAD |81.3±0.4 | 70.0±0.2 | |(4) Bert Large + SQUAD |86.0±0.6 | 76.2±0.4 | This version uses the (4) Bert Large + SQUAD model ## Cite as ```bibtex @inproceedings{cetoli-2020-exploring, title = "Exploring the zero-shot limit of {F}ew{R}el", author = "Cetoli, Alberto", booktitle = "Proceedings of the 28th International Conference on Computational Linguistics", month = dec, year = "2020", address = "Barcelona, Spain (Online)", publisher = "International Committee on Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.coling-main.124", doi = "10.18653/v1/2020.coling-main.124", pages = "1447--1451", abstract = "This paper proposes a general purpose relation extractor that uses Wikidata descriptions to represent the relation{'}s surface form. The results are tested on the FewRel 1.0 dataset, which provides an excellent framework for training and evaluating the proposed zero-shot learning system in English. This relation extractor architecture exploits the implicit knowledge of a language model through a question-answering approach.", } ```
BigSalmon/Rowerta
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
# Personal speech-to-text model Speech-to-text models often do not understand my accent, so I fine-tuned this one from "facebook/wav2vec2-large-robust-ft-swbd-300h" using about 1000 recordings of my voice. Do not download unless you have exactly my accent.
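No usage code is given, so a minimal transcription sketch for a wav2vec2 CTC checkpoint like this one is shown below. The checkpoint id and audio path are placeholders, and a 16 kHz mono recording is assumed (the sampling rate of the base model).

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "<this-repo-id>"  # placeholder; substitute the actual repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sample_rate = sf.read("recording.wav")  # expected: 16 kHz mono
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs["input_values"]).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```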
BigSalmon/T5Salmon
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
6
null
--- language: scientific english --- # SciBERT finetuned on JNLPA for NER downstream task ## Language Model [SciBERT](https://arxiv.org/pdf/1903.10676.pdf) is a pretrained language model based on BERT and trained by the [Allen Institute for AI](https://allenai.org/) on papers from the corpus of [Semantic Scholar](https://www.semanticscholar.org/). Corpus size is 1.14M papers, 3.1B tokens. SciBERT has its own vocabulary (scivocab) that's built to best match the training corpus. ## Downstream task [`allenai/scibert_scivocab_cased`](https://huggingface.co/allenai/scibert_scivocab_cased#) has been finetuned for Named Entity Recognition (NER) dowstream task. The code to train the NER can be found [here](https://github.com/fran-martinez/bio_ner_bert). ### Data The corpus used to fine-tune the NER is [BioNLP / JNLPBA shared task](http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004). - Training data consist of 2,000 PubMed abstracts with term/word annotation. This corresponds to 18,546 samples (senteces). - Evaluation data consist of 404 PubMed abstracts with term/word annotation. This corresponds to 3,856 samples (sentences). The classes (at word level) and its distribution (number of examples for each class) for training and evaluation datasets are shown below: | Class Label | # training examples| # evaluation examples| |:--------------|--------------:|----------------:| |O | 382,963 | 81,647 | |B-protein | 30,269 | 5,067 | |I-protein | 24,848 | 4,774 | |B-cell_type | 6,718 | 1,921 | |I-cell_type | 8,748 | 2,991 | |B-DNA | 9,533 | 1,056 | |I-DNA | 15,774 | 1,789 | |B-cell_line | 3,830 | 500 | |I-cell_line | 7,387 | 9,89 | |B-RNA | 951 | 118 | |I-RNA | 1,530 | 187 | ### Model An exhaustive hyperparameter search was done. The hyperparameters that provided the best results are: - Max length sequence: 128 - Number of epochs: 6 - Batch size: 32 - Dropout: 0.3 - Optimizer: Adam The used learning rate was 5e-5 with a decreasing linear schedule. A warmup was used at the beggining of the training with a ratio of steps equal to 0.1 from the total training steps. The model from the epoch with the best F1-score was selected, in this case, the model from epoch 5. ### Evaluation The following table shows the evaluation metrics calculated at span/entity level: | | precision| recall| f1-score| |:---------|-----------:|---------:|---------:| cell_line | 0.5205 | 0.7100 | 0.6007 | cell_type | 0.7736 | 0.7422 | 0.7576 | protein | 0.6953 | 0.8459 | 0.7633 | DNA | 0.6997 | 0.7894 | 0.7419 | RNA | 0.6985 | 0.8051 | 0.7480 | | | | | **micro avg** | 0.6984 | 0.8076 | 0.7490| **macro avg** | 0.7032 | 0.8076 | 0.7498 | The macro F1-score is equal to 0.7498, compared to the value provided by the Allen Institute for AI in their [paper](https://arxiv.org/pdf/1903.10676.pdf), which is equal to 0.7728. This drop in performance could be due to several reasons, but one hypothesis could be the fact that the authors used an additional conditional random field, while this model uses a regular classification layer with softmax activation on top of SciBERT model. At word level, this model achieves a precision of 0.7742, a recall of 0.8536 and a F1-score of 0.8093. ### Model usage in inference Use the pipeline: ````python from transformers import pipeline text = "Mouse thymus was used as a source of glucocorticoid receptor from normal CS lymphocytes." 
nlp_ner = pipeline("ner", model='fran-martinez/scibert_scivocab_cased_ner_jnlpba', tokenizer='fran-martinez/scibert_scivocab_cased_ner_jnlpba') nlp_ner(text) """ Output: --------------------------- [ {'word': 'glucocorticoid', 'score': 0.9894881248474121, 'entity': 'B-protein'}, {'word': 'receptor', 'score': 0.989505410194397, 'entity': 'I-protein'}, {'word': 'normal', 'score': 0.7680378556251526, 'entity': 'B-cell_type'}, {'word': 'cs', 'score': 0.5176806449890137, 'entity': 'I-cell_type'}, {'word': 'lymphocytes', 'score': 0.9898491501808167, 'entity': 'I-cell_type'} ] """ ```` Or load model and tokenizer as follows: ````python import torch from transformers import AutoTokenizer, AutoModelForTokenClassification # Example text = "Mouse thymus was used as a source of glucocorticoid receptor from normal CS lymphocytes." # Load model tokenizer = AutoTokenizer.from_pretrained("fran-martinez/scibert_scivocab_cased_ner_jnlpba") model = AutoModelForTokenClassification.from_pretrained("fran-martinez/scibert_scivocab_cased_ner_jnlpba") # Get input for BERT input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0) # Predict with torch.no_grad(): outputs = model(input_ids) # From the output let's take the first element of the tuple. # Then, let's get rid of [CLS] and [SEP] tokens (first and last) predictions = outputs[0].argmax(axis=-1)[0][1:-1] # Map label class indexes to string labels. for token, pred in zip(tokenizer.tokenize(text), predictions): print(token, '->', model.config.id2label[pred.numpy().item()]) """ Output: --------------------------- mouse -> O thymus -> O was -> O used -> O as -> O a -> O source -> O of -> O glucocorticoid -> B-protein receptor -> I-protein from -> O normal -> B-cell_type cs -> I-cell_type lymphocytes -> I-cell_type . -> O """ ````
BigTooth/Megumin-v0.2
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
**[`microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext`](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_qa.py`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py)** Tuning script: ```bash BASE_MODEL=microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext OUTPUT_DIR=~/Documents/projects/tunned_models/ms_pubmed_bert_squadv2/ python run_qa.py \ --model_name_or_path $BASE_MODEL \ --dataset_name squad_v2 \ --do_train \ --do_eval \ --version_2_with_negative \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir $OUTPUT_DIR ```
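For inference, a question-answering pipeline pointed at the output directory of the tuning script above should work; this is a sketch, assuming the fine-tuned weights were saved to `$OUTPUT_DIR`, and the question/context pair is only an example.

```python
import os
from transformers import pipeline

# Point at the directory written by run_qa.py (OUTPUT_DIR in the script above).
output_dir = os.path.expanduser("~/Documents/projects/tunned_models/ms_pubmed_bert_squadv2/")
qa = pipeline("question-answering", model=output_dir, tokenizer=output_dir)

result = qa(
    question="What gene is mutated in cystic fibrosis?",
    context="Cystic fibrosis is caused by mutations in the CFTR gene.",
)
print(result)  # dict with score, start, end, and answer
```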
Blaine-Mason/hackMIT-finetuned-sst2
[ "pytorch", "tensorboard", "bert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
null
--- license: apache-2.0 tags: - image-classification - pytorch - onnx datasets: - frgfm/imagenette --- # RepVGG-A1 model Pretrained on [ImageNette](https://github.com/fastai/imagenette). The RepVGG architecture was introduced in [this paper](https://arxiv.org/pdf/2101.03697.pdf). ## Model description The core idea of the author is to distinguish the training architecture (with shortcut connections), from the inference one (a pure highway network). By designing the residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations. ## Installation ### Prerequisites Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron. ### Latest stable release You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows: ```shell pip install pylocron ``` or using [conda](https://anaconda.org/frgfm/pylocron): ```shell conda install -c frgfm pylocron ``` ### Developer mode Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*: ```shell git clone https://github.com/frgfm/Holocron.git pip install -e Holocron/. ``` ## Usage instructions ```python from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from holocron.models import model_from_hf_hub model = model_from_hf_hub("frgfm/repvgg_a1").eval() img = Image.open(path_to_an_image).convert("RGB") # Preprocessing config = model.default_cfg transform = Compose([ Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR), PILToTensor(), ConvertImageDtype(torch.float32), Normalize(config['mean'], config['std']) ]) input_tensor = transform(img).unsqueeze(0) # Inference with torch.inference_mode(): output = model(input_tensor) probs = output.squeeze(0).softmax(dim=0) ``` ## Citation Original paper ```bibtex @article{DBLP:journals/corr/abs-2101-03697, author = {Xiaohan Ding and Xiangyu Zhang and Ningning Ma and Jungong Han and Guiguang Ding and Jian Sun}, title = {RepVGG: Making VGG-style ConvNets Great Again}, journal = {CoRR}, volume = {abs/2101.03697}, year = {2021}, url = {https://arxiv.org/abs/2101.03697}, eprinttype = {arXiv}, eprint = {2101.03697}, timestamp = {Tue, 09 Feb 2021 15:29:34 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2101-03697.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Source of this implementation ```bibtex @software{Fernandez_Holocron_2020, author = {Fernandez, François-Guillaume}, month = {5}, title = {{Holocron}}, url = {https://github.com/frgfm/Holocron}, year = {2020} } ```
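The tags above mention ONNX; a minimal export sketch is shown below. It assumes the Holocron model is traceable as-is and reuses `default_cfg` the same way the preprocessing snippet does; the opset version and tensor names are arbitrary choices.

```python
import torch
from holocron.models import model_from_hf_hub

model = model_from_hf_hub("frgfm/repvgg_a1").eval()

# Build a dummy input from the checkpoint's own preprocessing config.
config = model.default_cfg
dummy = torch.rand(1, *config["input_shape"])

torch.onnx.export(
    model,
    dummy,
    "repvgg_a1.onnx",
    input_names=["input"],
    output_names=["logits"],
    opset_version=13,
)
```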
BlindMan820/Sarcastic-News-Headlines
[ "pytorch", "distilbert", "text-classification", "English", "dataset:Kaggle Dataset", "transformers", "Text", "Sequence-Classification", "Sarcasm", "DistilBert" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: apache-2.0 tags: - image-classification - pytorch - onnx datasets: - frgfm/imagenette --- # ReXNet-1.3x model Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf). ## Model description The core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy. ## Installation ### Prerequisites Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron. ### Latest stable release You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows: ```shell pip install pylocron ``` or using [conda](https://anaconda.org/frgfm/pylocron): ```shell conda install -c frgfm pylocron ``` ### Developer mode Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*: ```shell git clone https://github.com/frgfm/Holocron.git pip install -e Holocron/. ``` ## Usage instructions ```python from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from holocron.models import model_from_hf_hub model = model_from_hf_hub("frgfm/rexnet1_3x").eval() img = Image.open(path_to_an_image).convert("RGB") # Preprocessing config = model.default_cfg transform = Compose([ Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR), PILToTensor(), ConvertImageDtype(torch.float32), Normalize(config['mean'], config['std']) ]) input_tensor = transform(img).unsqueeze(0) # Inference with torch.inference_mode(): output = model(input_tensor) probs = output.squeeze(0).softmax(dim=0) ``` ## Citation Original paper ```bibtex @article{DBLP:journals/corr/abs-2007-00992, author = {Dongyoon Han and Sangdoo Yun and Byeongho Heo and Young Joon Yoo}, title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural Network}, journal = {CoRR}, volume = {abs/2007.00992}, year = {2020}, url = {https://arxiv.org/abs/2007.00992}, eprinttype = {arXiv}, eprint = {2007.00992}, timestamp = {Mon, 06 Jul 2020 15:26:01 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Source of this implementation ```bibtex @software{Fernandez_Holocron_2020, author = {Fernandez, François-Guillaume}, month = {5}, title = {{Holocron}}, url = {https://github.com/frgfm/Holocron}, year = {2020} } ```
BlueGamerBeast/DialoGPT-small-Morgana
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 tags: - image-classification - pytorch - onnx datasets: - frgfm/imagenette --- # ReXNet-2.0x model Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf). ## Model description The core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy. ## Installation ### Prerequisites Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron. ### Latest stable release You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows: ```shell pip install pylocron ``` or using [conda](https://anaconda.org/frgfm/pylocron): ```shell conda install -c frgfm pylocron ``` ### Developer mode Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*: ```shell git clone https://github.com/frgfm/Holocron.git pip install -e Holocron/. ``` ## Usage instructions ```python from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from holocron.models import model_from_hf_hub model = model_from_hf_hub("frgfm/rexnet2_0x").eval() img = Image.open(path_to_an_image).convert("RGB") # Preprocessing config = model.default_cfg transform = Compose([ Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR), PILToTensor(), ConvertImageDtype(torch.float32), Normalize(config['mean'], config['std']) ]) input_tensor = transform(img).unsqueeze(0) # Inference with torch.inference_mode(): output = model(input_tensor) probs = output.squeeze(0).softmax(dim=0) ``` ## Citation Original paper ```bibtex @article{DBLP:journals/corr/abs-2007-00992, author = {Dongyoon Han and Sangdoo Yun and Byeongho Heo and Young Joon Yoo}, title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural Network}, journal = {CoRR}, volume = {abs/2007.00992}, year = {2020}, url = {https://arxiv.org/abs/2007.00992}, eprinttype = {arXiv}, eprint = {2007.00992}, timestamp = {Mon, 06 Jul 2020 15:26:01 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Source of this implementation ```bibtex @software{Fernandez_Holocron_2020, author = {Fernandez, François-Guillaume}, month = {5}, title = {{Holocron}}, url = {https://github.com/frgfm/Holocron}, year = {2020} } ```
BritishLibraryLabs/bl-books-genre
[ "pytorch", "distilbert", "text-classification", "multilingual", "dataset:blbooksgenre", "transformers", "genre", "books", "library", "historic", "glam ", "lam", "license:mit", "has_space" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
76
null
--- tags: - glove - gensim - fse --- # GloVe Twitter Pre-trained GloVe vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased. Read more: * https://nlp.stanford.edu/projects/glove/ * https://nlp.stanford.edu/pubs/glove.pdf
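The tags point to gensim/fse; one convenient way to load these vectors is through the gensim-data downloader, sketched below. `glove-twitter-25` refers to the 25-dimensional variant published there; this card may correspond to a different dimensionality, so treat the name as an example.

```python
import gensim.downloader as api

# Load one of the published GloVe Twitter variants (25/50/100/200 dimensions).
vectors = api.load("glove-twitter-25")

print(vectors.most_similar("tweet", topn=5))
print(vectors.similarity("happy", "glad"))
```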
Brona/model1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - glove - gensim - fse --- # Paragram Embeddings Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations (300 dimensions) Read more: * https://www.cs.cmu.edu/~jwieting/ * https://www.cs.cmu.edu/~jwieting/wieting2017Millions.pdf
BumBelDumBel/TRUMP
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - conversational --- # Bully Maguire demo bot
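The card is tagged as conversational but shows no usage; a minimal DialoGPT-style chat loop is sketched below, with the checkpoint id left as a placeholder since the card does not restate it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-repo-id>"  # placeholder; substitute the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chat_history_ids = None
for _ in range(3):
    user_input = input(">> You: ")
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Bot:", reply)
```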
BumBelDumBel/ZORK-AI-TEST
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - espnet - audio - text-to-speech language: zh datasets: - aishell3 license: cc-by-4.0 inference: false --- This model was trained by ftshijt using aishell3/tts1 recipe in <a href="https://github.com/espnet/espnet/">espnet</a>. <p>&nbsp;</p> <ul> <li><strong>Python API</strong><pre><code class="language-python">See https://github.com/espnet/espnet_model_zoo</code></pre></li> <li><strong>Evaluate in the recipe</strong><pre> <code class="language-bash"> See ESPNet repo for how to use pre-trained models </pre></li> <li><strong>Config</strong><pre><code>config: conf/train.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/tts_train_raw_phn_pypinyin_g2p_phone ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 500 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min - - train - loss - min keep_nbest_models: 5 grad_clip: 1.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 500 batch_size: 20 valid_batch_size: null batch_bins: 3750000 valid_batch_bins: null train_shape_file: - exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/text_shape.phn - exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/speech_shape valid_shape_file: - exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/text_shape.phn - exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 240000 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_no_dev/text - text - text - - dump/raw/train_no_dev/wav.scp - speech - sound - - dump/xvector/train_no_dev/xvector.scp - spembs - kaldi_ark valid_data_path_and_name_and_type: - - dump/raw/dev/text - text - text - - dump/raw/dev/wav.scp - speech - sound - - dump/xvector/dev/xvector.scp - spembs - kaldi_ark allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 eps: 1.0e-06 weight_decay: 0.0 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - '' - d - sh - j - i4 - zh - l - x - e - b - g - i1 - h - q - m - u4 - t - z - ch - i3 - i2 - f - s - n - r - ian4 - e4 - ong1 - en2 - ai4 - k - ing2 - a1 - iou3 - uo3 - ao4 - u3 - ui4 - p - e2 - an1 - eng2 - c - in1 - ai2 - an4 - ian2 - ing1 - ai3 - ang4 - ao3 - ian1 - uo4 - ian3 - iao4 - ang1 - u2 - ü4 - u1 - a4 - eng1 - ing4 - üan2 - ie4 - en1 - iu4 - uei4 - ou4 - er4 - e1 - ei4 - an3 - ong2 - uo2 - ang3 - ou1 - ou3 - ong4 - eng4 - an2 - iang4 - a3 - iang1 - ia1 - iao1 - uan4 - ia4 - iu3 - ang2 - uo1 - ei3 - e3 - in4 - iang3 - ü1 - uan1 - en3 - iao3 - ie3 - ao1 - ai1 - ü2 - ing3 - er2 - ü3 - uan3 - üe4 - in3 - en - ei2 - üe2 
- ie2 - en4 - ua4 - in2 - iu2 - uan2 - a2 - ie1 - ou2 - ui1 - iang2 - ong3 - i - uang3 - eng3 - ün4 - uang4 - uai4 - iong4 - v3 - iou2 - ui2 - un1 - üan4 - uang1 - ei1 - uang2 - o2 - a - ao2 - iao2 - ui3 - un4 - o1 - ua2 - un2 - uen2 - iu1 - v4 - ua1 - uei1 - üan3 - ün1 - üe1 - ün2 - uen4 - uei3 - uei2 - un3 - iou4 - o4 - er3 - uen1 - iong3 - iou1 - ia3 - üan1 - ia2 - iong1 - üe3 - uen3 - ve4 - iong2 - uai2 - uai1 - ua3 - ün3 - er - uai3 - ia - o3 - v2 - o - ueng1 - ei - '2' - ua - io1 - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: null g2p: pypinyin_g2p_phone feats_extract: fbank feats_extract_conf: n_fft: 2048 hop_length: 300 win_length: 1200 fs: 24000 fmin: 80 fmax: 7600 n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/feats_stats.npz tts: tacotron2 tts_conf: embed_dim: 512 elayers: 1 eunits: 512 econv_layers: 3 econv_chans: 512 econv_filts: 5 atype: location adim: 512 aconv_chans: 32 aconv_filts: 15 cumulate_att_w: true dlayers: 2 dunits: 1024 prenet_layers: 2 prenet_units: 256 postnet_layers: 5 postnet_chans: 512 postnet_filts: 5 output_activation: null use_batch_norm: true use_concate: true use_residual: false spk_embed_dim: 512 spk_embed_integration_type: add use_gst: true gst_heads: 4 gst_tokens: 16 dropout_rate: 0.5 zoneout_rate: 0.1 reduction_factor: 1 use_masking: true bce_pos_weight: 10.0 use_guided_attn_loss: true guided_attn_loss_sigma: 0.4 guided_attn_loss_lambda: 1.0 pitch_extract: null pitch_extract_conf: {} pitch_normalize: null pitch_normalize_conf: {} energy_extract: null energy_extract_conf: {} energy_normalize: null energy_normalize_conf: {} required: - output_dir - token_list version: 0.10.2a1 distributed: false</code></pre></li> </ul>
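Beyond the pointer to espnet_model_zoo above, a minimal synthesis sketch with the ESPnet2 `Text2Speech` API is given below. Because this recipe conditions on x-vector speaker embeddings and GST (see `spk_embed_dim` and `use_gst` in the config), a reference speaker embedding must be supplied at inference time; the model tag is a placeholder and the zero vector is only a stand-in.

```python
import numpy as np
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Placeholder tag; substitute the tag or local config/checkpoint paths of this model.
tts = Text2Speech.from_pretrained("<model-tag-or-path>")

# The recipe uses a 512-dim x-vector (spk_embed_dim above); in practice, extract it
# from a reference utterance of the target speaker instead of using zeros.
spembs = np.zeros(512, dtype=np.float32)

out = tts("您好,欢迎使用语音合成。", spembs=spembs)
sf.write("out.wav", out["wav"].numpy(), tts.fs)
```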
BumBelDumBel/ZORK_AI_FANTASY
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - espnet - audio - text-to-speech language: zh datasets: - thchs30 license: cc-by-4.0 inference: false --- This model was trained by ftshijt using thchs30/tts1 recipe in <a href="https://github.com/espnet/espnet/">espnet</a>. <p>&nbsp;</p> <ul> <li><strong>Python API</strong><pre><code class="language-python">See https://github.com/espnet/espnet_model_zoo</code></pre></li> <li><strong>Evaluate in the recipe</strong><pre> <code class="language-bash">Please see ESPNet for how to use pre-trained model </pre></li> <li><strong>Config</strong><pre><code>config: conf/train.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/tts_train_raw_phn_pypinyin_g2p_phone ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 500 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min - - train - loss - min keep_nbest_models: 5 grad_clip: 1.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 500 batch_size: 20 valid_batch_size: null batch_bins: 3750000 valid_batch_bins: null train_shape_file: - exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/text_shape.phn - exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/speech_shape valid_shape_file: - exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/text_shape.phn - exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 204800 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train/text - text - text - - dump/raw/train/wav.scp - speech - sound - - dump/xvector/train/xvector.scp - spembs - kaldi_ark valid_data_path_and_name_and_type: - - dump/raw/dev/text - text - text - - dump/raw/dev/wav.scp - speech - sound - - dump/xvector/dev/xvector.scp - spembs - kaldi_ark allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 eps: 1.0e-06 weight_decay: 0.0 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - '' - d - sh - j - zh - l - i4 - x - b - g - h - e - q - t - m - ch - i1 - z - u4 - i2 - i3 - n - f - s - r - k - c - p - ai4 - e4 - a1 - an4 - ian4 - ing2 - u3 - ian2 - ong1 - e2 - in1 - eng2 - ui4 - ao4 - u2 - iao4 - üan2 - en2 - an1 - u1 - ai2 - ao3 - ing4 - eng1 - iou3 - ü4 - uo4 - üe4 - ong2 - ian1 - ing1 - uo3 - ie4 - ang1 - uei4 - ang4 - an2 - a4 - ou4 - ei4 - uai4 - ie3 - ang3 - ong4 - ai3 - ü2 - uo2 - an3 - ang2 - ou3 - er2 - ou1 - uo1 - en1 - ia1 - ü3 - uan1 - in2 - iong4 - ian3 - iang3 - a3 - iang2 - ia4 - ü1 - uan4 - iao3 - iang4 - uen2 - iang1 - uan3 - ai1 - ie2 - ei3 - uan2 - uang2 - in4 - üe2 - ao1 - eng3 - iu4 - iao1 - er4 - iu2 - in3 
- un1 - uang1 - eng4 - a2 - uang3 - en3 - uang4 - ong3 - ing3 - e3 - ei2 - ou2 - ao2 - i - ün4 - uei2 - ua4 - iou4 - ui1 - ua1 - en4 - ün2 - iao2 - ie1 - iou2 - iu3 - ün1 - üan4 - en - ei1 - o2 - un4 - ui3 - iu1 - üan3 - e1 - v3 - ua2 - ia2 - ui2 - un2 - o4 - un3 - er3 - ia3 - iong1 - uei3 - o1 - üe1 - üan1 - iong3 - v4 - iong2 - uen4 - uai2 - uei1 - iou1 - a - ua3 - uen1 - o3 - ueng1 - uai1 - uen3 - üe3 - ou - uai3 - ve4 - er - ün3 - o - ua - ia - ' l =' - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: null g2p: pypinyin_g2p_phone feats_extract: fbank feats_extract_conf: n_fft: 1024 hop_length: 256 win_length: null fs: 16000 fmin: 80 fmax: 7600 n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/feats_stats.npz tts: tacotron2 tts_conf: embed_dim: 512 elayers: 1 eunits: 512 econv_layers: 3 econv_chans: 512 econv_filts: 5 atype: location adim: 512 aconv_chans: 32 aconv_filts: 15 cumulate_att_w: true dlayers: 2 dunits: 1024 prenet_layers: 2 prenet_units: 256 postnet_layers: 5 postnet_chans: 512 postnet_filts: 5 output_activation: null use_batch_norm: true use_concate: true use_residual: false spk_embed_dim: 512 spk_embed_integration_type: add use_gst: true gst_heads: 4 gst_tokens: 16 dropout_rate: 0.5 zoneout_rate: 0.1 reduction_factor: 1 use_masking: true bce_pos_weight: 10.0 use_guided_attn_loss: true guided_attn_loss_sigma: 0.4 guided_attn_loss_lambda: 1.0 pitch_extract: null pitch_extract_conf: {} pitch_normalize: null pitch_normalize_conf: {} energy_extract: null energy_extract_conf: {} energy_normalize: null energy_normalize_conf: {} required: - output_dir - token_list version: 0.10.2a1 distributed: false</code></pre></li> </ul>
CALM/backup
[ "lean_albert", "transformers" ]
null
{ "architectures": [ "LeanAlbertForPretraining", "LeanAlbertForTokenClassification", "LeanAlbertForSequenceClassification" ], "model_type": "lean_albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
CAMeL-Lab/bert-base-arabic-camelbert-ca-ner
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
85
null
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42
null