Dataset schema (column: type, range):
modelId: string, length 4-112
lastModified: string, length 24
tags: sequence
pipeline_tag: string, 21 classes
files: sequence
publishedBy: string, length 2-37
downloads_last_month: int32, 0-9.44M
library: string, 15 classes
modelCard: large string, length 0-100k
sunguk/sunguk-bert
2021-03-19T08:37:20.000Z
[ "pytorch", "transformers" ]
[ ".gitattributes", "config.json", "import test", "pytorch_model.bin", "tokenizer_config.json", "vocab.txt" ]
sunguk
6
transformers
sunhao666/chi-sina
2021-06-04T06:43:10.000Z
[ "pytorch", "gpt2", "transformers" ]
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
sunhao666
10
transformers
sunhao666/chi-sum
2021-05-19T17:32:16.000Z
[ "pytorch", "bert", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin" ]
sunhao666
8
transformers
sunhao666/chi-sum2
2021-05-20T04:01:09.000Z
[ "pytorch", "t5", "transformers" ]
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
sunhao666
27
transformers
superspray/distilbert_base_squad2_custom_dataset
2021-02-20T07:33:31.000Z
[ "pytorch", "distilbert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
superspray
7
transformers
# Question & Answering Model for 'Save Your Minutes' from Dobby-AI Distilbert_Base fine-tuned on SQuAD2.0 and custom QA dataset This model is [twmkn9/distilbert-base-uncased-squad2] trained on an additional custom dataset as follows: ``` !python3 run_squad.py --model_type distilbert \ --model_name_or_path /content/distilbert_base_384 \ --do_lower_case \ --output_dir /content/model/\ --do_train \ --train_file $data_dir/additional_qa.json\ --version_2_with_negative \ --do_lower_case \ --num_train_epochs 3 \ --weight_decay 0.01 \ --learning_rate 3e-5 \ --max_grad_norm 0.5 \ --adam_epsilon 1e-6 \ --max_seq_length 512 \ --doc_stride 128 \ --threads 12 \ --logging_steps 50 \ --save_steps 1000 \ --overwrite_output_dir \ --per_gpu_train_batch_size 4 ``` We used Google Colab to train the model.
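The card above documents only the fine-tuning command; below is a minimal sketch of querying such a SQuAD2-style checkpoint with the transformers question-answering pipeline. The question and context strings are illustrative placeholders, not from the card.

```python
# Minimal inference sketch (not from the card): standard QA pipeline usage.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="superspray/distilbert_base_squad2_custom_dataset",
    tokenizer="superspray/distilbert_base_squad2_custom_dataset",
)

# Illustrative placeholder question/context.
result = qa(
    question="Who attended the meeting?",
    context="The weekly sync was attended by the product and engineering teams.",
)
print(result["answer"], result["score"])
```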
superspray/electra_large_discriminator_squad2_custom_dataset
2021-02-20T07:00:12.000Z
[ "pytorch", "electra", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
superspray
12
transformers
# Question & Answering Model for 'Save Your Minutes' from Dobby-AI Electra_Large Discriminator fine-tuned on SQuAD2.0 and custom QA dataset This model is [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512/blob/main/README.md) trained on an additional custom dataset as follows: ``` !python3 run_squad.py --model_type electra \ --model_name_or_path /content/electra_large_512 \ --do_lower_case \ --output_dir /content/model/\ --do_train \ --train_file $data_dir/additional_qa.json\ --version_2_with_negative \ --do_lower_case \ --num_train_epochs 3 \ --weight_decay 0.01 \ --learning_rate 3e-5 \ --max_grad_norm 0.5 \ --adam_epsilon 1e-6 \ --max_seq_length 512 \ --doc_stride 128 \ --threads 12 \ --logging_steps 50 \ --save_steps 1000 \ --overwrite_output_dir \ --per_gpu_train_batch_size 4 ``` We used Google Colab to train the model.
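As with the DistilBERT variant above, the card stops at the training command; below is a minimal sketch of extracting an answer span directly from the start/end logits, assuming standard transformers usage. The example strings are illustrative placeholders.

```python
# Minimal sketch (assumed standard usage): decode the answer span from logits.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

name = "superspray/electra_large_discriminator_squad2_custom_dataset"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "When is the demo scheduled?"  # illustrative placeholder
context = "The team agreed to schedule the demo for Friday afternoon."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax()) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```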
surajp/RoBERTa-hindi-guj-san
2021-05-20T22:02:11.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "hi", "sa", "gu", "dataset:Wikipedia (Hindi, Sanskrit, Gujarati)", "transformers", "Indic", "license:mit", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
surajp
113
transformers
--- language: - hi - sa - gu tags: - Indic license: mit datasets: - Wikipedia (Hindi, Sanskrit, Gujarati) metrics: - perplexity --- # RoBERTa-hindi-guj-san ## Model description Multilingual RoBERTa-like model trained on Wikipedia articles in Hindi, Sanskrit, and Gujarati. The tokenizer was trained on the combined text. However, Hindi text was used to pre-train the model, which was then fine-tuned on the combined Sanskrit and Gujarati text, in the hope that pre-training on Hindi would help the model learn the related languages. ### Configuration | Parameter | Value | |---|---| | `hidden_size` | 768 | | `num_attention_heads` | 12 | | `num_hidden_layers` | 6 | | `vocab_size` | 30522 | |`model_type`|`roberta`| ## Intended uses & limitations #### How to use ```python # Example usage from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("surajp/RoBERTa-hindi-guj-san") model = AutoModelWithLMHead.from_pretrained("surajp/RoBERTa-hindi-guj-san") fill_mask = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) # Sanskrit: इयं भाषा न केवलं भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते। # Hindi: अगर आप अब अभ्यास नहीं करते हो तो आप अपने परीक्षा में मूर्खतापूर्ण गलतियाँ करोगे। # Gujarati: ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો <mask> હતો. fill_mask("ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો <mask> હતો.") ''' Output: -------- [ {'score': 0.07849744707345963, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો જ હતો.</s>', 'token': 390}, {'score': 0.06273336708545685, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો ન હતો.</s>', 'token': 478}, {'score': 0.05160355195403099, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો થઇ હતો.</s>', 'token': 2075}, {'score': 0.04751499369740486, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો એક હતો.</s>', 'token': 600}, {'score': 0.03788900747895241, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો પણ હતો.</s>', 'token': 840} ] ``` ## Training data Cleaned Wikipedia articles in Hindi, Sanskrit and Gujarati on Kaggle. It contains training as well as evaluation text. Used in [iNLTK](https://github.com/goru001/inltk) - [Hindi](https://www.kaggle.com/disisbig/hindi-wikipedia-articles-172k) - [Gujarati](https://www.kaggle.com/disisbig/gujarati-wikipedia-articles) - [Sanskrit](https://www.kaggle.com/disisbig/sanskrit-wikipedia-articles) ## Training procedure - On TPU (using `xla_spawn.py`) - For language modelling - Iteratively increasing `--block_size` from 128 to 256 over epochs - Tokenizer trained on combined text - Pre-training with Hindi and fine-tuning on Sanskrit and Gujarati texts ``` --model_type distillroberta-base \ --model_name_or_path "/content/SanHiGujBERTa" \ --mlm_probability 0.20 \ --line_by_line \ --save_total_limit 2 \ --per_device_train_batch_size 128 \ --per_device_eval_batch_size 128 \ --num_train_epochs 5 \ --block_size 256 \ --seed 108 \ --overwrite_output_dir \ ``` ## Eval results perplexity = 2.920005983224673 > Created by [Suraj Parmar/@parmarsuraj99](https://twitter.com/parmarsuraj99) | [LinkedIn](https://www.linkedin.com/in/parmarsuraj99/) > Made with <span style="color: #e25555;">&hearts;</span> in India
surajp/SanBERTa
2021-05-20T22:03:36.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "sa", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
surajp
25
transformers
--- language: sa --- # RoBERTa trained on Sanskrit (SanBERTa) **Model size** (after training): **340MB** ### Dataset: [Wikipedia articles](https://www.kaggle.com/disisbig/sanskrit-wikipedia-articles) (used in [iNLTK](https://github.com/goru001/nlp-for-sanskrit)). It contains an evaluation set. [Sanskrit scraps from CLTK](http://cltk.org/) ### Configuration | Parameter | Value | |---|---| | `num_attention_heads` | 12 | | `num_hidden_layers` | 6 | | `hidden_size` | 768 | | `vocab_size` | 29407 | ### Training: - On TPU - For language modelling - Iteratively increasing `--block_size` from 128 to 256 over epochs ### Evaluation |Metric| # Value | |---|---| |Perplexity (`block_size=256`)|4.04| ## Example of usage: ### For Embeddings ``` from transformers import AutoTokenizer, RobertaModel tokenizer = AutoTokenizer.from_pretrained("surajp/SanBERTa") model = RobertaModel.from_pretrained("surajp/SanBERTa") op = tokenizer.encode("इयं भाषा न केवलं भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते।", return_tensors="pt") ps = model(op) ps[0].shape ``` ``` ''' Output: -------- torch.Size([1, 47, 768]) ``` ### For \<mask\> Prediction ``` from transformers import pipeline fill_mask = pipeline( "fill-mask", model="surajp/SanBERTa", tokenizer="surajp/SanBERTa" ) ## इयं भाषा न केवल<mask> भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते। fill_mask("इयं भाषा न केवल<mask> भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते।") ``` ``` ''' Output: -------- [{'score': 0.7516744136810303, 'sequence': '<s> इयं भाषा न केवलं भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते।</s>', 'token': 280, 'token_str': 'à¤Ĥ'}, {'score': 0.06230105459690094, 'sequence': '<s> इयं भाषा न केवली भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते।</s>', 'token': 289, 'token_str': 'à¥Ģ'}, {'score': 0.055410224944353104, 'sequence': '<s> इयं भाषा न केवला भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते।</s>', 'token': 265, 'token_str': 'ा'}, ...] ``` ### It works!! 🎉 🎉 🎉 > Created by [Suraj Parmar/@parmarsuraj99](https://twitter.com/parmarsuraj99) | [LinkedIn](https://www.linkedin.com/in/parmarsuraj99/) > Made with <span style="color: #e25555;">&hearts;</span> in India
surajp/albert-base-sanskrit
2020-12-11T22:02:34.000Z
[ "pytorch", "albert", "sa", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
surajp
89
transformers
--- language: sa --- # ALBERT-base-Sanskrit Explanation Notebook Colab: [SanskritALBERT.ipynb](https://colab.research.google.com/github/parmarsuraj99/suraj-parmar/blob/master/_notebooks/2020-05-02-SanskritALBERT.ipynb) Size of the model is **46MB** Example of usage: ``` import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("surajp/albert-base-sanskrit") model = AutoModel.from_pretrained("surajp/albert-base-sanskrit") enc=tokenizer.encode("ॐ सर्वे भवन्तु सुखिनः सर्वे सन्तु निरामयाः । सर्वे भद्राणि पश्यन्तु मा कश्चिद्दुःखभाग्भवेत् । ॐ शान्तिः शान्तिः शान्तिः ॥") print(tokenizer.decode(enc)) ps = model(torch.tensor(enc).unsqueeze(1)) print(ps[0].shape) ``` ``` ''' Output: -------- [CLS] ॐ सर्वे भवन्तु सुखिनः सर्वे सन्तु निरामयाः । सर्वे भद्राणि पश्यन्तु मा कश्चिद्दुःखभाग्भवेत् । ॐ शान्तिः शान्तिः शान्तिः ॥[SEP] torch.Size([28, 1, 768]) ``` > Created by [Suraj Parmar/@parmarsuraj99](https://twitter.com/parmarsuraj99) > Made with <span style="color: #e25555;">&hearts;</span> in India
surajp/gpt2-hindi
2021-05-23T13:02:32.000Z
[ "pytorch", "tf", "jax", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "added_tokens.json", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.json" ]
surajp
161
transformers
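This record carries no model card; below is a minimal text-generation sketch, assuming the checkpoint follows standard GPT-2 causal-LM usage. The Hindi prompt is an illustrative placeholder.

```python
# Minimal sketch (assumed standard GPT-2 usage) for surajp/gpt2-hindi.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("surajp/gpt2-hindi")
model = AutoModelForCausalLM.from_pretrained("surajp/gpt2-hindi")

# "भारत एक" ("India is a ...") is only an illustrative prompt.
input_ids = tokenizer("भारत एक", return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```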
susumu2357/bert-base-swedish-squad2
2021-05-20T07:20:04.000Z
[ "pytorch", "tf", "jax", "bert", "question-answering", "sv", "dataset:susumu2357/squad_v2_sv", "transformers", "squad", "license:apache-2.0" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
susumu2357
57
transformers
--- language: - sv tags: - squad license: apache-2.0 datasets: - susumu2357/squad_v2_sv metrics: - squad_v2 --- # Swedish BERT Fine-tuned on SQuAD v2 This model is a fine-tuning checkpoint of Swedish BERT on SQuAD v2. ## Training data Fine-tuning was done based on the pre-trained model [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased). Training and dev datasets are our [Swedish translation of SQuAD v2](https://github.com/susumu2357/SQuAD_v2_sv). [Here](https://huggingface.co/datasets/susumu2357/squad_v2_sv) is the HuggingFace Datasets. ## Hyperparameters ``` batch_size = 16 n_epochs = 2 max_seq_len = 386 learning_rate = 3e-5 warmup_steps = 2900 # warmup_proportion = 0.2 doc_stride=128 max_query_length=64 ``` ## Eval results ``` 'exact': 66.72642524202223 'f1': 70.11149581003404 'total': 11156 'HasAns_exact': 55.574745730186144 'HasAns_f1': 62.821693965983044 'HasAns_total': 5211 'NoAns_exact': 76.50126156433979 'NoAns_f1': 76.50126156433979 'NoAns_total': 5945 ``` ## Limitations and bias This model may contain biases due to mistranslations of the SQuAD dataset. ## BibTeX entry and citation info ```bibtex @misc{svSQuADbert, author = {Susumu Okazawa}, title = {Swedish BERT Fine-tuned on Swedish SQuAD 2.0}, year = {2021}, howpublished = {\url{https://huggingface.co/susumu2357/bert-base-swedish-squad2}}, } ```
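The card above documents hyperparameters and SQuAD2-style metrics but no inference snippet; below is a minimal sketch using the question-answering pipeline with no-answer handling enabled, assuming standard transformers usage. The Swedish question and context are illustrative placeholders.

```python
# Minimal sketch (assumed standard usage): SQuAD2-style QA with no-answer handling.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="susumu2357/bert-base-swedish-squad2",
    tokenizer="susumu2357/bert-base-swedish-squad2",
)

# handle_impossible_answer lets the SQuAD2-style model return an empty answer.
result = qa(
    question="Vem skrev boken?",  # illustrative placeholder
    context="Boken skrevs av Selma Lagerlöf och publicerades 1906.",
    handle_impossible_answer=True,
)
print(result)
```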
svalabs/bi-electra-ms-marco-german-uncased
2021-06-14T07:46:23.000Z
[ "pytorch", "electra", "arxiv:1908.10084", "arxiv:1611.09268", "arxiv:2104.08663", "arxiv:2104.12741", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "sentence_bert_config.json", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
svalabs
37
transformers
# SVALabs - German Uncased Electra Bi-Encoder In this repository, we present our german, uncased bi-encoder for Passage Retrieval. This model was trained on the basis of the german electra uncased model from the [german-nlp-group](https://huggingface.co/german-nlp-group/electra-base-german-uncased) and finetuned as a bi-encoder for Passage Retrieval using the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) package. For this purpose, we translated the [MSMARCO-Passage-Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) dataset using the [fairseq-wmt19-en-de](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) translation model. ### Model Details | | Description or Link | |---|---| |**Base model** | [```german-nlp-group/electra-base-german-uncased```](https://huggingface.co/german-nlp-group/electra-base-german-uncased) | |**Finetuning task**| Passage Retrieval / Semantic Search | |**Source dataset**| [```MSMARCO-Passage-Ranking```](https://github.com/microsoft/MSMARCO-Passage-Ranking) | |**Translation model**| [```fairseq-wmt19-en-de```](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) | ### Performance We evaluated our model on the [GermanDPR testset](https://deepset.ai/germanquad) and followed the benchmark framework of [BEIR](https://github.com/UKPLab/beir). In order to compare our results, we conducted an evaluation on the same test data with BM25 and presented the results in the table below. We took every paragraph with negative and positive context out of the testset and deduplicated them. The resulting corpus size is 2871 against 1025 queries. | Model | NDCG@1 | NDCG@5 | NDCG@10 | Recall@1 | Recall@5 | Recall@10 | |:-------:|:--------:|:--------:|:---------:|:--------:|:----------:|:-----------:| | BM25 | 0.1463 | 0.3451 | 0.4097 | 0.1463 | 0.5424 | 0.7415 | | Ours | 0.4624 | 0.6218 | 0.6425 | 0.4624 | 0.7581 | 0.8205 | ### How to Use With ```sentence-transformers``` package (see [UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers) on GitHub for more details): ```python from sentence_transformers import SentenceTransformer bi_model = SentenceTransformer("svalabs/bi-electra-ms-marco-german-uncased") ``` ### Semantic Search Example ```python import numpy as np from sklearn.metrics.pairwise import cosine_similarity K = 3 # number of top ranks to retrieve # specify documents and queries docs = [ "Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.", "Der Gepard jagt seine Beute.", "Wir haben in der Agentur ein neues System für Zeiterfassung.", "Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte.", "Einen Impftermin kann mir der Arzt momentan noch nicht anbieten.", "Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut.", "Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig.", "Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen.", "Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet.", "Bei ALDI sind die Bananen gerade im Angebot.", "Die Entstehung der Erde ist 4,5 milliarden jahre her.", "Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben.", "DAX dreht ins Minus. 
Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main.", ] queries = [ "dax steigt", "dax sinkt", "probleme mit knieschmerzen", "software für urlaubsstunden", "raubtier auf der jagd", "alter der erde", "wie alt ist unser planet?", "wie kapital sichern", "supermarkt lebensmittel reduziert", "wodurch ist der tyrannosaurus aussgestorben", "serien streamen" ] # encode documents and queries features_docs = bi_model.encode(docs) features_queries = bi_model.encode(queries) # compute pairwise cosine similarity scores sim = cosine_similarity(features_queries, features_docs) # print results for i, query in enumerate(queries): ranks = np.argsort(-sim[i]) print("Query:", query) for j, r in enumerate(ranks[:K]): print(f"[{j}: {sim[i, r]: .3f}]", docs[r]) print("-"*96) ``` **Console Output**: ``` Query: dax steigt [0: 0.811] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben. [1: 0.719] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main. [2: 0.218] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie. ------------------------------------------------------------------------------------------------ Query: dax sinkt [0: 0.815] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main. [1: 0.719] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben. [2: 0.243] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie. ------------------------------------------------------------------------------------------------ Query: probleme mit knieschmerzen [0: 0.237] Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte. [1: 0.209] Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig. [2: 0.182] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main. ------------------------------------------------------------------------------------------------ Query: software für urlaubsstunden [0: 0.478] Wir haben in der Agentur ein neues System für Zeiterfassung. [1: 0.208] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie. [2: 0.190] Bei ALDI sind die Bananen gerade im Angebot. ------------------------------------------------------------------------------------------------ Query: raubtier auf der jagd [0: 0.599] Der Gepard jagt seine Beute. [1: 0.264] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie. [2: 0.159] Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut. ------------------------------------------------------------------------------------------------ Query: alter der erde [0: 0.705] Die Entstehung der Erde ist 4,5 milliarden jahre her. [1: 0.413] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet. [2: 0.262] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben. ------------------------------------------------------------------------------------------------ Query: wie alt ist unser planet? [0: 0.441] Die Entstehung der Erde ist 4,5 milliarden jahre her. [1: 0.335] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie. [2: 0.302] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet. 
------------------------------------------------------------------------------------------------ Query: wie kapital sichern [0: 0.547] Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen. [1: 0.331] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben. [2: 0.143] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie. ------------------------------------------------------------------------------------------------ Query: supermarkt lebensmittel reduziert [0: 0.455] Bei ALDI sind die Bananen gerade im Angebot. [1: 0.362] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main. [2: 0.345] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben. ------------------------------------------------------------------------------------------------ Query: wodurch ist der tyrannosaurus aussgestorben [0: 0.457] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet. [1: 0.216] Der Gepard jagt seine Beute. [2: 0.195] Die Entstehung der Erde ist 4,5 milliarden jahre her. ------------------------------------------------------------------------------------------------ Query: serien streamen [0: 0.570] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie. [1: 0.361] Wir haben in der Agentur ein neues System für Zeiterfassung. [2: 0.282] Bei ALDI sind die Bananen gerade im Angebot. ------------------------------------------------------------------------------------------------ ``` ### Contact - Baran Avinc, [email protected] - Jonas Grebe, [email protected] - Lisa Stolz, [email protected] - Bonian Riebe, [email protected] ### References - N. Reimers and I. Gurevych (2019), ['Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks'](https://arxiv.org/abs/1908.10084). - Payal Bajaj et al. (2018), ['MS MARCO: A Human Generated MAchine Reading COmprehension Dataset'](https://arxiv.org/abs/1611.09268). - N. Thakur et al. (2021), ['BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models'](https://arxiv.org/abs/2104.08663). - T. Möller, J. Risch and M. Pietsch (2021), ['GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval'](https://arxiv.org/abs/2104.12741).
svalabs/cross-electra-ms-marco-german-uncased
2021-06-10T07:20:46.000Z
[ "pytorch", "electra", "text-classification", "arxiv:1908.10084", "arxiv:1611.09268", "arxiv:2104.08663", "arxiv:2104.12741", "arxiv:2010.02666", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
svalabs
66
transformers
# SVALabs - German Uncased Electra Cross-Encoder In this repository, we present our german, uncased cross-encoder for Passage Retrieval. This model was trained on the basis of the german electra uncased model from the [german-nlp-group](https://huggingface.co/german-nlp-group/electra-base-german-uncased) and finetuned as a cross-encoder for Passage Retrieval using the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) package. For this purpose, we translated the [MSMARCO-Passage-Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) dataset using the [fairseq-wmt19-en-de](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) translation model. ### Model Details | | Description or Link | |---|---| |**Base model** | [```german-nlp-group/electra-base-german-uncased```](https://huggingface.co/german-nlp-group/electra-base-german-uncased) | |**Finetuning task**| Passage Retrieval / Semantic Search | |**Source dataset**| [```MSMARCO-Passage-Ranking```](https://github.com/microsoft/MSMARCO-Passage-Ranking) | |**Translation model**| [```fairseq-wmt19-en-de```](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) | ### Performance We evaluated our model on the [GermanDPR testset](https://deepset.ai/germanquad) and followed the benchmark framework of [BEIR](https://github.com/UKPLab/beir). In order to compare our results, we conducted an evaluation on the same test data with BM25 and presented the results in the table below. We took every paragraph with negative and positive context out of the testset and deduplicated them. The resulting corpus size is 2871 against 1025 queries. | Model | NDCG@1 | NDCG@5 | NDCG@10 | Recall@1 | Recall@5 | Recall@10 | |:-------------------:|:------:|:------:|:-------:|:--------:|:--------:|:---------:| | BM25 | 0.1463 | 0.3451 | 0.4097 | 0.1463 | 0.5424 | 0.7415 | | BM25(Top 100) +Ours | 0.6410 | 0.7885 | 0.7943 | 0.6410 | 0.8576 | 0.9024 | ### How to Use With ```sentence-transformers``` package (see [UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers) on GitHub for more details): ```python from sentence_transformers.cross_encoder import CrossEncoder cross_model = CrossEncoder("svalabs/cross-electra-ms-marco-german-uncased") ``` ### Semantic Search Example ```python import numpy as np from sklearn.metrics.pairwise import cosine_similarity K = 3 # number of top ranks to retrieve docs = [ "Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.", "Der Gepard jagt seine Beute.", "Wir haben in der Agentur ein neues System für Zeiterfassung.", "Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte.", "Einen Impftermin kann mir der Arzt momentan noch nicht anbieten.", "Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut.", "Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig.", "Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen.", "Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet.", "Bei ALDI sind die Bananen gerade im Angebot.", "Die Entstehung der Erde ist 4,5 milliarden jahre her.", "Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben.", "DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main." 
] queries = [ "dax steigt", "dax sinkt", "probleme mit knieschmerzen", "software für urlaubsstunden", "raubtier auf der jagd", "alter der erde", "wie alt ist unser planet?", "wie kapital sichern", "supermarkt lebensmittel reduziert", "wodurch ist der tyrannosaurus aussgestorben", "serien streamen" ] # encode each query document pair from itertools import product combs = list(product(queries, docs)) outputs = cross_model.predict(combs).reshape((len(queries), len(docs))) # print results for i, query in enumerate(queries): ranks = np.argsort(-outputs[i]) print("Query:", query) for j, r in enumerate(ranks[:3]): print(f"[{j}: {outputs[i, r]: .3f}]", docs[r]) print("-"*96) ``` **Console Output**: ``` Query: dax steigt [0: 7.676] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben. [1: 0.821] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main. [2: -9.905] Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen. ------------------------------------------------------------------------------------------------ Query: dax sinkt [0: 8.079] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main. [1: -0.491] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben. [2: -9.224] Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen. ------------------------------------------------------------------------------------------------ Query: probleme mit knieschmerzen [0: 6.753] Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte. [1: -5.866] Einen Impftermin kann mir der Arzt momentan noch nicht anbieten. [2: -9.461] Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut. ------------------------------------------------------------------------------------------------ Query: software für urlaubsstunden [0: 1.707] Wir haben in der Agentur ein neues System für Zeiterfassung. [1: -10.649] Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte. [2: -11.280] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main. ------------------------------------------------------------------------------------------------ Query: raubtier auf der jagd [0: 4.596] Der Gepard jagt seine Beute. [1: -6.809] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie. [2: -8.392] Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig. ------------------------------------------------------------------------------------------------ Query: alter der erde [0: 7.343] Die Entstehung der Erde ist 4,5 milliarden jahre her. [1: -7.664] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet. [2: -8.020] Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig. ------------------------------------------------------------------------------------------------ Query: wie alt ist unser planet? [0: 7.672] Die Entstehung der Erde ist 4,5 milliarden jahre her. [1: -9.638] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet. [2: -10.251] Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut. 
------------------------------------------------------------------------------------------------ Query: wie kapital sichern [0: 3.927] Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen. [1: -8.733] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben. [2: -10.090] Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte. ------------------------------------------------------------------------------------------------ Query: supermarkt lebensmittel reduziert [0: 3.508] Bei ALDI sind die Bananen gerade im Angebot. [1: -10.057] Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig. [2: -10.470] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main. ------------------------------------------------------------------------------------------------ Query: wodurch ist der tyrannosaurus aussgestorben [0: 0.079] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet. [1: -10.701] Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte. [2: -11.200] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie. ------------------------------------------------------------------------------------------------ Query: serien streamen [0: 3.392] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie. [1: -5.725] Der Gepard jagt seine Beute. [2: -8.378] Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut. ------------------------------------------------------------------------------------------------ ``` ### Contact - Baran Avinc, [email protected] - Jonas Grebe, [email protected] - Lisa Stolz, [email protected] - Bonian Riebe, [email protected] ### References - N. Reimers and I. Gurevych (2019), ['Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks'](https://arxiv.org/abs/1908.10084). - Payal Bajaj et al. (2018), ['MS MARCO: A Human Generated MAchine Reading COmprehension Dataset'](https://arxiv.org/abs/1611.09268). - N. Thakur et al. (2021), ['BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models'](https://arxiv.org/abs/2104.08663). - T. Möller, J. Risch and M. Pietsch (2021), ['GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval'](https://arxiv.org/abs/2104.12741). - Hofstätter et al. (2021), ['Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation'](https://arxiv.org/abs/2010.02666)
svalabs/ger-roberta
2021-05-20T22:04:35.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "training_args.bin", "vocab.json" ]
svalabs
40
transformers
sven1977/test_model
2021-06-13T18:51:21.000Z
[]
[ ".gitattributes" ]
sven1977
0
sw005320/Shinji-Watanabe-ami_asr_train_asr_e85_raw_en_bpe100_optim_conflr5.0_sp_valid.acc.ave-fs16k-langen
2020-12-29T21:42:07.000Z
[]
[ ".gitattributes" ]
sw005320
0
swapnil2911/DialoGPT-small-arya
2021-06-09T06:27:55.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
swapnil2911
9
transformers
pipeline_tag: conversational
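The model card consists only of the pipeline tag; below is a minimal single-turn chat sketch, assuming the checkpoint follows standard DialoGPT-style usage. The user message is an illustrative placeholder.

```python
# Minimal sketch (assumed DialoGPT-style usage): one chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "swapnil2911/DialoGPT-small-arya"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Encode the user message followed by the end-of-sequence token.
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
# Generate a reply and strip the prompt tokens from the output.
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```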
swapnil2911/DialoGPT-test-arya
2021-06-09T06:19:33.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
swapnil2911
2
transformers
pipeline_tag: conversational
swapnil2911/test
2021-06-09T06:28:29.000Z
[]
[ ".gitattributes", "README.md" ]
swapnil2911
0
pipeline_tag: conversational
swcrazyfan/TE-v3-10K
2021-05-29T03:21:08.000Z
[ "pytorch", "gpt_neo", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
swcrazyfan
14
transformers
swcrazyfan/TE-v3-12K
2021-05-29T06:32:52.000Z
[ "pytorch", "gpt_neo", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
swcrazyfan
28
transformers
swcrazyfan/TE-v3-3K
2021-05-28T06:38:28.000Z
[ "pytorch", "gpt_neo", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
swcrazyfan
22
transformers
swcrazyfan/TE-v3-8K
2021-05-28T12:26:43.000Z
[ "pytorch", "gpt_neo", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
swcrazyfan
23
transformers
swcrazyfan/TEFL-2.7B-10K
2021-06-10T03:25:02.000Z
[ "pytorch", "gpt_neo", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
swcrazyfan
22
transformers
swcrazyfan/TEFL-2.7B-15K
2021-06-10T09:20:21.000Z
[ "pytorch", "gpt_neo", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
swcrazyfan
0
transformers
swcrazyfan/TEFL-2.7B-4K
2021-06-04T15:58:19.000Z
[ "pytorch", "gpt_neo", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
swcrazyfan
6
transformers
swcrazyfan/TEFL-2.7B-6K
2021-06-05T07:53:03.000Z
[ "pytorch", "gpt_neo", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
swcrazyfan
1
transformers
swcrazyfan/TEFL-V3
2021-06-14T07:17:34.000Z
[ "pytorch", "gpt_neo", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
swcrazyfan
5
transformers
swcrazyfan/TEFL-blogging-9K
2021-06-03T01:32:49.000Z
[ "pytorch", "gpt_neo", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
swcrazyfan
22
transformers
swcrazyfan/gpt-neo-1.3B-TBL
2021-05-21T05:43:27.000Z
[ "pytorch", "gpt_neo", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
swcrazyfan
99
transformers
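None of the swcrazyfan GPT-Neo records above carry a model card; below is a minimal generation sketch for the most-downloaded of them, assuming standard transformers text-generation usage. The prompt is an illustrative placeholder.

```python
# Minimal sketch (assumed standard usage): text generation with a GPT-Neo checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="swcrazyfan/gpt-neo-1.3B-TBL")
print(generator("Today's lesson plan covers", max_length=60, do_sample=True)[0]["generated_text"])
```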
swheel/nkuCordBot
2021-06-04T03:35:36.000Z
[]
[ ".gitattributes" ]
swheel
0
sy7/first
2021-05-29T12:20:33.000Z
[]
[ ".gitattributes" ]
sy7
0
sybae/BertPractice
2021-01-31T10:03:33.000Z
[]
[ ".gitattributes", "README.md" ]
sybae
0
BERT Implication
sybk/highkick-soonjae-v2
2021-05-31T04:23:02.000Z
[ "pytorch", "gpt2", "transformers" ]
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
sybk
65
transformers
sybk/highkick-soonjae
2021-05-23T14:38:21.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
sybk
39
transformers
sybk/hk-backward
2021-05-23T14:41:39.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
sybk
38
transformers
sybk/hk_backward_v2
2021-05-31T04:17:16.000Z
[ "pytorch", "gpt2", "transformers" ]
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
sybk
90
transformers
t4peter/testModel
2021-04-14T19:38:45.000Z
[]
[ ".gitattributes" ]
t4peter
0
taeminlee/kodialogpt2-base
2021-05-23T13:03:30.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
taeminlee
66
transformers
taeminlee/kogpt2
2021-05-23T13:04:34.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
taeminlee
476
transformers
# KoGPT2-Transformers KoGPT2 on Huggingface Transformers ### KoGPT2-Transformers - [KoGPT2 (ver 1.0), released by SKT-AI](https://github.com/SKT-AI/KoGPT2), made available for use with [Transformers](https://github.com/huggingface/transformers). - **SKT-AI has released KoGPT2 2.0: https://huggingface.co/skt/kogpt2-base-v2/** ### Demo - Everyday-conversation chatbot: http://demo.tmkor.com:36200/dialo - Cosmetics review generation: http://demo.tmkor.com:36200/ctrl ### Example ```python from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast model = GPT2LMHeadModel.from_pretrained("taeminlee/kogpt2") tokenizer = PreTrainedTokenizerFast.from_pretrained("taeminlee/kogpt2") input_ids = tokenizer.encode("안녕", add_special_tokens=False, return_tensors="pt") output_sequences = model.generate(input_ids=input_ids, do_sample=True, max_length=100, num_return_sequences=3) for generated_sequence in output_sequences: generated_sequence = generated_sequence.tolist() print("GENERATED SEQUENCE : {0}".format(tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True))) ```
taharushain/postive_negative_emotions
2021-03-12T03:36:15.000Z
[]
[ ".gitattributes" ]
taharushain
0
tals/albert-base-mnli
2021-03-22T00:37:13.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
tals
90
transformers
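This record carries no model card; below is a minimal sketch of scoring a premise/hypothesis pair with this MNLI classifier, assuming standard sequence-pair classification usage. The sentences are illustrative, and the checkpoint's label order is not documented here, so raw probabilities are printed.

```python
# Minimal sketch (assumed standard usage): premise/hypothesis scoring.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "tals/albert-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "A man is playing a guitar on stage.",  # premise (illustrative)
    "Someone is making music.",             # hypothesis (illustrative)
    return_tensors="pt",
)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # class meaning depends on the checkpoint's label mapping
```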
tals/albert-base-vitaminc-fever
2021-03-22T14:06:31.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
tals
103
transformers
tals/albert-base-vitaminc-mnli
2021-03-22T16:06:08.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
tals
15
transformers
tals/albert-base-vitaminc
2021-03-22T16:03:08.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
tals
164
transformers
tals/albert-base-vitaminc_flagging
2021-03-22T16:13:02.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
tals
300
transformers
tals/albert-base-vitaminc_rationale
2021-06-11T16:51:37.000Z
[ "pytorch", "albert", "transformers" ]
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
tals
7
transformers
tals/albert-base-vitaminc_wnei-fever
2021-06-11T16:25:01.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
tals
5
transformers
tals/albert-xlarge-vitaminc-fever
2021-03-22T13:55:16.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
tals
67
transformers
tals/albert-xlarge-vitaminc-mnli
2021-03-22T16:08:15.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
tals
1,122
transformers
tals/albert-xlarge-vitaminc
2021-03-22T01:58:15.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
tals
36
transformers
tanay/xlm-fine-tuned
2021-03-22T05:13:25.000Z
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
tanay
7
transformers
tankhead200/J123897
2021-03-06T07:40:51.000Z
[]
[ ".gitattributes" ]
tankhead200
0
tanmaylaud/wav2vec2-large-xlsr-hindi-marathi
2021-04-19T18:40:07.000Z
[ "pytorch", "wav2vec2", "mr", "hi", "dataset:openslr", "dataset:interspeech_2021_asr", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "hindi", "marathi", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "optimizer.pt", "preprocessor_config.json", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.json", ".ipynb_checkpoints/README-checkpoint.md", ".ipynb_checkpoints/vocab-checkpoint.json" ]
tanmaylaud
266
transformers
--- language: mr datasets: - openslr - interspeech_2021_asr metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week - hindi - marathi license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Large 53 Hindi-Marathi by Tanmay Laud results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: OpenSLR hi, OpenSLR mr type: openslr, interspeech_2021_asr metrics: - name: Test WER type: wer value: 24.92 --- # Wav2Vec2-Large-XLSR-53-Hindi-Marathi Fine-tuned facebook/wav2vec2-large-xlsr-53 on Hindi and Marathi using the OpenSLR SLR64 datasets. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows, assuming you have a dataset with Marathi text and audio_path fields: ``` import torch import torchaudio import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor # test_data = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section. processor = Wav2Vec2Processor.from_pretrained("tanmaylaud/wav2vec2-large-xlsr-hindi-marathi") model = Wav2Vec2ForCTC.from_pretrained("tanmaylaud/wav2vec2-large-xlsr-hindi-marathi") # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["audio_path"]) batch["speech"] = librosa.resample(speech_array[0].numpy(), sampling_rate, 16_000) # sampling_rate can vary return batch test_data= test_data.map(speech_file_to_array_fn) inputs = processor(test_data["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_data["text"][:2]) Evaluation The model can be evaluated as follows on 10% of the Marathi data on OpenSLR. ``` ``` import torchaudio from datasets import load_metric from transformers import Wav2Vec2Processor,Wav2Vec2ForCTC import torch import librosa import numpy as np import re wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("tanmaylaud/wav2vec2-large-xlsr-hindi-marathi") model = Wav2Vec2ForCTC.from_pretrained("tanmaylaud/wav2vec2-large-xlsr-hindi-marathi") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\।]' # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]) speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = speech_array[0].numpy() batch["sampling_rate"] = sampling_rate batch["target_text"] = batch["sentence"] batch["speech"] = librosa.resample(np.asarray(batch["speech"]), sampling_rate, 16_000) batch["sampling_rate"] = 16_000 return batch test= test.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids, group_tokens=False) # we do not want to group tokens when computing the metrics return batch result = test.map(evaluate, batched=True, batch_size=32) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["text"]))) ``` Link to eval notebook : https://colab.research.google.com/drive/1nZRTgKfxCD9cvy90wikTHkg2il3zgcqW#scrollTo=cXWFbhb0d7DT
tanmoyio/wav2vec2-large-xlsr-bengali
2021-03-29T17:28:42.000Z
[ "pytorch", "wav2vec2", "Bengali", "dataset:OpenSLR", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:attribution-sharealike 4.0 international" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
tanmoyio
97
transformers
--- language: Bengali datasets: - OpenSLR metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: Attribution-ShareAlike 4.0 International model-index: - name: XLSR Wav2Vec2 Bengali by Tanmoy Sarkar results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: OpenSLR type: OpenSLR args: ben metrics: - name: Test WER type: wer value: 88.58 --- # Wav2Vec2-Large-XLSR-Bengali Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Bengali using the [Bengali ASR training data set containing ~196K utterances](https://www.openslr.org/53/). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The dataset must be downloaded from [this website](https://www.openslr.org/53/) and preprocessed accordingly. For example, 1250 test samples have been chosen. ```python import pandas as pd test_dataset = pd.read_csv('utt_spk_text.tsv', sep='\\t', header=None)[60000:61250] test_dataset.columns = ["audio_path", "__", "label"] test_dataset = test_dataset.drop("__", axis=1) def add_file_path(text): path = "data/" + text[:2] + "/" + text + '.flac' return path test_dataset['audio_path'] = test_dataset['audio_path'].map(lambda x: add_file_path(x)) ``` The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor processor = Wav2Vec2Processor.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali") model = Wav2Vec2ForCTC.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["audio_path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["label"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Bengali test data of OpenSLR. ```python import torch import torchaudio from datasets import load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali") model = Wav2Vec2ForCTC.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali") model.to("cuda") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["label"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets.
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 88.58 % ## Training The script used for training can be found at [Bengali ASR Fine Tuning Wav2Vec2](https://colab.research.google.com/drive/1Bkc5C_cJV9BeS0FD0MuHyayl8hqcbdRZ?usp=sharing)
tareknaous/arabic-empathetic-bert2bert
2021-05-30T15:53:49.000Z
[]
[ ".gitattributes" ]
tareknaous
0
tartuNLP/EstBERT
2021-05-20T07:21:03.000Z
[ "pytorch", "jax", "bert", "masked-lm", "et", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "bert_config.json", "config.json", "flax_model.msgpack", "model.ckpt.data-00000-of-00001", "model.ckpt.index", "model.ckpt.meta", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
tartuNLP
728
transformers
--- language: et --- # EstBERT ### What's this? The EstBERT model is a pretrained BERT<sub>Base</sub> model exclusively trained on Estonian cased corpus on both 128 and 512 sequence length of data. ### How to use? You can use the model transformer library both in tensorflow and pytorch version. ``` from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("tartuNLP/EstBERT") model = AutoModelForMaskedLM.from_pretrained("tartuNLP/EstBERT") ``` You can also download the pretrained model from here, [EstBERT_128]() [EstBERT_512]() #### Dataset used to train the model The EstBERT model is trained both on 128 and 512 sequence length of data. For training the EstBERT we used the [Estonian National Corpus 2017](https://metashare.ut.ee/repository/browse/estonian-national-corpus-2017/b616ceda30ce11e8a6e4005056b40024880158b577154c01bd3d3fcfc9b762b3/), which was the largest Estonian language corpus available at the time. It consists of four sub-corpora: Estonian Reference Corpus 1990-2008, Estonian Web Corpus 2013, Estonian Web Corpus 2017 and Estonian Wikipedia Corpus 2017. ### Why would I use? Overall EstBERT performs better in parts of speech (POS), name entity recognition (NER), rubric, and sentiment classification tasks compared to mBERT and XLM-RoBERTa. The comparative results can be found below; |Model |UPOS |XPOS |Morph |bf UPOS |bf XPOS |Morph | |--------------|----------------------------|-------------|-------------|-------------|----------------------------|----------------------------| | EstBERT | **_97.89_** | **98.40** | **96.93** | **97.84** | **_98.43_** | **_96.80_** | | mBERT | 97.42 | 98.06 | 96.24 | 97.43 | 98.13 | 96.13 | | XLM-RoBERTa | 97.78 | 98.36 | 96.53 | 97.80 | 98.40 | 96.69 | |Model|Rubric<sub>128</sub> |Sentiment<sub>128</sub> | Rubric<sub>128</sub> |Sentiment<sub>512</sub> | |-------------------|----------------------------|--------------------|-----------------------------------------------|----------------------------| | EstBERT | **_81.70_** | 74.36 | **80.96** | 74.50 | | mBERT | 75.67 | 70.23 | 74.94 | 69.52 | | XLM\-RoBERTa | 80.34 | **74.50** | 78.62 | **_76.07_**| |Model |Precicion<sub>128</sub> |Recall<sub>128</sub> |F1-Score<sub>128</sub> |Precision<sub>512</sub> |Recall<sub>512</sub> |F1-Score<sub>512</sub> | |--------------|----------------|----------------------------|----------------------------|----------------------------|-------------|----------------| | EstBERT | **88.42** | 90.38 |**_89.39_** | 88.35 | 89.74 | 89.04 | | mBERT | 85.88 | 87.09 | 86.51 |**_88.47_** | 88.28 | 88.37 | | XLM\-RoBERTa | 87.55 |**_91.19_** | 89.34 | 87.50 | **90.76** | **89.10** |
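To complement the loading snippet in the card above, here is a minimal fill-mask sketch assuming standard BERT usage with the [MASK] token. The Estonian example sentence is an illustrative assumption.

```python
# Minimal sketch (assumed usage): fill a masked token with EstBERT.
from transformers import pipeline

fill = pipeline("fill-mask", model="tartuNLP/EstBERT", tokenizer="tartuNLP/EstBERT")
# "Tallinn on Eesti [MASK]." is only an illustrative example sentence.
for pred in fill("Tallinn on Eesti [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```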
tartuNLP/EstBERT_512
2021-05-20T07:22:02.000Z
[ "pytorch", "jax", "bert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
tartuNLP
25
transformers
tartuNLP/EstBERT_Morph_128
2021-05-26T06:48:09.000Z
[ "pytorch", "bert", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
tartuNLP
11
transformers
tartuNLP/EstBERT_NER
2021-05-20T07:23:20.000Z
[ "pytorch", "jax", "bert", "token-classification", "arxiv:2011.04784", "transformers" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
tartuNLP
45
transformers
# EstBERT_NER ## Model description EstBERT_NER is a fine-tuned EstBERT model that can be used for Named Entity Recognition. This model was trained on the Estonian NER dataset created by [Tkachenko et al](https://www.aclweb.org/anthology/W13-2412.pdf). It can recognize three types of entities: locations (LOC), organizations (ORG) and persons (PER). ## How to use You can use this model with Transformers pipeline for NER. Post-processing of results may be necessary as the model occasionally tags subword tokens as entities. ``` from transformers import BertTokenizer, BertForTokenClassification from transformers import pipeline tokenizer = BertTokenizer.from_pretrained('tartuNLP/EstBERT_NER') bertner = BertForTokenClassification.from_pretrained('tartuNLP/EstBERT_NER') nlp = pipeline("ner", model=bertner, tokenizer=tokenizer) sentence = 'Eesti Ekspressi teada on Eesti Pank uurinud Hansapanga tehinguid , mis toimusid kaks aastat tagasi suvel ja mille käigus voolas panka ligi miljardi krooni ulatuses kahtlast raha .' ner_results = nlp(sentence) print(ner_results) ``` ``` [{'word': 'Eesti', 'score': 0.9964128136634827, 'entity': 'B-ORG', 'index': 1}, {'word': 'Ekspressi', 'score': 0.9978809356689453, 'entity': 'I-ORG', 'index': 2}, {'word': 'Eesti', 'score': 0.9988121390342712, 'entity': 'B-ORG', 'index': 5}, {'word': 'Pank', 'score': 0.9985784292221069, 'entity': 'I-ORG', 'index': 6}, {'word': 'Hansapanga', 'score': 0.9979034662246704, 'entity': 'B-ORG', 'index': 8}] ``` ## BibTeX entry and citation info ``` @misc{tanvir2020estbert, title={EstBERT: A Pretrained Language-Specific BERT for Estonian}, author={Hasan Tanvir and Claudia Kittask and Kairit Sirts}, year={2020}, eprint={2011.04784}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
tartuNLP/EstBERT_UPOS_128
2021-05-26T06:52:21.000Z
[ "pytorch", "bert", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
tartuNLP
27
transformers
tartuNLP/EstBERT_XPOS_128
2021-05-26T06:55:41.000Z
[ "pytorch", "bert", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
tartuNLP
35
transformers
taylor/testing
2021-03-27T04:14:41.000Z
[]
[ ".gitattributes", "README.md" ]
taylor
0
tblard/tf-allocine
2020-12-11T22:02:40.000Z
[ "tf", "camembert", "text-classification", "fr", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "sentencepiece.bpe.model", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json" ]
tblard
6,183
transformers
--- language: fr --- # tf-allociné A french sentiment analysis model, based on [CamemBERT](https://camembert-model.fr/), and finetuned on a large-scale dataset scraped from [Allociné.fr](http://www.allocine.fr/) user reviews. ## Results | Validation Accuracy | Validation F1-Score | Test Accuracy | Test F1-Score | |--------------------:| -------------------:| -------------:|--------------:| | 97.39 | 97.36 | 97.44 | 97.34 | The dataset and the evaluation code are available on [this repo](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert). ## Usage ```python from transformers import AutoTokenizer, TFAutoModelForSequenceClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("tblard/tf-allocine") model = TFAutoModelForSequenceClassification.from_pretrained("tblard/tf-allocine") nlp = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) print(nlp("Alad'2 est clairement le meilleur film de l'année 2018.")) # POSITIVE print(nlp("Juste whoaaahouuu !")) # POSITIVE print(nlp("NUL...A...CHIER ! FIN DE TRANSMISSION.")) # NEGATIVE print(nlp("Je m'attendais à mieux de la part de Franck Dubosc !")) # NEGATIVE ``` ## Author Théophile Blard – :email: [email protected] If you use this work (code, model or dataset), please cite as: > Théophile Blard, French sentiment analysis with BERT, (2020), GitHub repository, <https://github.com/TheophileBlard/french-sentiment-analysis-with-bert>
tbs17/MathBERT-custom
2021-06-17T16:41:41.000Z
[ "pytorch", "bert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "math-vocab.txt", "pytorch_model.bin", "tokenizer.json", "tokenizer_config.json" ]
tbs17
31
transformers
#### MathBERT model (custom vocab) Pretrained model on pre-k to graduate math language (English) using a masked language modeling (MLM) objective. This model is uncased: it does not make a difference between english and English. #### Model description MathBERT is a transformers model pretrained on a large corpus of English math corpus data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the math language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MathBERT model as inputs. #### Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a math-related downstream task. Note that this model is primarily aimed at being fine-tuned on math-related tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as math text generation you should look at model like GPT2. #### How to use <!---You can use this model directly with a pipeline for masked language modeling: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.1073106899857521, 'token': 4827, 'token_str': 'fashion'}, {'sequence': "[CLS] hello i'm a role model. [SEP]", 'score': 0.08774490654468536, 'token': 2535, 'token_str': 'role'}, {'sequence': "[CLS] hello i'm a new model. [SEP]", 'score': 0.05338378623127937, 'token': 2047, 'token_str': 'new'}, {'sequence': "[CLS] hello i'm a super model. [SEP]", 'score': 0.04667217284440994, 'token': 3565, 'token_str': 'super'}, {'sequence': "[CLS] hello i'm a fine model. [SEP]", 'score': 0.027095865458250046, 'token': 2986, 'token_str': 'fine'}]---> Here is how to use this model to get the features of a given text in PyTorch: ```from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('tbs17/MathBERT-custom') model = BertModel.from_pretrained("tbs17/MathBERT-custom") text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') output = model(encoded_input) ``` and in TensorFlow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('tbs17/MathBERT-custom') model = TFBertModel.from_pretrained("tbs17/MathBERT-custom") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` #### Limitations and bias <!---Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("The man worked as a [MASK].") [{'sequence': '[CLS] the man worked as a carpenter. [SEP]', 'score': 0.09747550636529922, 'token': 10533, 'token_str': 'carpenter'}, {'sequence': '[CLS] the man worked as a waiter. [SEP]', 'score': 0.0523831807076931, 'token': 15610, 'token_str': 'waiter'}, {'sequence': '[CLS] the man worked as a barber. [SEP]', 'score': 0.04962705448269844, 'token': 13362, 'token_str': 'barber'}, {'sequence': '[CLS] the man worked as a mechanic. [SEP]', 'score': 0.03788609802722931, 'token': 15893, 'token_str': 'mechanic'}, {'sequence': '[CLS] the man worked as a salesman. [SEP]', 'score': 0.037680890411138535, 'token': 18968, 'token_str': 'salesman'}] >>> unmasker("The woman worked as a [MASK].") [{'sequence': '[CLS] the woman worked as a nurse. [SEP]', 'score': 0.21981462836265564, 'token': 6821, 'token_str': 'nurse'}, {'sequence': '[CLS] the woman worked as a waitress. [SEP]', 'score': 0.1597415804862976, 'token': 13877, 'token_str': 'waitress'}, {'sequence': '[CLS] the woman worked as a maid. [SEP]', 'score': 0.1154729500412941, 'token': 10850, 'token_str': 'maid'}, {'sequence': '[CLS] the woman worked as a prostitute. [SEP]', 'score': 0.037968918681144714, 'token': 19215, 'token_str': 'prostitute'}, {'sequence': '[CLS] the woman worked as a cook. [SEP]', 'score': 0.03042375110089779, 'token': 5660, 'token_str': 'cook'}] This bias will also affect all fine-tuned versions of this model.---> Training data The BERT model was pretrained on pre-k to HS math curriculum (engageNY, Utah Math, Illustrative Math), college math books from openculture.com as well as graduate level math from arxiv math paper abstracts. There is about 100M tokens got pretrained on. #### Training procedure The texts are lowercased and tokenized using WordPiece and a customized vocabulary size of 30,522. We use the ```bert_tokenizer``` from huggingface tokenizers library to generate a custom vocab file from our training raw math texts. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: + 15% of the tokens are masked. + In 80% of the cases, the masked tokens are replaced by [MASK]. + In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. + In the 10% remaining cases, the masked tokens are left as is. 
#### Pretraining
The model was trained on an 8-core cloud TPU from Google Colab for 600k steps with a batch size of 128. The sequence length was limited to 512 for the entire time. The optimizer used is Adam with a learning rate of 5e-5, β1 = 0.9 and β2 = 0.999, a weight decay of 0.01, learning rate warmup for 10,000 steps, and linear decay of the learning rate afterwards.
tbs17/MathBERT
2021-06-17T19:04:57.000Z
[ "pytorch", "bert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
tbs17
161
transformers
#### MathBERT model (original vocab) Pretrained model on pre-k to graduate math language (English) using a masked language modeling (MLM) objective. This model is uncased: it does not make a difference between english and English. #### Model description MathBERT is a transformers model pretrained on a large corpus of English math corpus data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the math language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MathBERT model as inputs. #### Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a math-related downstream task. Note that this model is primarily aimed at being fine-tuned on math-related tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as math text generation you should look at model like GPT2. #### How to use <!---You can use this model directly with a pipeline for masked language modeling: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.1073106899857521, 'token': 4827, 'token_str': 'fashion'}, {'sequence': "[CLS] hello i'm a role model. [SEP]", 'score': 0.08774490654468536, 'token': 2535, 'token_str': 'role'}, {'sequence': "[CLS] hello i'm a new model. [SEP]", 'score': 0.05338378623127937, 'token': 2047, 'token_str': 'new'}, {'sequence': "[CLS] hello i'm a super model. [SEP]", 'score': 0.04667217284440994, 'token': 3565, 'token_str': 'super'}, {'sequence': "[CLS] hello i'm a fine model. [SEP]", 'score': 0.027095865458250046, 'token': 2986, 'token_str': 'fine'}]---> Here is how to use this model to get the features of a given text in PyTorch: ```from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('tbs17/MathBERT') model = BertModel.from_pretrained("tbs17/MathBERT") text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') output = model(encoded_input) ``` and in TensorFlow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('tbs17/MathBERT') model = TFBertModel.from_pretrained("tbs17/MathBERT") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` #### Limitations and bias <!---Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("The man worked as a [MASK].") [{'sequence': '[CLS] the man worked as a carpenter. [SEP]', 'score': 0.09747550636529922, 'token': 10533, 'token_str': 'carpenter'}, {'sequence': '[CLS] the man worked as a waiter. [SEP]', 'score': 0.0523831807076931, 'token': 15610, 'token_str': 'waiter'}, {'sequence': '[CLS] the man worked as a barber. [SEP]', 'score': 0.04962705448269844, 'token': 13362, 'token_str': 'barber'}, {'sequence': '[CLS] the man worked as a mechanic. [SEP]', 'score': 0.03788609802722931, 'token': 15893, 'token_str': 'mechanic'}, {'sequence': '[CLS] the man worked as a salesman. [SEP]', 'score': 0.037680890411138535, 'token': 18968, 'token_str': 'salesman'}] >>> unmasker("The woman worked as a [MASK].") [{'sequence': '[CLS] the woman worked as a nurse. [SEP]', 'score': 0.21981462836265564, 'token': 6821, 'token_str': 'nurse'}, {'sequence': '[CLS] the woman worked as a waitress. [SEP]', 'score': 0.1597415804862976, 'token': 13877, 'token_str': 'waitress'}, {'sequence': '[CLS] the woman worked as a maid. [SEP]', 'score': 0.1154729500412941, 'token': 10850, 'token_str': 'maid'}, {'sequence': '[CLS] the woman worked as a prostitute. [SEP]', 'score': 0.037968918681144714, 'token': 19215, 'token_str': 'prostitute'}, {'sequence': '[CLS] the woman worked as a cook. [SEP]', 'score': 0.03042375110089779, 'token': 5660, 'token_str': 'cook'}] This bias will also affect all fine-tuned versions of this model.---> #### Training data The MathBERT model was pretrained on pre-k to HS math curriculum (engageNY, Utah Math, Illustrative Math), college math books from openculture.com as well as graduate level math from arxiv math paper abstracts. There is about 100M tokens got pretrained on. #### Training procedure The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,522 which is from original BERT vocab.txt. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentence spans from the original corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence, but less than 512 tokens. The details of the masking procedure for each sentence are the following: + 15% of the tokens are masked. + In 80% of the cases, the masked tokens are replaced by [MASK]. + In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. + In the 10% remaining cases, the masked tokens are left as is. #### Pretraining The model was trained on a 8-core cloud TPUs from Google Colab for 600k steps with a batch size of 128. The sequence length was limited to 512 for the entire time. 
The optimizer used is Adam with a learning rate of 5e-5, β1 = 0.9 and β2 = 0.999, a weight decay of 0.01, learning rate warmup for 10,000 steps, and linear decay of the learning rate afterwards.
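A minimal fill-mask usage sketch for MathBERT; the example sentence and its predictions are illustrative only, assuming the standard `fill-mask` pipeline applies to this checkpoint:

```
from transformers import pipeline

unmasker = pipeline("fill-mask", model="tbs17/MathBERT")
# Math-flavoured example; the model should rank math-related tokens highly
print(unmasker("The derivative of a constant function is [MASK]."))
```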
tcaputi/guns-relevant-b300
2021-05-20T07:24:39.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
tcaputi
13
transformers
tcaputi/guns-relevant
2021-05-20T07:25:33.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
tcaputi
44
transformers
tdrt67ijk/oefsjdkx
2021-06-13T05:46:15.000Z
[]
[ ".gitattributes", "README.md" ]
tdrt67ijk
0
techthiyanes/Bert_Bahasa_Sentiment
2021-05-20T07:26:52.000Z
[ "pytorch", "tf", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
techthiyanes
20
transformers
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the tokenizer and model from the fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained('techthiyanes/Bert_Bahasa_Sentiment')
model = AutoModelForSequenceClassification.from_pretrained('techthiyanes/Bert_Bahasa_Sentiment')

inputs = tokenizer("saya tidak", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0)

outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
print(logits)
```
techthiyanes/bert-base-mulitilingual-bahasa-sentiment
2021-05-09T02:34:07.000Z
[]
[ ".gitattributes" ]
techthiyanes
0
techthiyanes/chinese_sentiment
2021-05-20T07:28:06.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
techthiyanes
490
transformers
techthiyanes/trainedones
2021-05-09T02:42:36.000Z
[]
[ ".gitattributes", "README.md" ]
techthiyanes
0
tehyw/test
2021-05-10T01:25:58.000Z
[]
[ ".gitattributes", "README.md" ]
tehyw
0
teleportHQ/predicto_css
2021-05-23T13:05:04.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
teleportHQ
13
transformers
predicto css model
teleportHQ/predicto_tsx
2021-05-23T13:05:19.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
teleportHQ
6
transformers
predicto tsx model
tengzhiyong/risk-cls
2020-12-02T02:37:42.000Z
[]
[ ".gitattributes" ]
tengzhiyong
0
tennessejoyce/titlewave-bert-base-uncased
2021-05-20T07:29:09.000Z
[ "pytorch", "jax", "bert", "text-classification", "en", "transformers", "license:cc-by-4.0" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
tennessejoyce
18
transformers
--- language: en license: cc-by-4.0 widget: - text: "[Gmail API] How can I extract plain text from an email sent to me?" --- # Titlewave: bert-base-uncased ## Model description Titlewave is a Chrome extension that helps you choose better titles for your Stack Overflow questions. See the [github repository](https://github.com/tennessejoyce/TitleWave) for more information. This is one of two NLP models used in the Titlewave project, and its purpose is to classify whether question will be answered or not just based on the title. The [companion model](https://huggingface.co/tennessejoyce/titlewave-t5-small) suggests a new title based on on the body of the question. ## Intended use Try out different titles for your Stack Overflow post, and see which one gives you the best chance of receiving an answer. You can use the model through the API on this page (hosted by HuggingFace) or install the Chrome extension by following the instructions on the [github repository](https://github.com/tennessejoyce/TitleWave), which integrates the tool directly into the Stack Overflow website. You can also run the model locally in Python like this (which automatically downloads the model to your machine): ```python >>> from transformers import pipeline >>> classifier = pipeline('sentiment-analysis', model='tennessejoyce/titlewave-bert-base-uncased') >>> classifier('[Gmail API] How can I extract plain text from an email sent to me?') [{'label': 'Answered', 'score': 0.8053370714187622}] ``` The 'score' in the output represents the probability of getting an answer with this title: 80.5%. ## Training data The weights were initialized from the [BERT base model](https://huggingface.co/bert-base-uncased), which was trained on BookCorpus and English Wikipedia. Then the model was fine-tuned on the dataset of previous Stack Overflow post titles, which is publicly available [here](https://archive.org/details/stackexchange). Specifically I used three years of posts from 2017-2019, filtered out posts which were closed (e.g., duplicates, off-topic), and selected 5% of the remaining posts at random to use in the training set, and the same amount for validation and test sets (278,155 posts each). ## Training procedure The model was fine-tuned for two epochs with a batch size of 32 (17,384 steps total) using 16-bit mixed precision. After some hyperparameter tuning, I found that the following two-phase training procedure yields the best performance (ROC-AUC score) on the validation set: * In the first epoch, all layers were frozen except for the last two (pooling layer and classification layer) and a learning rate of 3e-4 was used. * In the second epoch all layers were unfrozen, and the learning rate was decreased by a factor of 10 to 3e-5. Otherwise, all parameters we set to the defaults listed [here](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments), including the AdamW optimizer and a linearly decreasing learning schedule (both of which were reset between the two epochs). See the [github repository](https://github.com/tennessejoyce/TitleWave) for the scripts that were used to train the model. ## Evaluation See [this notebook](https://github.com/tennessejoyce/TitleWave/blob/master/model_training/test_classifier.ipynb) for the performance of the title classification model on the test set.
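For reference, a minimal sketch of the first-phase layer freezing described above; this is a sketch only, assuming a standard `BertForSequenceClassification` checkpoint with the usual Hugging Face parameter naming (`bert.pooler.*`, `classifier.*`):

```python
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Phase 1: train only the pooling layer and the classification head at lr 3e-4
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("bert.pooler") or name.startswith("classifier")

# Phase 2 (second epoch): unfreeze everything and drop the learning rate to 3e-5
# for param in model.parameters():
#     param.requires_grad = True
```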
tennessejoyce/titlewave-t5-base
2021-03-09T16:47:18.000Z
[ "pytorch", "t5", "seq2seq", "en", "transformers", "license:cc-by-4.0", "summarization", "pipeline_tag:summarization", "text2text-generation" ]
summarization
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
tennessejoyce
178
transformers
---
language: en
license: cc-by-4.0
pipeline_tag: summarization
widget:
- text: "Example question body."
---

# Titlewave: t5-base

## Model description

Titlewave is a Chrome extension that helps you choose better titles for your Stack Overflow questions. See https://github.com/tennessejoyce/TitleWave for more information.

This is one of two NLP models used in the Titlewave project, and its purpose is to suggest a new title based on the body of the question. The companion model (https://huggingface.co/tennessejoyce/titlewave-bert-base-uncased) classifies whether a question will be answered or not based only on its title.

## Intended use

Try out different titles for your Stack Overflow post, and see which one gives you the best chance of receiving an answer. This model can be used in your browser as a Chrome extension by following the installation instructions at https://github.com/tennessejoyce/TitleWave. Or load it in Python like this (which will automatically download the model to your machine):

```python
>>> from transformers import pipeline
>>> summarizer = pipeline('summarization', model='tennessejoyce/titlewave-t5-base')
>>> body = """Example question body."""
>>> summarizer(body)
[{'summary_text': 'Example title suggestion?'}]
```

## Training data

The weights were initialized from the T5 base model (https://huggingface.co/t5-base). Then the model was fine-tuned on the dataset of previous Stack Overflow post titles (https://archive.org/details/stackexchange). Specifically I used three years of posts from 2017-2019, filtered out posts which were closed, and selected 25% of the remaining posts at random to use in the training set. In order to improve the quality of the titles generated, the model was trained only on questions with an accepted answer.

## Evaluation

See https://github.com/tennessejoyce/TitleWave/blob/master/model_training/test_summarizer.ipynb for the performance of the title generation model on the test set.
tennessejoyce/titlewave-t5-small
2021-03-09T04:03:11.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
tennessejoyce
10
transformers
# Titlewave: t5-small This is one of two models used in the Titlewave project. See https://github.com/tennessejoyce/TitleWave for more information. This model was fine-tuned on a dataset of Stack Overflow posts, with a ConditionalGeneration head that summarizes the body of a question in order to suggest a title.
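A minimal usage sketch, mirroring the companion t5-base card; this assumes the same `summarization` pipeline interface applies to this smaller checkpoint, and the question body below is just an example:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="tennessejoyce/titlewave-t5-small")
body = "I'm calling the Gmail API from Python and the message payload is base64-encoded. How do I decode it to plain text?"
# The generated summary is a suggested title for the question body
print(summarizer(body))
```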
tensorspeech/tts-fastspeech-ljspeech-en
2021-06-01T09:52:36.000Z
[ "eng", "dataset:LJSpeech", "arxiv:1905.09263", "tensorflowtts", "audio", "text-to-speech", "text-to-mel", "license:apache-2.0" ]
text-to-speech
[ ".gitattributes", "README.md", "config.yml", "model.h5", "processor.json" ]
tensorspeech
0
tensorflowtts
--- tags: - tensorflowtts - audio - text-to-speech - text-to-mel language: eng license: apache-2.0 datasets: - LJSpeech widget: - text: "How are you?" --- # FastSpeech trained on LJSpeech (Eng) This repository provides a pretrained [FastSpeech](https://arxiv.org/abs/1905.09263) trained on LJSpeech dataset (ENG). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Mel Spectrogram ```python import numpy as np import soundfile as sf import yaml import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-fastspeech-ljspeech-en") fastspeech = TFAutoModel.from_pretrained("tensorspeech/tts-fastspeech-ljspeech-en") text = "How are you?" input_ids = processor.text_to_sequence(text) mel_before, mel_after, duration_outputs = fastspeech.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32), ) ``` #### Referencing FastSpeech ``` @article{DBLP:journals/corr/abs-1905-09263, author = {Yi Ren and Yangjun Ruan and Xu Tan and Tao Qin and Sheng Zhao and Zhou Zhao and Tie{-}Yan Liu}, title = {FastSpeech: Fast, Robust and Controllable Text to Speech}, journal = {CoRR}, volume = {abs/1905.09263}, year = {2019}, url = {http://arxiv.org/abs/1905.09263}, archivePrefix = {arXiv}, eprint = {1905.09263}, timestamp = {Wed, 11 Nov 2020 08:48:07 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1905-09263.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
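The FastSpeech snippet above stops at the mel spectrogram. To obtain audio, the mel output can be passed to a separately released vocoder; the sketch below pairs it with the MB-MelGAN LJSpeech checkpoint and mirrors the vocoder usage shown in the other TensorFlowTTS cards in this collection, so treat it as an assumption-laden example rather than an official recipe:

```python
import soundfile as sf
from tensorflow_tts.inference import TFAutoModel

# Vocoder that converts mel spectrograms to a waveform
mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-ljspeech-en")

# `mel_after` is the output of the FastSpeech inference call above
audio = mb_melgan.inference(mel_after)[0, :, 0]
sf.write("./audio.wav", audio, 22050, "PCM_16")
```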
tensorspeech/tts-fastspeech2-baker-ch
2021-06-02T02:51:55.000Z
[ "chinese", "dataset:Baker", "arxiv:2006.04558", "tensorflowtts", "audio", "text-to-speech", "text-to-mel", "license:apache-2.0" ]
text-to-speech
[ ".gitattributes", "README.md", "config.yml", "model.h5", "processor.json" ]
tensorspeech
0
tensorflowtts
--- tags: - tensorflowtts - audio - text-to-speech - text-to-mel language: chinese license: apache-2.0 datasets: - Baker widget: - text: "这是一个开源的端到端中文语音合成系统" --- # FastSpeech2 trained on Baker (Chinese) This repository provides a pretrained [FastSpeech2](https://arxiv.org/abs/2006.04558) trained on Baker dataset (Ch). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Mel Spectrogram ```python import numpy as np import soundfile as sf import yaml import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-fastspeech2-baker-ch") fastspeech2 = TFAutoModel.from_pretrained("tensorspeech/tts-fastspeech2-baker-ch") text = "这是一个开源的端到端中文语音合成系统" input_ids = processor.text_to_sequence(text, inference=True) mel_before, mel_after, duration_outputs, _, _ = fastspeech2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32), f0_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32), energy_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32), ) ``` #### Referencing FastSpeech2 ``` @misc{ren2021fastspeech, title={FastSpeech 2: Fast and High-Quality End-to-End Text to Speech}, author={Yi Ren and Chenxu Hu and Xu Tan and Tao Qin and Sheng Zhao and Zhou Zhao and Tie-Yan Liu}, year={2021}, eprint={2006.04558}, archivePrefix={arXiv}, primaryClass={eess.AS} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-fastspeech2-kss-ko
2021-06-11T03:03:15.000Z
[ "ko", "dataset:KSS", "arxiv:2006.04558", "tensorflowtts", "audio", "text-to-speech", "text-to-mel", "license:apache-2.0" ]
text-to-speech
[ ".gitattributes", "README.md", "config.yml", "model.h5", "processor.json" ]
tensorspeech
0
tensorflowtts
--- tags: - tensorflowtts - audio - text-to-speech - text-to-mel language: ko license: apache-2.0 datasets: - KSS widget: - text: "신은 우리의 수학 문제에는 관심이 없다. 신은 다만 경험적으로 통합할 뿐이다." --- # FastSpeech2 trained on KSS (Korean) This repository provides a pretrained [FastSpeech2](https://arxiv.org/abs/2006.04558) trained on KSS dataset (Ko). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Mel Spectrogram ```python import numpy as np import soundfile as sf import yaml import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-fastspeech2-kss-ko") fastspeech2 = TFAutoModel.from_pretrained("tensorspeech/tts-fastspeech2-kss-ko") text = "신은 우리의 수학 문제에는 관심이 없다. 신은 다만 경험적으로 통합할 뿐이다." input_ids = processor.text_to_sequence(text) mel_before, mel_after, duration_outputs, _, _ = fastspeech2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32), f0_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32), energy_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32), ) ``` #### Referencing FastSpeech2 ``` @misc{ren2021fastspeech, title={FastSpeech 2: Fast and High-Quality End-to-End Text to Speech}, author={Yi Ren and Chenxu Hu and Xu Tan and Tao Qin and Sheng Zhao and Zhou Zhao and Tie-Yan Liu}, year={2021}, eprint={2006.04558}, archivePrefix={arXiv}, primaryClass={eess.AS} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-fastspeech2-ljspeech-en
2021-06-01T09:54:05.000Z
[ "eng", "dataset:LJSpeech", "arxiv:2006.04558", "tensorflowtts", "audio", "text-to-speech", "text-to-mel", "license:apache-2.0" ]
text-to-speech
[ ".gitattributes", "README.md", "config.yml", "model.h5", "processor.json" ]
tensorspeech
0
tensorflowtts
--- tags: - tensorflowtts - audio - text-to-speech - text-to-mel language: eng license: apache-2.0 datasets: - LJSpeech widget: - text: "How are you?" --- # FastSpeech2 trained on LJSpeech (Eng) This repository provides a pretrained [FastSpeech2](https://arxiv.org/abs/2006.04558) trained on LJSpeech dataset (ENG). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Mel Spectrogram ```python import numpy as np import soundfile as sf import yaml import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-fastspeech2-ljspeech-en") fastspeech2 = TFAutoModel.from_pretrained("tensorspeech/tts-fastspeech2-ljspeech-en") text = "How are you?" input_ids = processor.text_to_sequence(text) mel_before, mel_after, duration_outputs, _, _ = fastspeech2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32), f0_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32), energy_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32), ) ``` #### Referencing FastSpeech2 ``` @misc{ren2021fastspeech, title={FastSpeech 2: Fast and High-Quality End-to-End Text to Speech}, author={Yi Ren and Chenxu Hu and Xu Tan and Tao Qin and Sheng Zhao and Zhou Zhao and Tie-Yan Liu}, year={2021}, eprint={2006.04558}, archivePrefix={arXiv}, primaryClass={eess.AS} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-mb_melgan-baker-ch
2021-06-02T02:50:59.000Z
[ "ch", "dataset:Baker", "arxiv:2005.05106", "tensorflowtts", "audio", "text-to-speech", "mel-to-wav", "license:apache-2.0" ]
text-to-speech
[ ".gitattributes", "README.md", "config.yml", "model.h5" ]
tensorspeech
0
tensorflowtts
--- tags: - tensorflowtts - audio - text-to-speech - mel-to-wav language: ch license: apache-2.0 datasets: - Baker widget: - text: "这是一个开源的端到端中文语音合成系统" --- # Multi-band MelGAN trained on Baker (Ch) This repository provides a pretrained [Multi-band MelGAN](https://arxiv.org/abs/2005.05106) trained on Baker dataset (ch). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Wav ```python import soundfile as sf import numpy as np import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-baker-ch") tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-baker-ch") mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-baker-ch") text = "这是一个开源的端到端中文语音合成系统" input_ids = processor.text_to_sequence(text, inference=True) # tacotron2 inference (text-to-mel) decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), ) # melgan inference (mel-to-wav) audio = mb_melgan.inference(mel_outputs)[0, :, 0] # save to file sf.write('./audio.wav', audio, 22050, "PCM_16") ``` #### Referencing Multi-band MelGAN ``` @misc{yang2020multiband, title={Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech}, author={Geng Yang and Shan Yang and Kai Liu and Peng Fang and Wei Chen and Lei Xie}, year={2020}, eprint={2005.05106}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-mb_melgan-kss-ko
2021-06-01T09:06:04.000Z
[ "ko", "dataset:KSS", "arxiv:2005.05106", "tensorflowtts", "audio", "text-to-speech", "mel-to-wav", "license:apache-2.0" ]
text-to-speech
[ ".gitattributes", "README.md", "config.yml", "model.h5" ]
tensorspeech
0
tensorflowtts
--- tags: - tensorflowtts - audio - text-to-speech - mel-to-wav language: ko license: apache-2.0 datasets: - KSS widget: - text: "신은 우리의 수학 문제에는 관심이 없다. 신은 다만 경험적으로 통합할 뿐이다." --- # Multi-band MelGAN trained on KSS (Korean) This repository provides a pretrained [Multi-band MelGAN](https://arxiv.org/abs/2005.05106) trained on KSS dataset (ko). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Wav ```python import soundfile as sf import numpy as np import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-kss-ko") tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-kss-ko") mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-kss-ko") text = "신은 우리의 수학 문제에는 관심이 없다. 신은 다만 경험적으로 통합할 뿐이다." input_ids = processor.text_to_sequence(text) # tacotron2 inference (text-to-mel) decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), ) # melgan inference (mel-to-wav) audio = mb_melgan.inference(mel_outputs)[0, :, 0] # save to file sf.write('./audio.wav', audio, 22050, "PCM_16") ``` #### Referencing Multi-band MelGAN ``` @misc{yang2020multiband, title={Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech}, author={Geng Yang and Shan Yang and Kai Liu and Peng Fang and Wei Chen and Lei Xie}, year={2020}, eprint={2005.05106}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-mb_melgan-ljspeech-en
2021-06-01T09:54:44.000Z
[ "en", "dataset:ljspeech", "arxiv:2005.05106", "tensorflowtts", "audio", "text-to-speech", "mel-to-wav", "license:apache-2.0" ]
text-to-speech
[ ".gitattributes", "README.md", "config.yml", "model.h5" ]
tensorspeech
0
tensorflowtts
--- tags: - tensorflowtts - audio - text-to-speech - mel-to-wav language: en license: apache-2.0 datasets: - ljspeech widget: - text: "Hello, how are you doing?" --- # Multi-band MelGAN trained on LJSpeech (En) This repository provides a pretrained [Multi-band MelGAN](https://arxiv.org/abs/2005.05106) trained on LJSpeech dataset (Eng). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Wav ```python import soundfile as sf import numpy as np import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-ljspeech-en") tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-ljspeech-en") mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-ljspeech-en") text = "This is a demo to show how to use our model to generate mel spectrogram from raw text." input_ids = processor.text_to_sequence(text) # tacotron2 inference (text-to-mel) decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), ) # melgan inference (mel-to-wav) audio = mb_melgan.inference(mel_outputs)[0, :, 0] # save to file sf.write('./audio.wav', audio, 22050, "PCM_16") ``` #### Referencing Multi-band MelGAN ``` @misc{yang2020multiband, title={Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech}, author={Geng Yang and Shan Yang and Kai Liu and Peng Fang and Wei Chen and Lei Xie}, year={2020}, eprint={2005.05106}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-mb_melgan-thorsten-ger
2021-06-01T09:07:00.000Z
[ "ger", "dataset:Thorsten", "arxiv:2005.05106", "tensorflowtts", "audio", "text-to-speech", "mel-to-wav", "license:apache-2.0" ]
text-to-speech
[ ".gitattributes", "README.md", "config.yml", "model.h5" ]
tensorspeech
0
tensorflowtts
--- tags: - tensorflowtts - audio - text-to-speech - mel-to-wav language: ger license: apache-2.0 datasets: - Thorsten widget: - text: "Möchtest du das meiner Frau erklären? Nein? Ich auch nicht." --- # Multi-band MelGAN trained on Thorsten (Ger) This repository provides a pretrained [Multi-band MelGAN](https://arxiv.org/abs/2005.05106) trained on Thorsten dataset (ger). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Wav ```python import soundfile as sf import numpy as np import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-thorsten-ger") tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-thorsten-ger") mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-thorsten-ger") text = "Möchtest du das meiner Frau erklären? Nein? Ich auch nicht." input_ids = processor.text_to_sequence(text) # tacotron2 inference (text-to-mel) decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), ) # melgan inference (mel-to-wav) audio = mb_melgan.inference(mel_outputs)[0, :, 0] # save to file sf.write('./audio.wav', audio, 22050, "PCM_16") ``` #### Referencing Multi-band MelGAN ``` @misc{yang2020multiband, title={Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech}, author={Geng Yang and Shan Yang and Kai Liu and Peng Fang and Wei Chen and Lei Xie}, year={2020}, eprint={2005.05106}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-melgan-ljspeech-en
2021-06-01T09:55:16.000Z
[ "en", "dataset:ljspeech", "arxiv:1910.06711", "tensorflowtts", "audio", "text-to-speech", "mel-to-wav", "license:apache-2.0" ]
text-to-speech
[ ".gitattributes", "README.md", "config.yml", "model.h5" ]
tensorspeech
0
tensorflowtts
--- tags: - tensorflowtts - audio - text-to-speech - mel-to-wav language: en license: apache-2.0 datasets: - ljspeech widget: - text: "Hello, how are you doing?" --- # MelGAN trained on LJSpeech (En) This repository provides a pretrained [MelGAN](https://arxiv.org/abs/1910.06711) trained on LJSpeech dataset (Eng). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Wav ```python import soundfile as sf import numpy as np import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-ljspeech-en") tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-ljspeech-en") melgan = TFAutoModel.from_pretrained("tensorspeech/tts-melgan-ljspeech-en") text = "This is a demo to show how to use our model to generate mel spectrogram from raw text." input_ids = processor.text_to_sequence(text) # tacotron2 inference (text-to-mel) decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), ) # melgan inference (mel-to-wav) audio = melgan.inference(mel_outputs)[0, :, 0] # save to file sf.write('./audio.wav', audio, 22050, "PCM_16") ``` #### Referencing MelGAN ``` @misc{kumar2019melgan, title={MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis}, author={Kundan Kumar and Rithesh Kumar and Thibault de Boissiere and Lucas Gestin and Wei Zhen Teoh and Jose Sotelo and Alexandre de Brebisson and Yoshua Bengio and Aaron Courville}, year={2019}, eprint={1910.06711}, archivePrefix={arXiv}, primaryClass={eess.AS} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-tacotron2-baker-ch
2021-06-02T02:50:20.000Z
[ "ch", "dataset:baker", "arxiv:1712.05884", "arxiv:1710.08969", "tensorflowtts", "audio", "text-to-speech", "text-to-mel", "license:apache-2.0" ]
text-to-speech
[ ".gitattributes", "README.md", "config.yml", "model.h5", "processor.json" ]
tensorspeech
0
tensorflowtts
--- tags: - tensorflowtts - audio - text-to-speech - text-to-mel language: ch license: apache-2.0 datasets: - baker widget: - text: "这是一个开源的端到端中文语音合成系统" --- # Tacotron 2 with Guided Attention trained on Baker (Chinese) This repository provides a pretrained [Tacotron2](https://arxiv.org/abs/1712.05884) trained with [Guided Attention](https://arxiv.org/abs/1710.08969) on Baker dataset (Ch). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Mel Spectrogram ```python import numpy as np import soundfile as sf import yaml import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-baker-ch") tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-baker-ch") text = "这是一个开源的端到端中文语音合成系统" input_ids = processor.text_to_sequence(text, inference=True) decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), ) ``` #### Referencing Tacotron 2 ``` @article{DBLP:journals/corr/abs-1712-05884, author = {Jonathan Shen and Ruoming Pang and Ron J. Weiss and Mike Schuster and Navdeep Jaitly and Zongheng Yang and Zhifeng Chen and Yu Zhang and Yuxuan Wang and R. J. Skerry{-}Ryan and Rif A. Saurous and Yannis Agiomyrgiannakis and Yonghui Wu}, title = {Natural {TTS} Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions}, journal = {CoRR}, volume = {abs/1712.05884}, year = {2017}, url = {http://arxiv.org/abs/1712.05884}, archivePrefix = {arXiv}, eprint = {1712.05884}, timestamp = {Thu, 28 Nov 2019 08:59:52 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1712-05884.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-tacotron2-kss-ko
2021-06-01T09:56:01.000Z
[ "ko", "dataset:kss", "arxiv:1712.05884", "arxiv:1710.08969", "tensorflowtts", "audio", "text-to-speech", "text-to-mel", "license:apache-2.0" ]
text-to-speech
[ ".gitattributes", "README.md", "config.yml", "model.h5", "processor.json" ]
tensorspeech
0
tensorflowtts
--- tags: - tensorflowtts - audio - text-to-speech - text-to-mel language: ko license: apache-2.0 datasets: - kss widget: - text: "신은 우리의 수학 문제에는 관심이 없다. 신은 다만 경험적으로 통합할 뿐이다." --- # Tacotron 2 with Guided Attention trained on KSS (Korean) This repository provides a pretrained [Tacotron2](https://arxiv.org/abs/1712.05884) trained with [Guided Attention](https://arxiv.org/abs/1710.08969) on KSS dataset (KO). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Mel Spectrogram ```python import numpy as np import soundfile as sf import yaml import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-kss-ko") tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-kss-ko") text = "신은 우리의 수학 문제에는 관심이 없다. 신은 다만 경험적으로 통합할 뿐이다." input_ids = processor.text_to_sequence(text) decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), ) ``` #### Referencing Tacotron 2 ``` @article{DBLP:journals/corr/abs-1712-05884, author = {Jonathan Shen and Ruoming Pang and Ron J. Weiss and Mike Schuster and Navdeep Jaitly and Zongheng Yang and Zhifeng Chen and Yu Zhang and Yuxuan Wang and R. J. Skerry{-}Ryan and Rif A. Saurous and Yannis Agiomyrgiannakis and Yonghui Wu}, title = {Natural {TTS} Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions}, journal = {CoRR}, volume = {abs/1712.05884}, year = {2017}, url = {http://arxiv.org/abs/1712.05884}, archivePrefix = {arXiv}, eprint = {1712.05884}, timestamp = {Thu, 28 Nov 2019 08:59:52 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1712-05884.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-tacotron2-ljspeech-en
2021-06-01T09:56:19.000Z
[ "en", "dataset:ljspeech", "arxiv:1712.05884", "arxiv:1710.08969", "tensorflowtts", "audio", "text-to-speech", "text-to-mel", "license:apache-2.0" ]
text-to-speech
[ ".gitattributes", "README.md", "config.yml", "model.h5", "processor.json" ]
tensorspeech
0
tensorflowtts
--- tags: - tensorflowtts - audio - text-to-speech - text-to-mel language: en license: apache-2.0 datasets: - ljspeech widget: - text: "Hello, how are you doing?" --- # Tacotron 2 with Guided Attention trained on LJSpeech (En) This repository provides a pretrained [Tacotron2](https://arxiv.org/abs/1712.05884) trained with [Guided Attention](https://arxiv.org/abs/1710.08969) on LJSpeech dataset (Eng). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Mel Spectrogram ```python import numpy as np import soundfile as sf import yaml import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-ljspeech-en") tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-ljspeech-en") text = "This is a demo to show how to use our model to generate mel spectrogram from raw text." input_ids = processor.text_to_sequence(text) decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), ) ``` #### Referencing Tacotron 2 ``` @article{DBLP:journals/corr/abs-1712-05884, author = {Jonathan Shen and Ruoming Pang and Ron J. Weiss and Mike Schuster and Navdeep Jaitly and Zongheng Yang and Zhifeng Chen and Yu Zhang and Yuxuan Wang and R. J. Skerry{-}Ryan and Rif A. Saurous and Yannis Agiomyrgiannakis and Yonghui Wu}, title = {Natural {TTS} Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions}, journal = {CoRR}, volume = {abs/1712.05884}, year = {2017}, url = {http://arxiv.org/abs/1712.05884}, archivePrefix = {arXiv}, eprint = {1712.05884}, timestamp = {Thu, 28 Nov 2019 08:59:52 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1712-05884.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-tacotron2-thorsten-ger
2021-06-01T09:56:43.000Z
[ "german", "dataset:Thorsten", "arxiv:1712.05884", "arxiv:1710.08969", "tensorflowtts", "audio", "text-to-speech", "text-to-mel", "license:apache-2.0" ]
text-to-speech
[ ".gitattributes", "README.md", "config.yml", "model.h5", "processor.json" ]
tensorspeech
0
tensorflowtts
---
tags:
- tensorflowtts
- audio
- text-to-speech
- text-to-mel
language: german
license: apache-2.0
datasets:
- Thorsten
widget:
- text: "Möchtest du das meiner Frau erklären? Nein? Ich auch nicht."
---

# Tacotron 2 with Guided Attention trained on Thorsten (Ger)

This repository provides a pretrained [Tacotron2](https://arxiv.org/abs/1712.05884) model trained with [Guided Attention](https://arxiv.org/abs/1710.08969) on the Thorsten dataset (German). For details of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS).

## Install TensorFlowTTS

First of all, please install TensorFlowTTS with the following command:

```
pip install TensorFlowTTS
```

### Converting your Text to Mel Spectrogram

```python
import numpy as np
import soundfile as sf
import yaml
import tensorflow as tf

from tensorflow_tts.inference import AutoProcessor
from tensorflow_tts.inference import TFAutoModel

processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-thorsten-ger")
tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-thorsten-ger")

text = "Möchtest du das meiner Frau erklären? Nein? Ich auch nicht."

input_ids = processor.text_to_sequence(text)

decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference(
    input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
    input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32),
    speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
)
```

#### Referencing Tacotron 2

```
@article{DBLP:journals/corr/abs-1712-05884,
  author        = {Jonathan Shen and Ruoming Pang and Ron J. Weiss and Mike Schuster and Navdeep Jaitly and Zongheng Yang and Zhifeng Chen and Yu Zhang and Yuxuan Wang and R. J. Skerry{-}Ryan and Rif A. Saurous and Yannis Agiomyrgiannakis and Yonghui Wu},
  title         = {Natural {TTS} Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions},
  journal       = {CoRR},
  volume        = {abs/1712.05884},
  year          = {2017},
  url           = {http://arxiv.org/abs/1712.05884},
  archivePrefix = {arXiv},
  eprint        = {1712.05884},
  timestamp     = {Thu, 28 Nov 2019 08:59:52 +0100},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1712-05884.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```

#### Referencing TensorFlowTTS

```
@misc{TFTTS,
  author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He},
  title = {TensorflowTTS},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/TensorSpeech/TensorFlowTTS}},
}
```
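#### Saving the mel spectrogram for offline vocoding (sketch)

The generated mel spectrogram can be stored to disk and handed to a vocoder (e.g. MelGAN or MB-MelGAN) in a separate step. A minimal sketch, with an arbitrary file name:

```python
import numpy as np

# mel_outputs comes from the tacotron2.inference(...) call above.
# Drop the batch dimension before saving.
np.save("thorsten_mel.npy", mel_outputs[0].numpy())

# Later, in the vocoder script, restore the batch dimension:
# mel = np.load("thorsten_mel.npy")[np.newaxis, ...]
```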
teshnizi/bert-lossy
2021-02-10T15:46:36.000Z
[]
[ ".gitattributes", "README.md" ]
teshnizi
0
hello hello
textattack/albert-base-v2-CoLA
2020-07-06T16:28:50.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "log.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "train_args.json" ]
textattack
179
transformers
## TextAttack Model Card

This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 3e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.8245445829338447, as measured by the eval set accuracy, found after 2 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
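The checkpoint can be loaded with the standard `transformers` auto classes for inference. A minimal sketch; the mapping from logit index to "acceptable" versus "unacceptable" is not documented in this card, so treat it as an assumption to verify on sentences whose label is known.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "textattack/albert-base-v2-CoLA"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("The book was written by three authors.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# CoLA is binary acceptability classification; which index means "acceptable"
# is an assumption here, so check it against sentences whose label you know.
print(logits.argmax(dim=-1).item())
```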
textattack/albert-base-v2-MRPC
2020-07-06T16:29:43.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "log.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "train_args.json" ]
textattack
1,520
transformers
## TextAttack Model Card

This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.8970588235294118, as measured by the eval set accuracy, found after 4 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
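MRPC is a sentence-pair task, so both sentences are passed to the tokenizer, which builds a single sequence with the separator token between them. A minimal inference sketch; which output column corresponds to "paraphrase" is not stated in the card and should be checked empirically.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "textattack/albert-base-v2-MRPC"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Pass the two sentences as a pair so the separator token is inserted for us.
inputs = tokenizer(
    "The company posted record profits this quarter.",
    "Quarterly profits at the company hit an all-time high.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# Which column means "paraphrase" is an assumption; verify it empirically.
print(probs)
```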
textattack/albert-base-v2-QQP
2020-07-06T16:30:55.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "log.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "train_args.json" ]
textattack
13
transformers
## TextAttack Model Card

This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 5e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9073707642839476, as measured by the eval set accuracy, found after 3 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/albert-base-v2-RTE
2020-07-06T16:31:05.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "log.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "train_args.json" ]
textattack
26
transformers
## TextAttack Model Card

This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 64, a learning rate of 3e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.776173285198556, as measured by the eval set accuracy, found after 4 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/albert-base-v2-SST-2
2020-07-06T16:32:15.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "log.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "train_args.json" ]
textattack
118
transformers
## TextAttack Model Card

This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 3e-05, and a maximum sequence length of 64. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9254587155963303, as measured by the eval set accuracy, found after 2 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/albert-base-v2-STS-B
2020-07-06T16:32:24.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "log.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "train_args.json" ]
textattack
86
transformers
## TextAttack Model Card

This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 3e-05, and a maximum sequence length of 128. Since this was a regression task, the model was trained with a mean squared error loss function. The best score the model achieved on this task was 0.9064220351504577, as measured by the eval set pearson correlation, found after 3 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
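Because STS-B is a regression task, the fine-tuned head should emit a single similarity score rather than class logits; the nominal 0 to 5 STS scale is an assumption here, so calibrate against a few pairs with known scores. A minimal sketch:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "textattack/albert-base-v2-STS-B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer(
    "A man is playing a guitar.",
    "A person plays a guitar.",
    return_tensors="pt",
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()

# A single regression output is assumed; the nominal STS scale is 0 to 5.
print(score)
```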