modelId: string (length 5–139)
author: string (length 2–42)
last_modified: timestamp[us, tz=UTC] (2020-02-15 11:33:14 – 2025-07-31 06:28:41)
downloads: int64 (0 – 223M)
likes: int64 (0 – 11.7k)
library_name: string (539 classes)
tags: list (length 1 – 4.05k)
pipeline_tag: string (55 classes)
createdAt: timestamp[us, tz=UTC] (2022-03-02 23:29:04 – 2025-07-31 06:26:51)
card: string (length 11 – 1.01M)
tner/xlm-roberta-base-uncased-wnut2017
tner
2021-02-12T23:48:34Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-wnut2017") model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-wnut2017") ```
tner/xlm-roberta-base-uncased-mit-restaurant
tner
2021-02-12T23:47:38Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-mit-restaurant") model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-mit-restaurant") ```
tner/xlm-roberta-base-uncased-fin
tner
2021-02-12T23:47:27Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-fin") model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-fin") ```
tner/xlm-roberta-base-uncased-all-english
tner
2021-02-12T23:35:06Z
7
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-all-english") model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-all-english") ```
tner/xlm-roberta-base-panx-dataset-es
tner
2021-02-12T23:34:35Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-es") model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-es") ```
tner/xlm-roberta-base-panx-dataset-ar
tner
2021-02-12T23:34:15Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ar") model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ar") ```
Musixmatch/umberto-commoncrawl-cased-v1
Musixmatch
2021-02-12T11:31:59Z
16,559
14
transformers
[ "transformers", "pytorch", "camembert", "fill-mask", "it", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: it --- # UmBERTo Commoncrawl Cased [UmBERTo](https://github.com/musixmatchresearch/umberto) is a RoBERTa-based language model trained on large Italian corpora using two innovative approaches: SentencePiece and Whole Word Masking. Now available on the [Hugging Face model hub](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1). <p align="center"> <img src="https://user-images.githubusercontent.com/7140210/72913702-d55a8480-3d3d-11ea-99fc-f2ef29af4e72.jpg" width="700"> </br> Marco Lodola, Monument to Umberto Eco, Alessandria 2019 </p> ## Dataset UmBERTo-Commoncrawl-Cased uses the Italian subcorpus of [OSCAR](https://traces1.inria.fr/oscar/) as the training set for the language model. We used the deduplicated version of the Italian corpus, which consists of 70 GB of plain text and 210M sentences with 11B words; the sentences were filtered and shuffled at line level so they can be used for NLP research. ## Pre-trained model | Model | WWM | Cased | Tokenizer | Vocab Size | Train Steps | Download | | ------ | ------ | ------ | ------ | ------ |------ | ------ | | `umberto-commoncrawl-cased-v1` | YES | YES | SPM | 32K | 125k | [Link](http://bit.ly/35zO7GH) | This model was trained with [SentencePiece](https://github.com/google/sentencepiece) and Whole Word Masking. ## Downstream Tasks These results refer to the umberto-commoncrawl-cased model. All details are on the [UmBERTo](https://github.com/musixmatchresearch/umberto) official page. #### Named Entity Recognition (NER) | Dataset | F1 | Precision | Recall | Accuracy | | ------ | ------ | ------ | ------ | ------ | | **ICAB-EvalITA07** | **87.565** | 86.596 | 88.556 | 98.690 | | **WikiNER-ITA** | **92.531** | 92.509 | 92.553 | 99.136 | #### Part of Speech (POS) | Dataset | F1 | Precision | Recall | Accuracy | | ------ | ------ | ------ | ------ | ------ | | **UD_Italian-ISDT** | 98.870 | 98.861 | 98.879 | **98.977** | | **UD_Italian-ParTUT** | 98.786 | 98.812 | 98.760 | **98.903** | ## Usage ##### Load UmBERTo with AutoModel, AutoTokenizer: ```python import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Musixmatch/umberto-commoncrawl-cased-v1") umberto = AutoModel.from_pretrained("Musixmatch/umberto-commoncrawl-cased-v1") encoded_input = tokenizer.encode("Umberto Eco è stato un grande scrittore") input_ids = torch.tensor(encoded_input).unsqueeze(0) # Batch size 1 outputs = umberto(input_ids) last_hidden_states = outputs[0] # The last hidden state is the first element of the output ``` ##### Predict masked token: ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="Musixmatch/umberto-commoncrawl-cased-v1", tokenizer="Musixmatch/umberto-commoncrawl-cased-v1" ) result = fill_mask("Umberto Eco è <mask> un grande scrittore") # {'sequence': '<s> Umberto Eco è considerato un grande scrittore</s>', 'score': 0.18599839508533478, 'token': 5032} # {'sequence': '<s> Umberto Eco è stato un grande scrittore</s>', 'score': 0.17816807329654694, 'token': 471} # {'sequence': '<s> Umberto Eco è sicuramente un grande scrittore</s>', 'score': 0.16565583646297455, 'token': 2654} # {'sequence': '<s> Umberto Eco è indubbiamente un grande scrittore</s>', 'score': 0.0932890921831131, 'token': 17908} # {'sequence': '<s> Umberto Eco è certamente un grande scrittore</s>', 'score': 0.054701317101716995, 'token': 5269} ``` ## Citation All of the original datasets are publicly available or were released with the owners' grant. 
The datasets are all released under a CC0 or CCBY license. * UD Italian-ISDT Dataset [Github](https://github.com/UniversalDependencies/UD_Italian-ISDT) * UD Italian-ParTUT Dataset [Github](https://github.com/UniversalDependencies/UD_Italian-ParTUT) * I-CAB (Italian Content Annotation Bank), EvalITA [Page](http://www.evalita.it/) * WIKINER [Page](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) , [Paper](https://www.sciencedirect.com/science/article/pii/S0004370212000276?via%3Dihub) ``` @inproceedings {magnini2006annotazione, title = {Annotazione di contenuti concettuali in un corpus italiano: I - CAB}, author = {Magnini,Bernardo and Cappelli,Amedeo and Pianta,Emanuele and Speranza,Manuela and Bartalesi Lenzi,V and Sprugnoli,Rachele and Romano,Lorenza and Girardi,Christian and Negri,Matteo}, booktitle = {Proc.of SILFI 2006}, year = {2006} } @inproceedings {magnini2006cab, title = {I - CAB: the Italian Content Annotation Bank.}, author = {Magnini,Bernardo and Pianta,Emanuele and Girardi,Christian and Negri,Matteo and Romano,Lorenza and Speranza,Manuela and Lenzi,Valentina Bartalesi and Sprugnoli,Rachele}, booktitle = {LREC}, pages = {963--968}, year = {2006}, organization = {Citeseer} } ``` ## Authors **Loreto Parisi**: `loreto at musixmatch dot com`, [loretoparisi](https://github.com/loretoparisi) **Simone Francia**: `simone.francia at musixmatch dot com`, [simonefrancia](https://github.com/simonefrancia) **Paolo Magnani**: `paul.magnani95 at gmail dot com`, [paulthemagno](https://github.com/paulthemagno) ## About Musixmatch AI ![Musxmatch Ai mac app icon-128](https://user-images.githubusercontent.com/163333/72244273-396aa380-35ee-11ea-894b-4ea48230c02b.png) We do Machine Learning and Artificial Intelligence @[musixmatch](https://twitter.com/Musixmatch) Follow us on [Twitter](https://twitter.com/musixmatchai) [Github](https://github.com/musixmatchresearch)
microsoft/deberta-xxlarge-v2
microsoft
2021-02-11T02:05:17Z
155
0
transformers
[ "transformers", "pytorch", "deberta-v2", "deberta", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en tags: deberta thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention ## This model is DEPRECATED; please use [DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge) instead.
microsoft/deberta-xxlarge-v2-mnli
microsoft
2021-02-11T02:05:00Z
20
0
transformers
[ "transformers", "pytorch", "deberta-v2", "deberta", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en tags: deberta thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention ## This model is DEPRECATED; please use [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli) instead.
microsoft/deberta-xlarge-v2
microsoft
2021-02-11T02:04:50Z
24
0
transformers
[ "transformers", "pytorch", "deberta-v2", "deberta", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en tags: deberta thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention ## This model is DEPRECATED; please use [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge) instead.
valhalla/longformer-base-4096-finetuned-squadv1
valhalla
2021-02-10T16:35:40Z
513
22
transformers
[ "transformers", "pytorch", "tf", "rust", "longformer", "question-answering", "dataset:squad_v1", "arxiv:2004.05150", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- datasets: - squad_v1 license: mit --- # LONGFORMER-BASE-4096 fine-tuned on SQuAD v1 This is the longformer-base-4096 model fine-tuned on the SQuAD v1 dataset for the question-answering task. [Longformer](https://arxiv.org/abs/2004.05150) was created by Iz Beltagy, Matthew E. Peters, and Arman Cohan of AllenAI. As the paper explains, > `Longformer` is a BERT-like model for long documents. The pre-trained model can handle sequences with up to 4096 tokens. ## Model Training This model was trained on a Google Colab V100 GPU. You can find the fine-tuning colab here [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1zEl5D-DdkBKva-DdreVOmN0hrAfzKG1o?usp=sharing). A few things to keep in mind while training Longformer for the QA task: by default, Longformer uses sliding-window local attention on all tokens, but for QA all question tokens should have global attention. For more details, please refer to the paper. The `LongformerForQuestionAnswering` model handles this automatically. To allow it to do that: 1. The input sequence must have three sep tokens, i.e. the sequence should be encoded like this: ` <s> question</s></s> context</s>`. If you encode the question and the context as an input pair, the tokenizer already takes care of this and you shouldn't worry about it. 2. `input_ids` should always be a batch of examples. ## Results |Metric | # Value | |-------------|---------| | Exact Match | 85.1466 | | F1 | 91.5415 | ## Model in Action 🚀 ```python import torch from transformers import AutoTokenizer, AutoModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1") model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1") text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this." question = "What has Huggingface done ?" encoding = tokenizer(question, text, return_tensors="pt") input_ids = encoding["input_ids"] # default is local attention everywhere # the forward method will automatically set global attention on question tokens attention_mask = encoding["attention_mask"] outputs = model(input_ids, attention_mask=attention_mask) start_scores, end_scores = outputs.start_logits, outputs.end_logits all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist()) answer_tokens = all_tokens[torch.argmax(start_scores):torch.argmax(end_scores) + 1] answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens)) # output => democratized NLP ``` `LongformerForQuestionAnswering` isn't yet supported in `pipeline`. I'll update this card once support has been added. > Created with ❤️ by Suraj Patil [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/patil-suraj/) [![Twitter icon](https://cdn0.iconfinder.com/data/icons/shift-logotypes/32/Twitter-32.png)](https://twitter.com/psuraj28)
Musixmatch/umberto-wikipedia-uncased-v1
Musixmatch
2021-02-10T09:53:35Z
8,613
7
transformers
[ "transformers", "pytorch", "camembert", "fill-mask", "it", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: it --- # UmBERTo Wikipedia Uncased [UmBERTo](https://github.com/musixmatchresearch/umberto) is a RoBERTa-based language model trained on large Italian corpora using two innovative approaches: SentencePiece and Whole Word Masking. Now available on the [Hugging Face model hub](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1). <p align="center"> <img src="https://user-images.githubusercontent.com/7140210/72913702-d55a8480-3d3d-11ea-99fc-f2ef29af4e72.jpg" width="700"> </br> Marco Lodola, Monument to Umberto Eco, Alessandria 2019 </p> ## Dataset UmBERTo-Wikipedia-Uncased was trained on a relatively small corpus (~7 GB) extracted from [Wikipedia-ITA](https://linguatools.org/tools/corpora/wikipedia-monolingual-corpora/). ## Pre-trained model | Model | WWM | Cased | Tokenizer | Vocab Size | Train Steps | Download | | ------ | ------ | ------ | ------ | ------ |------ | ------ | | `umberto-wikipedia-uncased-v1` | YES | NO | SPM | 32K | 100k | [Link](http://bit.ly/35wbSj6) | This model was trained with [SentencePiece](https://github.com/google/sentencepiece) and Whole Word Masking. ## Downstream Tasks These results refer to the umberto-wikipedia-uncased model. All details are on the [UmBERTo](https://github.com/musixmatchresearch/umberto) official page. #### Named Entity Recognition (NER) | Dataset | F1 | Precision | Recall | Accuracy | | ------ | ------ | ------ | ------ | ------ | | **ICAB-EvalITA07** | **86.240** | 85.939 | 86.544 | 98.534 | | **WikiNER-ITA** | **90.483** | 90.328 | 90.638 | 98.661 | #### Part of Speech (POS) | Dataset | F1 | Precision | Recall | Accuracy | | ------ | ------ | ------ | ------ | ------ | | **UD_Italian-ISDT** | 98.563 | 98.508 | 98.618 | **98.717** | | **UD_Italian-ParTUT** | 97.810 | 97.835 | 97.784 | **98.060** | ## Usage ##### Load UmBERTo Wikipedia Uncased with AutoModel, AutoTokenizer: ```python import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Musixmatch/umberto-wikipedia-uncased-v1") umberto = AutoModel.from_pretrained("Musixmatch/umberto-wikipedia-uncased-v1") encoded_input = tokenizer.encode("Umberto Eco è stato un grande scrittore") input_ids = torch.tensor(encoded_input).unsqueeze(0) # Batch size 1 outputs = umberto(input_ids) last_hidden_states = outputs[0] # The last hidden state is the first element of the output ``` ##### Predict masked token: ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="Musixmatch/umberto-wikipedia-uncased-v1", tokenizer="Musixmatch/umberto-wikipedia-uncased-v1" ) result = fill_mask("Umberto Eco è <mask> un grande scrittore") # {'sequence': '<s> umberto eco è stato un grande scrittore</s>', 'score': 0.5784581303596497, 'token': 361} # {'sequence': '<s> umberto eco è anche un grande scrittore</s>', 'score': 0.33813193440437317, 'token': 269} # {'sequence': '<s> umberto eco è considerato un grande scrittore</s>', 'score': 0.027196012437343597, 'token': 3236} # {'sequence': '<s> umberto eco è diventato un grande scrittore</s>', 'score': 0.013716378249228, 'token': 5742} # {'sequence': '<s> umberto eco è inoltre un grande scrittore</s>', 'score': 0.010662357322871685, 'token': 1030} ``` ## Citation All of the original datasets are publicly available or were released with the owners' grant. The datasets are all released under a CC0 or CCBY license. 
* UD Italian-ISDT Dataset [Github](https://github.com/UniversalDependencies/UD_Italian-ISDT) * UD Italian-ParTUT Dataset [Github](https://github.com/UniversalDependencies/UD_Italian-ParTUT) * I-CAB (Italian Content Annotation Bank), EvalITA [Page](http://www.evalita.it/) * WIKINER [Page](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) , [Paper](https://www.sciencedirect.com/science/article/pii/S0004370212000276?via%3Dihub) ``` @inproceedings {magnini2006annotazione, title = {Annotazione di contenuti concettuali in un corpus italiano: I - CAB}, author = {Magnini,Bernardo and Cappelli,Amedeo and Pianta,Emanuele and Speranza,Manuela and Bartalesi Lenzi,V and Sprugnoli,Rachele and Romano,Lorenza and Girardi,Christian and Negri,Matteo}, booktitle = {Proc.of SILFI 2006}, year = {2006} } @inproceedings {magnini2006cab, title = {I - CAB: the Italian Content Annotation Bank.}, author = {Magnini,Bernardo and Pianta,Emanuele and Girardi,Christian and Negri,Matteo and Romano,Lorenza and Speranza,Manuela and Lenzi,Valentina Bartalesi and Sprugnoli,Rachele}, booktitle = {LREC}, pages = {963--968}, year = {2006}, organization = {Citeseer} } ``` ## Authors **Loreto Parisi**: `loreto at musixmatch dot com`, [loretoparisi](https://github.com/loretoparisi) **Simone Francia**: `simone.francia at musixmatch dot com`, [simonefrancia](https://github.com/simonefrancia) **Paolo Magnani**: `paul.magnani95 at gmail dot com`, [paulthemagno](https://github.com/paulthemagno) ## About Musixmatch AI ![Musxmatch Ai mac app icon-128](https://user-images.githubusercontent.com/163333/72244273-396aa380-35ee-11ea-894b-4ea48230c02b.png) We do Machine Learning and Artificial Intelligence @[musixmatch](https://twitter.com/Musixmatch) Follow us on [Twitter](https://twitter.com/musixmatchai) [Github](https://github.com/musixmatchresearch)
byan/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp
byan
2021-02-09T04:09:12Z
5
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - librispeech license: cc-by-4.0 --- ## Example ESPnet2 ASR model ### `Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best` ♻️ Imported from https://zenodo.org/record/3966501 This model was trained by Shinji Watanabe using the librispeech recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
cahya/bert2gpt-indonesian-summarization
cahya
2021-02-08T16:19:50Z
221
7
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "pipeline:summarization", "summarization", "bert2gpt", "id", "dataset:id_liputan6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: id tags: - pipeline:summarization - summarization - bert2gpt datasets: - id_liputan6 license: apache-2.0 --- # Indonesian BERT2GPT Summarization Model Fine-tuned EncoderDecoder model using BERT-base and GPT2-small for Indonesian text summarization. ## Finetuning Corpus The `bert2gpt-indonesian-summarization` model is based on `cahya/bert-base-indonesian-1.5G` and `cahya/gpt2-small-indonesian-522M` by [cahya](https://huggingface.co/cahya), fine-tuned using the [id_liputan6](https://huggingface.co/datasets/id_liputan6) dataset. ## Load Finetuned Model ```python from transformers import BertTokenizer, EncoderDecoderModel tokenizer = BertTokenizer.from_pretrained("cahya/bert2gpt-indonesian-summarization") tokenizer.bos_token = tokenizer.cls_token tokenizer.eos_token = tokenizer.sep_token model = EncoderDecoderModel.from_pretrained("cahya/bert2gpt-indonesian-summarization") ``` ## Code Sample ```python from transformers import BertTokenizer, EncoderDecoderModel tokenizer = BertTokenizer.from_pretrained("cahya/bert2gpt-indonesian-summarization") tokenizer.bos_token = tokenizer.cls_token tokenizer.eos_token = tokenizer.sep_token model = EncoderDecoderModel.from_pretrained("cahya/bert2gpt-indonesian-summarization") # ARTICLE_TO_SUMMARIZE = "" # generate summary input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt') summary_ids = model.generate(input_ids, min_length=20, max_length=80, num_beams=10, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True, no_repeat_ngram_size=2, use_cache=True, do_sample = True, temperature = 0.8, top_k = 50, top_p = 0.95) summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(summary_text) ``` Output: ``` ```
astarostap/distilbert-cased-antisemitic-tweets
astarostap
2021-02-08T15:03:10Z
16
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit widget: - text: "Jews run the world." --- This model takes a tweet containing the word "jew" and determines whether it is antisemitic. *Training data:* This model was trained on 4k tweets, of which ~50% were labeled as antisemitic. I labeled them myself based on personal experience and knowledge about common antisemitic tropes. *Note:* This model is not meant to give the final say on what is or is not antisemitic, but rather to serve as a first pass on what might be antisemitic and should be reviewed by human experts. Please keep in mind that I'm not an expert on antisemitism or hate speech. Whether something is antisemitic depends on the context, as with any hate speech, and everyone has a different definition of hate speech. If you would like to collaborate on antisemitism detection, please feel free to contact me at [email protected] This model is not ready for production; it needs more evaluation and more training data.
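A minimal usage sketch (not part of the original card), assuming the checkpoint works with the standard `transformers` text-classification pipeline; the example tweet is the widget text above:
```python
from transformers import pipeline

# Model ID taken from this card; the returned label/score format depends on the model config.
classifier = pipeline("text-classification", model="astarostap/distilbert-cased-antisemitic-tweets")
print(classifier("Jews run the world."))
```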
cahya/distilbert-base-indonesian
cahya
2021-02-08T09:06:09Z
1,651
14
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "id", "dataset:wikipedia", "dataset:id_newspapers_2018", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: "id" license: "mit" datasets: - wikipedia - id_newspapers_2018 widget: - text: "ayahku sedang bekerja di sawah untuk [MASK] padi." --- # Indonesian DistilBERT base model (uncased) ## Model description This model is a distilled version of the [Indonesian BERT base model](https://huggingface.co/cahya/bert-base-indonesian-1.5G). This model is uncased. This is one of several other language models that have been pre-trained with indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers) ## Intended uses & limitations ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='cahya/distilbert-base-indonesian') >>> unmasker("Ayahku sedang bekerja di sawah untuk [MASK] padi") [ { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk menanam padi [SEP]", "score": 0.6853187084197998, "token": 12712, "token_str": "menanam" }, { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk bertani padi [SEP]", "score": 0.03739545866847038, "token": 15484, "token_str": "bertani" }, { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk memetik padi [SEP]", "score": 0.02742469497025013, "token": 30338, "token_str": "memetik" }, { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk penggilingan padi [SEP]", "score": 0.02214187942445278, "token": 28252, "token_str": "penggilingan" }, { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk tanam padi [SEP]", "score": 0.0185895636677742, "token": 11308, "token_str": "tanam" } ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import DistilBertTokenizer, DistilBertModel model_name='cahya/distilbert-base-indonesian' tokenizer = DistilBertTokenizer.from_pretrained(model_name) model = DistilBertModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in Tensorflow: ```python from transformers import DistilBertTokenizer, TFDistilBertModel model_name='cahya/distilbert-base-indonesian' tokenizer = DistilBertTokenizer.from_pretrained(model_name) model = TFDistilBertModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data This model was distiled with 522MB of indonesian Wikipedia and 1GB of [indonesian newspapers](https://huggingface.co/datasets/id_newspapers_2018). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are then of the form: ```[CLS] Sentence A [SEP] Sentence B [SEP]```
ChristopherA08/IndoELECTRA
ChristopherA08
2021-02-04T06:23:59Z
9
1
transformers
[ "transformers", "pytorch", "electra", "pretraining", "id", "dataset:oscar", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
--- language: id datasets: - oscar --- # IndoELECTRA (Indonesian ELECTRA Model) ## Model description ELECTRA is a new method for self-supervised language representation learning. This repository contains the pre-trained ELECTRA base model (TensorFlow 1.15.0) trained on a large Indonesian corpus (~16 GB of raw text | ~2B Indonesian words). IndoELECTRA is a pre-trained language model based on the ELECTRA architecture for the Indonesian language. This model is the base version, which uses the electra-base config. ## Intended uses & limitations #### How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("ChristopherA08/IndoELECTRA") model = AutoModel.from_pretrained("ChristopherA08/IndoELECTRA") tokenizer.encode("hai aku mau makan.") # [2, 8078, 1785, 2318, 1946, 18, 4] ``` ## Training procedure The training of the model was performed using Google's original TensorFlow code on an eight-core Google Cloud TPU v2. We used a Google Cloud Storage bucket for persistent storage of training data and models.
dbernsohn/algebra_linear_1d
dbernsohn
2021-02-03T07:09:42Z
6
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# algebra_linear_1d --- language: en datasets: - algebra_linear_1d --- This is [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [math_dataset/algebra_linear_1d](https://www.tensorflow.org/datasets/catalog/math_dataset#mathdatasetalgebra_linear_1d_default_config) for the task of solving **1d linear algebra equations**. To load the model (necessary packages: `pip install transformers sentencepiece`): ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("dbernsohn/algebra_linear_1d") model = AutoModelWithLMHead.from_pretrained("dbernsohn/algebra_linear_1d") ``` You can then use this model to solve 1d algebra equations. ```python query = "Solve 0 = 1026*x - 2474 + 46592 for x" input_text = f"{query} </s>" features = tokenizer([input_text], return_tensors='pt') model.to('cuda') output = model.generate(input_ids=features['input_ids'].cuda(), attention_mask=features['attention_mask'].cuda()) tokenizer.decode(output[0]) # <pad> -41</s> ``` More examples: + Solve 1112*r + 1418*r - 5220 = 587*r - 28536 for r. + Answer: -12 Pred: -12 ---- + Solve -119*k + 6*k - 117 - 352 = 322 for k. + Answer: -7 Pred: -7 ---- + Solve -547 = -62*t + 437 - 798 for t. + Answer: 3 Pred: 3 ---- + Solve 3*j - 3*j + 0*j - 4802 = 98*j for j. + Answer: -49 Pred: -49 ---- + Solve 3047*n - 6130*n - 1700 = -3049*n for n. + Answer: -50 Pred: -50 ---- + Solve 121*i + 1690 = 76*i - 128*i + 133 for i. + Answer: -9 Pred: -9 The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/MathLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
HHousen/distil-led-large-cnn-16384
HHousen
2021-02-02T00:58:07Z
288
4
transformers
[ "transformers", "pytorch", "led", "text2text-generation", "en", "dataset:cnn_dailymail", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- language: en datasets: - cnn_dailymail license: apache-2.0 --- ## DistilLED Large CNN 16384 *distil-led-large-cnn-16384* was initialized from [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6), in a fashion similar to [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384). To be able to process 16K tokens, *sshleifer/distilbart-cnn-12-6*'s position embedding matrix was simply copied 16 times. This checkpoint should be loaded into `LEDForConditionalGeneration.from_pretrained`. See the [LED documentation](https://huggingface.co/transformers/model_doc/led.html) for more information.
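The card states the checkpoint should be loaded into `LEDForConditionalGeneration.from_pretrained`; a minimal sketch of that (the tokenizer class and generation settings are assumptions based on the LED documentation):
```python
from transformers import LEDForConditionalGeneration, LEDTokenizer

model_id = "HHousen/distil-led-large-cnn-16384"
tokenizer = LEDTokenizer.from_pretrained(model_id)
model = LEDForConditionalGeneration.from_pretrained(model_id)

# Replace with a long article (the checkpoint accepts up to 16K tokens).
inputs = tokenizer("Long article text ...", return_tensors="pt")
summary_ids = model.generate(inputs.input_ids, max_length=142)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```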
mrm8488/mobilebert-finetuned-ner
mrm8488
2021-01-30T11:42:05Z
82
1
transformers
[ "transformers", "pytorch", "mobilebert", "token-classification", "ner", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: en tags: - mobilebert - ner license: mit ---
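The card body contains only the metadata above; a minimal usage sketch, assuming the checkpoint works with the standard token-classification pipeline (the pipeline tag on this row). The example sentence is illustrative, and the entity labels depend on the fine-tuning data:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="mrm8488/mobilebert-finetuned-ner")
print(ner("My name is Clara and I live in Berkeley, California."))
```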
cahya/bert2bert-indonesian-summarization
cahya
2021-01-29T11:39:42Z
100
4
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "pipeline:summarization", "summarization", "bert2bert", "id", "dataset:id_liputan6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: id tags: - pipeline:summarization - summarization - bert2bert datasets: - id_liputan6 license: apache-2.0 --- # Indonesian BERT2BERT Summarization Model Finetuned BERT-base summarization model for Indonesian. ## Finetuning Corpus `bert2bert-indonesian-summarization` model is based on `cahya/bert-base-indonesian-1.5G` by [cahya](https://huggingface.co/cahya), finetuned using [id_liputan6](https://huggingface.co/datasets/id_liputan6) dataset. ## Load Finetuned Model ```python from transformers import BertTokenizer, EncoderDecoderModel tokenizer = BertTokenizer.from_pretrained("cahya/bert2bert-indonesian-summarization") tokenizer.bos_token = tokenizer.cls_token tokenizer.eos_token = tokenizer.sep_token model = EncoderDecoderModel.from_pretrained("cahya/bert2bert-indonesian-summarization") ``` ## Code Sample ```python from transformers import BertTokenizer, EncoderDecoderModel tokenizer = BertTokenizer.from_pretrained("cahya/bert2bert-indonesian-summarization") tokenizer.bos_token = tokenizer.cls_token tokenizer.eos_token = tokenizer.sep_token model = EncoderDecoderModel.from_pretrained("cahya/bert2bert-indonesian-summarization") # ARTICLE_TO_SUMMARIZE = "" # generate summary input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt') summary_ids = model.generate(input_ids, min_length=20, max_length=80, num_beams=10, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True, no_repeat_ngram_size=2, use_cache=True, do_sample = True, temperature = 0.8, top_k = 50, top_p = 0.95) summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(summary_text) ``` Output: ``` ```
NTUYG/SOTitle-java-BART
NTUYG
2021-01-28T15:12:29Z
4
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
## How to use ```python import logging from simpletransformers.seq2seq import Seq2SeqModel, Seq2SeqArgs logging.basicConfig(level=logging.INFO) transformers_logger = logging.getLogger("transformers") transformers_logger.setLevel(logging.WARNING) model_args = Seq2SeqArgs() # Load the trained model model = Seq2SeqModel( encoder_decoder_type="bart", encoder_decoder_name="NTUYG/SOTitle-java-BART", args=model_args, ) describe = """ I am a beginner at Android Java development but I have a few years of school + uni experience in Java. I am trying to write to a text file in an assets folder in my app using FileOutputStream but it doesn't seem to write to it at all since I am using InputStream to read the file after and there haven't any updates. Here is my code """ code = """ private void updateTextFile(String update) { FileOutputStream fos = null; try { fos = openFileOutput("Questions",MODE_PRIVATE); fos.write("Testing".getBytes()); } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } finally { if(fos!=null) { try { fos.close(); } catch (IOException e) { e.printStackTrace(); } } } String text = ""; try { InputStream is = getAssets().open("Questions"); int size = is.available(); byte[] buffer = new byte[size]; is.read(buffer); is.close(); text = new String(buffer); } catch (IOException e) { e.printStackTrace(); } System.out.println("Tesing output " + text); } """ from nltk import word_tokenize describe = describe.replace('\n',' ').replace('\r',' ') describe = ' '.join(word_tokenize(describe)) code = code.replace('\n',' ').replace('\r',' ') code = ' '.join(word_tokenize(code)) # human : Java Android Cant seem to update text file using FileOutputStream body = describe + ' <code> ' + code +' </code>' print( model.predict( [ body ] ) ) ```
ml6team/mt5-small-german-finetune-mlsum
ml6team
2021-01-28T13:15:00Z
546
9
transformers
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "summarization", "de", "dataset:mlsum", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: de tags: - summarization datasets: - mlsum --- # mT5-small fine-tuned on German MLSUM This model was finetuned for 3 epochs with a max_len (input) of 768 tokens and target_max_len of 192 tokens. It was fine-tuned on all German articles present in the train split of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) having less than 384 "words" after splitting on whitespace, which resulted in 80249 articles. The exact expression to filter the dataset was the following: ```python dataset = dataset.filter(lambda e: len(e['text'].split()) < 384) ``` ## Evaluation results The fine-tuned model was evaluated on 2000 random articles from the validation set. Mean [f1 ROUGE scores](https://github.com/pltrdy/rouge) were calculated for both the fine-tuned model and the lead-3 baseline (which simply produces the leading three sentences of the document) and are presented in the following table. | Model | Rouge-1 | Rouge-2 | Rouge-L | | ------------- |:-------:| --------:| -------:| | mt5-small | 0.399 | 0.318 | 0.392 | | lead-3 | 0.343 | 0.263 | 0.341 |
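A minimal inference sketch (not part of the original card), assuming the checkpoint can be driven through the standard summarization pipeline; the length limit mirrors the `target_max_len` of 192 used in fine-tuning:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ml6team/mt5-small-german-finetune-mlsum")
article = "..."  # a German news article, ideally under ~384 words as in the fine-tuning setup
print(summarizer(article, max_length=192)[0]["summary_text"])
```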
ggoggam/xlnet-base-cased-squad-quoref
ggoggam
2021-01-28T06:54:08Z
5
1
transformers
[ "transformers", "pytorch", "xlnet", "question-answering", "arxiv:1906.08237", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
# XLNet Fine-tuned on SQuAD / Quoref Dataset [XLNet](https://arxiv.org/abs/1906.08237), jointly developed by Google and CMU, fine-tuned on [SQuAD / SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) and [Quoref](https://leaderboard.allenai.org/quoref) for the question-answering downstream task. ## Evaluation Result on Quoref ``` { "exact_match": 73.65591397848462, "f1": 77.9981532789881 } ``` ## Results Comparison on Quoref | Metric | XLNet Baseline | Model FT on SQuAD | | ------ | --------- | --------- | | **EM** | **61.88** | **73.66** (+11.78) | | **F1** | **70.51** | **78.00** (+7.49) | ## How to Use ```python from transformers import XLNetForQuestionAnswering, XLNetTokenizerFast model = XLNetForQuestionAnswering.from_pretrained('jkgrad/xlnet-base-cased-squad-quoref') tokenizer = XLNetTokenizerFast.from_pretrained('jkgrad/xlnet-base-cased-squad-quoref') ```
mrm8488/mbart-large-finetuned-opus-it-en-translation
mrm8488
2021-01-27T13:19:19Z
16
1
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "translation", "it", "en", "dataset:opus100", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- tags: - translation language: - it - en datasets: - opus100 --- ### mbart-large-it-en This is mbart-large-cc25 fine-tuned on opus100 for Italian-to-English translation. It scores a BLEU of **25.82** on the test set.
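The card gives no usage snippet; a minimal sketch, assuming the checkpoint keeps the standard mBART-CC25 interface (the `it_IT`/`en_XX` language codes are an assumption based on the language tags):
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_id = "mrm8488/mbart-large-finetuned-opus-it-en-translation"
tokenizer = MBartTokenizer.from_pretrained(model_id, src_lang="it_IT")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("La vita è bella.", return_tensors="pt")  # Italian: "Life is beautiful."
generated = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```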
acul3/mt5-large-id-qgen-qa
acul3
2021-01-27T12:55:12Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "id", "dataset:Squad", "dataset:XQuad", "dataset:Tydiqa", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: "id" license: "mit" datasets: - Squad - XQuad - Tydiqa widget: - text: "I love you" --- ## Prefix use Use prefix "question: {question} context: {context}" before input to generate the question answering e.g "question: siapa nama saya ? context: nama saya andi. saya tinggal di jakarta. istri saya bernama raisa" for generate question prefix generate questions: nama saya andi. saya tinggal di jakarta. istri saya bernama raisa ## Training data Squad XQuad Tydiqa
aychang/distilbert-squad
aychang
2021-01-25T08:37:16Z
0
0
null
[ "question-answering", "torchscript", "FastNN", "en", "dataset:squad", "license:mit", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: - en thumbnail: tags: - question-answering - torchscript - FastNN license: mit datasets: - squad metrics: --- # TorchScript model of distilbert-squad ## Model description A serialized torchscript model of distilbert-squad with a config.pbtxt for deployment using NVIDIA Triton Inference Server.
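A minimal sketch of loading a serialized TorchScript artifact locally (the file name is hypothetical; serving through Triton instead uses the `config.pbtxt` mentioned above):
```python
import torch

# "model.pt" is a placeholder for the serialized TorchScript file in this repository.
model = torch.jit.load("model.pt")
model.eval()
# The traced input signature (e.g. input_ids, attention_mask) depends on how the model was exported.
```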
aychang/fasterrcnn-resnet50-cpu
aychang
2021-01-25T08:29:49Z
0
1
null
[ "object-detection", "torchscript", "FastNN", "en", "dataset:coco", "license:mit", "region:us" ]
object-detection
2022-03-02T23:29:05Z
--- language: - en thumbnail: tags: - object-detection - torchscript - FastNN license: mit datasets: - coco metrics: --- # TorchScript model of faster-rcnn ## Model description A serialized torchscript model of [faster-rcnn](https://pytorch.org/vision/stable/models.html#faster-r-cnn) with a config.pbtxt for deployment using NVIDIA Triton Inference Server.
hfl/chinese-legal-electra-small-discriminator
hfl
2021-01-22T05:19:55Z
1
1
transformers
[ "transformers", "pytorch", "tf", "electra", "pretraining", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - zh license: "apache-2.0" --- # This model is specifically designed for the legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
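The card gives no usage snippet; a minimal sketch, assuming the checkpoint loads through the standard `transformers` ELECTRA classes (`ElectraForPreTraining` is the usual class for a discriminator; the example sentence is illustrative):
```python
from transformers import ElectraTokenizer, ElectraForPreTraining

model_id = "hfl/chinese-legal-electra-small-discriminator"
tokenizer = ElectraTokenizer.from_pretrained(model_id)
model = ElectraForPreTraining.from_pretrained(model_id)

inputs = tokenizer("本合同自双方签字之日起生效。", return_tensors="pt")
outputs = model(**inputs)  # per-token logits: original vs. replaced
```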
hfl/chinese-legal-electra-large-discriminator
hfl
2021-01-22T05:19:50Z
57
4
transformers
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - zh license: "apache-2.0" --- # This model is specifically designed for the legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
kykim/funnel-kor-base
kykim
2021-01-22T01:56:37Z
11
1
transformers
[ "transformers", "pytorch", "tf", "funnel", "feature-extraction", "ko", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: ko --- # Funnel-transformer base model for Korean * A 70 GB Korean text dataset and 42,000 lower-cased subwords were used * Check the model performance and other language models for Korean on [github](https://github.com/kiyoungkim1/LM-kor) ```python from transformers import FunnelTokenizer, FunnelModel tokenizer = FunnelTokenizer.from_pretrained("kykim/funnel-kor-base") model = FunnelModel.from_pretrained("kykim/funnel-kor-base") ```
kykim/electra-kor-base
kykim
2021-01-22T00:28:50Z
2,985
2
transformers
[ "transformers", "pytorch", "tf", "electra", "pretraining", "ko", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: ko --- # Electra base model for Korean * A 70 GB Korean text dataset and 42,000 lower-cased subwords were used * Check the model performance and other language models for Korean on [github](https://github.com/kiyoungkim1/LM-kor) ```python from transformers import ElectraTokenizerFast, ElectraModel tokenizer_electra = ElectraTokenizerFast.from_pretrained("kykim/electra-kor-base") model = ElectraModel.from_pretrained("kykim/electra-kor-base") ```
kykim/albert-kor-base
kykim
2021-01-22T00:27:49Z
2,381
5
transformers
[ "transformers", "pytorch", "tf", "albert", "fill-mask", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ko --- # Albert base model for Korean * A 70 GB Korean text dataset and 42,000 lower-cased subwords were used * Check the model performance and other language models for Korean on [github](https://github.com/kiyoungkim1/LM-kor) ```python from transformers import BertTokenizerFast, AlbertModel tokenizer_albert = BertTokenizerFast.from_pretrained("kykim/albert-kor-base") model_albert = AlbertModel.from_pretrained("kykim/albert-kor-base") ```
ray1379/bio-convbert-medium-samll
ray1379
2021-01-21T02:55:31Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
convbert_medium-small pretrained on PubMed text.
Narsil/small_conversational_test
Narsil
2021-01-20T16:30:52Z
2
0
transformers
[ "transformers", "albert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
```python import tempfile from tokenizers import Tokenizer, models, processors from transformers.tokenization_utils_fast import PreTrainedTokenizerFast vocab = [(chr(i), i) for i in range(256)] tokenizer = Tokenizer(models.Unigram(vocab)) tokenizer.add_special_tokens(["<bos>", "<eos>"]) tokenizer.post_processor = processors.TemplateProcessing( single="<bos> $0 <eos>", special_tokens=[("<bos>", 256), ("<eos>", 257)] ) with tempfile.NamedTemporaryFile() as f: tokenizer.save(f.name) real_tokenizer = PreTrainedTokenizerFast(tokenizer_file=f.name, eos_token="<eos>", bos_token="<bos>") real_tokenizer._tokenizer.save("dummy.json") ``` Small change.
yannis-papanikolaou/t5-code-generation
yannis-papanikolaou
2021-01-19T14:46:48Z
0
1
null
[ "arxiv:2101.07138", "region:us" ]
null
2022-03-02T23:29:05Z
# T5 for Semantic Parsing ## Model description T5 (small and large) finetuned on CoNaLa for semantic parsing (Natural Language descriptions to Python code) Paper: https://arxiv.org/pdf/2101.07138.pdf Code, data and how to use: https://github.com/ypapanik/t5-for-code-generation ### Cite ``` @misc{papanikolaou2021teach, title={Teach me how to Label: Labeling Functions from Natural Language with Text-to-text Transformers}, author={Yannis Papanikolaou}, year={2021}, eprint={2101.07138}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
subham92/translation_model_by_subham
subham92
2021-01-18T10:29:50Z
3
0
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "fi", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - fi - en tags: - translation license: apache-2.0 ---
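The card contains only the metadata block; a minimal sketch, assuming this is a standard Marian translation checkpoint with the fi→en direction implied by the language tags:
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "subham92/translation_model_by_subham"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

inputs = tokenizer("Hyvää huomenta!", return_tensors="pt")  # Finnish: "Good morning!"
print(tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```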
ggoggam/xlnet-base-squadv2
ggoggam
2021-01-17T11:52:34Z
7
2
transformers
[ "transformers", "pytorch", "xlnet", "question-answering", "arxiv:1906.08237", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
# XLNet Fine-tuned on SQuAD 2.0 Dataset [XLNet](https://arxiv.org/abs/1906.08237), jointly developed by Google and CMU, fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for the question-answering downstream task. ## Training Results (Metrics) ``` { "HasAns_exact": 74.7132253711201, "HasAns_f1": 82.11971607032643, "HasAns_total": 5928, "NoAns_exact": 73.38940285954584, "NoAns_f1": 73.38940285954584, "NoAns_total": 5945, "best_exact": 75.67590331003116, "best_exact_thresh": -19.554906845092773, "best_f1": 79.16215426779269, "best_f1_thresh": -19.554906845092773, "epoch": 4.0, "exact": 74.05036637749515, "f1": 77.74830934598614, "total": 11873 } ``` ## Results Comparison | Metric | Paper | Model | | ------ | --------- | --------- | | **EM** | **78.46** | **75.68** (-2.78) | | **F1** | **81.33** | **79.16** (-2.17) | Better fine-tuned models coming soon. ## How to Use ```python from transformers import XLNetForQuestionAnswering, XLNetTokenizerFast model = XLNetForQuestionAnswering.from_pretrained('jkgrad/xlnet-base-squadv2') tokenizer = XLNetTokenizerFast.from_pretrained('jkgrad/xlnet-base-squadv2') ```
Sahajtomar/German-question-answer-Electra
Sahajtomar
2021-01-16T02:18:37Z
46
7
transformers
[ "transformers", "pytorch", "tf", "electra", "question-answering", "Gelectra", "de", "dataset:mlqa", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
--- language: de tags: - pytorch - tf - Gelectra datasets: - mlqa metrics: - f1 - em --- ### QA model trained on the MLQA dataset for the German language. The model used for fine-tuning is GELECTRA Large by deepset.ai. ## MLQA DEV (German) EM: 64.27 \ F1: 77.39 ## XQUAD TEST (German) EM: 66.38 \ F1: 82.25 ## Hyperparameters per_gpu_train_batch_size 4 \ per_gpu_eval_batch_size 32 \ gradient_accumulation_steps 8 \ learning_rate 3e-5 \ num_train_epochs 1.0 \ max_seq_length 384 \ doc_stride 128 ## Model inferencing: ```python !pip install -q transformers from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="Sahajtomar/GELECTRAQA", tokenizer="Sahajtomar/GELECTRAQA" ) qa_pipeline({ 'context': "Vor einigen Jahren haben Wissenschaftler ein wichtiges Mutagen identifiziert, das in unseren eigenen Zellen liegt: APOBEC, ein Protein, das normalerweise als Schutzmittel gegen Virusinfektionen fungiert. Heute hat ein Team von Schweizer und russischen Wissenschaftlern unter der Leitung von Sergey Nikolaev, Genetiker an der Universität Genf (UNIGE) in der Schweiz, entschlüsselt, wie APOBEC eine Schwäche unseres DNA-Replikationsprozesses ausnutzt, um Mutationen in unserem Genom zu induzieren.", 'question': "Welches Mutagen schützt vor Virusinfektionen?" }) # output {'answer': 'APOBEC', 'end': 121, 'score': 0.987, 'start': 115} ## Even complex queries can be answered pretty well qa_pipeline({ "context": "Es wird erwartet, dass sich schwarze Löcher mit Sternmasse bilden, wenn sehr massive Sterne am Ende ihres Lebenszyklus zusammenbrechen. Nachdem sich ein Schwarzes Loch gebildet hat, kann es weiter wachsen, indem es Masse aus seiner Umgebung absorbiert. Durch Absorption anderer Sterne und Verschmelzung mit anderen Schwarzen Löchern können sich supermassereiche Schwarze Löcher mit Millionen von Sonnenmassen (M☉) bilden. Es besteht Konsens darüber, dass in den Zentren der meisten Galaxien supermassereiche Schwarze Löcher existieren.", 'question': "Wie Sonnenmassen entstehen?" }) #output {'answer': 'Durch Absorption anderer Sterne und Verschmelzung mit anderen Schwarzen Löchern', 'end': 332, 'score': 0.23970196, 'start': 253} ```
patrickvonplaten/led-large-16384-pubmed
patrickvonplaten
2021-01-11T15:42:53Z
56
12
transformers
[ "transformers", "pytorch", "tf", "led", "text2text-generation", "en", "dataset:scientific_papers", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - scientific_papers license: apache-2.0 --- ## Introduction [Allenai's Longformer Encoder-Decoder (LED)](https://github.com/allenai/longformer#longformer). This is an unofficial *led-large-16384* checkpoint that is fine-tuned on the [pubmed dataset](https://huggingface.co/datasets/scientific_papers). The model was fine-tuned and evaluated as detailed in [this notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing) ## Results The model achieves a **Rouge-2** score of 19.33 on Pubmed which is competitive to state-of-the-art models. ## Usage The model can be used as follows. The input is taken from the test data of the [pubmed dataset](https://huggingface.co/datasets/scientific_papers). ```python LONG_ARTICLE = """"anxiety affects quality of life in those living with parkinson 's disease ( pd ) more so than overall cognitive status , motor deficits , apathy , and depression [ 13 ] . although anxiety and depression are often related and coexist in pd patients , recent research suggests that anxiety rather than depression is the most prominent and prevalent mood disorder in pd [ 5 , 6 ] . yet , our current understanding of anxiety and its impact on cognition in pd , as well as its neural basis and best treatment practices , remains meager and lags far behind that of depression . overall , neuropsychiatric symptoms in pd have been shown to be negatively associated with cognitive performance . for example , higher depression scores have been correlated with lower scores on the mini - mental state exam ( mmse ) [ 8 , 9 ] as well as tests of memory and executive functions ( e.g. , attention ) [ 1014 ] . likewise , apathy and anhedonia in pd patients have been associated with executive dysfunction [ 10 , 1523 ] . however , few studies have specifically investigated the relationship between anxiety and cognition in pd . one study showed a strong negative relationship between anxiety ( both state and trait ) and overall cognitive performance ( measured by the total of the repeatable battery for the assessment of neuropsychological status index ) within a sample of 27 pd patients . furthermore , trait anxiety was negatively associated with each of the cognitive domains assessed by the rbans ( i.e. , immediate memory , visuospatial construction , language , attention , and delayed memory ) . two further studies have examined whether anxiety differentially affects cognition in patients with left - sided dominant pd ( lpd ) versus right - sided dominant pd ( rpd ) ; however , their findings were inconsistent . the first study found that working memory performance was worse in lpd patients with anxiety compared to rpd patients with anxiety , whereas the second study reported that , in lpd , apathy but not anxiety was associated with performance on nonverbally mediated executive functions and visuospatial tasks ( e.g. , tmt - b , wms - iii spatial span ) , while in rpd , anxiety but not apathy significantly correlated with performance on verbally mediated tasks ( e.g. , clock reading test and boston naming test ) . furthermore , anxiety was significantly correlated with neuropsychological measures of attention and executive and visuospatial functions . taken together , it is evident that there are limited and inconsistent findings describing the relationship between anxiety and cognition in pd and more specifically how anxiety might influence particular domains of cognition such as attention and memory and executive functioning . 
it is also striking that , to date , no study has examined the influence of anxiety on cognition in pd by directly comparing groups of pd patients with and without anxiety while excluding depression . given that research on healthy young adults suggests that anxiety reduces processing capacity and impairs processing efficiency , especially in the central executive and attentional systems of working memory [ 26 , 27 ] , we hypothesized that pd patients with anxiety would show impairments in attentional set - shifting and working memory compared to pd patients without anxiety . furthermore , since previous work , albeit limited , has focused on the influence of symptom laterality on anxiety and cognition , we also explored this relationship . seventeen pd patients with anxiety and thirty - three pd patients without anxiety were included in this study ( see table 1 ) . the cross - sectional data from these participants was taken from a patient database that has been compiled over the past 8 years ( since 2008 ) at the parkinson 's disease research clinic at the brain and mind centre , university of sydney . inclusion criteria involved a diagnosis of idiopathic pd according to the united kingdom parkinson 's disease society brain bank criteria and were confirmed by a neurologist ( sjgl ) . patients also had to have an adequate proficiency in english and have completed a full neuropsychological assessment . ten patients in this study ( 5 pd with anxiety ; 5 pd without anxiety ) were taking psychotropic drugs ( i.e. , benzodiazepine or selective serotonin reuptake inhibitor ) . patients were also excluded if they had other neurological disorders , psychiatric disorders other than affective disorders ( such as anxiety ) , or if they reported a score greater than six on the depression subscale of the hospital anxiety and depression scale ( hads ) . thus , all participants who scored within a depressed ( hads - d > 6 ) range were excluded from this study , in attempt to examine a refined sample of pd patients with and without anxiety in order to determine the independent effect of anxiety on cognition . this research was approved by the human research ethics committee of the university of sydney , and written informed consent was obtained from all participants . self - reported hads was used to assess anxiety in pd and has been previously shown to be a useful measure of clinical anxiety in pd . a cut - off score of > 8 on the anxiety subscale of the hads ( hads - a ) was used to identify pd cases with anxiety ( pda+ ) , while a cut - off score of < 6 on the hads - a was used to identify pd cases without anxiety ( pda ) . this criterion was more stringent than usual ( > 7 cut - off score ) , in effort to create distinct patient groups . the neurological evaluation rated participants according to hoehn and yahr ( h&y ) stages and assessed their motor symptoms using part iii of the revised mds task force unified parkinson 's disease rating scale ( updrs ) . in a similar way this was determined by calculating a total left and right score from rigidity items 3035 , voluntary movement items 3643 , and tremor items 5057 from the mds - updrs part iii ( see table 1 ) . processing speed was assessed using the trail making test , part a ( tmt - a , z - score ) . attentional set - shifting was measured using the trail making test , part b ( tmt - b , z - score ) . working memory was assessed using the digit span forward and backward subtest of the wechsler memory scale - iii ( raw scores ) . 
language was assessed with semantic and phonemic verbal fluency via the controlled oral word associated test ( cowat animals and letters , z - score ) . the ability to retain learned verbal memory was assessed using the logical memory subtest from the wechsler memory scale - iii ( lm - i z - score , lm - ii z - score , % lm retention z - score ) . the mini - mental state examination ( mmse ) demographic , clinical , and neuropsychological variables were compared between the two groups with the independent t - test or mann whitney u test , depending on whether the variable met parametric assumptions . chi - square tests were used to examine gender and symptom laterality differences between groups . all analyses employed an alpha level of p < 0.05 and were two - tailed . spearman correlations were performed separately in each group to examine associations between anxiety and/or depression ratings and cognitive functions . as expected , the pda+ group reported significant greater levels of anxiety on the hads - a ( u = 0 , p < 0.001 ) and higher total score on the hads ( u = 1 , p < 0.001 ) compared to the pda group ( table 1 ) . groups were matched in age ( t(48 ) = 1.31 , p = 0.20 ) , disease duration ( u = 259 , p = 0.66 ) , updrs - iii score ( u = 250.5 , p = 0.65 ) , h&y ( u = 245 , p = 0.43 ) , ledd ( u = 159.5 , p = 0.80 ) , and depression ( hads - d ) ( u = 190.5 , p = 0.06 ) . additionally , all groups were matched in the distribution of gender ( = 0.098 , p = 0.75 ) and side - affected ( = 0.765 , p = 0.38 ) . there were no group differences for tmt - a performance ( u = 256 , p = 0.62 ) ( table 2 ) ; however , the pda+ group had worse performance on the trail making test part b ( t(46 ) = 2.03 , p = 0.048 ) compared to the pda group ( figure 1 ) . the pda+ group also demonstrated significantly worse performance on the digit span forward subtest ( t(48 ) = 2.22 , p = 0.031 ) and backward subtest ( u = 190.5 , p = 0.016 ) compared to the pda group ( figures 2(a ) and 2(b ) ) . neither semantic verbal fluency ( t(47 ) = 0.70 , p = 0.49 ) nor phonemic verbal fluency ( t(47 ) = 0.39 , p = 0.70 ) differed between groups . logical memory i immediate recall test ( u = 176 , p = 0.059 ) showed a trend that the pda+ group had worse new verbal learning and immediate recall abilities than the pda group . however , logical memory ii test performance ( u = 219 , p = 0.204 ) and logical memory % retention ( u = 242.5 , p = 0.434 ) did not differ between groups . there were also no differences between groups in global cognition ( mmse ) ( u = 222.5 , p = 0.23 ) . participants were split into lpd and rpd , and then further group differences were examined between pda+ and pda. importantly , the groups remained matched in age , disease duration , updrs - iii , dde , h&y stage , and depression but remained significantly different on self - reported anxiety . lpda+ demonstrated worse performance on the digit span forward test ( t(19 ) = 2.29 , p = 0.033 ) compared to lpda , whereas rpda+ demonstrated worse performance on the digit span backward test ( u = 36.5 , p = 0.006 ) , lm - i immediate recall ( u = 37.5 , p = 0.008 ) , and lm - ii ( u = 45.0 , p = 0.021 ) but not lm % retention ( u = 75.5 , p = 0.39 ) compared to rpda. this study is the first to directly compare cognition between pd patients with and without anxiety . the findings confirmed our hypothesis that anxiety negatively influences attentional set - shifting and working memory in pd . 
more specifically , we found that pd patients with anxiety were more impaired on the trail making test part b which assessed attentional set - shifting , on both digit span tests which assessed working memory and attention , and to a lesser extent on the logical memory test which assessed memory and new verbal learning compared to pd patients without anxiety . taken together , these findings suggest that anxiety in pd may reduce processing capacity and impair processing efficiency , especially in the central executive and attentional systems of working memory in a similar way as seen in young healthy adults [ 26 , 27 ] . although the neurobiology of anxiety in pd remains unknown , many researchers have postulated that anxiety disorders are related to neurochemical changes that occur during the early , premotor stages of pd - related degeneration [ 37 , 38 ] such as nigrostriatal dopamine depletion , as well as cell loss within serotonergic and noradrenergic brainstem nuclei ( i.e. , raphe nuclei and locus coeruleus , resp . , which provide massive inputs to corticolimbic regions ) . over time , chronic dysregulation of adrenocortical and catecholamine functions can lead to hippocampal damage as well as dysfunctional prefrontal neural circuitries [ 39 , 40 ] , which play a key role in memory and attention . recent functional neuroimaging work has suggested that enhanced hippocampal activation during executive functioning and working memory tasks may represent compensatory processes for impaired frontostriatal functions in pd patients compared to controls . therefore , chronic stress from anxiety , for example , may disrupt compensatory processes in pd patients and explain the cognitive impairments specifically in working memory and attention seen in pd patients with anxiety . it has also been suggested that hyperactivation within the putamen may reflect a compensatory striatal mechanism to maintain normal working memory performance in pd patients ; however , losing this compensatory activation has been shown to contribute to poor working memory performance . anxiety in mild pd has been linked to reduced putamen dopamine uptake which becomes more extensive as the disease progresses . this further supports the notion that anxiety may disrupt compensatory striatal mechanisms as well , providing another possible explanation for the cognitive impairments observed in pd patients with anxiety in this study . noradrenergic and serotonergic systems should also be considered when trying to explain the mechanisms by which anxiety may influence cognition in pd . although these neurotransmitter systems are relatively understudied in pd cognition , treating the noradrenergic and serotonergic systems has shown beneficial effects on cognition in pd . selective serotonin reuptake inhibitor , citalopram , was shown to improve response inhibition deficits in pd , while noradrenaline reuptake blocker , atomoxetine , has been recently reported to have promising effects on cognition in pd [ 45 , 46 ] . overall , very few neuroimaging studies have been conducted in pd in order to understand the neural correlates of pd anxiety and its underlying neural pathology . future research should focus on relating anatomical changes and neurochemical changes to neural activation in order to gain a clearer understanding on how these pathologies affect anxiety in pd . 
to further understand how anxiety and cognitive dysfunction are related , future research should focus on using advanced structural and function imaging techniques to explain both cognitive and neural breakdowns that are associated with anxiety in pd patients . research has indicated that those with amnestic mild cognitive impairment who have more neuropsychiatric symptoms have a greater risk of developing dementia compared to those with fewer neuropsychiatric symptoms . future studies should also examine whether treating neuropsychiatric symptoms might impact the progression of cognitive decline and improve cognitive impairments in pd patients . previous studies have used pd symptom laterality as a window to infer asymmetrical dysfunction of neural circuits . for example , lpd patients have greater inferred right hemisphere pathology , whereas rpd patients have greater inferred left hemisphere pathology . thus , cognitive domains predominantly subserved by the left hemisphere ( e.g. , verbally mediated tasks of executive function and verbal memory ) might be hypothesized to be more affected in rpd than lpd ; however , this remains controversial . it has also been suggested that since anxiety is a common feature of left hemisphere involvement [ 48 , 49 ] , cognitive domains subserved by the left hemisphere may also be more strongly related to anxiety . results from this study showed selective verbal memory deficits in rpd patients with anxiety compared to rpd without anxiety , whereas lpd patients with anxiety had greater attentional / working memory deficits compared to lpd without anxiety . although these results align with previous research , interpretations of these findings should be made with caution due to the small sample size in the lpd comparison specifically . recent work has suggested that the hads questionnaire may underestimate the burden of anxiety related symptomology and therefore be a less sensitive measure of anxiety in pd [ 30 , 50 ] . in addition , our small sample size also limited the statistical power for detecting significant findings . based on these limitations , our findings are likely conservative and underrepresent the true impact anxiety has on cognition in pd . additionally , the current study employed a very brief neuropsychological assessment including one or two tests for each cognitive domain . future studies are encouraged to collect a more complex and comprehensive battery from a larger sample of pd participants in order to better understand the role anxiety plays on cognition in pd . another limitation of this study was the absence of diagnostic interviews to characterize participants ' psychiatric symptoms and specify the type of anxiety disorders included in this study . future studies should perform diagnostic interviews with participants ( e.g. , using dsm - v criteria ) rather than relying on self - reported measures to group participants , in order to better understand whether the type of anxiety disorder ( e.g. , social anxiety , phobias , panic disorders , and generalized anxiety ) influences cognitive performance differently in pd . one advantage the hads questionnaire provided over other anxiety scales was that it assessed both anxiety and depression simultaneously and allowed us to control for coexisting depression . although there was a trend that the pda+ group self - reported higher levels of depression than the pda group , all participants included in the study scored < 6 on the depression subscale of the hads . 
controlling for depression while assessing anxiety has been identified as a key shortcoming in the majority of recent work . considering many previous studies have investigated the influence of depression on cognition in pd without accounting for the presence of anxiety and the inconsistent findings reported to date , we recommend that future research should try to disentangle the influence of anxiety versus depression on cognitive impairments in pd . considering the growing number of clinical trials for treating depression , there are few if any for the treatment of anxiety in pd . anxiety is a key contributor to decreased quality of life in pd and greatly requires better treatment options . moreover , anxiety has been suggested to play a key role in freezing of gait ( fog ) , which is also related to attentional set - shifting [ 52 , 53 ] . future research should examine the link between anxiety , set - shifting , and fog , in order to determine whether treating anxiety might be a potential therapy for improving fog ."""

from transformers import LEDForConditionalGeneration, LEDTokenizer
import torch

tokenizer = LEDTokenizer.from_pretrained("patrickvonplaten/led-large-16384-pubmed")

input_ids = tokenizer(LONG_ARTICLE, return_tensors="pt").input_ids.to("cuda")

global_attention_mask = torch.zeros_like(input_ids)
# set global_attention_mask on first token
global_attention_mask[:, 0] = 1

model = LEDForConditionalGeneration.from_pretrained("patrickvonplaten/led-large-16384-pubmed", return_dict_in_generate=True).to("cuda")

sequences = model.generate(input_ids, global_attention_mask=global_attention_mask).sequences

summary = tokenizer.batch_decode(sequences)
```
Wikidepia/albert-bahasa-uncased-squad
Wikidepia
2021-01-11T01:39:05Z
723
0
transformers
[ "transformers", "pytorch", "albert", "question-answering", "id", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language: id
inference: false
---

# SQuAD IndoBERT-Lite Base Model

Fine-tuned IndoBERT-Lite from IndoBenchmark using translated SQuAD datasets.

## How to use

### Using pipeline

```python
from transformers import BertTokenizerFast, pipeline

tokenizer = BertTokenizerFast.from_pretrained('Wikidepia/albert-bahasa-uncased-squad')
nlp = pipeline('question-answering', model="Wikidepia/albert-bahasa-uncased-squad", tokenizer=tokenizer)

QA_input = {
    'question': 'Kapan orang Normandia berada di Normandia?',
    'context': 'The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) adalah orang-orang yang pada abad ke-10 dan ke-11 memberikan nama mereka ke Normandia, sebuah wilayah di Prancis. Mereka adalah keturunan dari Norse ("Norman" berasal dari "Norseman") perampok dan perompak dari Denmark, Islandia dan Norwegia yang, di bawah pemimpin mereka Rollo, setuju untuk bersumpah setia kepada Raja Charles III dari Francia Barat. Melalui generasi asimilasi dan pencampuran dengan penduduk asli Franka dan Romawi-Gaul, keturunan mereka secara bertahap akan bergabung dengan budaya Francia Barat yang berbasis di Karoling. Identitas budaya dan etnis orang Normandia yang berbeda awalnya muncul pada paruh pertama abad ke-10, dan terus berkembang selama abad-abad berikutnya.'
}

res = nlp(QA_input)
print(res)
```
mrm8488/electricidad-small-finetuned-muchocine
mrm8488
2021-01-09T04:46:14Z
8
2
transformers
[ "transformers", "pytorch", "electra", "text-classification", "sentiment", "analysis", "spanish", "es", "dataset:muchocine", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: es
datasets:
- muchocine
widget:
- text: "Una buena película, sin más."
tags:
- sentiment
- analysis
- spanish
---

# Electricidad-small fine-tuned for (Spanish) Sentiment Analysis 🎞️👍👎

[Electricidad](https://huggingface.co/mrm8488/electricidad-small-discriminator) small fine-tuned on the [muchocine](https://huggingface.co/datasets/muchocine) dataset for the Spanish **Sentiment Analysis** downstream task.

## Fast usage with `pipelines` 🚀

```python
# pip install -q transformers
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHKPT = 'mrm8488/electricidad-small-finetuned-muchocine'
model = AutoModelForSequenceClassification.from_pretrained(CHKPT)
tokenizer = AutoTokenizer.from_pretrained(CHKPT)

from transformers import pipeline

classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)

# It rates your comments between 1 and 5 (stars)
classifier('Es una obra maestra. Brillante.')
classifier('Es una película muy buena.')
classifier('Una buena película, sin más.')
classifier('Esperaba mucho más.')
classifier('He tirado el dinero. Una basura. Vergonzoso.')
```
Narsil/small_summarization_test
Narsil
2021-01-08T11:18:02Z
3
0
transformers
[ "transformers", "albert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
```python
import tempfile

from tokenizers import Tokenizer, models
from transformers import PreTrainedTokenizerFast

model_max_length = 4
vocab = [(chr(i), i) for i in range(256)]

tokenizer = Tokenizer(models.Unigram(vocab))
with tempfile.NamedTemporaryFile() as f:
    tokenizer.save(f.name)
    real_tokenizer = PreTrainedTokenizerFast(tokenizer_file=f.name, model_max_length=model_max_length)

real_tokenizer._tokenizer.save("dummy/tokenizer.json")
```

The config uses ALBERT, which works with a minimal `config.json`.
MoritzLaurer/policy-distilbert-7d
MoritzLaurer
2021-01-04T20:22:18Z
7
3
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
---
language:
- en
tags:
- text-classification
metrics:
- accuracy (balanced)
- F1 (weighted)
widget:
- text: "70-85% of the population needs to get vaccinated against the novel coronavirus to achieve herd immunity."
---

# Policy-DistilBERT-7d

## Model description

This model was trained on 129,669 manually annotated sentences to classify text into one of seven political categories: 'Economy', 'External Relations', 'Fabric of Society', 'Freedom and Democracy', 'Political System', 'Welfare and Quality of Life' or 'Social Groups'.

## Intended uses & limitations

#### How to use the model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "MoritzLaurer/policy-distilbert-7d"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "The new variant first detected in southern England in September is blamed for sharp rises in levels of positive tests in recent weeks in London, south-east England and the east of England"

inputs = tokenizer(text, truncation=True, return_tensors="pt")
output = model(inputs["input_ids"])
# the output corresponds to the following labels:
# 0: external relations, 1: freedom and democracy, 2: political system, 3: economy,
# 4: welfare and quality of life, 5: fabric of society, 6: social groups

# output to dictionary
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["external relations", "freedom and democracy", "political system", "economy", "welfare and quality of life", "fabric of society", "social groups"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
# {'external relations': 0.0, 'freedom and democracy': 0.0, 'political system': 0.9, 'economy': 0.4,
#  'welfare and quality of life': 98.3, 'fabric of society': 0.3, 'social groups': 0.0}
```

### Training data

Policy-DistilBERT-7d was trained on the English-speaking subset of the [Manifesto Project Dataset (MPDS2020a)](https://manifesto-project.wzb.eu/datasets). The model was trained on 129,669 sentences from 164 political manifestos from 55 political parties in 8 English-speaking countries (Australia, Canada, Ireland, Israel, New Zealand, South Africa, United Kingdom, United States). The manifestos were published between 1992 and 2019.

The Manifesto Project manually annotates individual sentences from political party manifestos in 7 main political domains: 'Economy', 'External Relations', 'Fabric of Society', 'Freedom and Democracy', 'Political System', 'Welfare and Quality of Life' or 'Social Groups' - see the [codebook](https://manifesto-project.wzb.eu/down/data/2020b/codebooks/codebook_MPDataset_MPDS2020b.pdf) for the exact definitions of each domain.

### Training procedure

`distilbert-base-uncased` was trained using the Hugging Face trainer with the following hyperparameters. The hyperparameters were determined using a hyperparameter search on a 15% validation set.

```
training_args = TrainingArguments(
    num_train_epochs=5,              # total number of training epochs
    learning_rate=4e-05,
    per_device_train_batch_size=4,   # batch size per device during training
    per_device_eval_batch_size=4,    # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.02,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```

### Eval results

The model was evaluated using 15% of the sentences (85-15 train-test split).
| accuracy (balanced) | F1 (weighted) | precision | recall | accuracy (not balanced) |
|:-------------------:|:-------------:|:---------:|:------:|:-----------------------:|
| 0.745               | 0.773         | 0.772     | 0.771  | 0.771                   |

Please note that the label distribution in the dataset is imbalanced:

```
Welfare and Quality of Life    0.327225
Economy                        0.259191
Fabric of Society              0.111800
Political System               0.095081
Social Groups                  0.094371
External Relations             0.063724
Freedom and Democracy          0.048608
```

[Balanced accuracy](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.balanced_accuracy_score.html) and [weighted F1](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html) were therefore used to evaluate model performance.

## Limitations and bias

The model was trained on sentences in political manifestos from parties in the 8 countries mentioned above between 1992-2019, manually annotated by the [Manifesto Project](https://manifesto-project.wzb.eu/information/documents/information). The model output therefore reproduces the limitations of the dataset in terms of country coverage, time span, domain definitions and potential biases of the annotators - as any supervised machine learning model would. Applying the model to other types of data (other types of texts, countries etc.) will reduce performance.

### BibTeX entry and citation info

```bibtex
@unpublished{
    title={Policy-DistilBERT},
    author={Moritz Laurer},
    year={2020},
    note={Unpublished paper}
}
```
thilina/mt5-sinhalese-english
thilina
2021-01-03T21:14:26Z
65
8
transformers
[ "transformers", "pytorch", "tf", "mt5", "text2text-generation", "translation", "si", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
---
language:
- si
- en
tags:
- translation
license: apache-2.0
metrics:
- sacrebleu
---

# mt5-sinhalese-english

## Model description

An mT5-base model fine-tuned on the Sinhalese-English dataset in the Tatoeba Challenge. It can be used to translate from Sinhalese to English and vice versa.

## Training details

- English - Sinhala dataset from the Tatoeba Challenge [Datasets](https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/Data.md)
- [mT5-base pre-trained weights](https://huggingface.co/google/mt5-base)

## Eval results

SacreBLEU score:

- English to Sinhalese: 10.3
- Sinhalese to English: 24.4
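
## Usage (sketch)

The card does not include inference code, so the following is a hedged sketch rather than the author's documented usage: it assumes the model accepts plain source-language text with no task prefix, and that the translation direction follows from the language of the input.

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_name = "thilina/mt5-sinhalese-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

# English -> Sinhalese (assumption: no task prefix is required)
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```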
julien-c/kan-bayashi_csmsc_tacotron2
julien-c
2020-12-31T11:13:04Z
2
0
espnet
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
---
tags:
- espnet
- audio
- text-to-speech
language: zh
datasets:
- csmsc
license: cc-by-4.0
widget:
- text: "请您说得慢些好吗"
---

## ESPnet2 TTS model

### `kan-bayashi/csmsc_tacotron2`

♻️ Imported from https://zenodo.org/record/3969118

This model was trained by kan-bayashi using the csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
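
### Usage sketch (unofficial)

Until the official demo above is filled in, here is a hedged sketch using ESPnet2's `Text2Speech` interface. The local file names are assumptions (point them at the `config.yaml` and checkpoint downloaded from this repository), and the exact output keys vary across ESPnet versions.

```python
from espnet2.bin.tts_inference import Text2Speech

# assumed local paths to the files shipped with this model
tts = Text2Speech(train_config="config.yaml", model_file="model.pth")

output = tts("请您说得慢些好吗")
# Tacotron 2 predicts mel features; recent ESPnet versions return a dict,
# and a vocoder (e.g., Griffin-Lim or ParallelWaveGAN) is needed for audio.
mel = output["feat_gen"]
```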
julien-c/kan-bayashi-jsut_tts_train_tacotron2
julien-c
2020-12-27T18:48:06Z
4
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
---
tags:
- espnet
- audio
- text-to-speech
language: ja
datasets:
- jsut
license: cc-by-4.0
inference: false
---

## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4381098/

This model was trained by kan-bayashi using the jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Training

![](./exp/tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent/images/attn_loss.png)

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
julien-c/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train
julien-c
2020-12-27T18:47:01Z
14
2
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:ljspeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- ljspeech
license: cc-by-4.0
widget:
- text: "Hello, how are you doing?"
---

## Example ESPnet2 TTS model

### `kan-bayashi/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.best`

♻️ Imported from https://zenodo.org/record/3989498#.X90RlOlKjkM

This model was trained by kan-bayashi using the ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

### Training config

See full config in [`config.yaml`](./config.yaml)

```yaml
config: conf/tuning/train_tacotron2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_tacotron2_raw
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
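
### Usage sketch (unofficial)

As with the demo placeholder above, this is only a hedged sketch of ESPnet2's `Text2Speech` interface; the local file names are assumptions, and the returned keys differ between ESPnet versions.

```python
from espnet2.bin.tts_inference import Text2Speech

# assumed local paths to this repository's config and checkpoint
tts = Text2Speech(train_config="config.yaml", model_file="model.pth")

output = tts("Hello, how are you doing?")
mel = output["feat_gen"]  # mel features; pass them to a vocoder for a waveform
```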
monologg/koelectra-small-generator
monologg
2020-12-26T16:23:42Z
11
0
transformers
[ "transformers", "pytorch", "electra", "fill-mask", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: ko
---

# KoELECTRA (Small Generator)

Pretrained ELECTRA Language Model for Korean (`koelectra-small-generator`)

For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).

## Usage

### Load model and tokenizer

```python
>>> from transformers import ElectraModel, ElectraTokenizer

>>> model = ElectraModel.from_pretrained("monologg/koelectra-small-generator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-generator")
```

### Tokenizer example

```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-generator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 18429, 41, 6240, 15229, 6204, 20894, 5689, 12622, 10690, 18, 3]
```

## Example using ElectraForMaskedLM

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="monologg/koelectra-small-generator",
    tokenizer="monologg/koelectra-small-generator"
)

print(fill_mask("나는 {} 밥을 먹었다.".format(fill_mask.tokenizer.mask_token)))
```
monologg/koelectra-small-discriminator
monologg
2020-12-26T16:23:23Z
168
0
transformers
[ "transformers", "pytorch", "electra", "pretraining", "ko", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: ko
---

# KoELECTRA (Small Discriminator)

Pretrained ELECTRA Language Model for Korean (`koelectra-small-discriminator`)

For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).

## Usage

### Load model and tokenizer

```python
>>> from transformers import ElectraModel, ElectraTokenizer

>>> model = ElectraModel.from_pretrained("monologg/koelectra-small-discriminator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-discriminator")
```

### Tokenizer example

```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-discriminator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 18429, 41, 6240, 15229, 6204, 20894, 5689, 12622, 10690, 18, 3]
```

## Example using ElectraForPreTraining

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer

discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-small-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-discriminator")

sentence = "나는 방금 밥을 먹었다."
fake_sentence = "나는 내일 밥을 먹었다."

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")

discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

# squeeze the batch dimension and drop [CLS]/[SEP] before pairing with tokens
print(list(zip(fake_tokens, predictions.squeeze().tolist()[1:-1])))
```
m3hrdadfi/albert-fa-base-v2-sentiment-snappfood
m3hrdadfi
2020-12-26T08:49:28Z
7
0
transformers
[ "transformers", "pytorch", "tf", "albert", "text-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: fa
license: apache-2.0
---

# ALBERT Persian

A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language

> You can call it Little-BERT (برت_کوچولو).

[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, in the same way as ParsBERT.

Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.

## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]

This task aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types.

### SnappFood

[Snappfood](https://snappfood.ir/) (an online food delivery company) user comments: 70,000 comments with two labels (i.e., polarity classification):

1. Happy
2. Sad

| Label    | #     |
|:--------:|:-----:|
| Negative | 35000 |
| Positive | 35000 |

**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=15J4zPN1BD7Q_ZIQ39VeFquwSoW8qTxgu)

## Results

The following table summarizes the F1 score obtained as compared to other models and architectures.

| Dataset                 | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers |
|:-----------------------:|:-----------------:|:-----------:|:-----:|:-------------:|
| SnappFood User Comments | 85.79             | 88.12       | 87.87 | -             |

### BibTeX entry and citation info

Please cite in publications as the following:

```bibtex
@misc{ALBERTPersian,
  author = {Mehrdad Farahani},
  title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}

@article{ParsBERT,
  title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.12515}
}
```

## Questions?

Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
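
## How to use (sketch)

The card does not ship usage code, so here is a minimal, hedged sketch. The label names and their order are not documented above, so check `model.config.id2label` before relying on them; the Persian example sentence is illustrative only.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="m3hrdadfi/albert-fa-base-v2-sentiment-snappfood")
# "The food was cold and arrived late."
print(classifier("غذا سرد بود و دیر رسید."))
```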
m3hrdadfi/albert-fa-base-v2-sentiment-digikala
m3hrdadfi
2020-12-26T08:48:33Z
5
0
transformers
[ "transformers", "pytorch", "tf", "albert", "text-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: fa
license: apache-2.0
---

# ALBERT Persian

A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language

> You can call it Little-BERT (برت_کوچولو).

[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, in the same way as ParsBERT.

Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.

## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]

This task aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types.

### Digikala

Digikala user comments provided by the [Open Data Mining Program (ODMP)](https://www.digikala.com/opendata/). This dataset contains 62,321 user comments with three labels:

| Label           | #     |
|:---------------:|:-----:|
| no_idea         | 10394 |
| not_recommended | 15885 |
| recommended     | 36042 |

**Download**
You can download the dataset from [here](https://www.digikala.com/opendata/)

## Results

The following table summarizes the F1 score obtained as compared to other models and architectures.

| Dataset                | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers |
|:----------------------:|:-----------------:|:-----------:|:-----:|:-------------:|
| Digikala User Comments | 81.12             | 81.74       | 80.74 | -             |

### BibTeX entry and citation info

Please cite in publications as the following:

```bibtex
@misc{ALBERTPersian,
  author = {Mehrdad Farahani},
  title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}

@article{ParsBERT,
  title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.12515}
}
```

## Questions?

Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
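
## How to use (sketch)

A hedged, illustrative sketch (not from the original repo); the mapping from logit index to the three labels above is not documented in this card, so it is read from `model.config.id2label`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "m3hrdadfi/albert-fa-base-v2-sentiment-digikala"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# "The build quality is excellent." (illustrative input)
inputs = tokenizer("کیفیت ساخت عالی است.", return_tensors="pt")
probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(probs)})
```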
m3hrdadfi/albert-fa-base-v2-sentiment-binary
m3hrdadfi
2020-12-26T08:46:58Z
9
1
transformers
[ "transformers", "pytorch", "tf", "albert", "text-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: fa
license: apache-2.0
---

# ALBERT Persian

A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language

> You can call it Little-BERT (برت_کوچولو).

[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, in the same way as ParsBERT.

Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.

## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]

This task aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types.

## Results

The model obtained an F1 score of 87.56% on a composition of all three datasets into the binary labels `Negative` and `Positive`.

### BibTeX entry and citation info

Please cite in publications as the following:

```bibtex
@misc{ALBERTPersian,
  author = {Mehrdad Farahani},
  title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}

@article{ParsBERT,
  title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.12515}
}
```

## Questions?

Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
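
## How to use (sketch)

A minimal, hedged sketch (illustrative input; verify the `Negative`/`Positive` label mapping via `model.config.id2label`):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="m3hrdadfi/albert-fa-base-v2-sentiment-binary")
print(classifier("از خرید راضی بودم."))  # "I was satisfied with the purchase."
```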
m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi
m3hrdadfi
2020-12-26T08:42:15Z
44
0
transformers
[ "transformers", "pytorch", "tf", "albert", "text-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: fa
license: apache-2.0
---

# ALBERT Persian

A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language

> You can call it Little-BERT (برت_کوچولو).

[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, in the same way as ParsBERT.

Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.

## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]

This task aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types.

### DeepSentiPers

DeepSentiPers, which is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes; two positives (i.e., happy and delighted), two negatives (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset.

**Binary:**
1. Negative (Furious + Angry)
2. Positive (Happy + Delighted)

**Multi:**
1. Furious
2. Angry
3. Neutral
4. Happy
5. Delighted

| Label     | #    |
|:---------:|:----:|
| Furious   | 236  |
| Angry     | 1357 |
| Neutral   | 2874 |
| Happy     | 2848 |
| Delighted | 2516 |

**Download**
You can download the dataset from:

- [SentiPers](https://github.com/phosseini/sentipers)
- [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers)

## Results

The following table summarizes the F1 score obtained as compared to other models and architectures.

| Dataset                  | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:|
| SentiPers (Multi Class)  | 66.12             | 71.11       | -     | 69.33         |
| SentiPers (Binary Class) | 91.09             | 92.13       | -     | 91.98         |

### BibTeX entry and citation info

Please cite in publications as the following:

```bibtex
@misc{ALBERTPersian,
  author = {Mehrdad Farahani},
  title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}

@article{ParsBERT,
  title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.12515}
}
```

## Questions?

Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
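
## How to use (sketch)

A minimal, hedged sketch (the five-class label order is not documented above, so inspect `model.config.id2label`; the example sentence translates to "This phone is amazing."):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi")
print(classifier("این گوشی فوق‌العاده است."))
```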
m3hrdadfi/albert-fa-base-v2-ner-arman
m3hrdadfi
2020-12-26T08:36:57Z
17
3
transformers
[ "transformers", "pytorch", "tf", "albert", "token-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
language: fa
license: apache-2.0
---

# ALBERT Persian

A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language

> You can call it Little-BERT (برت_کوچولو).

[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, in the same way as ParsBERT.

Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.

## Persian NER [ARMAN, PEYMA]

This task aims to extract named entities in the text, such as names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences marked in the `IOB` format. In this format, tokens that are not part of an entity are tagged as `"O"`, the `"B"` tag corresponds to the first word of an entity, and the `"I"` tag corresponds to the rest of the terms of the same entity. Both `"B"` and `"I"` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, `ARMAN` and `PEYMA`.

### ARMAN

The ARMAN dataset holds 7,682 sentences with 250,015 tokens tagged over six different classes.

1. Organization
2. Location
3. Facility
4. Event
5. Product
6. Person

| Label        | #     |
|:------------:|:-----:|
| Organization | 30108 |
| Location     | 12924 |
| Facility     | 4458  |
| Event        | 7557  |
| Product      | 4389  |
| Person       | 15645 |

**Download**
You can download the dataset from [here](https://github.com/HaniehP/PersianNER)

## Results

The following table summarizes the F1 score obtained as compared to other models and architectures.

| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|:-------:|:-----------------:|:-----------:|:-----:|:----------:|:------------:|:--------:|:--------------:|:----------:|
| ARMAN   | 97.43             | 98.79       | 95.89 | 89.9       | 84.03        | 86.55    | -              | 77.45      |

### BibTeX entry and citation info

Please cite in publications as the following:

```bibtex
@misc{ALBERTPersian,
  author = {Mehrdad Farahani},
  title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}

@article{ParsBERT,
  title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.12515}
}
```

## Questions?

Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
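
## How to use (sketch)

No usage code ships with this card, so the following is a hedged sketch; on older `transformers` versions replace `aggregation_strategy="simple"` with `grouped_entities=True`. The example sentence translates to "Tehran is the capital of Iran."

```python
from transformers import pipeline

ner = pipeline("ner", model="m3hrdadfi/albert-fa-base-v2-ner-arman", aggregation_strategy="simple")
print(ner("تهران پایتخت ایران است."))
```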
m3hrdadfi/albert-fa-base-v2-ner-peyma
m3hrdadfi
2020-12-26T08:36:20Z
4
1
transformers
[ "transformers", "pytorch", "tf", "albert", "token-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
language: fa
license: apache-2.0
---

# ALBERT Persian

A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language

> You can call it Little-BERT (برت_کوچولو).

[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, in the same way as ParsBERT.

Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.

## Persian NER [ARMAN, PEYMA]

This task aims to extract named entities in the text, such as names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences marked in the `IOB` format. In this format, tokens that are not part of an entity are tagged as `"O"`, the `"B"` tag corresponds to the first word of an entity, and the `"I"` tag corresponds to the rest of the terms of the same entity. Both `"B"` and `"I"` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, `ARMAN` and `PEYMA`.

### PEYMA

The PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens, from which 41,148 tokens are tagged with seven different classes.

1. Organization
2. Money
3. Location
4. Date
5. Time
6. Person
7. Percent

| Label        | #     |
|:------------:|:-----:|
| Organization | 16964 |
| Money        | 2037  |
| Location     | 8782  |
| Date         | 4259  |
| Time         | 732   |
| Person       | 7675  |
| Percent      | 699   |

**Download**
You can download the dataset from [here](http://nsurl.org/tasks/task-7-named-entity-recognition-ner-for-farsi/)

## Results

The following table summarizes the F1 score obtained as compared to other models and architectures.

| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|:-------:|:-----------------:|:-----------:|:-----:|:----------:|:------------:|:--------:|:--------------:|:----------:|
| PEYMA   | 88.99             | 93.10       | 86.64 | -          | 90.59        | -        | 84.00          | -          |

### BibTeX entry and citation info

Please cite in publications as the following:

```bibtex
@misc{ALBERTPersian,
  author = {Mehrdad Farahani},
  title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}

@article{ParsBERT,
  title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.12515}
}
```

## Questions?

Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
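
## How to use (sketch)

A hedged sketch only, not code from the original repo (the illustrative sentence translates to "Ali went to Tehran on Monday."; use `grouped_entities=True` on older `transformers` versions):

```python
from transformers import pipeline

ner = pipeline("token-classification", model="m3hrdadfi/albert-fa-base-v2-ner-peyma", aggregation_strategy="simple")
print(ner("علی روز دوشنبه به تهران رفت."))
```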
julien-c/voice-activity-detection
julien-c
2020-12-21T22:38:05Z
4
16
null
[ "pytorch", "pyannote", "audio", "voice-activity-detection", "dataset:dihard", "arxiv:1910.10655", "license:mit", "region:us" ]
voice-activity-detection
2022-03-02T23:29:05Z
---
tags:
- pyannote
- audio
- voice-activity-detection
datasets:
- dihard
license: mit
inference: false
---

## Example pyannote-audio Voice Activity Detection model

### `pyannote.audio.models.segmentation.PyanNet`

♻️ Imported from https://github.com/pyannote/pyannote-audio-hub

This model was trained by @hbredin.

### Demo: How to use in pyannote-audio

```python
from pyannote.audio.core.inference import Inference

model = Inference('julien-c/voice-activity-detection', device='cuda')

model({
    "audio": "TheBigBangTheory.wav"
})
```

### Citing pyannote-audio

```BibTex
@inproceedings{Bredin2020,
  Title = {{pyannote.audio: neural building blocks for speaker diarization}},
  Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe},
  Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing},
  Address = {Barcelona, Spain},
  Month = {May},
  Year = {2020},
}
```

or

```bibtex
@inproceedings{Lavechin2020,
  author = {Marvin Lavechin and Marie-Philippe Gill and Ruben Bousbib and Herv\'{e} Bredin and Leibny Paola Garcia-Perera},
  title = {{End-to-end Domain-Adversarial Voice Activity Detection}},
  year = {2020},
  url = {https://arxiv.org/abs/1910.10655},
}
```
laboro-ai/distilbert-base-japanese-finetuned-livedoor
laboro-ai
2020-12-18T03:09:54Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "ja", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: ja
tags:
- distilbert
license: cc-by-nc-4.0
---
sibt-rj/albert-large-urdu
sibt-rj
2020-12-16T20:27:42Z
6
0
transformers
[ "transformers", "pytorch", "albert", "fill-mask", "urdu", "language-model", "ur", "dataset:urdu-text-news", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language:
- ur
tags:
- urdu
- language-model
license: mit
datasets:
- urdu-text-news
---
patrickvonplaten/bert2bert-cnn_dailymail-fp16
patrickvonplaten
2020-12-12T11:22:49Z
997
4
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# Bert2Bert Summarization with 🤗 EncoderDecoder Framework

This model is a Bert2Bert model fine-tuned on summarization.

Bert2Bert is an `EncoderDecoderModel`, meaning that both the encoder and the decoder are `bert-base-uncased` BERT models. Leveraging the [EncoderDecoderFramework](https://huggingface.co/transformers/model_doc/encoderdecoder.html#encoder-decoder-models), the two pretrained models can simply be loaded into the framework via:

```python
bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
```

The decoder of an `EncoderDecoder` model needs cross-attention layers and usually makes use of causal masking for auto-regressive generation. Thus, `bert2bert` was consequently fine-tuned on the `CNN/Daily Mail` dataset and the resulting model `bert2bert-cnn_dailymail-fp16` is uploaded here.

## Example

The model is by no means a state-of-the-art model, but nevertheless produces reasonable summarization results. It was mainly fine-tuned as a proof-of-concept for the 🤗 EncoderDecoder Framework.

The model can be used as follows:

```python
from transformers import BertTokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16")
tokenizer = BertTokenizer.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16")

article = """(CNN)Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing a racist chant. SAE's national chapter suspended the students, but University of Oklahoma President David Boren took it a step further, saying the university's affiliation with the fraternity is permanently done. The news is shocking, but it's not the first time SAE has faced controversy. SAE was founded March 9, 1856, at the University of Alabama, five years before the American Civil War, according to the fraternity website. When the war began, the group had fewer than 400 members, of which "369 went to war for the Confederate States and seven for the Union Army," the website says. The fraternity now boasts more than 200,000 living alumni, along with about 15,000 undergraduates populating 219 chapters and 20 "colonies" seeking full membership at universities. SAE has had to work hard to change recently after a string of member deaths, many blamed on the hazing of new recruits, SAE national President Bradley Cohen wrote in a message on the fraternity's website. The fraternity's website lists more than 130 chapters cited or suspended for "health and safety incidents" since 2010. At least 30 of the incidents involved hazing, and dozens more involved alcohol. However, the list is missing numerous incidents from recent months. Among them, according to various media outlets: Yale University banned the SAEs from campus activities last month after members allegedly tried to interfere with a sexual misconduct investigation connected to an initiation rite. Stanford University in December suspended SAE housing privileges after finding sorority members attending a fraternity function were subjected to graphic sexual content. And Johns Hopkins University in November suspended the fraternity for underage drinking. "The media has labeled us as the 'nation's deadliest fraternity,' " Cohen said. In 2011, for example, a student died while being coerced into excessive alcohol consumption, according to a lawsuit. SAE's previous insurer dumped the fraternity. "As a result, we are paying Lloyd's of London the highest insurance rates in the Greek-letter world," Cohen said.
Universities have turned down SAE's attempts to open new chapters, and the fraternity had to close 12 in 18 months over hazing incidents.""" input_ids = tokenizer(article, return_tensors="pt").input_ids output_ids = model.generate(input_ids) print(tokenizer.decode(output_ids[0], skip_special_tokens=True)) # should produce # sae was founded in 1856, five years before the civil war. the fraternity has had to work hard to change recently. the university of oklahoma president says the university's affiliation with the fraternity is permanently done. the sae has had a string of members in recent mon ths. ``` ## Training script: Please follow this tutorial to see how to warm-start a BERT2BERT model: https://colab.research.google.com/drive/1WIk2bxglElfZewOHboPFNj8H44_VAyKE?usp=sharing The obtained results should be: | - | Rouge2 - mid -precision | Rouge2 - mid - recall | Rouge2 - mid - fmeasure | |----------|:-------------:|:------:|:------:| | **CNN/Daily Mail** | 16.12 | 17.07 | **16.1** |
ashwani-tanwar/Indo-Aryan-XLM-R-Base
ashwani-tanwar
2020-12-12T02:52:59Z
6
0
transformers
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "gu", "hi", "mr", "bn", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language:
- gu
- hi
- mr
- bn
---

# Indo-Aryan-XLM-R-Base

This model is finetuned over [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) (XLM-R) using its base variant with the Hindi, Gujarati, Marathi, and Bengali languages from the Indo-Aryan family, using the [OSCAR](https://oscar-corpus.com/) monolingual datasets. As these languages had imbalanced datasets, we used resampling strategies, as used in pretraining the XLM-R, to balance the resulting dataset after combining these languages. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model.

## Dataset
The OSCAR corpus contains several diverse datasets for different languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/), who reported better performance with this diverse dataset as compared to other large homogeneous datasets.

## Preprocessing and Training Procedure
Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure.

## Usage
- This model can be used for further finetuning for different NLP tasks using the Hindi, Gujarati, Marathi, and Bengali languages (see the sketch after this card).
- It can be used to generate contextualised word representations for the words from the above languages.
- It can be used for domain adaptation.
- It can be used to predict the missing words from their sentences.

## Demo
### Using the model to predict missing words
```
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ashwani-tanwar/Indo-Aryan-XLM-R-Base')
pred_word = unmasker("અમદાવાદ એ ગુજરાતનું એક <mask> છે.")
print(pred_word)
```
```
[{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક શહેર છે.</s>', 'score': 0.7811868786811829, 'token': 85227, 'token_str': '▁શહેર'},
 {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક ગામ છે.</s>', 'score': 0.055032357573509216, 'token': 66346, 'token_str': '▁ગામ'},
 {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક નામ છે.</s>', 'score': 0.0287721399217844, 'token': 29565, 'token_str': '▁નામ'},
 {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક રાજ્ય છે.</s>', 'score': 0.02565067447721958, 'token': 63678, 'token_str': '▁રાજ્ય'},
 {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એકનગર છે.</s>', 'score': 0.022877279669046402, 'token': 69702, 'token_str': 'નગર'}]
```
### Using the model to generate contextualised word representations
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Indo-Aryan-XLM-R-Base")
model = AutoModel.from_pretrained("ashwani-tanwar/Indo-Aryan-XLM-R-Base")
sentence = "અમદાવાદ એ ગુજરાતનું એક શહેર છે."
encoded_sentence = tokenizer(sentence, return_tensors='pt')
context_word_rep = model(**encoded_sentence)
```
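The finetuning bullet above can be made concrete with a minimal sketch along the following lines (assuming a recent `transformers` version); the classification head, `num_labels`, and the example sentence are placeholders for your own task setup, not part of the released model:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Indo-Aryan-XLM-R-Base")
# A fresh classification head is initialised on top of the pretrained encoder;
# num_labels=2 is only a placeholder for your task.
model = AutoModelForSequenceClassification.from_pretrained(
    "ashwani-tanwar/Indo-Aryan-XLM-R-Base", num_labels=2)

inputs = tokenizer("यह एक अच्छी फिल्म थी।", return_tensors="pt")
logits = model(**inputs).logits  # feed this model into your usual Trainer loop
```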
ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base
ashwani-tanwar
2020-12-12T02:22:48Z
4
0
transformers
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "gu", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: gu
---

# Gujarati-in-Devanagari-XLM-R-Base

This model is finetuned over [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) (XLM-R) using its base variant with the Gujarati language, using the [OSCAR](https://oscar-corpus.com/) monolingual dataset. We converted the Gujarati script to Devanagari using the [Indic-NLP](https://github.com/anoopkunchukuttan/indic_nlp_library) library (a sketch of this step is given after this card). For example, the sentence 'અમદાવાદ એ ગુજરાતનું એક શહેર છે.' was converted to 'अमदावाद ए गुजरातनुं एक शहेर छे.'. This helped to get better contextualised representations for some words, as XLM-R was pre-trained with several languages written in the Devanagari script, such as Hindi, Marathi, Sanskrit, and so on. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model.

## Dataset
The OSCAR corpus contains several diverse datasets for different languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/), who reported better performance with this diverse dataset as compared to other large homogeneous datasets.

## Preprocessing and Training Procedure
Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure.

## Usage
- This model can be used for further finetuning for different NLP tasks using the Gujarati language.
- It can be used to generate contextualised word representations for the Gujarati words.
- It can be used for domain adaptation.
- It can be used to predict the missing words from the Gujarati sentences.

## Demo
### Using the model to predict missing words
```
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base')
pred_word = unmasker("अमदावाद ए गुजरातनुं एक <mask> छे.")
print(pred_word)
```
```
[{'sequence': '<s> अमदावाद ए गुजरातनुं एक नगर छे.</s>', 'score': 0.24843722581863403, 'token': 18576, 'token_str': '▁नगर'},
 {'sequence': '<s> अमदावाद ए गुजरातनुं एक महानगर छे.</s>', 'score': 0.21455222368240356, 'token': 122519, 'token_str': '▁महानगर'},
 {'sequence': '<s> अमदावाद ए गुजरातनुं एक राज्य छे.</s>', 'score': 0.16832049190998077, 'token': 10665, 'token_str': '▁राज्य'},
 {'sequence': '<s> अमदावाद ए गुजरातनुं एक जिल्ला छे.</s>', 'score': 0.06764694303274155, 'token': 20396, 'token_str': '▁जिल्ला'},
 {'sequence': '<s> अमदावाद ए गुजरातनुं एक शहर छे.</s>', 'score': 0.05364946648478508, 'token': 22770, 'token_str': '▁शहर'}]
```
### Using the model to generate contextualised word representations
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base")
model = AutoModel.from_pretrained("ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base")
sentence = "अमदावाद ए गुजरातनुं एक शहेर छे."
encoded_sentence = tokenizer(sentence, return_tensors='pt')
context_word_rep = model(**encoded_sentence)
```
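Since the model expects Devanagari input, Gujarati text has to be transliterated first. A minimal sketch with the Indic-NLP library mentioned above (assuming its `UnicodeIndicTransliterator` API; please check the library's docs for your installed version):

```python
# pip install indic-nlp-library
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator

gu_sentence = "અમદાવાદ એ ગુજરાતનું એક શહેર છે."
# Map the Gujarati script to Devanagari ("gu" -> "hi") before tokenization
dev_sentence = UnicodeIndicTransliterator.transliterate(gu_sentence, "gu", "hi")
print(dev_sentence)  # अमदावाद ए गुजरातनुं एक शहेर छे.
```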
flexudy/t5-base-multi-sentence-doctor
flexudy
2020-12-11T23:33:25Z
47
45
transformers
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
![avatar](sent-banner.png)

# Sentence-Doctor
Sentence doctor is a T5 model that attempts to correct the errors or mistakes found in sentences. The model works on English, German and French text.

## 1. Problem:
Many NLP models depend on tasks like *Text Extraction Libraries, OCR, Speech to Text libraries* and **Sentence Boundary Detection**.
As a consequence, errors caused by these tasks in your NLP pipeline can affect the quality of models in applications, especially since models are often trained on **clean** input.

## 2. Solution:
Here we provide a model that **attempts** to reconstruct sentences based on their context (surrounding text). The task is pretty straightforward:
* `Given an "erroneous" sentence, and its context, reconstruct the "intended" sentence`.

## 3. Use Cases:
* Attempt to repair noisy sentences that were extracted with OCR software or text extractors.
* Attempt to repair sentence boundaries.
  * Example (in German): **Input: "und ich bin im**",
    * Prefix_Context: "Hallo! Mein Name ist John", Postfix_Context: "Januar 1990 geboren."
    * Output: "John und ich bin im Jahr 1990 geboren"
* Possibly sentence level spelling correction -- although this is not the intended use.
  * Input: "I went to church **las yesteday**" => Output: "I went to church last Sunday".

## 4. Disclaimer
Note how we always emphasise the word *attempt*. The current version of the model was only trained on **150K** sentences from the tatoeba dataset: https://tatoeba.org/eng (50K per language -- En, Fr, De). Hence, we strongly encourage you to finetune the model on your dataset. We might release a version trained on more data.

## 5. Datasets
We generated synthetic data from the tatoeba dataset: https://tatoeba.org/eng, randomly applying different transformations to words and characters based on some probabilities. The datasets are available in the data folder (where **sentence_doctor_dataset_300K** is a larger dataset with 100K sentences for each language).

## 6. Usage

### 6.1 Preprocessing
* Let us assume we have the following text (note that there are no punctuation marks in the text):

```python
text = "That is my job I am a medical doctor I save lives"
```
* You decided to extract the sentences and, for some obscure reason, you obtained these sentences:

```python
sentences = ["That is my job I a", "m a medical doct", "I save lives"]
```
* You now wish to correct the sentence **"m a medical doct"**.

Here is the single preprocessing step for the model:

```python
input_text = "repair_sentence: " + sentences[1] + " context: {" + sentences[0] + "}{" + sentences[2] + "} </s>"
```

**Explanation**:</br>
* We are telling the model to repair the sentence with the prefix "repair_sentence: "
* Then append the sentence we want to repair **sentence[1]** which is "m a medical doct"
* Next we give some context to the model. In this case, the context is some text that occurred before the sentence and some text that appeared after the sentence in the original text.
  * To do that, we append the keyword "context :"
  * Append **{sentence[0]}** "{That is my job I a}". (Note how it is surrounded by curly braces).
  * Append **{sentence[2]}** "{I save lives}".
* At last we tell the model this is the end of the input with </s>.

A small helper that wraps this preprocessing step is sketched at the end of this card.

```python
print(input_text) # repair_sentence: m a medical doct context: {That is my job I a}{I save lives} </s>
```

<br/>

**The context is optional**, so the input could also be ```repair_sentence: m a medical doct context: {}{} </s>```

### 6.2 Inference

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("flexudy/t5-base-multi-sentence-doctor")

model = AutoModelWithLMHead.from_pretrained("flexudy/t5-base-multi-sentence-doctor")

input_text = "repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s>"

input_ids = tokenizer.encode(input_text, return_tensors="pt")

outputs = model.generate(input_ids, max_length=32, num_beams=1)

sentence = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)

assert sentence == "I am a medical doctor."
```

## 7. Fine-tuning
We also provide a script `train_any_t5_task.py` that might help you fine-tune any Text2Text task with T5. We added #TODO comments all over to help you train with ease. For example:

```python
# TODO Set your training epochs
config.TRAIN_EPOCHS = 3
```
If you don't want to read the #TODO comments, just pass in your data like this

```python
# TODO Where is your data ? Enter the path
trainer.start("data/sentence_doctor_dataset_300.csv")
```
and voila!! Please feel free to correct any mistakes in the code and make a pull request.

## 8. Attribution
* [Huggingface](https://huggingface.co/) transformer lib for making this possible
* Abhishek Kumar Mishra's transformer [tutorial](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) on text summarisation. Our training code is just a modified version of their code. So many thanks.
* We finetuned this model from the huggingface hub: WikinewsSum/t5-base-multi-combine-wiki-news. Thanks to the [authors](https://huggingface.co/WikinewsSum)
* We also read a lot of work from [Suraj Patil](https://github.com/patil-suraj)
* No one has been forgotten, hopefully :)
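As promised in section 6.1, here is a convenience sketch that assembles the input format automatically from a list of sentence fragments. It is just an illustration of the format described above, not part of the released package:

```python
def build_repair_input(sentences, i):
    """Build the model input for repairing sentences[i], using the
    neighbouring fragments as the (optional) context."""
    prefix = sentences[i - 1] if i > 0 else ""
    suffix = sentences[i + 1] if i < len(sentences) - 1 else ""
    return f"repair_sentence: {sentences[i]} context: {{{prefix}}}{{{suffix}}} </s>"

sentences = ["That is my job I a", "m a medical doct", "I save lives"]
print(build_repair_input(sentences, 1))
# repair_sentence: m a medical doct context: {That is my job I a}{I save lives} </s>
```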
valhalla/electra-base-discriminator-finetuned_squadv1
valhalla
2020-12-11T22:03:34Z
5
0
transformers
[ "transformers", "pytorch", "electra", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
# ELECTRA-BASE-DISCRIMINATOR finetuned on SQuADv1

This is the electra-base-discriminator model, finetuned on the SQuADv1 dataset for the question answering task.

## Model details
As mentioned in the original paper: ELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.

| Param | #Value |
|---------------------|--------|
| layers | 12 |
| hidden size | 768 |
| num attention heads | 12 |
| on disk size | 436MB |

## Model training
This model was trained on a Google Colab V100 GPU. You can find the fine-tuning colab here [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/11yo-LaFsgggwmDSy2P8zD3tzf5cCb-DU?usp=sharing).

## Results
The results are actually slightly better than those given in the paper. In the paper the authors mention that electra-base achieves 84.5 EM and 90.8 F1.

| Metric | #Value |
|--------|--------|
| EM | 85.0520|
| F1 | 91.6050|

## Model in Action 🚀

```python3
from transformers import pipeline

nlp = pipeline('question-answering', model='valhalla/electra-base-discriminator-finetuned_squadv1')

nlp({
    'question': 'What is the answer to everything ?',
    'context': '42 is the answer to life the universe and everything'
})
=> {'answer': '42', 'end': 2, 'score': 0.981274963050339, 'start': 0}
```

> Created with ❤️ by Suraj Patil [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/patil-suraj/) [![Twitter icon](https://cdn0.iconfinder.com/data/icons/shift-logotypes/32/Twitter-32.png)](https://twitter.com/psuraj28)
twmkn9/distilbert-base-uncased-squad2
twmkn9
2020-12-11T22:03:01Z
184
4
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
This model is [Distilbert base uncased](https://huggingface.co/distilbert-base-uncased) trained on SQuAD v2 as:

```
export SQUAD_DIR=../../squad2
python3 run_squad.py --model_type distilbert --model_name_or_path distilbert-base-uncased --do_train --do_eval --overwrite_cache --do_lower_case --version_2_with_negative --save_steps 100000 --train_file $SQUAD_DIR/train-v2.0.json --predict_file $SQUAD_DIR/dev-v2.0.json --per_gpu_train_batch_size 8 --num_train_epochs 3 --learning_rate 3e-5 --max_seq_length 384 --doc_stride 128 --output_dir ./tmp/distilbert_fine_tuned/
```

Performance on a dev subset is close to the original paper:

```
Results:
{
  'exact': 64.88976637051661,
  'f1': 68.1776176526635,
  'total': 6078,
  'HasAns_exact': 69.7594501718213,
  'HasAns_f1': 76.62665295288285,
  'HasAns_total': 2910,
  'NoAns_exact': 60.416666666666664,
  'NoAns_f1': 60.416666666666664,
  'NoAns_total': 3168,
  'best_exact': 64.88976637051661,
  'best_exact_thresh': 0.0,
  'best_f1': 68.17761765266337,
  'best_f1_thresh': 0.0
}
```

We are hopeful this might save you time, energy, and compute. Cheers!
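For quick inference with the fine-tuned checkpoint, a standard question-answering pipeline along these lines should work; the question/context pair and the printed output here are illustrative only:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="twmkn9/distilbert-base-uncased-squad2")

qa({
    "question": "What dataset was the model trained on?",
    "context": "This DistilBERT model was fine-tuned on SQuAD v2 for question answering.",
})
# e.g. {'answer': 'SQuAD v2', 'score': ..., 'start': ..., 'end': ...}
```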
twmkn9/albert-base-v2-squad2
twmkn9
2020-12-11T22:02:54Z
4,239
4
transformers
[ "transformers", "pytorch", "albert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
This model is [ALBERT base v2](https://huggingface.co/albert-base-v2) trained on SQuAD v2 as:

```
export SQUAD_DIR=../../squad2
python3 run_squad.py --model_type albert --model_name_or_path albert-base-v2 --do_train --do_eval --overwrite_cache --do_lower_case --version_2_with_negative --save_steps 100000 --train_file $SQUAD_DIR/train-v2.0.json --predict_file $SQUAD_DIR/dev-v2.0.json --per_gpu_train_batch_size 8 --num_train_epochs 3 --learning_rate 3e-5 --max_seq_length 384 --doc_stride 128 --output_dir ./tmp/albert_fine/
```

Performance on a dev subset is close to the original paper:

```
Results:
{
  'exact': 78.71010200723923,
  'f1': 81.89228117126069,
  'total': 6078,
  'HasAns_exact': 75.39518900343643,
  'HasAns_f1': 82.04167868004215,
  'HasAns_total': 2910,
  'NoAns_exact': 81.7550505050505,
  'NoAns_f1': 81.7550505050505,
  'NoAns_total': 3168,
  'best_exact': 78.72655478775913,
  'best_exact_thresh': 0.0,
  'best_f1': 81.90873395178066,
  'best_f1_thresh': 0.0
}
```

We are hopeful this might save you time, energy, and compute. Cheers!
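Because SQuAD v2 contains unanswerable questions, it can be useful to let the pipeline return an empty answer when nothing in the context fits; the question-answering pipeline's `handle_impossible_answer` flag supports this. A small sketch with an illustrative, deliberately unanswerable question:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="twmkn9/albert-base-v2-squad2")

qa(
    question="Who wrote the novel?",
    context="ALBERT base v2 was fine-tuned on SQuAD v2.",
    handle_impossible_answer=True,
)
# With the flag set, an empty answer ('') can be returned for unanswerable questions.
```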
squeezebert/squeezebert-uncased
squeezebert
2020-12-11T22:02:17Z
41,687
2
transformers
[ "transformers", "pytorch", "squeezebert", "arxiv:2006.11316", "arxiv:1904.00962", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: en
license: bsd
datasets:
- bookcorpus
- wikipedia
---

# SqueezeBERT pretrained model

This model, `squeezebert-uncased`, is a pretrained model for the English language using a masked language modeling (MLM) and Sentence Order Prediction (SOP) objective.
SqueezeBERT was introduced in [this paper](https://arxiv.org/abs/2006.11316). This model is case-insensitive. The model architecture is similar to BERT-base, but with the pointwise fully-connected layers replaced with [grouped convolutions](https://blog.yani.io/filter-group-tutorial/).
The authors found that SqueezeBERT is 4.3x faster than `bert-base-uncased` on a Google Pixel 3 smartphone.

## Pretraining

### Pretraining data
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of thousands of unpublished books
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)

### Pretraining procedure
The model is pretrained using the Masked Language Model (MLM) and Sentence Order Prediction (SOP) tasks.
(Author's note: If you decide to pretrain your own model, and you prefer to train with MLM only, that should work too.)

From the SqueezeBERT paper:
> We pretrain SqueezeBERT from scratch (without distillation) using the [LAMB](https://arxiv.org/abs/1904.00962) optimizer, and we employ the hyperparameters recommended by the LAMB authors: a global batch size of 8192, a learning rate of 2.5e-3, and a warmup proportion of 0.28. Following the LAMB paper's recommendations, we pretrain for 56k steps with a maximum sequence length of 128 and then for 6k steps with a maximum sequence length of 512.

## Finetuning

The SqueezeBERT paper presents 2 approaches to finetuning the model:
- "finetuning without bells and whistles" -- after pretraining the SqueezeBERT model, finetune it on each GLUE task
- "finetuning with bells and whistles" -- after pretraining the SqueezeBERT model, finetune it on MNLI with distillation from a teacher model. Then, use the MNLI-finetuned SqueezeBERT model as a student model to finetune on each of the other GLUE tasks (e.g. RTE, MRPC, …) with distillation from a task-specific teacher model.

A detailed discussion of the hyperparameters used for finetuning is provided in the appendix of the [SqueezeBERT paper](https://arxiv.org/abs/2006.11316).
Note that finetuning SqueezeBERT with distillation is not yet implemented in this repo. If the author (Forrest Iandola - [email protected]) gets enough encouragement from the user community, he will add example code to Transformers for finetuning SqueezeBERT with distillation.

This model, `squeezebert/squeezebert-uncased`, has been pretrained but not finetuned. For most text classification tasks, we recommend using squeezebert-mnli-headless as a starting point.

### How to finetune
To try finetuning SqueezeBERT on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task, you can run the following command:
```
./utils/download_glue_data.py

python examples/text-classification/run_glue.py \
  --model_name_or_path squeezebert-base-headless \
  --task_name mrpc \
  --data_dir ./glue_data/MRPC \
  --output_dir ./models/squeezebert_mrpc \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --num_train_epochs 10 \
  --learning_rate 3e-05 \
  --per_device_train_batch_size 16 \
  --save_steps 20000
```

## BibTeX entry and citation info
```
@article{2020_SqueezeBERT,
    author = {Forrest N. Iandola and Albert E. Shaw and Ravi Krishna and Kurt W. Keutzer},
    title = {{SqueezeBERT}: What can computer vision teach NLP about efficient neural networks?},
    journal = {arXiv:2006.11316},
    year = {2020}
}
```
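Since this checkpoint is pretrained with MLM (and not finetuned), the most direct way to try it is mask filling. Something along these lines should work with a recent `transformers` version; the example sentence is illustrative:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="squeezebert/squeezebert-uncased")
unmasker("The capital of France is [MASK].")
# Returns the top candidate tokens for the masked position with their scores.
```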
squeezebert/squeezebert-mnli
squeezebert
2020-12-11T22:02:13Z
1,816
1
transformers
[ "transformers", "pytorch", "squeezebert", "arxiv:2006.11316", "arxiv:1904.00962", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: en
license: bsd
datasets:
- bookcorpus
- wikipedia
---

# SqueezeBERT pretrained model

This model, `squeezebert-mnli`, has been pretrained for the English language using a masked language modeling (MLM) and Sentence Order Prediction (SOP) objective and finetuned on the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) dataset.
SqueezeBERT was introduced in [this paper](https://arxiv.org/abs/2006.11316). This model is case-insensitive. The model architecture is similar to BERT-base, but with the pointwise fully-connected layers replaced with [grouped convolutions](https://blog.yani.io/filter-group-tutorial/).
The authors found that SqueezeBERT is 4.3x faster than `bert-base-uncased` on a Google Pixel 3 smartphone.

## Pretraining

### Pretraining data
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of thousands of unpublished books
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)

### Pretraining procedure
The model is pretrained using the Masked Language Model (MLM) and Sentence Order Prediction (SOP) tasks.
(Author's note: If you decide to pretrain your own model, and you prefer to train with MLM only, that should work too.)

From the SqueezeBERT paper:
> We pretrain SqueezeBERT from scratch (without distillation) using the [LAMB](https://arxiv.org/abs/1904.00962) optimizer, and we employ the hyperparameters recommended by the LAMB authors: a global batch size of 8192, a learning rate of 2.5e-3, and a warmup proportion of 0.28. Following the LAMB paper's recommendations, we pretrain for 56k steps with a maximum sequence length of 128 and then for 6k steps with a maximum sequence length of 512.

## Finetuning

The SqueezeBERT paper presents 2 approaches to finetuning the model:
- "finetuning without bells and whistles" -- after pretraining the SqueezeBERT model, finetune it on each GLUE task
- "finetuning with bells and whistles" -- after pretraining the SqueezeBERT model, finetune it on MNLI with distillation from a teacher model. Then, use the MNLI-finetuned SqueezeBERT model as a student model to finetune on each of the other GLUE tasks (e.g. RTE, MRPC, …) with distillation from a task-specific teacher model.

A detailed discussion of the hyperparameters used for finetuning is provided in the appendix of the [SqueezeBERT paper](https://arxiv.org/abs/2006.11316).
Note that finetuning SqueezeBERT with distillation is not yet implemented in this repo. If the author (Forrest Iandola - [email protected]) gets enough encouragement from the user community, he will add example code to Transformers for finetuning SqueezeBERT with distillation.

This model, `squeezebert/squeezebert-mnli`, is the "trained with bells and whistles" MNLI-finetuned SqueezeBERT model.

### How to finetune
To try finetuning SqueezeBERT on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task, you can run the following command:
```
./utils/download_glue_data.py

python examples/text-classification/run_glue.py \
  --model_name_or_path squeezebert-base-headless \
  --task_name mrpc \
  --data_dir ./glue_data/MRPC \
  --output_dir ./models/squeezebert_mrpc \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --num_train_epochs 10 \
  --learning_rate 3e-05 \
  --per_device_train_batch_size 16 \
  --save_steps 20000
```

## BibTeX entry and citation info
```
@article{2020_SqueezeBERT,
    author = {Forrest N. Iandola and Albert E. Shaw and Ravi Krishna and Kurt W. Keutzer},
    title = {{SqueezeBERT}: What can computer vision teach NLP about efficient neural networks?},
    journal = {arXiv:2006.11316},
    year = {2020}
}
```
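For inference with the MNLI-finetuned head, a minimal sketch along these lines should work (the premise/hypothesis pair is illustrative; the label order is read from the model config rather than hard-coded):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-mnli")
model = AutoModelForSequenceClassification.from_pretrained("squeezebert/squeezebert-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class back to its NLI label (e.g. entailment)
print(model.config.id2label[int(logits.argmax())])
```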
spentaur/yelp
spentaur
2020-12-11T22:02:07Z
76
1
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# DistilBERT Yelp Review Sentiment
This model is used for sentiment analysis on English Yelp reviews. It is a DistilBERT model trained on 1 million reviews from the Yelp open dataset. It is a regression model, with outputs in the range of ~-2 to ~2, where -2 corresponds to 1 star and 2 to 5 stars. It was trained using [ktrain](https://github.com/amaiya/ktrain) because of its ease of use.

Example use:

```
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained(
    'distilbert-base-uncased', use_fast=True)
model = TFAutoModelForSequenceClassification.from_pretrained(
    "spentaur/yelp")

review = "This place is great!"
input_ids = tokenizer.encode(review, return_tensors='tf')
pred = model(input_ids)[0][0][0].numpy()
# pred should == 1.9562385
```
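Given the stated output range (-2 → 1 star, +2 → 5 stars), a linear mapping back to a star rating is simply `pred + 3`. A small helper, assuming the scale really is linear:

```python
def score_to_stars(pred: float) -> float:
    # Linear map: -2 -> 1 star, +2 -> 5 stars, clamped to the valid range
    return max(1.0, min(5.0, pred + 3.0))

print(score_to_stars(1.9562385))  # ~4.96, i.e. essentially a 5-star review
```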
shoarora/alectra-small-owt
shoarora
2020-12-11T22:01:54Z
4
0
transformers
[ "transformers", "pytorch", "albert", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# ALECTRA-small-OWT

This is an extension of the [ELECTRA](https://openreview.net/forum?id=r1xMH1BtvB) small model, trained on the [OpenWebText corpus](https://skylion007.github.io/OpenWebTextCorpus/).

The training task (discriminative LM / replaced-token-detection) can be generalized to any transformer type. Here, we train an ALBERT model under the same scheme.

## Pretraining task
![electra task diagram](https://github.com/shoarora/lmtuners/raw/master/assets/electra.png)
(figure from [Clark et al. 2020](https://openreview.net/pdf?id=r1xMH1BtvB))

ELECTRA uses discriminative LM / replaced-token-detection for pretraining. This involves a generator (a Masked LM model) creating examples for a discriminator to classify as original or replaced for each token.

The generator generalizes to any `*ForMaskedLM` model and the discriminator could be any `*ForTokenClassification` model. Therefore, we can extend the task to ALBERT models, not just BERT as in the original paper. A conceptual sketch of this training step is given after this card.

## Usage
```python
from transformers import AlbertForSequenceClassification, BertTokenizer

# Both models use the bert-base-uncased tokenizer and vocab.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
alectra = AlbertForSequenceClassification.from_pretrained('shoarora/alectra-small-owt')
```
NOTE: this ALBERT model uses a BERT WordPiece tokenizer.

## Code
The pytorch module that implements this task is available [here](https://github.com/shoarora/lmtuners/blob/master/lmtuners/lightning_modules/discriminative_lm.py).
Further implementation information is [here](https://github.com/shoarora/lmtuners/tree/master/experiments/disc_lm_small), and [here](https://github.com/shoarora/lmtuners/blob/master/experiments/disc_lm_small/train_alectra_small.py) is the script that created this model.

This specific model was trained with the following params:
- `batch_size: 512`
- `training_steps: 5e5`
- `warmup_steps: 4e4`
- `learning_rate: 2e-3`

## Downstream tasks
#### GLUE Dev results
| Model | # Params | CoLA | SST | MRPC | STS | QQP | MNLI | QNLI | RTE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ELECTRA-Small++ | 14M | 57.0 | 91. | 88.0 | 87.5 | 89.0 | 81.3 | 88.4 | 66.7|
| ELECTRA-Small-OWT | 14M | 56.8 | 88.3| 87.4 | 86.8 | 88.3 | 78.9 | 87.9 | 68.5|
| ELECTRA-Small-OWT (ours) | 17M | 56.3 | 88.4| 75.0 | 86.1 | 89.1 | 77.9 | 83.0 | 67.1|
| ALECTRA-Small-OWT (ours) | 4M | 50.6 | 89.1| 86.3 | 87.2 | 89.1 | 78.2 | 85.9 | 69.6|

#### GLUE Test results
| Model | # Params | CoLA | SST | MRPC | STS | QQP | MNLI | QNLI | RTE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT-Base | 110M | 52.1 | 93.5| 84.8 | 85.9 | 89.2 | 84.6 | 90.5 | 66.4|
| GPT | 117M | 45.4 | 91.3| 75.7 | 80.0 | 88.5 | 82.1 | 88.1 | 56.0|
| ELECTRA-Small++ | 14M | 57.0 | 91.2| 88.0 | 87.5 | 89.0 | 81.3 | 88.4 | 66.7|
| ELECTRA-Small-OWT (ours) | 17M | 57.4 | 89.3| 76.2 | 81.9 | 87.5 | 78.1 | 82.4 | 68.1|
| ALECTRA-Small-OWT (ours) | 4M | 43.9 | 87.9| 82.1 | 82.0 | 87.6 | 77.9 | 85.8 | 67.5|
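To make the generator/discriminator setup above concrete, here is a heavily simplified, self-contained sketch of one replaced-token-detection step. It uses freshly initialised ALBERT models sized to the BERT WordPiece vocab; it is an illustration of the task, not the lmtuners training code (no special-token masking, no generator loss, no sampling temperature):

```python
import torch
from transformers import (AlbertConfig, AlbertForMaskedLM,
                          AlbertForTokenClassification, BertTokenizer)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
generator = AlbertForMaskedLM(AlbertConfig(vocab_size=tokenizer.vocab_size))
discriminator = AlbertForTokenClassification(
    AlbertConfig(vocab_size=tokenizer.vocab_size, num_labels=2))

input_ids = tokenizer("the quick brown fox jumps over the lazy dog",
                      return_tensors="pt").input_ids

# Corrupt ~15% of positions with [MASK] and let the generator fill them in
mask = torch.rand(input_ids.shape) < 0.15
masked_ids = input_ids.masked_fill(mask, tokenizer.mask_token_id)
with torch.no_grad():
    gen_preds = generator(masked_ids).logits.argmax(-1)
corrupted = torch.where(mask, gen_preds, input_ids)

# The discriminator labels every token as original (0) or replaced (1)
labels = (corrupted != input_ids).long()
rtd_loss = discriminator(corrupted, labels=labels).loss
```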
patrickvonplaten/longformer2roberta-cnn_dailymail-fp16
patrickvonplaten
2020-12-11T21:59:19Z
102
6
transformers
[ "transformers", "pytorch", "encoder_decoder", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# Longformer2Roberta Summarization with 🤗 EncoderDecoder Framework

This model is a Longformer2Roberta model fine-tuned on summarization.

Longformer2Roberta is an `EncoderDecoderModel`, meaning that the encoder is an `allenai/longformer-base-4096` model and the decoder is a `roberta-base` model. Leveraging the [EncoderDecoderFramework](https://huggingface.co/transformers/model_doc/encoderdecoder.html#encoder-decoder-models), the two pretrained models can simply be loaded into the framework via:

```python
longformer2roberta = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-base-4096", "roberta-base")
```

The decoder of an `EncoderDecoder` model needs cross-attention layers and usually makes use of causal masking for auto-regressive generation.
Thus, ``longformer2roberta`` was fine-tuned on the `CNN/Daily Mail` dataset and the resulting model `longformer2roberta-cnn_dailymail-fp16` is uploaded here.

## Example

The model is by no means a state-of-the-art model, but nevertheless produces reasonable summarization results. It was mainly fine-tuned as a proof-of-concept for the 🤗 EncoderDecoder Framework.

The model can be used as follows:

```python
from transformers import LongformerTokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16")
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")

article = """(CNN)James Holmes made his introduction to the world in a Colorado cinema filled with spectators watching a midnight showing of the new Batman movie, "The Dark Knight Rises," in June 2012. The moment became one of the deadliest shootings in U.S. history. Holmes is accused of opening fire on the crowd, killing 12 people and injuring or maiming 70 others in Aurora, a suburb of Denver. Holmes appeared like a comic book character: He resembled the Joker, with red-orange hair, similar to the late actor Heath Ledger\'s portrayal of the villain in an earlier Batman movie, authorities said. But Holmes was hardly a cartoon. Authorities said he wore body armor and carried several guns, including an AR-15 rifle, with lots of ammo. He also wore a gas mask. Holmes says he was insane at the time of the shootings, and that is his legal defense and court plea: not guilty by reason of insanity. Prosecutors aren\'t swayed and will seek the death penalty. Opening statements in his trial are scheduled to begin Monday. Holmes admits to the shootings but says he was suffering "a psychotic episode" at the time, according to court papers filed in July 2013 by the state public defenders, Daniel King and Tamara A. Brady. Evidence "revealed thus far in the case supports the defense\'s position that Mr. Holmes suffers from a severe mental illness and was in the throes of a psychotic episode when he committed the acts that resulted in the tragic loss of life and injuries sustained by moviegoers on July 20, 2012," the public defenders wrote. Holmes no longer looks like a dazed Joker, as he did in his first appearance before a judge in 2012. He appeared dramatically different in January when jury selection began for his trial: 9,000 potential jurors were summoned for duty, described as one of the nation\'s largest jury calls. Holmes now has a cleaner look, with a mustache, button-down shirt and khaki pants. In January, he had a beard and eyeglasses. If this new image sounds like one of an academician, it may be because Holmes, now 27, once was one.
Just before the shooting, Holmes was a doctoral student in neuroscience, and he was studying how the brain works, with his schooling funded by a U.S. government grant. Yet for all his learning, Holmes apparently lacked the capacity to command his own mind, according to the case against him. A jury will ultimately decide Holmes\' fate. That panel is made up of 12 jurors and 12 alternates. They are 19 women and five men, and almost all are white and middle-aged. The trial could last until autumn. When jury summonses were issued in January, each potential juror stood a 0.2% chance of being selected, District Attorney George Brauchler told the final jury this month. He described the approaching trial as "four to five months of a horrible roller coaster through the worst haunted house you can imagine." The jury will have to render verdicts on each of the 165 counts against Holmes, including murder and attempted murder charges. Meanwhile, victims and their relatives are challenging all media outlets "to stop the gratuitous use of the name and likeness of mass killers, thereby depriving violent individuals the media celebrity and media spotlight they so crave," the No Notoriety group says. They are joined by victims from eight other mass shootings in recent U.S. history. Raised in central coastal California and in San Diego, James Eagan Holmes is the son of a mathematician father noted for his work at the FICO firm that provides credit scores and a registered nurse mother, according to the U-T San Diego newspaper. Holmes also has a sister, Chris, a musician, who\'s five years younger, the newspaper said. His childhood classmates remember him as a clean-cut, bespectacled boy with an "exemplary" character who "never gave any trouble, and never got in trouble himself," The Salinas Californian reported. His family then moved down the California coast, where Holmes grew up in the San Diego-area neighborhood of Rancho Peñasquitos, which a neighbor described as "kind of like Mayberry," the San Diego newspaper said. Holmes attended Westview High School, which says its school district sits in "a primarily middle- to upper-middle-income residential community." There, Holmes ran cross-country, played soccer and later worked at a biotechnology internship at the Salk Institute and Miramar College, which attracts academically talented students. By then, his peers described him as standoffish and a bit of a wiseacre, the San Diego newspaper said. Holmes attended college fairly close to home, in a neighboring area known as Southern California\'s "inland empire" because it\'s more than an hour\'s drive from the coast, in a warm, low-desert climate. He entered the University of California, Riverside, in 2006 as a scholarship student. In 2008 he was a summer camp counselor for disadvantaged children, age 7 to 14, at Camp Max Straus, run by Jewish Big Brothers Big Sisters of Los Angeles. He graduated from UC Riverside in 2010 with the highest honors and a bachelor\'s degree in neuroscience. "Academically, he was at the top of the top," Chancellor Timothy P. White said. He seemed destined for even higher achievement. By 2011, he had enrolled as a doctoral student in the neuroscience program at the University of Colorado Anschutz Medical Campus in Aurora, the largest academic health center in the Rocky Mountain region. The doctoral in neuroscience program attended by Holmes focuses on how the brain works, with an emphasis on processing of information, behavior, learning and memory. 
Holmes was one of six pre-thesis Ph.D. students in the program who were awarded a neuroscience training grant from the National Institutes of Health. The grant rewards outstanding neuroscientists who will make major contributions to neurobiology. A syllabus that listed Holmes as a student at the medical school shows he was to have delivered a presentation about microRNA biomarkers. But Holmes struggled, and his own mental health took an ominous turn. In March 2012, he told a classmate he wanted to kill people, and that he would do so "when his life was over," court documents said. Holmes was "denied access to the school after June 12, 2012, after he made threats to a professor," according to court documents. About that time, Holmes was a patient of University of Colorado psychiatrist Lynne Fenton. Fenton was so concerned about Holmes\' behavior that she mentioned it to her colleagues, saying he could be a danger to others, CNN affiliate KMGH-TV reported, citing sources with knowledge of the investigation. Fenton\'s concerns surfaced in early June, sources told the Denver station. Holmes began to fantasize about killing "a lot of people" in early June, nearly six weeks before the shootings, the station reported, citing unidentified sources familiar with the investigation. Holmes\' psychiatrist contacted several members of a "behavioral evaluation and threat assessment" team to say Holmes could be a danger to others, the station reported. At issue was whether to order Holmes held for 72 hours to be evaluated by mental health professionals, the station reported. "Fenton made initial phone calls about engaging the BETA team" in "the first 10 days" of June, but it "never came together" because in the period Fenton was having conversations with team members, Holmes began the process of dropping out of school, a source told KMGH. Defense attorneys have rejected the prosecution\'s assertions that Holmes was barred from campus. Citing statements from the university, Holmes\' attorneys have argued that his access was revoked because that\'s normal procedure when a student drops enrollment. What caused this turn for the worse for Holmes has yet to be clearly detailed. In the months before the shooting, he bought four weapons and more than 6,000 rounds of ammunition, authorities said. Police said he also booby-trapped his third-floor apartment with explosives, but police weren\'t fooled. After Holmes was caught in the cinema parking lot immediately after the shooting, bomb technicians went to the apartment and neutralized the explosives. No one was injured at the apartment building. Nine minutes before Holmes went into the movie theater, he called a University of Colorado switchboard, public defender Brady has said in court. The number he called can be used to get in contact with faculty members during off hours, Brady said. Court documents have also revealed that investigators have obtained text messages that Holmes exchanged with someone before the shooting. That person was not named, and the content of the texts has not been made public. According to The New York Times, Holmes sent a text message to a fellow graduate student, a woman, about two weeks before the shooting. She asked if he had left Aurora yet, reported the newspaper, which didn\'t identify her. No, he had two months left on his lease, Holmes wrote back, according to the Times. 
He asked if she had heard of "dysphoric mania," a form of bipolar disorder marked by the highs of mania and the dark and sometimes paranoid delusions of major depression. The woman asked if the disorder could be managed with treatment. "It was," Holmes wrote her, according to the Times. But he warned she should stay away from him "because I am bad news," the newspaper reported. It was her last contact with Holmes. After the shooting, Holmes\' family issued a brief statement: "Our hearts go out to those who were involved in this tragedy and to the families and friends of those involved," they said, without giving any information about their son. Since then, prosecutors have refused to offer a plea deal to Holmes. For Holmes, "justice is death," said Brauchler, the district attorney. In December, Holmes\' parents, who will be attending the trial, issued another statement: They asked that their son\'s life be spared and that he be sent to an institution for mentally ill people for the rest of his life, if he\'s found not guilty by reason of insanity. "He is not a monster," Robert and Arlene Holmes wrote, saying the death penalty is "morally wrong, especially when the condemned is mentally ill." "He is a human being gripped by a severe mental illness," the parents said. The matter will be settled by the jury. CNN\'s Ana Cabrera and Sara Weisfeldt contributed to this report from Denver."""

input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# should produce
# James Holmes, 27, is accused of opening fire on a Colorado theater.
# He was a doctoral student at University of Colorado.
# Holmes says he was suffering "a psychotic episode" at the time of the shooting.
# Prosecutors won't say whether Holmes was barred from campus.
```

Such an article has a length of > 2000 tokens, which means that it cannot be handled correctly by Bert or Roberta encoders.

## Training script:

**IMPORTANT**: In order for this code to work, make sure you check out the branch [more_general_trainer_metric](https://github.com/huggingface/transformers/tree/more_general_trainer_metric), which slightly adapts the `Trainer` for `EncoderDecoderModels` according to this PR: https://github.com/huggingface/transformers/pull/5840.

The following code shows the complete training script that was used to fine-tune `longformer2roberta-cnn_dailymail-fp16` for reproducibility. The training lasted ~90h on a standard GPU.
```python
#!/usr/bin/env python3
import nlp
import logging
from transformers import LongformerTokenizer, EncoderDecoderModel, Trainer, TrainingArguments

logging.basicConfig(level=logging.INFO)

model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-base-4096", "roberta-base")
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")

# load train and validation data
train_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="train")
val_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="validation[:5%]")

# load rouge for validation
rouge = nlp.load_metric("rouge", experiment_id=0)

# enable gradient checkpointing for longformer encoder
model.encoder.config.gradient_checkpointing = True

# set decoding params (on the config so that `generate` picks them up)
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.config.max_length = 142
model.config.min_length = 56
model.config.no_repeat_ngram_size = 3
model.config.early_stopping = True
model.config.length_penalty = 2.0
model.config.num_beams = 4

encoder_length = 2048
decoder_length = 128
batch_size = 16


# map data correctly
def map_to_encoder_decoder_inputs(batch):
    # Tokenizer will automatically set [BOS] <text> [EOS]
    # cut off at Longformer max length 2048
    inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_length)
    # force summarization <= 128
    outputs = tokenizer(batch["highlights"], padding="max_length", truncation=True, max_length=decoder_length)

    batch["input_ids"] = inputs.input_ids
    batch["attention_mask"] = inputs.attention_mask

    # set 128 tokens to global attention
    batch["global_attention_mask"] = [[1 if i < 128 else 0 for i in range(sequence_length)] for sequence_length in len(inputs.input_ids) * [encoder_length]]
    batch["decoder_input_ids"] = outputs.input_ids
    batch["labels"] = outputs.input_ids.copy()

    # mask loss for padding
    batch["labels"] = [
        [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]
    ]
    batch["decoder_attention_mask"] = outputs.attention_mask

    assert all([len(x) == encoder_length for x in inputs.input_ids])
    assert all([len(x) == decoder_length for x in outputs.input_ids])

    return batch


def compute_metrics(pred):
    labels_ids = pred.label_ids
    pred_ids = pred.predictions

    # all unnecessary tokens are removed
    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    labels_ids[labels_ids == -100] = tokenizer.eos_token_id
    label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)

    rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid

    return {
        "rouge2_precision": round(rouge_output.precision, 4),
        "rouge2_recall": round(rouge_output.recall, 4),
        "rouge2_fmeasure": round(rouge_output.fmeasure, 4),
    }


# make train dataset ready
train_dataset = train_dataset.map(
    map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"],
)
train_dataset.set_format(
    type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)

# same for validation dataset
val_dataset = val_dataset.map(
    map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"],
)
val_dataset.set_format(
    type="torch", columns=["input_ids", "global_attention_mask", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)

# set training arguments - these params are not really tuned, feel free to change
training_args = TrainingArguments(
    output_dir="./",
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    predict_from_generate=True,
    evaluate_during_training=True,
    do_train=True,
    do_eval=True,
    logging_steps=1000,
    save_steps=1000,
    eval_steps=1000,
    overwrite_output_dir=True,
    warmup_steps=2000,
    save_total_limit=3,
    fp16=True,
)

# instantiate trainer
trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
)

# start training
trainer.train()
```

## Evaluation

The following script evaluates the model on the test set of CNN/Daily Mail.

```python
#!/usr/bin/env python3
import nlp
import torch
from transformers import LongformerTokenizer, EncoderDecoderModel

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16")
model.to("cuda")

test_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="test")
batch_size = 32
encoder_length = 2048
decoder_length = 128


# map data correctly
def generate_summary(batch):
    # Tokenizer will automatically set [BOS] <text> [EOS]
    # cut off at Longformer max length 2048
    inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_length, return_tensors="pt")
    input_ids = inputs.input_ids.to("cuda")
    attention_mask = inputs.attention_mask.to("cuda")
    global_attention_mask = torch.zeros_like(attention_mask)
    # put global attention on the first 128 tokens
    global_attention_mask[:, :decoder_length] = 1

    outputs = model.generate(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask)

    # all special tokens will be removed
    output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True)

    batch["pred"] = output_str

    return batch


results = test_dataset.map(generate_summary, batched=True, batch_size=batch_size, remove_columns=["article"])

# load rouge for validation
rouge = nlp.load_metric("rouge")

pred_str = results["pred"]
label_str = results["highlights"]

rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid

print(rouge_output)
```

The obtained results should be:

| - | Rouge2 - mid - precision | Rouge2 - mid - recall | Rouge2 - mid - fmeasure |
|----------|:-------------:|:------:|:------:|
| **CNN/Daily Mail** | 12.39 | 15.05 | **13.21** |

**Note** This model was trained to show how Longformer can be used as an Encoder model in an EncoderDecoder setup. Better results are obtained for datasets with much longer inputs.
mys/electra-base-turkish-cased-ner
mys
2020-12-11T21:56:51Z
280
2
transformers
[ "transformers", "pytorch", "tf", "electra", "token-classification", "tr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
language: tr
---

## What is this
A NER model for Turkish with 48 categories, trained on the dataset [Shrinked TWNERTC Turkish NER Data](https://www.kaggle.com/behcetsenturk/shrinked-twnertc-turkish-ner-data-by-kuzgunlar) by Behçet Şentürk, which is itself a filtered and cleaned version of the following automatically labeled dataset:

> Sahin, H. Bahadir; Eren, Mustafa Tolga; Tirkaz, Caglar; Sonmez, Ozan; Yildiz, Eray (2017), “English/Turkish Wikipedia Named-Entity Recognition and Text Categorization Dataset”, Mendeley Data, v1 http://dx.doi.org/10.17632/cdcztymf4k.1

## Backbone model
The backbone model is [electra-base-turkish-cased-discriminator](https://huggingface.co/dbmdz/electra-base-turkish-cased-discriminator), and I finetuned it for token classification.

I'm continuing to figure out if it is possible to improve accuracy with this dataset, but it is already usable for non-critical applications. You can reach out to me on [Twitter](https://twitter.com/myusufsarigoz) for discussions and issues. I will also release a notebook to finetune NER models with Shrinked TWNERTC as well as sample inference code to demonstrate what's possible with this model.
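Until that sample code is released, a standard token-classification pipeline should already work as a quick test; the sentence below is just an illustration:

```python
from transformers import pipeline

ner = pipeline("ner",
               model="mys/electra-base-turkish-cased-ner",
               tokenizer="mys/electra-base-turkish-cased-ner")

ner("Mustafa Kemal Atatürk 19 Mayıs 1919'da Samsun'a çıktı.")
# -> a list of token-level predictions with entity labels and scores
```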
mrm8488/xlm-multi-finetuned-xquadv1
mrm8488
2020-12-11T21:56:48Z
5
0
transformers
[ "transformers", "pytorch", "xlm", "question-answering", "multilingual", "arxiv:1901.07291", "arxiv:1910.11856", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language: multilingual
thumbnail:
---

# [XLM](https://github.com/facebookresearch/XLM/) (multilingual version) fine-tuned for multilingual Q&A

Released from `Facebook` together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau, and fine-tuned on [XQuAD](https://github.com/deepmind/xquad) for the multilingual (`11 different languages`) **Q&A** downstream task.

## Details of the language model ('xlm-mlm-100-1280')

[Language model](https://github.com/facebookresearch/XLM/#ii-cross-lingual-language-model-pretraining-xlm)

| Languages |
|-----------|
| 100 |

It includes the following languages:

<details>
en-es-fr-de-zh-ru-pt-it-ar-ja-id-tr-nl-pl-simple-fa-vi-sv-ko-he-ro-no-hi-uk-cs-fi-hu-th-da-ca-el-bg-sr-ms-bn-hr-sl-zh_yue-az-sk-eo-ta-sh-lt-et-ml-la-bs-sq-arz-af-ka-mr-eu-tl-ang-gl-nn-ur-kk-be-hy-te-lv-mk-zh_classical-als-is-wuu-my-sco-mn-ceb-ast-cy-kn-br-an-gu-bar-uz-lb-ne-si-war-jv-ga-zh_min_nan-oc-ku-sw-nds-ckb-ia-yi-fy-scn-gan-tt-am
</details>

## Details of the downstream task (multilingual Q&A) - Dataset

Deepmind [XQuAD](https://github.com/deepmind/xquad)

Languages covered:
- Arabic: `ar`
- German: `de`
- Greek: `el`
- English: `en`
- Spanish: `es`
- Hindi: `hi`
- Russian: `ru`
- Thai: `th`
- Turkish: `tr`
- Vietnamese: `vi`
- Chinese: `zh`

As the dataset is based on SQuAD v1.1, there are no unanswerable questions in the data. We chose this setting so that models can focus on cross-lingual transfer.

We show the average number of tokens per paragraph, question, and answer for each language in the table below. The statistics were obtained using [Jieba](https://github.com/fxsjy/jieba) for Chinese and the [Moses tokenizer](https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl) for the other languages.

|           |  en   |  es   |  de   |  el   |  ru   |  tr   |  ar   |  vi   |  th   |  zh   |  hi   |
| --------- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Paragraph | 142.4 | 160.7 | 139.5 | 149.6 | 133.9 | 126.5 | 128.2 | 191.2 | 158.7 | 147.6 | 232.4 |
| Question  | 11.5  | 13.4  | 11.0  | 11.7  | 10.0  | 9.8   | 10.7  | 14.8  | 11.5  | 10.5  | 18.7  |
| Answer    | 3.1   | 3.6   | 3.0   | 3.3   | 3.1   | 3.1   | 3.1   | 4.5   | 4.1   | 3.5   | 5.6   |

Citation:

<details>

```bibtex
@article{Artetxe:etal:2019,
      author    = {Mikel Artetxe and Sebastian Ruder and Dani Yogatama},
      title     = {On the cross-lingual transferability of monolingual representations},
      journal   = {CoRR},
      volume    = {abs/1910.11856},
      year      = {2019},
      archivePrefix = {arXiv},
      eprint    = {1910.11856}
}
```

</details>

As XQuAD is just an evaluation dataset, I used data augmentation techniques (scraping, neural machine translation, etc.) to obtain more samples and split the dataset in order to have a train and test set. The test set was created in a way that contains the same number of samples for each language. Finally, I got:

| Dataset     | # samples |
| ----------- | --------- |
| XQUAD train | 50 K      |
| XQUAD test  | 8 K       |

## Model training

The model was trained on a Tesla P100 GPU and 25GB of RAM.
The script for fine-tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/distillation/run_squad_w_distillation.py)

## Model in action

Fast usage with **pipelines**:

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="mrm8488/xlm-multi-finetuned-xquadv1",
    tokenizer="mrm8488/xlm-multi-finetuned-xquadv1"
)

# English
qa_pipeline({
    'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately",
    'question': "Who has been working hard for hugginface/transformers lately?"
})
# Output: {'answer': 'Manuel', 'end': 6, 'score': 8.531880747878265e-05, 'start': 0}

# Russian
qa_pipeline({
    'context': "Мануэль Ромеро в последнее время почти не работал в репозитории hugginface / transformers",
    'question': "Кто в последнее время усердно работал над обнимашками / трансформерами?"
})
# Output: {'answer': 'работал в репозитории hugginface /', 'end': 76, 'score': 0.00012340750456964894, 'start': 42}
```

Try it on a Colab (*Do not forget to change the model and tokenizer path in the Colab if necessary*):

<a href="https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/Try_mrm8488_xquad_finetuned_uncased_model.ipynb" target="_parent"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"></a>

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/umberto-wikipedia-uncased-v1-finetuned-squadv1-it
mrm8488
2020-12-11T21:56:44Z
12
0
transformers
[ "transformers", "pytorch", "camembert", "question-answering", "it", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language: it
---

# UmBERTo Wikipedia Uncased + italian SQuAD v1 📚 🧐 ❓

[UmBERTo-Wikipedia-Uncased](https://huggingface.co/Musixmatch/umberto-wikipedia-uncased-v1) fine-tuned on the [Italian SQuAD v1 dataset](https://github.com/crux82/squad-it) for the **Q&A** downstream task.

## Details of the downstream task (Q&A) - Model 🧠

[UmBERTo](https://github.com/musixmatchresearch/umberto) is a Roberta-based Language Model trained on large Italian Corpora and uses two innovative approaches: SentencePiece and Whole Word Masking. UmBERTo-Wikipedia-Uncased is trained on a relatively small corpus (~7GB) extracted from Wikipedia-ITA.

## Details of the downstream task (Q&A) - Dataset 📚

[SQuAD](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) [Rajpurkar et al. 2016] is a large scale dataset for training of question answering systems on factoid questions. It contains more than 100,000 question-answer pairs about passages from 536 articles chosen from various domains of Wikipedia.

**SQuAD-it** is derived from the SQuAD dataset and is obtained through semi-automatic translation of the SQuAD dataset into Italian. It represents a large-scale dataset for open question answering processes on factoid questions in Italian. The dataset contains more than 60,000 question/answer pairs derived from the original English dataset.

## Model training 🏋️‍

The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:

```bash
python transformers/examples/question-answering/run_squad.py \
    --model_type bert \
    --model_name_or_path 'Musixmatch/umberto-wikipedia-uncased-v1' \
    --do_eval \
    --do_train \
    --do_lower_case \
    --train_file '/content/dataset/SQuAD_it-train.json' \
    --predict_file '/content/dataset/SQuAD_it-test.json' \
    --per_gpu_train_batch_size 16 \
    --learning_rate 3e-5 \
    --num_train_epochs 10 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir /content/drive/My\ Drive/umberto-uncased-finetuned-squadv1-it \
    --overwrite_output_dir \
    --save_steps 1000
```

With 10 epochs the model overfits the train dataset, so I evaluated the different checkpoints created during training (every 1000 steps) and chose the best one (in this case the one created at 17000 steps).

## Test set Results 🧾

| Metric | # Value   |
| ------ | --------- |
| **EM** | **60.50** |
| **F1** | **72.41** |

```json
{
  'exact': 60.50729399395453,
  'f1': 72.4141113348361,
  'total': 7609,
  'HasAns_exact': 60.50729399395453,
  'HasAns_f1': 72.4141113348361,
  'HasAns_total': 7609,
  'best_exact': 60.50729399395453,
  'best_exact_thresh': 0.0,
  'best_f1': 72.4141113348361,
  'best_f1_thresh': 0.0
}
```

## Comparison ⚖️

| Model | EM | F1 score |
| -------------------------------------------------------------------------------------------------------------------------------- | --------- | --------- |
| [DrQA-it trained on SQuAD-it](https://github.com/crux82/squad-it/blob/master/README.md#evaluating-a-neural-model-over-squad-it) | 56.1 | 65.9 |
| This one | 60.50 | 72.41 |
| [bert-italian-finedtuned-squadv1-it-alfa](https://huggingface.co/mrm8488/bert-italian-finedtuned-squadv1-it-alfa) | **62.51** | **74.16** |

### Model in action 🚀

Fast usage with **pipelines**:

```python
from transformers import pipeline

QnA_pipeline = pipeline('question-answering', model='mrm8488/umberto-wikipedia-uncased-v1-finetuned-squadv1-it')

QnA_pipeline({
    'context': 'Marco Aurelio era un imperatore romano che praticava lo stoicismo come filosofia di vita .',
    'question': 'Quale filosofia seguì Marco Aurelio ?'
})
# Output: {'answer': 'stoicismo', 'end': 65, 'score': 0.9477770241566028, 'start': 56}
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-small-finetuned-quora-for-paraphrasing
mrm8488
2020-12-11T21:56:30Z
163
8
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:quora", "arxiv:1910.10683", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language: en
datasets:
- quora
---

# T5-small fine-tuned on Quora question pairs dataset for Question Paraphrasing ❓↔️❓

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on the [Quora question pairs](https://huggingface.co/nlp/viewer/?dataset=quora) dataset for the **Question Paraphrasing** task.

## Details of T5

The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here the abstract:

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://i.imgur.com/jVFMMWR.png)

## Details of the downstream task (Question Paraphrasing) - Dataset 📚❓↔️❓

Dataset ID: ```quora``` from [Huggingface/NLP](https://github.com/huggingface/nlp)

| Dataset | Split | # samples |
| ------- | ----- | --------- |
| quora   | train | 404290    |
| quora after filtering repeated questions | train | 149263 |

Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/)

## Model fine-tuning 🏋️‍

The training script is a slightly modified version of [this one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)

## Model in Action 🚀

```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-quora-for-paraphrasing")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-small-finetuned-quora-for-paraphrasing")

def paraphrase(text, max_length=128):
    input_ids = tokenizer.encode(text, return_tensors="pt", add_special_tokens=True)

    generated_ids = model.generate(input_ids=input_ids, num_return_sequences=5, num_beams=5, max_length=max_length, no_repeat_ngram_size=2, repetition_penalty=3.5, length_penalty=1.0, early_stopping=True)

    preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids]

    return preds

preds = paraphrase("paraphrase: What is the best framework for dealing with a huge text dataset?")

for pred in preds:
    print(pred)

# Output:
'''
What is the best framework for dealing with a huge text dataset?
What is the best framework for dealing with a large text dataset?
What is the best framework to deal with a huge text dataset?
What are the best frameworks for dealing with a huge text dataset?
What is the best framework for dealing with huge text datasets?
'''
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-small-finetuned-emotion
mrm8488
2020-12-11T21:56:24Z
11
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:emotion", "arxiv:1910.10683", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language: en
datasets:
- emotion
---

# T5-small fine-tuned for Emotion Recognition 😂😢😡😃😯

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) [small](https://huggingface.co/t5-small) fine-tuned on the [emotion recognition](https://github.com/dair-ai/emotion_dataset) dataset for the **Emotion Recognition** downstream task.

## Details of T5

The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here the abstract:

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://i.imgur.com/jVFMMWR.png)

## Details of the downstream task (Sentiment Recognition) - Dataset 📚

[Elvis Saravia](https://twitter.com/omarsar0) has gathered a great [dataset](https://github.com/dair-ai/emotion_dataset) for emotion recognition. It allows classifying the text into one of the following **6** emotions:

- sadness 😢
- joy 😃
- love 🥰
- anger 😡
- fear 😱
- surprise 😯

## Model fine-tuning 🏋️‍

The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him!

## Test set metrics 🧾

|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| anger        | 0.92      | 0.93   | 0.92     | 275     |
| fear         | 0.90      | 0.90   | 0.90     | 224     |
| joy          | 0.97      | 0.91   | 0.94     | 695     |
| love         | 0.75      | 0.89   | 0.82     | 159     |
| sadness      | 0.96      | 0.97   | 0.96     | 581     |
| surprise     | 0.73      | 0.80   | 0.76     | 66      |
| accuracy     |           |        | 0.92     | 2000    |
| macro avg    | 0.87      | 0.90   | 0.88     | 2000    |
| weighted avg | 0.93      | 0.92   | 0.92     | 2000    |

Confusion Matrix

![CM](https://i.imgur.com/JBtAwPx.png)

## Model in Action 🚀

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-emotion")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-small-finetuned-emotion")

def get_emotion(text):
    input_ids = tokenizer.encode(text + '</s>', return_tensors='pt')

    output = model.generate(input_ids=input_ids, max_length=2)

    dec = [tokenizer.decode(ids) for ids in output]
    label = dec[0]
    return label

get_emotion("i feel as if i havent blogged in ages are at least truly blogged i am doing an update cute")  # Output: 'joy'

get_emotion("i have a feeling i kinda lost my best friend")  # Output: 'sadness'
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-wikiSQL-sql-to-en
mrm8488
2020-12-11T21:56:17Z
35
12
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:wikisql", "arxiv:1910.10683", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language: en
datasets:
- wikisql
---

# T5-base fine-tuned on WikiSQL for SQL to English translation

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [WikiSQL](https://github.com/salesforce/WikiSQL) for the **SQL** to **English** **translation** task.

## Details of T5

The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here the abstract:

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://i.imgur.com/jVFMMWR.png)

## Details of the Dataset 📚

Dataset ID: ```wikisql``` from [Huggingface/NLP](https://huggingface.co/nlp/viewer/?dataset=wikisql)

| Dataset | Split | # samples |
| ------- | ----- | --------- |
| wikisql | train | 56355     |
| wikisql | valid | 14436     |

How to load it from [nlp](https://github.com/huggingface/nlp)

```python
train_dataset = nlp.load_dataset('wikisql', split=nlp.Split.TRAIN)
valid_dataset = nlp.load_dataset('wikisql', split=nlp.Split.VALIDATION)
```

Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/)

## Model fine-tuning 🏋️‍

The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him!

## Model in Action 🚀

```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL-sql-to-en")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL-sql-to-en")

def get_explanation(query):
    input_text = "translate Sql to English: %s </s>" % query
    features = tokenizer([input_text], return_tensors='pt')

    output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'])

    return tokenizer.decode(output[0])

query = "SELECT COUNT Params form model where location=HF-Hub"

get_explanation(query)
# output: 'How many parameters form model for HF-hub?'
```

Play with it in a Colab:

<img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg">

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/electra-small-finetuned-squadv1
mrm8488
2020-12-11T21:53:59Z
7
0
transformers
[ "transformers", "pytorch", "electra", "question-answering", "en", "arxiv:1406.2661", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language: en
---

# Electra small ⚡ + SQuAD v1 ❓

[Electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) fine-tuned on [SQUAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task.

## Details of the downstream task (Q&A) - Model 🧠

**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.

## Details of the downstream task (Q&A) - Dataset 📚

**S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles.

## Model training 🏋️‍

The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:

```bash
python transformers/examples/question-answering/run_squad.py \
  --model_type electra \
  --model_name_or_path 'google/electra-small-discriminator' \
  --do_eval \
  --do_train \
  --do_lower_case \
  --train_file '/content/dataset/train-v1.1.json' \
  --predict_file '/content/dataset/dev-v1.1.json' \
  --per_gpu_train_batch_size 16 \
  --learning_rate 3e-5 \
  --num_train_epochs 10 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir '/content/output' \
  --overwrite_output_dir \
  --save_steps 1000
```

## Test set Results 🧾

| Metric  | # Value   |
| ------- | --------- |
| **EM**  | **77.70** |
| **F1**  | **85.74** |
| **Size**| **50 MB** |

Very good metrics for such a "small" model!

```json
{
  'exact': 77.70104068117313,
  'f1': 85.73991234187997,
  'total': 10570,
  'HasAns_exact': 77.70104068117313,
  'HasAns_f1': 85.73991234187997,
  'HasAns_total': 10570,
  'best_exact': 77.70104068117313,
  'best_exact_thresh': 0.0,
  'best_f1': 85.73991234187997,
  'best_f1_thresh': 0.0
}
```

### Model in action 🚀

Fast usage with **pipelines**:

```python
from transformers import pipeline

QnA_pipeline = pipeline('question-answering', model='mrm8488/electra-small-finetuned-squadv1')

QnA_pipeline({
    'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
    'question': 'What has been discovered by scientists from China ?'
})
# Output: {'answer': 'A new strain of flu', 'end': 19, 'score': 0.7950334108113424, 'start': 0}
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/electra-base-finetuned-squadv1
mrm8488
2020-12-11T21:53:55Z
4
0
transformers
[ "transformers", "pytorch", "electra", "question-answering", "en", "arxiv:1406.2661", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language: en
---

# Electra base ⚡ + SQuAD v1 ❓

[Electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) fine-tuned on [SQUAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task.

## Details of the downstream task (Q&A) - Model 🧠

**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.

## Details of the downstream task (Q&A) - Dataset 📚

**S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles.

## Model training 🏋️‍

The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:

```bash
python transformers/examples/question-answering/run_squad.py \
  --model_type electra \
  --model_name_or_path 'google/electra-base-discriminator' \
  --do_eval \
  --do_train \
  --do_lower_case \
  --train_file '/content/dataset/train-v1.1.json' \
  --predict_file '/content/dataset/dev-v1.1.json' \
  --per_gpu_train_batch_size 16 \
  --learning_rate 3e-5 \
  --num_train_epochs 10 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir '/content/output' \
  --overwrite_output_dir \
  --save_steps 1000
```

## Test set Results 🧾

| Metric  | # Value      |
| ------- | ------------ |
| **EM**  | **83.03**    |
| **F1**  | **90.77**    |
| **Size**| **+ 400 MB** |

Very good metrics for a base-sized model!

```json
{
  'exact': 83.03689687795648,
  'f1': 90.77486052446231,
  'total': 10570,
  'HasAns_exact': 83.03689687795648,
  'HasAns_f1': 90.77486052446231,
  'HasAns_total': 10570,
  'best_exact': 83.03689687795648,
  'best_exact_thresh': 0.0,
  'best_f1': 90.77486052446231,
  'best_f1_thresh': 0.0
}
```

### Model in action 🚀

Fast usage with **pipelines**:

```python
from transformers import pipeline

QnA_pipeline = pipeline('question-answering', model='mrm8488/electra-base-finetuned-squadv1')

QnA_pipeline({
    'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
    'question': 'What has been discovered by scientists from China ?'
})
# Output: {'answer': 'A new strain of flu', 'end': 19, 'score': 0.9995211430099182, 'start': 0}
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain
moumeneb1/flaubert-base-cased-ecology_crisis
moumeneb1
2020-12-11T21:51:41Z
5
0
transformers
[ "transformers", "flaubert", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# Flaubert-base-cased-ecology_crisis

An adapted [__Flaubert/Flaubert_base-cased__](https://github.com/getalp/Flaubert) model, further trained on a language modeling task over the unlabeled French tweets used to create the [CrisisDataset](https://github.com/DiegoKoz/french_ecological_crisis). This intermediate masked language modeling task helped us improve the results in our [paper](http://www.sciencedirect.com/science/article/pii/S0306457320300650) compared to the standard flaubert-base-cased model.

If you use this pretrained model in your work, please cite us as follows 🤗

```
@article{Kozlowski-et-al2020,
  title = "A three-level classification of French tweets in ecological crises",
  journal = "Information Processing & Management",
  volume = "57",
  number = "5",
  pages = "102284",
  year = "2020",
  issn = "0306-4573",
  doi = "https://doi.org/10.1016/j.ipm.2020.102284",
  url = "http://www.sciencedirect.com/science/article/pii/S0306457320300650",
  author = "Diego Kozlowski and Elisa Lannelongue and Frédéric Saudemont and Farah Benamara and Alda Mari and Véronique Moriceau and Abdelmoumene Boumadane",
  keywords = "Crisis response from social media, Machine learning, Natural language processing, Transfer learning",
}
```
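The card does not include a loading snippet; the sketch below is a minimal, assumption-based example (standard `transformers` Auto classes, an illustrative French input, and a recent library version are all assumptions), not code from the authors:

```python
from transformers import AutoTokenizer, AutoModel

# Assumption-based sketch: load the adapted checkpoint for feature extraction
tokenizer = AutoTokenizer.from_pretrained("moumeneb1/flaubert-base-cased-ecology_crisis")
model = AutoModel.from_pretrained("moumeneb1/flaubert-base-cased-ecology_crisis")

# Illustrative tweet-like input (French: "Floods in the south of France")
inputs = tokenizer("Inondations dans le sud de la France", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```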
m3hrdadfi/bert2bert-fa-news-headline
m3hrdadfi
2020-12-11T21:50:16Z
43
0
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "summarization", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
---
language: fa
license: apache-2.0
tags:
- summarization
---

A Bert2Bert model trained on the VoA Persian Corpus (a medium-sized corpus of 7.9 million words, 2003-2008) that generates news headlines. The model achieved a 25.30 ROUGE-2 score.

For more detail, please follow the [News Headline Generation](https://github.com/m3hrdadfi/news-headline-generation) repo.

## Eval results

The following table summarizes the ROUGE scores obtained by the Bert2Bert model.

| %       | Precision | Recall | FMeasure |
|:-------:|:---------:|:------:|:--------:|
| ROUGE-1 | 43.78     | 45.52  | 43.54    |
| ROUGE-2 | 24.50     | 25.30* | 24.24    |
| ROUGE-L | 41.20     | 42.22  | 40.76    |

## Questions?

Post a Github issue on the [News Headline Generation](https://github.com/hooshvare/news-headline-generation/issues) repo.
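The card does not ship an inference snippet. Since this is an `encoder-decoder` checkpoint, a minimal generation sketch with `transformers` could look like the following; the tokenizer class and generation parameters are illustrative assumptions, not the authors' settings:

```python
from transformers import BertTokenizer, EncoderDecoderModel

# Assumption-based sketch (not the authors' code): headline generation
tokenizer = BertTokenizer.from_pretrained("m3hrdadfi/bert2bert-fa-news-headline")
model = EncoderDecoderModel.from_pretrained("m3hrdadfi/bert2bert-fa-news-headline")

article = "..."  # body of a Persian news article goes here
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(inputs["input_ids"], max_length=32, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```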
loodos/electra-small-turkish-uncased-discriminator
loodos
2020-12-11T21:49:36Z
4
0
transformers
[ "transformers", "pytorch", "tf", "electra", "pretraining", "tr", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: tr
---

# Turkish Language Models with Huggingface's Transformers

As the R&D Team at Loodos, we release cased and uncased versions of most recent language models for Turkish. More details about pretrained models and evaluations on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models).

# Turkish ELECTRA-Small-discriminator (uncased)

This is the discriminator of the ELECTRA-Small model, which has 12 encoder layers with a hidden size of 256, trained on an uncased Turkish dataset.

## Usage

Using AutoModelWithLMHead and AutoTokenizer from Transformers, you can import the model as described below.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead
# TextNormalization is provided in our repo (see "Notes on Tokenizers" below):
# https://github.com/Loodos/turkish-language-models

tokenizer = AutoTokenizer.from_pretrained("loodos/electra-small-turkish-uncased-discriminator", do_lower_case=False)

model = AutoModelWithLMHead.from_pretrained("loodos/electra-small-turkish-uncased-discriminator")

normalizer = TextNormalization()
normalized_text = normalizer.normalize(text, do_lower_case=True, is_turkish=True)  # 'text' is your raw input string

tokenizer.tokenize(normalized_text)
```

### Notes on Tokenizers

Currently, Huggingface's tokenizers (which were written in Python) have a bug concerning letters "ı, i, I, İ" and non-ASCII Turkish specific letters. There are two reasons.

1- Vocabulary and sentence piece model is created with NFC/NFKC normalization but tokenizer uses NFD/NFKD. NFD/NFKD normalization changes text that contains Turkish characters I-ı, İ-i, Ç-ç, Ö-ö, Ş-ş, Ğ-ğ, Ü-ü. This causes wrong tokenization, wrong training and loss of information. Some tokens are never trained (like "şanlıurfa", "öğün", "çocuk", etc.). NFD/NFKD normalization is not proper for Turkish.

2- Python's default ```string.lower()``` and ```string.upper()``` make the conversions

- "I" and "İ" to 'i'
- 'i' and 'ı' to 'I'

respectively. However, in Turkish, 'I' and 'İ' are two different letters. (A short plain-Python demonstration of this casing behavior follows at the end of this card.)

We opened an [issue](https://github.com/huggingface/transformers/issues/6680) in Huggingface's github repo about this bug. Until it is fixed, in case you want to train your model with uncased data, we provide a simple text normalization module (`TextNormalization()` in the code snippet above) in our [repo](https://github.com/Loodos/turkish-language-models).

## Details and Contact

You can contact us to ask a question, open an issue or give feedback via our github [repo](https://github.com/Loodos/turkish-language-models).

## Acknowledgments

Many thanks to the TFRC Team for providing us with cloud TPUs on Tensorflow Research Cloud to train our models.
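As referenced in "Notes on Tokenizers" above, here is a short plain-Python demonstration of the casing behavior (standard CPython; not part of the original card):

```python
# Python's locale-unaware casing vs. Turkish expectations
print("I".lower())  # 'i'  -> Turkish expects 'ı' (dotless lowercase i)
print("İ".lower())  # 'i̇' (i + combining dot above), not the plain 'i' Turkish expects
print("i".upper())  # 'I'  -> Turkish expects 'İ' (dotted capital I)
print("ı".upper())  # 'I'
```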
loodos/electra-base-turkish-uncased-discriminator
loodos
2020-12-11T21:49:30Z
58
0
transformers
[ "transformers", "pytorch", "tf", "electra", "pretraining", "tr", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: tr
---

# Turkish Language Models with Huggingface's Transformers

As the R&D Team at Loodos, we release cased and uncased versions of most recent language models for Turkish. More details about pretrained models and evaluations on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models).

# Turkish ELECTRA-Base-discriminator (uncased)

This is the discriminator of the ELECTRA-Base model, which has the same structure as BERT-Base, trained on an uncased Turkish dataset.

## Usage

Using AutoModelWithLMHead and AutoTokenizer from Transformers, you can import the model as described below.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead
# TextNormalization is provided in our repo (see "Notes on Tokenizers" below):
# https://github.com/Loodos/turkish-language-models

tokenizer = AutoTokenizer.from_pretrained("loodos/electra-base-turkish-uncased-discriminator", do_lower_case=False)

model = AutoModelWithLMHead.from_pretrained("loodos/electra-base-turkish-uncased-discriminator")

normalizer = TextNormalization()
normalized_text = normalizer.normalize(text, do_lower_case=True, is_turkish=True)  # 'text' is your raw input string

tokenizer.tokenize(normalized_text)
```

### Notes on Tokenizers

Currently, Huggingface's tokenizers (which were written in Python) have a bug concerning letters "ı, i, I, İ" and non-ASCII Turkish specific letters. There are two reasons.

1- Vocabulary and sentence piece model is created with NFC/NFKC normalization but tokenizer uses NFD/NFKD. NFD/NFKD normalization changes text that contains Turkish characters I-ı, İ-i, Ç-ç, Ö-ö, Ş-ş, Ğ-ğ, Ü-ü. This causes wrong tokenization, wrong training and loss of information. Some tokens are never trained (like "şanlıurfa", "öğün", "çocuk", etc.). NFD/NFKD normalization is not proper for Turkish.

2- Python's default ```string.lower()``` and ```string.upper()``` make the conversions

- "I" and "İ" to 'i'
- 'i' and 'ı' to 'I'

respectively. However, in Turkish, 'I' and 'İ' are two different letters.

We opened an [issue](https://github.com/huggingface/transformers/issues/6680) in Huggingface's github repo about this bug. Until it is fixed, in case you want to train your model with uncased data, we provide a simple text normalization module (`TextNormalization()` in the code snippet above) in our [repo](https://github.com/Loodos/turkish-language-models).

## Details and Contact

You can contact us to ask a question, open an issue or give feedback via our github [repo](https://github.com/Loodos/turkish-language-models).

## Acknowledgments

Many thanks to the TFRC Team for providing us with cloud TPUs on Tensorflow Research Cloud to train our models.
loodos/albert-base-turkish-uncased
loodos
2020-12-11T21:49:21Z
50
1
transformers
[ "transformers", "pytorch", "tf", "albert", "tr", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: tr
---

# Turkish Language Models with Huggingface's Transformers

As the R&D Team at Loodos, we release cased and uncased versions of most recent language models for Turkish. More details about pretrained models and evaluations on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models).

# Turkish ALBERT-Base (uncased)

This is the ALBERT-Base model, which has 12 repeated encoder layers with a hidden size of 768, trained on an uncased Turkish dataset.

## Usage

Using AutoModel and AutoTokenizer from Transformers, you can import the model as described below.

```python
from transformers import AutoModel, AutoTokenizer
# TextNormalization is provided in our repo (see "Notes on Tokenizers" below):
# https://github.com/Loodos/turkish-language-models

tokenizer = AutoTokenizer.from_pretrained("loodos/albert-base-turkish-uncased", do_lower_case=False, keep_accents=True)

model = AutoModel.from_pretrained("loodos/albert-base-turkish-uncased")

normalizer = TextNormalization()
normalized_text = normalizer.normalize(text, do_lower_case=True, is_turkish=True)  # 'text' is your raw input string

tokenizer.tokenize(normalized_text)
```

### Notes on Tokenizers

Currently, Huggingface's tokenizers (which were written in Python) have a bug concerning letters "ı, i, I, İ" and non-ASCII Turkish specific letters. There are two reasons.

1- Vocabulary and sentence piece model is created with NFC/NFKC normalization but tokenizer uses NFD/NFKD. NFD/NFKD normalization changes text that contains Turkish characters I-ı, İ-i, Ç-ç, Ö-ö, Ş-ş, Ğ-ğ, Ü-ü. This causes wrong tokenization, wrong training and loss of information. Some tokens are never trained (like "şanlıurfa", "öğün", "çocuk", etc.). NFD/NFKD normalization is not proper for Turkish.

2- Python's default ```string.lower()``` and ```string.upper()``` make the conversions

- "I" and "İ" to 'i'
- 'i' and 'ı' to 'I'

respectively. However, in Turkish, 'I' and 'İ' are two different letters.

We opened an [issue](https://github.com/huggingface/transformers/issues/6680) in Huggingface's github repo about this bug. Until it is fixed, in case you want to train your model with uncased data, we provide a simple text normalization module (`TextNormalization()` in the code snippet above) in our [repo](https://github.com/Loodos/turkish-language-models).

## Details and Contact

You can contact us to ask a question, open an issue or give feedback via our github [repo](https://github.com/Loodos/turkish-language-models).

## Acknowledgments

Many thanks to the TFRC Team for providing us with cloud TPUs on Tensorflow Research Cloud to train our models.
krevas/finance-koelectra-small-discriminator
krevas
2020-12-11T21:48:34Z
3
0
transformers
[ "transformers", "pytorch", "electra", "pretraining", "ko", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: ko
---

# 📈 Financial Korean ELECTRA model

Pretrained ELECTRA Language Model for Korean (`finance-koelectra-small-discriminator`)

> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.

More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB) or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.

## Stats

The current version of the model is trained on financial news data from Naver news. The final training corpus has a size of 25GB and 2.3B tokens.

This model was trained as a cased model on a TITAN RTX for 500k steps.

## Usage

```python
from transformers import ElectraForPreTraining, ElectraTokenizer
import torch

discriminator = ElectraForPreTraining.from_pretrained("krevas/finance-koelectra-small-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("krevas/finance-koelectra-small-discriminator")

sentence = "내일 해당 종목이 대폭 상승할 것이다"
fake_sentence = "내일 해당 종목이 맛있게 상승할 것이다"

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")

discriminator_outputs = discriminator(fake_inputs)
# squeeze the batch dimension so the token-level indexing below works
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2).squeeze()

[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.tolist()[1:-1]]

print("fake token : %s" % fake_tokens[predictions.tolist()[1:-1].index(1)])
```

# Huggingface model hub

All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
krevas/finance-koelectra-base-discriminator
krevas
2020-12-11T21:48:27Z
1
0
transformers
[ "transformers", "pytorch", "electra", "pretraining", "ko", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: ko
---

# 📈 Financial Korean ELECTRA model

Pretrained ELECTRA Language Model for Korean (`finance-koelectra-base-discriminator`)

> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.

More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB) or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.

## Stats

The current version of the model is trained on financial news data from Naver news. The final training corpus has a size of 25GB and 2.3B tokens.

This model was trained as a cased model on a TITAN RTX for 500k steps.

## Usage

```python
from transformers import ElectraForPreTraining, ElectraTokenizer
import torch

discriminator = ElectraForPreTraining.from_pretrained("krevas/finance-koelectra-base-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("krevas/finance-koelectra-base-discriminator")

sentence = "내일 해당 종목이 대폭 상승할 것이다"
fake_sentence = "내일 해당 종목이 맛있게 상승할 것이다"

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")

discriminator_outputs = discriminator(fake_inputs)
# squeeze the batch dimension so the token-level indexing below works
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2).squeeze()

[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.tolist()[1:-1]]

print("fake token : %s" % fake_tokens[predictions.tolist()[1:-1].index(1)])
```

# Huggingface model hub

All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
kiri-ai/distiluse-base-multilingual-cased-et
kiri-ai
2020-12-11T21:48:24Z
6
0
transformers
[ "transformers", "pytorch", "distilbert", "feature-extraction", "et", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
---
language: et
---

## Model Description

This model is based on **Sentence-Transformers'** `distiluse-base-multilingual-cased` multilingual model that has been extended to understand sentence embeddings in Estonian.

## Sentence-Transformers

This model can be imported directly via the SentenceTransformers package as shown below:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('kiri-ai/distiluse-base-multilingual-cased-et')

sentences = ['Here is a sample sentence', 'Another sample sentence']

embeddings = model.encode(sentences)

print("Sentence embeddings:")
print(embeddings)
```

## Fine-tuning

The fine-tuning and training processes were inspired by [sbert's](https://www.sbert.net/) multilingual training techniques which are available [here](https://www.sbert.net/examples/training/multilingual/README.html). The documentation shows and explains the step-by-step process of using parallel sentences to train models in a different language.

### Resources

The model was fine-tuned on English-Estonian parallel sentences taken from [OPUS](http://opus.nlpl.eu/) and [ParaCrawl](https://paracrawl.eu/).
jplu/tf-xlm-roberta-large
jplu
2020-12-11T21:48:04Z
144
1
transformers
[ "transformers", "tf", "xlm-roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# Tensorflow XLM-RoBERTa

In this repository you will find different versions of the XLM-RoBERTa model for Tensorflow.

## XLM-RoBERTa

[XLM-RoBERTa](https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/) is a scaled cross-lingual sentence encoder. It is trained on 2.5 TB of data across 100 languages, filtered from Common Crawl. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks.

## Model Weights

| Model | Downloads |
| ----- | --------- |
| `jplu/tf-xlm-roberta-base`  | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/tf_model.h5) |
| `jplu/tf-xlm-roberta-large` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/tf_model.h5) |

## Usage

With Transformers >= 2.4 the Tensorflow models of XLM-RoBERTa can be loaded like:

```python
from transformers import TFXLMRobertaModel

model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base")
```

Or

```python
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-large")
```

## Huggingface model hub

All models are available on the [Huggingface model hub](https://huggingface.co/jplu).

## Acknowledgments

Thanks to all the Huggingface team for the support and their amazing library!
jplu/tf-xlm-roberta-base
jplu
2020-12-11T21:48:00Z
4,839
1
transformers
[ "transformers", "tf", "xlm-roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# Tensorflow XLM-RoBERTa

In this repository you will find different versions of the XLM-RoBERTa model for Tensorflow.

## XLM-RoBERTa

[XLM-RoBERTa](https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/) is a scaled cross-lingual sentence encoder. It is trained on 2.5 TB of data across 100 languages, filtered from Common Crawl. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks.

## Model Weights

| Model | Downloads |
| ----- | --------- |
| `jplu/tf-xlm-roberta-base`  | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/tf_model.h5) |
| `jplu/tf-xlm-roberta-large` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/tf_model.h5) |

## Usage

With Transformers >= 2.4 the Tensorflow models of XLM-RoBERTa can be loaded like:

```python
from transformers import TFXLMRobertaModel

model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base")
```

Or

```python
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-large")
```

## Huggingface model hub

All models are available on the [Huggingface model hub](https://huggingface.co/jplu).

## Acknowledgments

Thanks to all the Huggingface team for the support and their amazing library!
indobenchmark/indobert-lite-large-p2
indobenchmark
2020-12-11T21:45:59Z
186
1
transformers
[ "transformers", "pytorch", "tf", "albert", "feature-extraction", "indobert", "indobenchmark", "indonlu", "id", "dataset:Indo4B", "arxiv:2009.05387", "license:mit", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---

# IndoBERT-Lite Large Model (phase2 - uncased)

[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.

## All Pre-trained Models

| Model | #params | Arch. | Training data |
|-------|---------|-------|---------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |

## How to use

### Load model and tokenizer

```python
from transformers import BertTokenizer, AutoModel

tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-large-p2")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-large-p2")
```

### Extract contextual representation

```python
import torch  # needed for the snippet below

x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```

## Authors

<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.

## Citation

If you use our work, please cite:

```bibtex
@inproceedings{wilie2020indonlu,
  title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
  author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
  booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
  year={2020}
}
```
indobenchmark/indobert-lite-large-p1
indobenchmark
2020-12-11T21:45:56Z
40
0
transformers
[ "transformers", "pytorch", "tf", "albert", "feature-extraction", "indobert", "indobenchmark", "indonlu", "id", "dataset:Indo4B", "arxiv:2009.05387", "license:mit", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---

# IndoBERT-Lite Large Model (phase1 - uncased)

[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.

## All Pre-trained Models

| Model | #params | Arch. | Training data |
|-------|---------|-------|---------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |

## How to use

### Load model and tokenizer

```python
from transformers import BertTokenizer, AutoModel

tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-large-p1")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-large-p1")
```

### Extract contextual representation

```python
import torch  # needed for the snippet below

x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```

## Authors

<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.

## Citation

If you use our work, please cite:

```bibtex
@inproceedings{wilie2020indonlu,
  title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
  author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
  booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
  year={2020}
}
```
indobenchmark/indobert-lite-base-p2
indobenchmark
2020-12-11T21:45:53Z
35,934
0
transformers
[ "transformers", "pytorch", "tf", "albert", "feature-extraction", "indobert", "indobenchmark", "indonlu", "id", "dataset:Indo4B", "arxiv:2009.05387", "license:mit", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---

# IndoBERT-Lite Base Model (phase2 - uncased)

[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.

## All Pre-trained Models

| Model | #params | Arch. | Training data |
|-------|---------|-------|---------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |

## How to use

### Load model and tokenizer

```python
from transformers import BertTokenizer, AutoModel

tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-base-p2")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-base-p2")
```

### Extract contextual representation

```python
import torch  # needed for the snippet below

x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```

## Authors

<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.

## Citation

If you use our work, please cite:

```bibtex
@inproceedings{wilie2020indonlu,
  title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
  author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
  booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
  year={2020}
}
```
indobenchmark/indobert-lite-base-p1
indobenchmark
2020-12-11T21:45:50Z
261
0
transformers
[ "transformers", "pytorch", "tf", "albert", "feature-extraction", "indobert", "indobenchmark", "indonlu", "id", "dataset:Indo4B", "arxiv:2009.05387", "license:mit", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---

# IndoBERT-Lite Base Model (phase1 - uncased)

[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.

## All Pre-trained Models

| Model | #params | Arch. | Training data |
|-------|---------|-------|---------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |

## How to use

### Load model and tokenizer

```python
from transformers import BertTokenizer, AutoModel

tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-base-p1")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-base-p1")
```

### Extract contextual representation

```python
import torch  # needed for the snippet below

x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```

## Authors

<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.

## Citation

If you use our work, please cite:

```bibtex
@inproceedings{wilie2020indonlu,
  title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
  author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
  booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
  year={2020}
}
```
illuin/camembert-base-fquad
illuin
2020-12-11T21:45:27Z
506
7
transformers
[ "transformers", "pytorch", "camembert", "question-answering", "fr", "dataset:fquad", "license:gpl-3.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language: fr
tags:
- question-answering
- camembert
license: gpl-3.0
datasets:
- fquad
---

# camembert-base-fquad

## Description

A native French Question Answering model: [CamemBERT-base](https://camembert-model.fr/) fine-tuned on [FQuAD](https://fquad.illuin.tech/).

## Evaluation results

On the development set.

```shell
{"f1": 88.1, "exact_match": 78.1}
```

On the test set.

```shell
{"f1": 88.3, "exact_match": 78.0}
```

## Usage

```python
from transformers import pipeline

nlp = pipeline('question-answering', model='illuin/camembert-base-fquad', tokenizer='illuin/camembert-base-fquad')

nlp({
    'question': "Qui est Claude Monet?",
    'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme."
})
```

## Citation

If you use our work, please cite:

```bibtex
@article{dHoffschmidt2020FQuADFQ,
  title={FQuAD: French Question Answering Dataset},
  author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.06071}
}
```
healx/gpt-2-pubmed-medium
healx
2020-12-11T21:43:41Z
3,105
2
transformers
[ "transformers", "pytorch", "arxiv:2004.13845", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
GPT-2 (355M model) finetuned on 0.5m PubMed abstracts. Used in [writemeanabstract.com](http://writemeanabstract.com) and the following preprint:

[Papanikolaou, Yannis, and Andrea Pierleoni. "DARE: Data Augmented Relation Extraction with GPT-2." arXiv preprint arXiv:2004.13845 (2020).](https://arxiv.org/abs/2004.13845)
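No usage snippet is provided above; a minimal, assumption-based generation sketch with the standard `transformers` pipeline (the prompt and decoding settings are illustrative, not from the authors) might look like:

```python
from transformers import pipeline

# Assumption-based sketch: sample PubMed-style text from the finetuned GPT-2
generator = pipeline("text-generation", model="healx/gpt-2-pubmed-medium")
out = generator("The tumour microenvironment", max_length=100, num_return_sequences=1)
print(out[0]["generated_text"])
```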
facebook/rag-token-base
facebook
2020-12-11T21:39:44Z
7,396
17
transformers
[ "transformers", "pytorch", "rag", "en", "dataset:wiki_dpr", "arxiv:2005.11401", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: en
license: apache-2.0
datasets:
- wiki_dpr
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
---

## RAG

This is a non-finetuned version of the RAG-Token model of the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/pdf/2005.11401.pdf) by Patrick Lewis, Ethan Perez, Aleksandra Piktus et al.

RAG consists of a *question encoder*, *retriever* and a *generator*. The retriever should be a `RagRetriever` instance. The *question encoder* can be any model that can be loaded with `AutoModel` and the *generator* can be any model that can be loaded with `AutoModelForSeq2SeqLM`.

This model is a non-finetuned RAG-Token model and was created as follows:

```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration, AutoTokenizer

model = RagTokenForGeneration.from_pretrained_question_encoder_generator("facebook/dpr-question_encoder-single-nq-base", "facebook/bart-large")

question_encoder_tokenizer = AutoTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
generator_tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")

tokenizer = RagTokenizer(question_encoder_tokenizer, generator_tokenizer)
model.config.use_dummy_dataset = True
model.config.index_name = "exact"
retriever = RagRetriever(model.config, question_encoder_tokenizer, generator_tokenizer)

model.save_pretrained("./")
tokenizer.save_pretrained("./")
retriever.save_pretrained("./")
```

Note that the model is *uncased* so that all capital input letters are converted to lower-case.

## Usage:

*Note*: the model uses the *dummy* retriever as a default. Better results are obtained by using the full retriever, by setting `config.index_name="legacy"` and `config.use_dummy_dataset=False`. The model can be fine-tuned as follows:

```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base")
retriever = RagRetriever.from_pretrained("facebook/rag-token-base")
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-base", retriever=retriever)

input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", "michael phelps", return_tensors="pt")

outputs = model(input_dict["input_ids"], labels=input_dict["labels"])

loss = outputs.loss

# train on loss
```
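The card stops at computing a fine-tuning loss; for actually generating an answer, here is a minimal assumption-based sketch (dummy retriever enabled explicitly, so expect low-quality answers), not part of the original card:

```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

# Assumption-based sketch: end-to-end generation with the dummy retriever
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base")
retriever = RagRetriever.from_pretrained("facebook/rag-token-base", index_name="exact", use_dummy_dataset=True)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-base", retriever=retriever)

inputs = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```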
elgeish/cs224n-squad2.0-albert-xxlarge-v1
elgeish
2020-12-11T21:39:01Z
7
0
transformers
[ "transformers", "pytorch", "albert", "question-answering", "exbert", "arxiv:2004.07067", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
tags:
- exbert
---

## CS224n SQuAD2.0 Project Dataset

The goal of this model is to save CS224n students GPU time when establishing baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf). The training set used to fine-tune this model is the same as the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however, evaluation and model selection were performed using roughly half of the official dev set, 6078 examples, picked at random. The data files can be found at <https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020 version. Given that the official SQuAD2.0 dev set contains the project's test set, students must make sure not to use the official SQuAD2.0 dev set in any way — including the use of models fine-tuned on the official SQuAD2.0, since they used the official SQuAD2.0 dev set for model selection.

<a href="https://huggingface.co/exbert/?model=elgeish/cs224n-squad2.0-albert-xxlarge-v1">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>

## Results

```json
{
  "exact": 85.93287265547877,
  "f1": 88.91258331187983,
  "total": 6078,
  "HasAns_exact": 84.36426116838489,
  "HasAns_f1": 90.58786301361013,
  "HasAns_total": 2910,
  "NoAns_exact": 87.37373737373737,
  "NoAns_f1": 87.37373737373737,
  "NoAns_total": 3168,
  "best_exact": 85.93287265547877,
  "best_exact_thresh": 0.0,
  "best_f1": 88.91258331187993,
  "best_f1_thresh": 0.0
}
```

## Notable Arguments

```json
{
  "do_lower_case": true,
  "doc_stride": 128,
  "fp16": false,
  "fp16_opt_level": "O1",
  "gradient_accumulation_steps": 24,
  "learning_rate": 3e-05,
  "max_answer_length": 30,
  "max_grad_norm": 1,
  "max_query_length": 64,
  "max_seq_length": 512,
  "model_name_or_path": "albert-xxlarge-v1",
  "model_type": "albert",
  "num_train_epochs": 4,
  "per_gpu_train_batch_size": 1,
  "save_steps": 1000,
  "seed": 42,
  "train_batch_size": 1,
  "version_2_with_negative": true,
  "warmup_steps": 814,
  "weight_decay": 0
}
```

## Environment Setup

```json
{
  "transformers": "2.5.1",
  "pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0",
  "python": "3.6.5=hc3d631a_2",
  "os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux",
  "gpu": "Tesla V100-SXM2-16GB"
}
```

## How to Cite

```BibTeX
@misc{elgeish2020gestalt,
  title={Gestalt: a Stacking Ensemble for SQuAD2.0},
  author={Mohamed El-Geish},
  journal={arXiv e-prints},
  archivePrefix={arXiv},
  eprint={2004.07067},
  year={2020},
}
```

## Related Models

* [elgeish/cs224n-squad2.0-albert-base-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-base-v2)
* [elgeish/cs224n-squad2.0-albert-large-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-large-v2)
* [elgeish/cs224n-squad2.0-distilbert-base-uncased](https://huggingface.co/elgeish/cs224n-squad2.0-distilbert-base-uncased)
* [elgeish/cs224n-squad2.0-roberta-base](https://huggingface.co/elgeish/cs224n-squad2.0-roberta-base)
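The card documents training arguments but no inference code; a minimal, assumption-based sketch with the `transformers` question-answering pipeline (context and question are illustrative) could be:

```python
from transformers import pipeline

# Assumption-based sketch: SQuAD2.0-style QA with this checkpoint
qa = pipeline("question-answering", model="elgeish/cs224n-squad2.0-albert-xxlarge-v1")
result = qa(question="What was used for model selection?",
            context="Evaluation and model selection were performed using roughly half of the official dev set.")
print(result)
```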
elgeish/cs224n-squad2.0-albert-large-v2
elgeish
2020-12-11T21:38:57Z
7
0
transformers
[ "transformers", "pytorch", "albert", "question-answering", "exbert", "arxiv:2004.07067", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
tags:
- exbert
---

## CS224n SQuAD2.0 Project Dataset
The goal of this model is to save CS224n students GPU time when establishing baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf). The training set used to fine-tune this model is the same as the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however, evaluation and model selection were performed using roughly half of the official dev set, 6078 examples, picked at random. The data files can be found at <https://github.com/elgeish/squad/tree/master/data> (the Winter 2020 version). Given that the official SQuAD2.0 dev set contains the project's test set, students must make sure not to use the official SQuAD2.0 dev set in any way, including the use of models fine-tuned on the official SQuAD2.0 dataset, since those models used the official SQuAD2.0 dev set for model selection.

<a href="https://huggingface.co/exbert/?model=elgeish/cs224n-squad2.0-albert-large-v2">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>

## Results
```json
{
  "exact": 79.2694965449161,
  "f1": 82.50844352970152,
  "total": 6078,
  "HasAns_exact": 74.87972508591065,
  "HasAns_f1": 81.64478342732858,
  "HasAns_total": 2910,
  "NoAns_exact": 83.30176767676768,
  "NoAns_f1": 83.30176767676768,
  "NoAns_total": 3168,
  "best_exact": 79.2694965449161,
  "best_exact_thresh": 0.0,
  "best_f1": 82.50844352970155,
  "best_f1_thresh": 0.0
}
```

## Notable Arguments
```json
{
  "do_lower_case": true,
  "doc_stride": 128,
  "fp16": false,
  "fp16_opt_level": "O1",
  "gradient_accumulation_steps": 1,
  "learning_rate": 3e-05,
  "max_answer_length": 30,
  "max_grad_norm": 1,
  "max_query_length": 64,
  "max_seq_length": 384,
  "model_name_or_path": "albert-large-v2",
  "model_type": "albert",
  "num_train_epochs": 5,
  "per_gpu_train_batch_size": 8,
  "save_steps": 5000,
  "seed": 42,
  "train_batch_size": 8,
  "version_2_with_negative": true,
  "warmup_steps": 0,
  "weight_decay": 0
}
```

## Environment Setup
```json
{
  "transformers": "2.5.1",
  "pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0",
  "python": "3.6.5=hc3d631a_2",
  "os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux",
  "gpu": "Tesla V100-SXM2-16GB"
}
```

## How to Cite
```BibTeX
@misc{elgeish2020gestalt,
  title={Gestalt: a Stacking Ensemble for SQuAD2.0},
  author={Mohamed El-Geish},
  journal={arXiv e-prints},
  archivePrefix={arXiv},
  eprint={2004.07067},
  year={2020},
}
```

## Related Models
* [elgeish/cs224n-squad2.0-albert-base-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-base-v2)
* [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1)
* [elgeish/cs224n-squad2.0-distilbert-base-uncased](https://huggingface.co/elgeish/cs224n-squad2.0-distilbert-base-uncased)
* [elgeish/cs224n-squad2.0-roberta-base](https://huggingface.co/elgeish/cs224n-squad2.0-roberta-base)
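For lower-level access than the pipeline, the checkpoint can also be loaded with the Auto classes. The following is a minimal sketch, assuming a recent transformers version and a short made-up context; it uses naive argmax span decoding with no handling of unanswerable questions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("elgeish/cs224n-squad2.0-albert-large-v2")
model = AutoModelForQuestionAnswering.from_pretrained("elgeish/cs224n-squad2.0-albert-large-v2")

question = "What does ALBERT share across layers?"
context = "ALBERT shares parameters across its transformer layers to reduce model size."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# pick the most likely start/end token positions and decode the answer span
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```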
txus/calbert-base-uncased
txus
2020-12-11T21:36:11Z
11
1
transformers
[ "transformers", "pytorch", "albert", "masked-lm", "catalan", "exbert", "ca", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: "ca"
tags:
- masked-lm
- catalan
- exbert
license: mit
---

# Calbert: a Catalan Language Model

## Introduction

CALBERT is an open-source language model for Catalan based on the ALBERT architecture.

It is available on Hugging Face in both its `tiny-uncased` and `base-uncased` (the one you're looking at) versions, and was pretrained on the [OSCAR dataset](https://traces1.inria.fr/oscar/).

For further information or requests, please go to the [GitHub repository](https://github.com/codegram/calbert).

## Pre-trained models

| Model                               | Arch.          | Training data          |
| ----------------------------------- | -------------- | ---------------------- |
| `codegram/calbert-tiny-uncased`     | Tiny (uncased) | OSCAR (4.3 GB of text) |
| `codegram/calbert-base-uncased`     | Base (uncased) | OSCAR (4.3 GB of text) |

## How to use Calbert with HuggingFace

#### Load Calbert and its tokenizer:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("codegram/calbert-base-uncased")
model = AutoModel.from_pretrained("codegram/calbert-base-uncased")

model.eval()  # disable dropout (or leave in train mode to finetune)
```

#### Filling masks using pipeline

```python
from transformers import pipeline

calbert_fill_mask = pipeline("fill-mask", model="codegram/calbert-base-uncased", tokenizer="codegram/calbert-base-uncased")
results = calbert_fill_mask("M'agrada [MASK] això")
# results
# [{'sequence': "[CLS] m'agrada molt aixo[SEP]", 'score': 0.614592969417572, 'token': 61},
#  {'sequence': "[CLS] m'agrada moltíssim aixo[SEP]", 'score': 0.06058056280016899, 'token': 4867},
#  {'sequence': "[CLS] m'agrada més aixo[SEP]", 'score': 0.017195818945765495, 'token': 43},
#  {'sequence': "[CLS] m'agrada llegir aixo[SEP]", 'score': 0.016321714967489243, 'token': 684},
#  {'sequence': "[CLS] m'agrada escriure aixo[SEP]", 'score': 0.012185849249362946, 'token': 1306}]
```

#### Extract contextual embedding features from Calbert output

```python
import torch

# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("M'és una mica igual")
# ['▁m', "'", 'es', '▁una', '▁mica', '▁igual']

# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [2, 109, 7, 71, 36, 371, 1103, 3]
# NB: this can be done in one step: tokenizer.encode("M'és una mica igual")

# Feed tokens to Calbert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = model(encoded_sentence)
embeddings.size()
# torch.Size([1, 8, 768])
embeddings.detach()
# tensor([[[-0.0261,  0.1166, -0.1075,  ..., -0.0368,  0.0193,  0.0017],
#          [ 0.1289, -0.2252,  0.9881,  ..., -0.1353,  0.3534,  0.0734],
#          [-0.0328, -1.2364,  0.9466,  ...,  0.3455,  0.7010, -0.2085],
#          ...,
#          [ 0.0397, -1.0228, -0.2239,  ...,  0.2932,  0.1248,  0.0813],
#          [-0.0261,  0.1165, -0.1074,  ..., -0.0368,  0.0193,  0.0017],
#          [-0.1934, -0.2357, -0.2554,  ...,  0.1831,  0.6085,  0.1421]]])
```

## Authors

CALBERT was trained and evaluated by [Txus Bach](https://twitter.com/txustice), as part of [Codegram](https://www.codegram.com)'s applied research.

<a href="https://huggingface.co/exbert/?model=codegram/calbert-base-uncased&modelKind=bidirectional&sentence=M%27agradaria%20força%20saber-ne%20més">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
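A single sentence-level vector is sometimes more convenient than per-token embeddings. Mean pooling, sketched below, is one common heuristic; it is our assumption here, not part of the original CALBERT pipeline, and the sketch assumes a recent transformers version:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("codegram/calbert-base-uncased")
model = AutoModel.from_pretrained("codegram/calbert-base-uncased")
model.eval()

inputs = tokenizer("M'agrada molt la xocolata", return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # [1, seq_len, 768]

# mean-pool over the token axis for a crude fixed-size sentence vector;
# with a single unpadded sentence there is no padding to mask out
sentence_embedding = token_embeddings.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```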
clue/albert_chinese_tiny
clue
2020-12-11T21:35:55Z
120
17
transformers
[ "transformers", "pytorch", "albert", "zh", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: zh
---

## albert_chinese_tiny

### Overview

**Language model:** albert-tiny
**Model size:** 16M
**Language:** Chinese
**Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020)
**Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE)

### Results

For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE).

### Usage

**NOTE:** Since SentencePiece is not used in the `albert_chinese_tiny` model, you have to use **BertTokenizer** instead of AlbertTokenizer!

```python
import torch
from transformers import BertTokenizer, AlbertModel

tokenizer = BertTokenizer.from_pretrained("clue/albert_chinese_tiny")
albert = AlbertModel.from_pretrained("clue/albert_chinese_tiny")
```

### About CLUE benchmark

CLUE is a language understanding evaluation benchmark for Chinese, organizing tasks and datasets, baselines, pre-trained Chinese models, corpora, and a leaderboard.

Github: https://github.com/CLUEbenchmark
Website: https://www.cluebenchmarks.com/
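To confirm the BertTokenizer/AlbertModel pairing works end to end, a minimal sketch (assuming a recent transformers version; the Chinese sentence is an arbitrary example) runs a forward pass and inspects the hidden states:

```python
import torch
from transformers import BertTokenizer, AlbertModel

tokenizer = BertTokenizer.from_pretrained("clue/albert_chinese_tiny")
albert = AlbertModel.from_pretrained("clue/albert_chinese_tiny")
albert.eval()

inputs = tokenizer("今天天气不错", return_tensors="pt")
with torch.no_grad():
    outputs = albert(**inputs)

# last_hidden_state: [batch, seq_len, hidden]; the hidden size is small for the 16M-parameter tiny model
print(outputs.last_hidden_state.shape)
```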