Dataset schema:

| Column | Type | Range / values |
| --- | --- | --- |
| modelId | string | length 4–112 |
| lastModified | string | length 24 (fixed) |
| tags | list | |
| pipeline_tag | string | 21 classes |
| files | list | |
| publishedBy | string | length 2–37 |
| downloads_last_month | int32 | 0–9.44M |
| library | string | 15 classes |
| modelCard | large string | length 0–100k |

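Each row of the dataset describes one model repository. As a minimal illustrative sketch (not part of the dataset; the `ModelRecord` name and the Python typing are assumptions), one record materialized as a Python dict could look like this, using the first record below as the example:

```python
# Illustrative only: one dataset row, typed with the schema above.
from typing import List, Optional, TypedDict

class ModelRecord(TypedDict):
    modelId: str                 # e.g. "lysandre/dummy-hf-hub" (4-112 chars)
    lastModified: str            # ISO 8601 timestamp (always 24 chars)
    tags: List[str]
    pipeline_tag: Optional[str]  # one of 21 task classes, when present
    files: List[str]
    publishedBy: str
    downloads_last_month: int
    library: Optional[str]       # one of 15 library classes, when present
    modelCard: str               # raw model card text, up to ~100k chars

record: ModelRecord = {
    "modelId": "lysandre/dummy-hf-hub",
    "lastModified": "2021-04-02T22:55:24.000Z",
    "tags": [],
    "pipeline_tag": None,
    "files": [".gitattributes", "README.md"],
    "publishedBy": "lysandre",
    "downloads_last_month": 0,
    "library": None,
    "modelCard": "Files are only in the master branch.",
}
```
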
modelId: lysandre/dummy-hf-hub
lastModified: 2021-04-02T22:55:24.000Z
tags: []
files: [ ".gitattributes", "README.md" ]
publishedBy: lysandre
downloads_last_month: 0
modelCard: Files are only in the master branch.

modelId: lysandre/dummy-model
lastModified: 2021-05-19T22:18:30.000Z
tags: [ "pytorch", "jax", "bert", "transformers" ]
files: [ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
publishedBy: lysandre
downloads_last_month: 21
library: transformers

modelId: lysandre/dummy-test
lastModified: 2021-04-20T22:23:28.000Z
tags: [ "pytorch" ]
files: [ ".gitattributes", "pytorch_model.bin" ]
publishedBy: lysandre
downloads_last_month: 0

modelId: lysandre/dummy-test2
lastModified: 2021-04-20T22:23:15.000Z
tags: []
files: [ ".gitattributes" ]
publishedBy: lysandre
downloads_last_month: 0

modelId: lysandre/dummy
lastModified: 2021-06-09T10:22:02.000Z
tags: [ "tf", "camembert", "masked-lm", "transformers", "fill-mask" ]
pipeline_tag: fill-mask
files: [ ".gitattributes", "README.md", "config.json", "sentencepiece.bpe.model", "special_tokens_map.json", "tf_model.h5", "tokenizer.json", "tokenizer_config.json" ]
publishedBy: lysandre
downloads_last_month: 7
library: transformers
modelCard: This is a dummy model

modelId: lysandre/elmo-2x4096_512_2048cnn_2xhighway
lastModified: 2021-03-12T16:17:03.000Z
tags: []
files: [ ".gitattributes", "options.json", "weights.hdf5" ]
publishedBy: lysandre
downloads_last_month: 0

modelId: lysandre/elmo
lastModified: 2021-03-05T00:09:22.000Z
tags: []
files: [ ".gitattributes", "options.json", "weights.hdf5" ]
publishedBy: lysandre
downloads_last_month: 0

modelId: lysandre/lysandre
lastModified: 2021-06-10T09:15:22.000Z
tags: []
files: [ ".gitattributes" ]
publishedBy: lysandre
downloads_last_month: 0

modelId: lysandre/ner-elmo.2021-02-12
lastModified: 2021-04-05T22:26:15.000Z
tags: [ "transformers" ]
files: [ ".gitattributes", "config.json", "weights.th", "vocabulary/.lock", "vocabulary/labels.txt", "vocabulary/non_padded_namespaces.txt", "vocabulary/token_characters.txt", "vocabulary/tokens.txt" ]
publishedBy: lysandre
downloads_last_month: 12
library: transformers

modelId: lysandre/new-dummy-model
lastModified: 2021-06-12T07:49:19.000Z
tags: [ "pytorch", "tf", "distilbert", "text-classification", "transformers" ]
pipeline_tag: text-classification
files: [ ".gitattributes", "README.md", "config.json", "new-file.txt", "pytorch_model.bin", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
publishedBy: lysandre
downloads_last_month: 6
library: transformers
modelCard:

# Dummy model

This is a dummy model.

modelId: lysandre/pair-classification-roberta-mnli
lastModified: 2021-03-12T17:09:17.000Z
tags: [ "transformers" ]
files: [ ".gitattributes", "config.json", "weights.th", "vocabulary/.lock", "vocabulary/labels.txt", "vocabulary/non_padded_namespaces.txt" ]
publishedBy: lysandre
downloads_last_month: 17
library: transformers

modelId: lysandre/tapas-temporary-repo
lastModified: 2020-12-17T15:56:06.000Z
tags: [ "pytorch", "tapas", "table-question-answering", "en", "dataset:sqa", "arxiv:2004.02349", "arxiv:2010.00571", "transformers", "license:apache-2.0" ]
pipeline_tag: table-question-answering
files: [ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
publishedBy: lysandre
downloads_last_month: 18
library: transformers
modelCard:

---
language: en
tags:
- tapas
license: apache-2.0
datasets:
- sqa
---

# TAPAS base model fine-tuned on Sequential Question Answering (SQA)

This model has 4 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_sqa_inter_masklm_base_reset` checkpoint of the [original GitHub repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).

The other (non-default) versions which can be used are:

- `revision="v3"`, which corresponds to `tapas_sqa_inter_masklm_base` (intermediate pre-training, absolute position embeddings)
- `revision="v2"`, which corresponds to `tapas_sqa_masklm_base_reset` (no intermediate pre-training, relative position embeddings)
- `revision="v1"`, which corresponds to `tapas_sqa_masklm_base` (no intermediate pre-training, absolute position embeddings)

Disclaimer: The team releasing TAPAS did not write a model card for this model, so this model card has been written by the Hugging Face team and contributors.

## Model description

TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.

This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head on top of the pre-trained model, and then jointly training this randomly initialized classification head with the base model on SQA.

## Intended uses & limitations

You can use this model for answering questions related to a table in a conversational set-up. For code examples, we refer to the documentation of TAPAS on the HuggingFace website.

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Question [SEP] Flattened table [SEP]
```

### Fine-tuning

The model was fine-tuned on 32 Cloud TPU v3 cores for 200,000 steps with a maximum sequence length of 512 and a batch size of 128. In this setup, fine-tuning takes around 20 hours. The optimizer used is Adam with a learning rate of 1.25e-5 and a warmup ratio of 0.2. An inductive bias is added such that the model only selects cells of the same column; this is reflected by the `select_one_column` parameter of `TapasConfig`. See also table 12 of the [original paper](https://arxiv.org/abs/2004.02349).

### BibTeX entry and citation info

```bibtex
@misc{herzig2020tapas,
  title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
  author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
  year={2020},
  eprint={2004.02349},
  archivePrefix={arXiv},
  primaryClass={cs.IR}
}
```

```bibtex
@misc{eisenschlos2020understanding,
  title={Understanding tables with intermediate pre-training},
  author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
  year={2020},
  eprint={2010.00571},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

```bibtex
@InProceedings{iyyer2017search-based,
  author    = {Iyyer, Mohit and Yih, Scott Wen-tau and Chang, Ming-Wei},
  title     = {Search-based Neural Structured Learning for Sequential Question Answering},
  booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},
  year      = {2017},
  month     = {July},
  abstract  = {Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.},
  publisher = {Association for Computational Linguistics},
  url       = {https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/},
}
```

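The card above defers to the TAPAS documentation for code examples. As a hedged sketch (not part of the card), the snippet below shows table question answering through the `transformers` pipeline API; it assumes `transformers`, `torch`, and `pandas` are installed, and substitutes the public `google/tapas-base-finetuned-sqa` checkpoint for this temporary repo. The `revision` argument of `from_pretrained`/`pipeline` is how the non-default versions listed in the card would be selected.

```python
# Hedged sketch: table QA with a TAPAS SQA checkpoint via the pipeline API.
# Assumes transformers + torch + pandas; "google/tapas-base-finetuned-sqa"
# stands in for this temporary repo.
from transformers import pipeline

# Passing revision="v3" (or "v2", "v1") would select one of the non-default
# checkpoint versions described in the card; the default revision
# corresponds to tapas_sqa_inter_masklm_base_reset.
tqa = pipeline("table-question-answering", model="google/tapas-base-finetuned-sqa")

# TAPAS expects every cell as a string, including numbers.
table = {
    "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
    "Number of movies": ["87", "53", "69"],
}

result = tqa(table=table, query="How many movies has George Clooney played in?")
print(result["answer"])  # expected: "69"
```

For the conversational setting SQA targets, the pipeline also accepts a list of queries (optionally with `sequential=True`), so follow-up questions can build on earlier ones.
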
modelId: lysandre/test-elmo-tiny
lastModified: 2021-04-05T22:40:06.000Z
tags: []
files: [ ".gitattributes", "elmo_token_embeddings.hdf5", "lm_embeddings_0.hdf5", "lm_embeddings_1.hdf5", "lm_embeddings_2.hdf5", "lm_weights.hdf5", "options.json", "sentences.json", "vocab_test.txt", "config/characters_token_embedder.json" ]
publishedBy: lysandre
downloads_last_month: 0

modelId: lysandre/test-model
lastModified: 2021-06-08T22:37:40.000Z
tags: []
files: [ ".gitattributes", "README.md" ]
publishedBy: lysandre
downloads_last_month: 0
modelCard: atesta

modelId: lysandre/test-simple-tagger-tiny
lastModified: 2021-04-05T22:57:17.000Z
tags: [ "transformers" ]
files: [ ".gitattributes", "config.json", "weights.th", "vocabulary/labels.txt", "vocabulary/non_padded_namespaces.txt", "vocabulary/test_tokens.txt", "vocabulary/tokens.txt" ]
publishedBy: lysandre
downloads_last_month: 1,327
library: transformers

modelId: lysandre/test
lastModified: 2021-05-14T07:16:42.000Z
tags: []
files: [ ".gitattributes" ]
publishedBy: lysandre
downloads_last_month: 0

modelId: lysandre/tests
lastModified: 2021-06-17T06:55:09.000Z
tags: [ "pytorch", "tensorboard", "distilbert", "transformers" ]
files:
[ "__init__.py", "config.json", "conftest.py", "pytorch_model.bin", "test_activations.py", "test_activations_tf.py", "test_benchmark.py", "test_benchmark_tf.py", "test_cli.py", "test_configuration_auto.py", "test_configuration_common.py", "test_data_collator.py", "test_doc_samples.py", "test_feature_extraction_auto.py", "test_feature_extraction_clip.py", "test_feature_extraction_common.py", "test_feature_extraction_deit.py", "test_feature_extraction_detr.py", "test_feature_extraction_speech_to_text.py", "test_feature_extraction_vit.py", "test_feature_extraction_wav2vec2.py", "test_file_utils.py", "test_flax_auto.py", "test_generation_beam_search.py", "test_generation_flax_logits_process.py", "test_generation_flax_utils.py", "test_generation_logits_process.py", "test_generation_stopping_criteria.py", "test_generation_utils.py", "test_hf_api.py", "test_hf_argparser.py", "test_image_utils.py", "test_logging.py", "test_model_card.py", "test_model_output.py", "test_modeling_albert.py", "test_modeling_auto.py", "test_modeling_bart.py", "test_modeling_bert.py", "test_modeling_bert_generation.py", "test_modeling_big_bird.py", "test_modeling_bigbird_pegasus.py", "test_modeling_blenderbot.py", "test_modeling_blenderbot_small.py", "test_modeling_bort.py", "test_modeling_camembert.py", "test_modeling_clip.py", "test_modeling_common.py", "test_modeling_convbert.py", "test_modeling_ctrl.py", "test_modeling_deberta.py", "test_modeling_deberta_v2.py", "test_modeling_deit.py", "test_modeling_detr.py", "test_modeling_distilbert.py", "test_modeling_dpr.py", "test_modeling_electra.py", "test_modeling_encoder_decoder.py", "test_modeling_flaubert.py", "test_modeling_flax_bart.py", "test_modeling_flax_bert.py", "test_modeling_flax_big_bird.py", "test_modeling_flax_clip.py", "test_modeling_flax_common.py", "test_modeling_flax_electra.py", "test_modeling_flax_gpt2.py", "test_modeling_flax_roberta.py", "test_modeling_flax_vit.py", "test_modeling_fsmt.py", "test_modeling_funnel.py", "test_modeling_gpt2.py", "test_modeling_gpt_neo.py", "test_modeling_ibert.py", "test_modeling_layoutlm.py", "test_modeling_led.py", "test_modeling_longformer.py", "test_modeling_luke.py", "test_modeling_lxmert.py", "test_modeling_m2m_100.py", "test_modeling_marian.py", "test_modeling_mbart.py", "test_modeling_megatron_bert.py", "test_modeling_megatron_gpt2.py", "test_modeling_mobilebert.py", "test_modeling_mpnet.py", "test_modeling_mt5.py", "test_modeling_openai.py", "test_modeling_pegasus.py", "test_modeling_prophetnet.py", "test_modeling_rag.py", "test_modeling_reformer.py", "test_modeling_roberta.py", "test_modeling_roformer.py", "test_modeling_speech_to_text.py", "test_modeling_squeezebert.py", "test_modeling_t5.py", "test_modeling_tapas.py", "test_modeling_tf_albert.py", "test_modeling_tf_auto.py", "test_modeling_tf_bart.py", "test_modeling_tf_bert.py", "test_modeling_tf_blenderbot.py", "test_modeling_tf_blenderbot_small.py", "test_modeling_tf_bort.py", "test_modeling_tf_camembert.py", "test_modeling_tf_common.py", "test_modeling_tf_convbert.py", "test_modeling_tf_ctrl.py", "test_modeling_tf_distilbert.py", "test_modeling_tf_dpr.py", "test_modeling_tf_electra.py", "test_modeling_tf_flaubert.py", "test_modeling_tf_funnel.py", "test_modeling_tf_gpt2.py", "test_modeling_tf_layoutlm.py", "test_modeling_tf_led.py", "test_modeling_tf_longformer.py", "test_modeling_tf_lxmert.py", "test_modeling_tf_marian.py", "test_modeling_tf_mbart.py", "test_modeling_tf_mobilebert.py", "test_modeling_tf_mpnet.py", "test_modeling_tf_mt5.py", 
"test_modeling_tf_openai.py", "test_modeling_tf_pegasus.py", "test_modeling_tf_pytorch.py", "test_modeling_tf_rag.py", "test_modeling_tf_roberta.py", "test_modeling_tf_roformer.py", "test_modeling_tf_t5.py", "test_modeling_tf_transfo_xl.py", "test_modeling_tf_wav2vec2.py", "test_modeling_tf_xlm.py", "test_modeling_tf_xlm_roberta.py", "test_modeling_tf_xlnet.py", "test_modeling_transfo_xl.py", "test_modeling_visual_bert.py", "test_modeling_vit.py", "test_modeling_wav2vec2.py", "test_modeling_xlm.py", "test_modeling_xlm_prophetnet.py", "test_modeling_xlm_roberta.py", "test_modeling_xlnet.py", "test_offline.py", "test_onnx.py", "test_optimization.py", "test_optimization_tf.py", "test_pipelines_automatic_speech_recognition.py", "test_pipelines_common.py", "test_pipelines_conversational.py", "test_pipelines_feature_extraction.py", "test_pipelines_fill_mask.py", "test_pipelines_image_classification.py", "test_pipelines_question_answering.py", "test_pipelines_summarization.py", "test_pipelines_table_question_answering.py", "test_pipelines_text2text_generation.py", "test_pipelines_text_classification.py", "test_pipelines_text_generation.py", "test_pipelines_token_classification.py", "test_pipelines_translation.py", "test_pipelines_zero_shot.py", "test_processor_clip.py", "test_processor_speech_to_text.py", "test_processor_wav2vec2.py", "test_retrieval_rag.py", "test_sequence_feature_extraction_common.py", "test_skip_decorators.py", "test_tokenization_albert.py", "test_tokenization_auto.py", "test_tokenization_bart.py", "test_tokenization_barthez.py", "test_tokenization_bert.py", "test_tokenization_bert_generation.py", "test_tokenization_bert_japanese.py", "test_tokenization_bertweet.py", "test_tokenization_big_bird.py", "test_tokenization_blenderbot.py", "test_tokenization_byt5.py", "test_tokenization_camembert.py", "test_tokenization_clip.py", "test_tokenization_common.py", "test_tokenization_cpm.py", "test_tokenization_ctrl.py", "test_tokenization_deberta.py", "test_tokenization_deberta_v2.py", "test_tokenization_distilbert.py", "test_tokenization_dpr.py", "test_tokenization_fast.py", "test_tokenization_fsmt.py", "test_tokenization_funnel.py", "test_tokenization_gpt2.py", "test_tokenization_herbert.py", "test_tokenization_layoutlm.py", "test_tokenization_luke.py", "test_tokenization_lxmert.py", "test_tokenization_m2m_100.py", "test_tokenization_marian.py", "test_tokenization_mbart.py", "test_tokenization_mbart50.py", "test_tokenization_mpnet.py", "test_tokenization_openai.py", "test_tokenization_pegasus.py", "test_tokenization_phobert.py", "test_tokenization_prophetnet.py", "test_tokenization_rag.py", "test_tokenization_reformer.py", "test_tokenization_roberta.py", "test_tokenization_roformer.py", "test_tokenization_small_blenderbot.py", "test_tokenization_speech_to_text.py", "test_tokenization_squeezebert.py", "test_tokenization_t5.py", "test_tokenization_tapas.py", "test_tokenization_transfo_xl.py", "test_tokenization_utils.py", "test_tokenization_wav2vec2.py", "test_tokenization_xlm.py", "test_tokenization_xlm_prophetnet.py", "test_tokenization_xlm_roberta.py", "test_tokenization_xlnet.py", "test_trainer.py", "test_trainer_callback.py", "test_trainer_distributed.py", "test_trainer_seq2seq.py", "test_trainer_tpu.py", "test_trainer_utils.py", "test_utils_check_copies.py", "test_versions_utils.py", "__pycache__/__init__.cpython-38.pyc", "__pycache__/__init__.cpython-39.pyc", "__pycache__/conftest.cpython-38-pytest-6.1.2.pyc", "__pycache__/conftest.cpython-38-pytest-6.2.0.pyc", 
"__pycache__/conftest.cpython-38-pytest-6.2.2.pyc", "__pycache__/conftest.cpython-38-pytest-6.2.4.pyc", "__pycache__/conftest.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_activations.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_activations.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_activations.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_activations_tf.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_activations_tf.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_activations_tf.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_benchmark.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_benchmark.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_benchmark.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_benchmark_tf.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_benchmark_tf.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_benchmark_tf.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_cli.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_cli.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_cli.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_configuration_auto.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_configuration_auto.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_configuration_auto.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_configuration_common.cpython-38-pytest-6.1.2.pyc", "__pycache__/test_configuration_common.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_configuration_common.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_configuration_common.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_configuration_common.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_data_collator.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_data_collator.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_data_collator.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_doc_samples.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_doc_samples.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_doc_samples.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_feature_extraction_auto.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_feature_extraction_auto.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_feature_extraction_common.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_feature_extraction_common.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_feature_extraction_deit.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_feature_extraction_detr.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_feature_extraction_detr.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_feature_extraction_speech_to_text.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_feature_extraction_speech_to_text.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_feature_extraction_vit.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_feature_extraction_vit.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_feature_extraction_wav2vec2.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_feature_extraction_wav2vec2.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_file_utils.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_file_utils.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_file_utils.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_flax_auto.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_flax_auto.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_flax_auto.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_generation_beam_search.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_generation_beam_search.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_generation_beam_search.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_generation_logits_process.cpython-38-pytest-6.2.0.pyc", 
"__pycache__/test_generation_logits_process.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_generation_logits_process.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_generation_stopping_criteria.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_generation_stopping_criteria.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_generation_utils.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_generation_utils.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_generation_utils.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_generation_utils.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_hf_api.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_hf_api.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_hf_api.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_hf_argparser.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_hf_argparser.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_hf_argparser.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_image_utils.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_image_utils.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_logging.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_logging.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_logging.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_model_card.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_model_card.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_model_card.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_model_output.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_model_output.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_model_output.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_albert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_albert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_albert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_auto.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_auto.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_auto.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_bart.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_bart.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_bart.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_bert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_bert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_bert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_bert_generation.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_bert_generation.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_bert_generation.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_big_bird.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_big_bird.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_blenderbot.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_blenderbot.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_blenderbot.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_blenderbot_small.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_blenderbot_small.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_blenderbot_small.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_bort.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_bort.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_bort.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_camembert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_camembert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_camembert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_common.cpython-38-pytest-6.1.2.pyc", 
"__pycache__/test_modeling_common.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_common.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_common.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_common.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_modeling_convbert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_convbert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_convbert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_ctrl.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_ctrl.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_ctrl.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_deberta.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_deberta.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_deberta.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_deberta_v2.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_deberta_v2.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_deit.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_deit.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_detr.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_detr.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_distilbert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_distilbert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_distilbert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_dpr.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_dpr.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_dpr.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_electra.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_electra.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_electra.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_encoder_decoder.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_encoder_decoder.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_encoder_decoder.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_flaubert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_flaubert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_flaubert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_flax_bert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_flax_bert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_flax_bert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_flax_common.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_flax_common.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_flax_common.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_flax_electra.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_flax_roberta.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_flax_roberta.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_flax_roberta.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_fsmt.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_fsmt.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_fsmt.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_funnel.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_funnel.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_funnel.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_gpt2.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_gpt2.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_gpt2.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_gpt_neo.cpython-38-pytest-6.2.2.pyc", 
"__pycache__/test_modeling_gpt_neo.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_ibert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_ibert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_layoutlm.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_layoutlm.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_layoutlm.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_led.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_led.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_led.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_longformer.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_longformer.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_longformer.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_luke.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_lxmert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_lxmert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_lxmert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_m2m_100.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_m2m_100.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_marian.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_marian.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_marian.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_mbart.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_mbart.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_mbart.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_megatron_bert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_megatron_bert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_mobilebert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_mobilebert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_mobilebert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_mpnet.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_mpnet.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_mpnet.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_mt5.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_mt5.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_mt5.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_openai.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_openai.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_openai.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_pegasus.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_pegasus.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_pegasus.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_prophetnet.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_prophetnet.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_prophetnet.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_rag.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_rag.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_rag.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_reformer.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_reformer.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_reformer.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_rembert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_roberta.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_roberta.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_roberta.cpython-38-pytest-6.2.4.pyc", 
"__pycache__/test_modeling_speech_to_text.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_speech_to_text.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_squeezebert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_squeezebert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_squeezebert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_t5.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_t5.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_t5.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tapas.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tapas.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tapas.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_template_bi_encoder_bert.cpython-38-pytest-6.1.2.pyc", "__pycache__/test_modeling_template_encoder_bert.cpython-38-pytest-6.1.2.pyc", "__pycache__/test_modeling_template_pt_encoder_bert.cpython-38-pytest-6.1.2.pyc", "__pycache__/test_modeling_tf_albert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_albert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_albert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_auto.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_auto.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_auto.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_bart.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_bart.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_bart.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_bert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_bert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_bert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_blenderbot.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_blenderbot.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_blenderbot.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_blenderbot_small.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_blenderbot_small.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_blenderbot_small.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_bort.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_bort.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_bort.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_camembert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_camembert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_camembert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_common.cpython-38-pytest-6.1.2.pyc", "__pycache__/test_modeling_tf_common.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_common.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_common.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_convbert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_convbert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_convbert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_ctrl.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_ctrl.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_ctrl.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_distilbert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_distilbert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_distilbert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_dpr.cpython-38-pytest-6.2.0.pyc", 
"__pycache__/test_modeling_tf_dpr.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_dpr.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_electra.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_electra.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_electra.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_flaubert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_flaubert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_flaubert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_funnel.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_funnel.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_funnel.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_gpt2.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_gpt2.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_gpt2.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_layoutlm.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_layoutlm.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_led.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_led.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_led.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_longformer.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_longformer.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_longformer.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_lxmert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_lxmert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_lxmert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_marian.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_marian.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_marian.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_mbart.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_mbart.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_mbart.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_mobilebert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_mobilebert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_mobilebert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_mpnet.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_mpnet.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_mpnet.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_mt5.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_mt5.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_mt5.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_openai.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_openai.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_openai.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_pegasus.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_pegasus.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_pegasus.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_pytorch.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_pytorch.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_pytorch.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_rag.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_rag.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_rembert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_roberta.cpython-38-pytest-6.2.0.pyc", 
"__pycache__/test_modeling_tf_roberta.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_roberta.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_t5.cpython-38-pytest-6.1.2.pyc", "__pycache__/test_modeling_tf_t5.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_t5.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_t5.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_template_bi_encoder_bert.cpython-38-pytest-6.1.2.pyc", "__pycache__/test_modeling_tf_template_encoder_bert.cpython-38-pytest-6.1.2.pyc", "__pycache__/test_modeling_tf_template_tf_encoder_bert.cpython-38-pytest-6.1.2.pyc", "__pycache__/test_modeling_tf_transfo_xl.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_transfo_xl.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_transfo_xl.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_xlm.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_xlm.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_xlm.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_xlm_roberta.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_xlm_roberta.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_xlm_roberta.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_tf_xlnet.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_tf_xlnet.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_tf_xlnet.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_transfo_xl.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_transfo_xl.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_transfo_xl.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_vit.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_vit.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_wav2vec2.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_wav2vec2.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_wav2vec2.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_xlm.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_xlm.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_xlm.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_xlm_prophetnet.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_xlm_prophetnet.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_xlm_prophetnet.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_xlm_roberta.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_xlm_roberta.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_xlm_roberta.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_xlnet.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_modeling_xlnet.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_modeling_xlnet.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_modeling_xlnet.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_offline.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_offline.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_onnx.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_onnx.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_onnx.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_optimization.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_optimization.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_optimization.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_optimization_tf.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_optimization_tf.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_optimization_tf.cpython-38-pytest-6.2.4.pyc", 
"__pycache__/test_pipelines_automatic_speech_recognition.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_automatic_speech_recognition.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_pipelines_common.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_pipelines_common.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_common.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_pipelines_conversational.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_pipelines_conversational.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_conversational.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_pipelines_feature_extraction.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_pipelines_feature_extraction.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_feature_extraction.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_pipelines_fill_mask.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_pipelines_fill_mask.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_fill_mask.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_pipelines_image_classification.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_image_classification.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_pipelines_ner.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_pipelines_ner.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_question_answering.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_pipelines_question_answering.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_question_answering.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_pipelines_sentiment_analysis.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_pipelines_sentiment_analysis.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_summarization.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_pipelines_summarization.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_summarization.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_pipelines_table_question_answering.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_pipelines_table_question_answering.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_table_question_answering.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_pipelines_text2text_generation.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_pipelines_text2text_generation.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_text2text_generation.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_pipelines_text_classification.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_text_classification.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_pipelines_text_generation.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_pipelines_text_generation.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_text_generation.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_pipelines_token_classification.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_token_classification.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_pipelines_translation.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_pipelines_translation.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_translation.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_pipelines_zero_shot.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_pipelines_zero_shot.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_pipelines_zero_shot.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_processor_speech_to_text.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_processor_speech_to_text.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_processor_wav2vec2.cpython-38-pytest-6.2.2.pyc", 
"__pycache__/test_processor_wav2vec2.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_retrieval_rag.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_retrieval_rag.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_retrieval_rag.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_sequence_feature_extraction_common.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_sequence_feature_extraction_common.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_skip_decorators.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_skip_decorators.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_skip_decorators.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_albert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_albert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_albert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_albert.cpython-38.pyc", "__pycache__/test_tokenization_albert.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_auto.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_auto.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_auto.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_auto.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_bart.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_bart.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_bart.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_bart.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_barthez.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_barthez.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_barthez.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_barthez.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_bert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_bert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_bert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_bert.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_bert_generation.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_bert_generation.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_bert_generation.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_bert_generation.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_bert_japanese.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_bert_japanese.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_bert_japanese.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_bert_japanese.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_bertweet.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_bertweet.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_bertweet.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_bertweet.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_big_bird.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_big_bird.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_big_bird.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_blenderbot.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_blenderbot.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_blenderbot.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_blenderbot.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_camembert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_camembert.cpython-38-pytest-6.2.2.pyc", 
"__pycache__/test_tokenization_camembert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_camembert.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_clip.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_common.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_common.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_common.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_common.cpython-38.pyc", "__pycache__/test_tokenization_common.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_cpm.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_cpm.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_cpm.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_ctrl.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_ctrl.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_ctrl.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_ctrl.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_deberta.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_deberta.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_deberta.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_deberta.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_deberta_v2.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_deberta_v2.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_deberta_v2.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_distilbert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_distilbert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_distilbert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_distilbert.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_dpr.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_dpr.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_dpr.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_dpr.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_fsmt.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_fsmt.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_fsmt.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_fsmt.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_funnel.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_funnel.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_funnel.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_funnel.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_gpt2.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_gpt2.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_gpt2.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_gpt2.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_herbert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_herbert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_herbert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_herbert.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_layoutlm.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_layoutlm.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_layoutlm.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_layoutlm.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_luke.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_luke.cpython-39-pytest-6.2.4.pyc", 
"__pycache__/test_tokenization_lxmert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_lxmert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_lxmert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_lxmert.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_m2m_100.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_m2m_100.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_m2m_100.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_marian.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_marian.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_marian.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_marian.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_mbart.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_mbart.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_mbart.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_mbart.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_mbart50.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_mbart50.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_mbart50.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_mpnet.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_mpnet.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_mpnet.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_mpnet.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_openai.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_openai.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_openai.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_openai.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_pegasus.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_pegasus.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_pegasus.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_pegasus.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_phobert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_phobert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_phobert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_phobert.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_prophetnet.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_prophetnet.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_prophetnet.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_prophetnet.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_rag.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_rag.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_rag.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_rag.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_reformer.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_reformer.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_reformer.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_reformer.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_roberta.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_roberta.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_roberta.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_roberta.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_roformer.cpython-39-pytest-6.2.4.pyc", 
"__pycache__/test_tokenization_small_blenderbot.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_small_blenderbot.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_small_blenderbot.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_small_blenderbot.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_speech_to_text.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_speech_to_text.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_speech_to_text.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_squeezebert.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_squeezebert.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_squeezebert.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_squeezebert.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_t5.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_t5.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_t5.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_t5.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_tapas.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_tapas.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_tapas.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_tapas.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_transfo_xl.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_transfo_xl.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_transfo_xl.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_transfo_xl.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_utils.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_utils.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_utils.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_utils.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_wav2vec2.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_wav2vec2.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_wav2vec2.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_wav2vec2.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_xlm.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_xlm.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_xlm.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_xlm.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_xlm_prophetnet.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_xlm_prophetnet.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_xlm_prophetnet.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_xlm_prophetnet.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_xlm_roberta.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_xlm_roberta.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_xlm_roberta.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_xlm_roberta.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_tokenization_xlnet.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_tokenization_xlnet.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_tokenization_xlnet.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_tokenization_xlnet.cpython-39-pytest-6.2.4.pyc", "__pycache__/test_trainer.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_trainer.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_trainer.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_trainer_callback.cpython-38-pytest-6.2.0.pyc", 
"__pycache__/test_trainer_callback.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_trainer_callback.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_trainer_distributed.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_trainer_distributed.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_trainer_distributed.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_trainer_seq2seq.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_trainer_seq2seq.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_trainer_seq2seq.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_trainer_tpu.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_trainer_tpu.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_trainer_tpu.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_trainer_utils.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_trainer_utils.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_trainer_utils.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_utils_check_copies.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_utils_check_copies.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_utils_check_copies.cpython-38-pytest-6.2.4.pyc", "__pycache__/test_versions_utils.cpython-38-pytest-6.2.0.pyc", "__pycache__/test_versions_utils.cpython-38-pytest-6.2.2.pyc", "__pycache__/test_versions_utils.cpython-38-pytest-6.2.4.pyc", "deepspeed/ds_config_zero2.json", "deepspeed/ds_config_zero3.json", "deepspeed/test_deepspeed.py", "deepspeed/__pycache__/test_deepspeed.cpython-38-pytest-6.2.2.pyc", "deepspeed/__pycache__/test_deepspeed.cpython-38-pytest-6.2.4.pyc", "extended/test_trainer_ext.py", "extended/__pycache__/test_trainer_ext.cpython-38-pytest-6.2.2.pyc", "extended/__pycache__/test_trainer_ext.cpython-38-pytest-6.2.4.pyc", "extended/runs/Apr13_11-47-31_Beaver/events.out.tfevents.1618328854.Beaver.520043.0", "extended/runs/Apr13_11-47-31_Beaver/events.out.tfevents.1618328857.Beaver.520043.2", "extended/runs/Apr13_11-47-31_Beaver/1618328854.4132347/events.out.tfevents.1618328854.Beaver.520043.1", "fixtures/dummy-config.json", "fixtures/dummy_feature_extractor_config.json", "fixtures/empty.txt", "fixtures/input.txt", "fixtures/preprocessor_config.json", "fixtures/sample_text.txt", "fixtures/sample_text_no_unicode.txt", "fixtures/spiece.model", "fixtures/test_sentencepiece.model", "fixtures/test_sentencepiece_bpe.model", "fixtures/test_sentencepiece_no_bos.model", "fixtures/tests_samples/.gitignore", "fixtures/tests_samples/COCO/000000039769.png", "fixtures/tests_samples/COCO/coco_annotations.txt", "fixtures/tests_samples/COCO/coco_panoptic_annotations.txt", "fixtures/tests_samples/COCO/coco_panoptic/000000039769.png", "fixtures/tests_samples/GermEval/dev.txt", "fixtures/tests_samples/GermEval/labels.txt", "fixtures/tests_samples/GermEval/train.txt", "fixtures/tests_samples/MRPC/dev.csv", "fixtures/tests_samples/MRPC/dev.tsv", "fixtures/tests_samples/MRPC/train.csv", "fixtures/tests_samples/MRPC/train.tsv", "fixtures/tests_samples/SQUAD/sample.json", "fixtures/tests_samples/STS-B/dev.tsv", "fixtures/tests_samples/STS-B/train.tsv", "fixtures/tests_samples/conll/sample.json", "fixtures/tests_samples/swag/sample.json", "fixtures/tests_samples/wiki_text/wiki_00", "fixtures/tests_samples/wmt16/sample.json", "fixtures/tests_samples/wmt_en_ro/test.json", "fixtures/tests_samples/wmt_en_ro/train.json", "fixtures/tests_samples/wmt_en_ro/val.json", "fixtures/tests_samples/xsum/sample.json", "runs/Feb15_12-40-57_Beaver/events.out.tfevents.1613410865.Beaver.416948.0", "runs/Feb15_12-40-57_Beaver/1613410865.1810203/events.out.tfevents.1613410865.Beaver.416948.1", 
"runs/Feb15_12-41-05_Beaver/events.out.tfevents.1613410865.Beaver.416948.2", "runs/Feb15_12-41-05_Beaver/events.out.tfevents.1613410865.Beaver.416948.4", "runs/Feb15_12-41-05_Beaver/events.out.tfevents.1613410865.Beaver.416948.6", "runs/Feb15_12-41-05_Beaver/1613410865.2578263/events.out.tfevents.1613410865.Beaver.416948.3", "runs/Feb15_12-41-05_Beaver/1613410865.3054066/events.out.tfevents.1613410865.Beaver.416948.5", "runs/Feb15_12-41-05_Beaver/1613410865.3519046/events.out.tfevents.1613410865.Beaver.416948.7", "runs/Feb15_12-43-30_Beaver/events.out.tfevents.1613411012.Beaver.418028.0", "runs/Feb15_12-43-30_Beaver/1613411012.460237/events.out.tfevents.1613411012.Beaver.418028.1", "runs/Feb15_12-43-32_Beaver/events.out.tfevents.1613411012.Beaver.418028.2", "runs/Feb15_12-43-32_Beaver/events.out.tfevents.1613411012.Beaver.418028.4", "runs/Feb15_12-43-32_Beaver/events.out.tfevents.1613411012.Beaver.418028.6", "runs/Feb15_12-43-32_Beaver/1613411012.5048187/events.out.tfevents.1613411012.Beaver.418028.3", "runs/Feb15_12-43-32_Beaver/1613411012.5604634/events.out.tfevents.1613411012.Beaver.418028.5", "runs/Feb15_12-43-32_Beaver/1613411012.6075606/events.out.tfevents.1613411012.Beaver.418028.7", "runs/Feb15_12-43-55_Beaver/events.out.tfevents.1613411037.Beaver.418259.0", "runs/Feb15_12-43-55_Beaver/1613411037.134224/events.out.tfevents.1613411037.Beaver.418259.1", "runs/Feb15_12-43-57_Beaver/events.out.tfevents.1613411037.Beaver.418259.2", "runs/Feb15_12-43-57_Beaver/events.out.tfevents.1613411037.Beaver.418259.4", "runs/Feb15_12-43-57_Beaver/events.out.tfevents.1613411037.Beaver.418259.6", "runs/Feb15_12-43-57_Beaver/events.out.tfevents.1613411037.Beaver.418259.8", "runs/Feb15_12-43-57_Beaver/1613411037.1697214/events.out.tfevents.1613411037.Beaver.418259.3", "runs/Feb15_12-43-57_Beaver/1613411037.2089725/events.out.tfevents.1613411037.Beaver.418259.5", "runs/Feb15_12-43-57_Beaver/1613411037.2617972/events.out.tfevents.1613411037.Beaver.418259.7", "runs/Feb15_12-43-57_Beaver/1613411037.474745/events.out.tfevents.1613411037.Beaver.418259.9", "runs/May25_09-58-37_Beaver/events.out.tfevents.1621929520.Beaver.140926.0", "runs/May25_09-58-37_Beaver/1621929520.2667465/events.out.tfevents.1621929520.Beaver.140926.1", "runs/May25_09-58-40_Beaver/events.out.tfevents.1621929520.Beaver.140926.10", "runs/May25_09-58-40_Beaver/events.out.tfevents.1621929520.Beaver.140926.12", "runs/May25_09-58-40_Beaver/events.out.tfevents.1621929520.Beaver.140926.14", "runs/May25_09-58-40_Beaver/events.out.tfevents.1621929520.Beaver.140926.16", "runs/May25_09-58-40_Beaver/events.out.tfevents.1621929520.Beaver.140926.2", "runs/May25_09-58-40_Beaver/events.out.tfevents.1621929520.Beaver.140926.4", "runs/May25_09-58-40_Beaver/events.out.tfevents.1621929520.Beaver.140926.6", "runs/May25_09-58-40_Beaver/events.out.tfevents.1621929520.Beaver.140926.8", "runs/May25_09-58-40_Beaver/1621929520.30502/events.out.tfevents.1621929520.Beaver.140926.3", "runs/May25_09-58-40_Beaver/1621929520.3385923/events.out.tfevents.1621929520.Beaver.140926.5", "runs/May25_09-58-40_Beaver/1621929520.4424307/events.out.tfevents.1621929520.Beaver.140926.7", "runs/May25_09-58-40_Beaver/1621929520.5577512/events.out.tfevents.1621929520.Beaver.140926.9", "runs/May25_09-58-40_Beaver/1621929520.6550326/events.out.tfevents.1621929520.Beaver.140926.11", "runs/May25_09-58-40_Beaver/1621929520.7905831/events.out.tfevents.1621929520.Beaver.140926.13", "runs/May25_09-58-40_Beaver/1621929520.8967938/events.out.tfevents.1621929520.Beaver.140926.15", 
"runs/May25_09-58-40_Beaver/1621929520.972776/events.out.tfevents.1621929520.Beaver.140926.17", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.18", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.20", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.22", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.24", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.26", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.28", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.30", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.31", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.32", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.34", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.36", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.38", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.40", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.42", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.44", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.45", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.46", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.48", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.50", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.51", "runs/May25_09-58-41_Beaver/events.out.tfevents.1621929521.Beaver.140926.53", "runs/May25_09-58-41_Beaver/1621929521.0160458/events.out.tfevents.1621929521.Beaver.140926.19", "runs/May25_09-58-41_Beaver/1621929521.0619283/events.out.tfevents.1621929521.Beaver.140926.21", "runs/May25_09-58-41_Beaver/1621929521.1005495/events.out.tfevents.1621929521.Beaver.140926.23", "runs/May25_09-58-41_Beaver/1621929521.1390715/events.out.tfevents.1621929521.Beaver.140926.25", "runs/May25_09-58-41_Beaver/1621929521.1741974/events.out.tfevents.1621929521.Beaver.140926.27", "runs/May25_09-58-41_Beaver/1621929521.211107/events.out.tfevents.1621929521.Beaver.140926.29", "runs/May25_09-58-41_Beaver/1621929521.3070912/events.out.tfevents.1621929521.Beaver.140926.33", "runs/May25_09-58-41_Beaver/1621929521.3477812/events.out.tfevents.1621929521.Beaver.140926.35", "runs/May25_09-58-41_Beaver/1621929521.3893588/events.out.tfevents.1621929521.Beaver.140926.37", "runs/May25_09-58-41_Beaver/1621929521.4362571/events.out.tfevents.1621929521.Beaver.140926.39", "runs/May25_09-58-41_Beaver/1621929521.4462042/events.out.tfevents.1621929521.Beaver.140926.41", "runs/May25_09-58-41_Beaver/1621929521.4867842/events.out.tfevents.1621929521.Beaver.140926.43", "runs/May25_09-58-41_Beaver/1621929521.55299/events.out.tfevents.1621929521.Beaver.140926.47", "runs/May25_09-58-41_Beaver/1621929521.5932963/events.out.tfevents.1621929521.Beaver.140926.49", "runs/May25_09-58-41_Beaver/1621929521.6587298/events.out.tfevents.1621929521.Beaver.140926.52", "runs/May25_09-58-41_Beaver/1621929521.6989424/events.out.tfevents.1621929521.Beaver.140926.54", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.55", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.57", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.59", 
"runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.61", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.63", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.65", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.66", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.68", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.70", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.72", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.74", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.76", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.78", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.79", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.81", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.82", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.84", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929522.Beaver.140926.85", "runs/May25_09-58-42_Beaver/events.out.tfevents.1621929523.Beaver.140926.87", "runs/May25_09-58-42_Beaver/1621929522.0873444/events.out.tfevents.1621929522.Beaver.140926.56", "runs/May25_09-58-42_Beaver/1621929522.125773/events.out.tfevents.1621929522.Beaver.140926.58", "runs/May25_09-58-42_Beaver/1621929522.1686683/events.out.tfevents.1621929522.Beaver.140926.60", "runs/May25_09-58-42_Beaver/1621929522.2041962/events.out.tfevents.1621929522.Beaver.140926.62", "runs/May25_09-58-42_Beaver/1621929522.246403/events.out.tfevents.1621929522.Beaver.140926.64", "runs/May25_09-58-42_Beaver/1621929522.3852613/events.out.tfevents.1621929522.Beaver.140926.67", "runs/May25_09-58-42_Beaver/1621929522.42572/events.out.tfevents.1621929522.Beaver.140926.69", "runs/May25_09-58-42_Beaver/1621929522.4651358/events.out.tfevents.1621929522.Beaver.140926.71", "runs/May25_09-58-42_Beaver/1621929522.5098524/events.out.tfevents.1621929522.Beaver.140926.73", "runs/May25_09-58-42_Beaver/1621929522.5454214/events.out.tfevents.1621929522.Beaver.140926.75", "runs/May25_09-58-42_Beaver/1621929522.5821316/events.out.tfevents.1621929522.Beaver.140926.77", "runs/May25_09-58-42_Beaver/1621929522.6771054/events.out.tfevents.1621929522.Beaver.140926.80", "runs/May25_09-58-42_Beaver/1621929522.8371618/events.out.tfevents.1621929522.Beaver.140926.83", "runs/May25_09-58-42_Beaver/1621929522.9744604/events.out.tfevents.1621929522.Beaver.140926.86", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.100", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.102", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.104", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.106", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.108", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.110", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.112", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.114", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.116", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.118", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.120", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.122", 
"runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.124", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.126", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.128", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.130", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.132", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.88", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.90", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.92", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.94", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.96", "runs/May25_09-58-43_Beaver/events.out.tfevents.1621929523.Beaver.140926.98", "runs/May25_09-58-43_Beaver/1621929523.103434/events.out.tfevents.1621929523.Beaver.140926.89", "runs/May25_09-58-43_Beaver/1621929523.145813/events.out.tfevents.1621929523.Beaver.140926.91", "runs/May25_09-58-43_Beaver/1621929523.1874003/events.out.tfevents.1621929523.Beaver.140926.93", "runs/May25_09-58-43_Beaver/1621929523.2311344/events.out.tfevents.1621929523.Beaver.140926.95", "runs/May25_09-58-43_Beaver/1621929523.2765958/events.out.tfevents.1621929523.Beaver.140926.97", "runs/May25_09-58-43_Beaver/1621929523.3203046/events.out.tfevents.1621929523.Beaver.140926.99", "runs/May25_09-58-43_Beaver/1621929523.3610222/events.out.tfevents.1621929523.Beaver.140926.101", "runs/May25_09-58-43_Beaver/1621929523.3983119/events.out.tfevents.1621929523.Beaver.140926.103", "runs/May25_09-58-43_Beaver/1621929523.4384258/events.out.tfevents.1621929523.Beaver.140926.105", "runs/May25_09-58-43_Beaver/1621929523.4915836/events.out.tfevents.1621929523.Beaver.140926.107", "runs/May25_09-58-43_Beaver/1621929523.5408142/events.out.tfevents.1621929523.Beaver.140926.109", "runs/May25_09-58-43_Beaver/1621929523.5900943/events.out.tfevents.1621929523.Beaver.140926.111", "runs/May25_09-58-43_Beaver/1621929523.6283345/events.out.tfevents.1621929523.Beaver.140926.113", "runs/May25_09-58-43_Beaver/1621929523.6451433/events.out.tfevents.1621929523.Beaver.140926.115", "runs/May25_09-58-43_Beaver/1621929523.6670148/events.out.tfevents.1621929523.Beaver.140926.117", "runs/May25_09-58-43_Beaver/1621929523.7080624/events.out.tfevents.1621929523.Beaver.140926.119", "runs/May25_09-58-43_Beaver/1621929523.7512586/events.out.tfevents.1621929523.Beaver.140926.121", "runs/May25_09-58-43_Beaver/1621929523.7888353/events.out.tfevents.1621929523.Beaver.140926.123", "runs/May25_09-58-43_Beaver/1621929523.8130193/events.out.tfevents.1621929523.Beaver.140926.125", "runs/May25_09-58-43_Beaver/1621929523.8340597/events.out.tfevents.1621929523.Beaver.140926.127", "runs/May25_09-58-43_Beaver/1621929523.8695621/events.out.tfevents.1621929523.Beaver.140926.129", "runs/May25_09-58-43_Beaver/1621929523.944521/events.out.tfevents.1621929523.Beaver.140926.131", "runs/May25_09-58-43_Beaver/1621929523.987287/events.out.tfevents.1621929523.Beaver.140926.133", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.134", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.136", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.138", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.140", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.142", 
"runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.144", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.146", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.148", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.150", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.152", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.154", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.156", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.158", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.160", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929524.Beaver.140926.162", "runs/May25_09-58-44_Beaver/events.out.tfevents.1621929525.Beaver.140926.164", "runs/May25_09-58-44_Beaver/1621929524.0455928/events.out.tfevents.1621929524.Beaver.140926.135", "runs/May25_09-58-44_Beaver/1621929524.0864418/events.out.tfevents.1621929524.Beaver.140926.137", "runs/May25_09-58-44_Beaver/1621929524.129653/events.out.tfevents.1621929524.Beaver.140926.139", "runs/May25_09-58-44_Beaver/1621929524.166109/events.out.tfevents.1621929524.Beaver.140926.141", "runs/May25_09-58-44_Beaver/1621929524.2016873/events.out.tfevents.1621929524.Beaver.140926.143", "runs/May25_09-58-44_Beaver/1621929524.2405143/events.out.tfevents.1621929524.Beaver.140926.145", "runs/May25_09-58-44_Beaver/1621929524.2798102/events.out.tfevents.1621929524.Beaver.140926.147", "runs/May25_09-58-44_Beaver/1621929524.448186/events.out.tfevents.1621929524.Beaver.140926.149", "runs/May25_09-58-44_Beaver/1621929524.613718/events.out.tfevents.1621929524.Beaver.140926.151", "runs/May25_09-58-44_Beaver/1621929524.6483233/events.out.tfevents.1621929524.Beaver.140926.153", "runs/May25_09-58-44_Beaver/1621929524.6850502/events.out.tfevents.1621929524.Beaver.140926.155", "runs/May25_09-58-44_Beaver/1621929524.8011994/events.out.tfevents.1621929524.Beaver.140926.157", "runs/May25_09-58-44_Beaver/1621929524.9131796/events.out.tfevents.1621929524.Beaver.140926.159", "runs/May25_09-58-44_Beaver/1621929524.9491317/events.out.tfevents.1621929524.Beaver.140926.161", "runs/May25_09-58-44_Beaver/1621929524.9837644/events.out.tfevents.1621929524.Beaver.140926.163", "runs/May25_09-58-44_Beaver/1621929525.0798411/events.out.tfevents.1621929525.Beaver.140926.165", "runs/May25_09-58-45_Beaver/events.out.tfevents.1621929525.Beaver.140926.166", "runs/May25_09-58-45_Beaver/events.out.tfevents.1621929525.Beaver.140926.168", "runs/May25_09-58-45_Beaver/events.out.tfevents.1621929525.Beaver.140926.170", "runs/May25_09-58-45_Beaver/events.out.tfevents.1621929525.Beaver.140926.172", "runs/May25_09-58-45_Beaver/events.out.tfevents.1621929525.Beaver.140926.174", "runs/May25_09-58-45_Beaver/events.out.tfevents.1621929525.Beaver.140926.176", "runs/May25_09-58-45_Beaver/events.out.tfevents.1621929525.Beaver.140926.178", "runs/May25_09-58-45_Beaver/events.out.tfevents.1621929525.Beaver.140926.180", "runs/May25_09-58-45_Beaver/events.out.tfevents.1621929525.Beaver.140926.182", "runs/May25_09-58-45_Beaver/events.out.tfevents.1621929525.Beaver.140926.184", "runs/May25_09-58-45_Beaver/events.out.tfevents.1621929525.Beaver.140926.186", "runs/May25_09-58-45_Beaver/events.out.tfevents.1621929525.Beaver.140926.188", "runs/May25_09-58-45_Beaver/1621929525.1670873/events.out.tfevents.1621929525.Beaver.140926.167", 
"runs/May25_09-58-45_Beaver/1621929525.2064312/events.out.tfevents.1621929525.Beaver.140926.169", "runs/May25_09-58-45_Beaver/1621929525.2458148/events.out.tfevents.1621929525.Beaver.140926.171", "runs/May25_09-58-45_Beaver/1621929525.290004/events.out.tfevents.1621929525.Beaver.140926.173", "runs/May25_09-58-45_Beaver/1621929525.3364563/events.out.tfevents.1621929525.Beaver.140926.175", "runs/May25_09-58-45_Beaver/1621929525.3724895/events.out.tfevents.1621929525.Beaver.140926.177", "runs/May25_09-58-45_Beaver/1621929525.4163318/events.out.tfevents.1621929525.Beaver.140926.179", "runs/May25_09-58-45_Beaver/1621929525.4486094/events.out.tfevents.1621929525.Beaver.140926.181", "runs/May25_09-58-45_Beaver/1621929525.5042927/events.out.tfevents.1621929525.Beaver.140926.183", "runs/May25_09-58-45_Beaver/1621929525.548137/events.out.tfevents.1621929525.Beaver.140926.185", "runs/May25_09-58-45_Beaver/1621929525.774989/events.out.tfevents.1621929525.Beaver.140926.187", "runs/May25_09-58-45_Beaver/1621929525.8153777/events.out.tfevents.1621929525.Beaver.140926.189", "runs/May25_10-00-23_Beaver/events.out.tfevents.1621929625.Beaver.142246.0", "runs/May25_10-00-23_Beaver/1621929625.9840627/events.out.tfevents.1621929625.Beaver.142246.1", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.10", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.12", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.14", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.16", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.18", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.2", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.20", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.22", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.24", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.26", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.28", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.30", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.31", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.32", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.34", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.36", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.38", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.4", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.40", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.42", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.44", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.45", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.46", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.48", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.50", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.6", "runs/May25_10-00-26_Beaver/events.out.tfevents.1621929626.Beaver.142246.8", "runs/May25_10-00-26_Beaver/1621929626.0194006/events.out.tfevents.1621929626.Beaver.142246.3", "runs/May25_10-00-26_Beaver/1621929626.050524/events.out.tfevents.1621929626.Beaver.142246.5", 
"runs/May25_10-00-26_Beaver/1621929626.1312027/events.out.tfevents.1621929626.Beaver.142246.7", "runs/May25_10-00-26_Beaver/1621929626.2053585/events.out.tfevents.1621929626.Beaver.142246.9", "runs/May25_10-00-26_Beaver/1621929626.2785819/events.out.tfevents.1621929626.Beaver.142246.11", "runs/May25_10-00-26_Beaver/1621929626.3553183/events.out.tfevents.1621929626.Beaver.142246.13", "runs/May25_10-00-26_Beaver/1621929626.4262426/events.out.tfevents.1621929626.Beaver.142246.15", "runs/May25_10-00-26_Beaver/1621929626.4900403/events.out.tfevents.1621929626.Beaver.142246.17", "runs/May25_10-00-26_Beaver/1621929626.5203342/events.out.tfevents.1621929626.Beaver.142246.19", "runs/May25_10-00-26_Beaver/1621929626.556481/events.out.tfevents.1621929626.Beaver.142246.21", "runs/May25_10-00-26_Beaver/1621929626.5871282/events.out.tfevents.1621929626.Beaver.142246.23", "runs/May25_10-00-26_Beaver/1621929626.6166458/events.out.tfevents.1621929626.Beaver.142246.25", "runs/May25_10-00-26_Beaver/1621929626.645574/events.out.tfevents.1621929626.Beaver.142246.27", "runs/May25_10-00-26_Beaver/1621929626.6773708/events.out.tfevents.1621929626.Beaver.142246.29", "runs/May25_10-00-26_Beaver/1621929626.7424588/events.out.tfevents.1621929626.Beaver.142246.33", "runs/May25_10-00-26_Beaver/1621929626.7730799/events.out.tfevents.1621929626.Beaver.142246.35", "runs/May25_10-00-26_Beaver/1621929626.8034246/events.out.tfevents.1621929626.Beaver.142246.37", "runs/May25_10-00-26_Beaver/1621929626.839024/events.out.tfevents.1621929626.Beaver.142246.39", "runs/May25_10-00-26_Beaver/1621929626.8465173/events.out.tfevents.1621929626.Beaver.142246.41", "runs/May25_10-00-26_Beaver/1621929626.878244/events.out.tfevents.1621929626.Beaver.142246.43", "runs/May25_10-00-26_Beaver/1621929626.9280276/events.out.tfevents.1621929626.Beaver.142246.47", "runs/May25_10-00-26_Beaver/1621929626.9589884/events.out.tfevents.1621929626.Beaver.142246.49", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.51", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.53", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.55", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.57", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.59", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.61", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.63", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.65", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.66", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.67", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.69", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.71", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.73", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.75", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929627.Beaver.142246.77", "runs/May25_10-00-27_Beaver/events.out.tfevents.1621929628.Beaver.142246.79", "runs/May25_10-00-27_Beaver/1621929627.0051198/events.out.tfevents.1621929627.Beaver.142246.52", "runs/May25_10-00-27_Beaver/1621929627.0357857/events.out.tfevents.1621929627.Beaver.142246.54", "runs/May25_10-00-27_Beaver/1621929627.343746/events.out.tfevents.1621929627.Beaver.142246.56", 
"runs/May25_10-00-27_Beaver/1621929627.375256/events.out.tfevents.1621929627.Beaver.142246.58", "runs/May25_10-00-27_Beaver/1621929627.4082663/events.out.tfevents.1621929627.Beaver.142246.60", "runs/May25_10-00-27_Beaver/1621929627.4390442/events.out.tfevents.1621929627.Beaver.142246.62", "runs/May25_10-00-27_Beaver/1621929627.4708414/events.out.tfevents.1621929627.Beaver.142246.64", "runs/May25_10-00-27_Beaver/1621929627.800928/events.out.tfevents.1621929627.Beaver.142246.68", "runs/May25_10-00-27_Beaver/1621929627.8332822/events.out.tfevents.1621929627.Beaver.142246.70", "runs/May25_10-00-27_Beaver/1621929627.8644283/events.out.tfevents.1621929627.Beaver.142246.72", "runs/May25_10-00-27_Beaver/1621929627.9068155/events.out.tfevents.1621929627.Beaver.142246.74", "runs/May25_10-00-27_Beaver/1621929627.9389122/events.out.tfevents.1621929627.Beaver.142246.76", "runs/May25_10-00-27_Beaver/1621929627.9700875/events.out.tfevents.1621929627.Beaver.142246.78", "runs/May25_10-00-28_Beaver/events.out.tfevents.1621929628.Beaver.142246.101", "runs/May25_10-00-28_Beaver/events.out.tfevents.1621929628.Beaver.142246.80", "runs/May25_10-00-28_Beaver/events.out.tfevents.1621929628.Beaver.142246.82", "runs/May25_10-00-28_Beaver/events.out.tfevents.1621929628.Beaver.142246.83", "runs/May25_10-00-28_Beaver/events.out.tfevents.1621929628.Beaver.142246.85", "runs/May25_10-00-28_Beaver/events.out.tfevents.1621929628.Beaver.142246.86", "runs/May25_10-00-28_Beaver/events.out.tfevents.1621929628.Beaver.142246.88", "runs/May25_10-00-28_Beaver/events.out.tfevents.1621929628.Beaver.142246.89", "runs/May25_10-00-28_Beaver/events.out.tfevents.1621929628.Beaver.142246.91", "runs/May25_10-00-28_Beaver/events.out.tfevents.1621929628.Beaver.142246.93", "runs/May25_10-00-28_Beaver/events.out.tfevents.1621929628.Beaver.142246.95", "runs/May25_10-00-28_Beaver/events.out.tfevents.1621929628.Beaver.142246.96", "runs/May25_10-00-28_Beaver/events.out.tfevents.1621929628.Beaver.142246.98", "runs/May25_10-00-28_Beaver/events.out.tfevents.1621929628.Beaver.142246.99", "runs/May25_10-00-28_Beaver/1621929628.0610754/events.out.tfevents.1621929628.Beaver.142246.81", "runs/May25_10-00-28_Beaver/1621929628.1517868/events.out.tfevents.1621929628.Beaver.142246.84", "runs/May25_10-00-28_Beaver/1621929628.2332213/events.out.tfevents.1621929628.Beaver.142246.87", "runs/May25_10-00-28_Beaver/1621929628.3112667/events.out.tfevents.1621929628.Beaver.142246.90", "runs/May25_10-00-28_Beaver/1621929628.3437808/events.out.tfevents.1621929628.Beaver.142246.92", "runs/May25_10-00-28_Beaver/1621929628.55281/events.out.tfevents.1621929628.Beaver.142246.94", "runs/May25_10-00-28_Beaver/1621929628.8906076/events.out.tfevents.1621929628.Beaver.142246.97", "runs/May25_10-00-28_Beaver/1621929628.9347723/events.out.tfevents.1621929628.Beaver.142246.100", "runs/May25_10-00-28_Beaver/1621929628.9711795/events.out.tfevents.1621929628.Beaver.142246.102", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.103", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.105", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.107", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.109", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.111", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.113", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.115", 
"runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.117", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.119", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.121", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.123", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.125", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.127", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.129", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.131", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.133", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.135", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.137", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.139", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.141", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.143", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.145", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.147", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.149", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.151", "runs/May25_10-00-29_Beaver/events.out.tfevents.1621929629.Beaver.142246.153", "runs/May25_10-00-29_Beaver/1621929629.0058544/events.out.tfevents.1621929629.Beaver.142246.104", "runs/May25_10-00-29_Beaver/1621929629.0357509/events.out.tfevents.1621929629.Beaver.142246.106", "runs/May25_10-00-29_Beaver/1621929629.0653656/events.out.tfevents.1621929629.Beaver.142246.108", "runs/May25_10-00-29_Beaver/1621929629.0977867/events.out.tfevents.1621929629.Beaver.142246.110", "runs/May25_10-00-29_Beaver/1621929629.1301053/events.out.tfevents.1621929629.Beaver.142246.112", "runs/May25_10-00-29_Beaver/1621929629.1655993/events.out.tfevents.1621929629.Beaver.142246.114", "runs/May25_10-00-29_Beaver/1621929629.1982548/events.out.tfevents.1621929629.Beaver.142246.116", "runs/May25_10-00-29_Beaver/1621929629.2312965/events.out.tfevents.1621929629.Beaver.142246.118", "runs/May25_10-00-29_Beaver/1621929629.2464006/events.out.tfevents.1621929629.Beaver.142246.120", "runs/May25_10-00-29_Beaver/1621929629.2620585/events.out.tfevents.1621929629.Beaver.142246.122", "runs/May25_10-00-29_Beaver/1621929629.2938466/events.out.tfevents.1621929629.Beaver.142246.124", "runs/May25_10-00-29_Beaver/1621929629.3254178/events.out.tfevents.1621929629.Beaver.142246.126", "runs/May25_10-00-29_Beaver/1621929629.3548/events.out.tfevents.1621929629.Beaver.142246.128", "runs/May25_10-00-29_Beaver/1621929629.3733013/events.out.tfevents.1621929629.Beaver.142246.130", "runs/May25_10-00-29_Beaver/1621929629.3905444/events.out.tfevents.1621929629.Beaver.142246.132", "runs/May25_10-00-29_Beaver/1621929629.420209/events.out.tfevents.1621929629.Beaver.142246.134", "runs/May25_10-00-29_Beaver/1621929629.486138/events.out.tfevents.1621929629.Beaver.142246.136", "runs/May25_10-00-29_Beaver/1621929629.5189574/events.out.tfevents.1621929629.Beaver.142246.138", "runs/May25_10-00-29_Beaver/1621929629.5649562/events.out.tfevents.1621929629.Beaver.142246.140", "runs/May25_10-00-29_Beaver/1621929629.5960956/events.out.tfevents.1621929629.Beaver.142246.142", "runs/May25_10-00-29_Beaver/1621929629.6296866/events.out.tfevents.1621929629.Beaver.142246.144", 
"runs/May25_10-00-29_Beaver/1621929629.6615765/events.out.tfevents.1621929629.Beaver.142246.146", "runs/May25_10-00-29_Beaver/1621929629.6964898/events.out.tfevents.1621929629.Beaver.142246.148", "runs/May25_10-00-29_Beaver/1621929629.7286127/events.out.tfevents.1621929629.Beaver.142246.150", "runs/May25_10-00-29_Beaver/1621929629.7610254/events.out.tfevents.1621929629.Beaver.142246.152", "runs/May25_10-00-29_Beaver/1621929629.9056401/events.out.tfevents.1621929629.Beaver.142246.154", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.155", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.157", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.159", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.161", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.163", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.165", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.167", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.169", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.171", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.173", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.175", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.177", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.179", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.181", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.183", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.185", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.187", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929630.Beaver.142246.189", "runs/May25_10-00-30_Beaver/events.out.tfevents.1621929631.Beaver.142246.191", "runs/May25_10-00-30_Beaver/1621929630.0479634/events.out.tfevents.1621929630.Beaver.142246.156", "runs/May25_10-00-30_Beaver/1621929630.0811975/events.out.tfevents.1621929630.Beaver.142246.158", "runs/May25_10-00-30_Beaver/1621929630.1143088/events.out.tfevents.1621929630.Beaver.142246.160", "runs/May25_10-00-30_Beaver/1621929630.2127478/events.out.tfevents.1621929630.Beaver.142246.162", "runs/May25_10-00-30_Beaver/1621929630.3048697/events.out.tfevents.1621929630.Beaver.142246.164", "runs/May25_10-00-30_Beaver/1621929630.3386166/events.out.tfevents.1621929630.Beaver.142246.166", "runs/May25_10-00-30_Beaver/1621929630.3693666/events.out.tfevents.1621929630.Beaver.142246.168", "runs/May25_10-00-30_Beaver/1621929630.450536/events.out.tfevents.1621929630.Beaver.142246.170", "runs/May25_10-00-30_Beaver/1621929630.5090072/events.out.tfevents.1621929630.Beaver.142246.172", "runs/May25_10-00-30_Beaver/1621929630.5389555/events.out.tfevents.1621929630.Beaver.142246.174", "runs/May25_10-00-30_Beaver/1621929630.5731533/events.out.tfevents.1621929630.Beaver.142246.176", "runs/May25_10-00-30_Beaver/1621929630.6146882/events.out.tfevents.1621929630.Beaver.142246.178", "runs/May25_10-00-30_Beaver/1621929630.6563115/events.out.tfevents.1621929630.Beaver.142246.180", "runs/May25_10-00-30_Beaver/1621929630.6881378/events.out.tfevents.1621929630.Beaver.142246.182", "runs/May25_10-00-30_Beaver/1621929630.7252176/events.out.tfevents.1621929630.Beaver.142246.184", 
"runs/May25_10-00-30_Beaver/1621929630.7566783/events.out.tfevents.1621929630.Beaver.142246.186", "runs/May25_10-00-30_Beaver/1621929630.7898812/events.out.tfevents.1621929630.Beaver.142246.188", "runs/May25_10-00-30_Beaver/1621929630.8268893/events.out.tfevents.1621929630.Beaver.142246.190", "runs/May25_10-00-30_Beaver/1621929631.00648/events.out.tfevents.1621929631.Beaver.142246.192", "runs/May25_10-00-31_Beaver/events.out.tfevents.1621929631.Beaver.142246.193", "runs/May25_10-00-31_Beaver/events.out.tfevents.1621929631.Beaver.142246.195", "runs/May25_10-00-31_Beaver/events.out.tfevents.1621929631.Beaver.142246.197", "runs/May25_10-00-31_Beaver/events.out.tfevents.1621929631.Beaver.142246.199", "runs/May25_10-00-31_Beaver/events.out.tfevents.1621929631.Beaver.142246.200", "runs/May25_10-00-31_Beaver/events.out.tfevents.1621929631.Beaver.142246.202", "runs/May25_10-00-31_Beaver/events.out.tfevents.1621929631.Beaver.142246.204", "runs/May25_10-00-31_Beaver/events.out.tfevents.1621929631.Beaver.142246.206", "runs/May25_10-00-31_Beaver/events.out.tfevents.1621929631.Beaver.142246.208", "runs/May25_10-00-31_Beaver/events.out.tfevents.1621929631.Beaver.142246.210", "runs/May25_10-00-31_Beaver/1621929631.04029/events.out.tfevents.1621929631.Beaver.142246.194", "runs/May25_10-00-31_Beaver/1621929631.0724509/events.out.tfevents.1621929631.Beaver.142246.196", "runs/May25_10-00-31_Beaver/1621929631.104547/events.out.tfevents.1621929631.Beaver.142246.198", "runs/May25_10-00-31_Beaver/1621929631.146896/events.out.tfevents.1621929631.Beaver.142246.201", "runs/May25_10-00-31_Beaver/1621929631.1790407/events.out.tfevents.1621929631.Beaver.142246.203", "runs/May25_10-00-31_Beaver/1621929631.2113008/events.out.tfevents.1621929631.Beaver.142246.205", "runs/May25_10-00-31_Beaver/1621929631.2421656/events.out.tfevents.1621929631.Beaver.142246.207", "runs/May25_10-00-31_Beaver/1621929631.2736862/events.out.tfevents.1621929631.Beaver.142246.209", "runs/May25_10-00-31_Beaver/1621929631.3026946/events.out.tfevents.1621929631.Beaver.142246.211", "runs/May25_10-00-51_Beaver/events.out.tfevents.1621929653.Beaver.142785.0", "runs/May25_10-00-51_Beaver/1621929653.6101258/events.out.tfevents.1621929653.Beaver.142785.1", "runs/May25_10-00-53_Beaver/events.out.tfevents.1621929653.Beaver.142785.10", "runs/May25_10-00-53_Beaver/events.out.tfevents.1621929653.Beaver.142785.12", "runs/May25_10-00-53_Beaver/events.out.tfevents.1621929653.Beaver.142785.2", "runs/May25_10-00-53_Beaver/events.out.tfevents.1621929653.Beaver.142785.4", "runs/May25_10-00-53_Beaver/events.out.tfevents.1621929653.Beaver.142785.6", "runs/May25_10-00-53_Beaver/events.out.tfevents.1621929653.Beaver.142785.8", "runs/May25_10-00-53_Beaver/1621929653.6463249/events.out.tfevents.1621929653.Beaver.142785.3", "runs/May25_10-00-53_Beaver/1621929653.680276/events.out.tfevents.1621929653.Beaver.142785.5", "runs/May25_10-00-53_Beaver/1621929653.7682738/events.out.tfevents.1621929653.Beaver.142785.7", "runs/May25_10-00-53_Beaver/1621929653.8453882/events.out.tfevents.1621929653.Beaver.142785.9", "runs/May25_10-00-53_Beaver/1621929653.9061093/events.out.tfevents.1621929653.Beaver.142785.11", "runs/May25_10-00-53_Beaver/1621929653.9888256/events.out.tfevents.1621929653.Beaver.142785.13", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.14", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.16", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.18", 
"runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.20", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.22", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.24", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.26", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.28", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.30", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.31", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.32", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.34", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.36", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.38", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.40", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.42", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.44", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.45", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.46", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.48", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.50", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.51", "runs/May25_10-00-54_Beaver/events.out.tfevents.1621929654.Beaver.142785.53", "runs/May25_10-00-54_Beaver/1621929654.0658596/events.out.tfevents.1621929654.Beaver.142785.15", "runs/May25_10-00-54_Beaver/1621929654.1417844/events.out.tfevents.1621929654.Beaver.142785.17", "runs/May25_10-00-54_Beaver/1621929654.176249/events.out.tfevents.1621929654.Beaver.142785.19", "runs/May25_10-00-54_Beaver/1621929654.2148995/events.out.tfevents.1621929654.Beaver.142785.21", "runs/May25_10-00-54_Beaver/1621929654.2477686/events.out.tfevents.1621929654.Beaver.142785.23", "runs/May25_10-00-54_Beaver/1621929654.2896044/events.out.tfevents.1621929654.Beaver.142785.25", "runs/May25_10-00-54_Beaver/1621929654.3196106/events.out.tfevents.1621929654.Beaver.142785.27", "runs/May25_10-00-54_Beaver/1621929654.3518898/events.out.tfevents.1621929654.Beaver.142785.29", "runs/May25_10-00-54_Beaver/1621929654.4202127/events.out.tfevents.1621929654.Beaver.142785.33", "runs/May25_10-00-54_Beaver/1621929654.450152/events.out.tfevents.1621929654.Beaver.142785.35", "runs/May25_10-00-54_Beaver/1621929654.4812932/events.out.tfevents.1621929654.Beaver.142785.37", "runs/May25_10-00-54_Beaver/1621929654.5179353/events.out.tfevents.1621929654.Beaver.142785.39", "runs/May25_10-00-54_Beaver/1621929654.5247114/events.out.tfevents.1621929654.Beaver.142785.41", "runs/May25_10-00-54_Beaver/1621929654.5589337/events.out.tfevents.1621929654.Beaver.142785.43", "runs/May25_10-00-54_Beaver/1621929654.6079504/events.out.tfevents.1621929654.Beaver.142785.47", "runs/May25_10-00-54_Beaver/1621929654.6400998/events.out.tfevents.1621929654.Beaver.142785.49", "runs/May25_10-00-54_Beaver/1621929654.6879833/events.out.tfevents.1621929654.Beaver.142785.52", "runs/May25_10-00-54_Beaver/1621929654.721376/events.out.tfevents.1621929654.Beaver.142785.54", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.55", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.57", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.59", 
"runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.61", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.63", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.65", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.66", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.67", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.69", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.71", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.73", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.75", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.77", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.79", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929655.Beaver.142785.80", "runs/May25_10-00-55_Beaver/events.out.tfevents.1621929656.Beaver.142785.82", "runs/May25_10-00-55_Beaver/1621929655.0138834/events.out.tfevents.1621929655.Beaver.142785.56", "runs/May25_10-00-55_Beaver/1621929655.0459123/events.out.tfevents.1621929655.Beaver.142785.58", "runs/May25_10-00-55_Beaver/1621929655.0781739/events.out.tfevents.1621929655.Beaver.142785.60", "runs/May25_10-00-55_Beaver/1621929655.1095493/events.out.tfevents.1621929655.Beaver.142785.62", "runs/May25_10-00-55_Beaver/1621929655.1402154/events.out.tfevents.1621929655.Beaver.142785.64", "runs/May25_10-00-55_Beaver/1621929655.7117682/events.out.tfevents.1621929655.Beaver.142785.68", "runs/May25_10-00-55_Beaver/1621929655.7425992/events.out.tfevents.1621929655.Beaver.142785.70", "runs/May25_10-00-55_Beaver/1621929655.776609/events.out.tfevents.1621929655.Beaver.142785.72", "runs/May25_10-00-55_Beaver/1621929655.8211157/events.out.tfevents.1621929655.Beaver.142785.74", "runs/May25_10-00-55_Beaver/1621929655.8534324/events.out.tfevents.1621929655.Beaver.142785.76", "runs/May25_10-00-55_Beaver/1621929655.886793/events.out.tfevents.1621929655.Beaver.142785.78", "runs/May25_10-00-55_Beaver/1621929655.9620328/events.out.tfevents.1621929655.Beaver.142785.81", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.101", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.103", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.105", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.107", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.109", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.111", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.83", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.85", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.86", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.88", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.89", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.91", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.93", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.95", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.96", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.98", "runs/May25_10-00-56_Beaver/events.out.tfevents.1621929656.Beaver.142785.99", 
"runs/May25_10-00-56_Beaver/1621929656.0435553/events.out.tfevents.1621929656.Beaver.142785.84", "runs/May25_10-00-56_Beaver/1621929656.12348/events.out.tfevents.1621929656.Beaver.142785.87", "runs/May25_10-00-56_Beaver/1621929656.1999166/events.out.tfevents.1621929656.Beaver.142785.90", "runs/May25_10-00-56_Beaver/1621929656.2327192/events.out.tfevents.1621929656.Beaver.142785.92", "runs/May25_10-00-56_Beaver/1621929656.4396164/events.out.tfevents.1621929656.Beaver.142785.94", "runs/May25_10-00-56_Beaver/1621929656.781376/events.out.tfevents.1621929656.Beaver.142785.97", "runs/May25_10-00-56_Beaver/1621929656.8229332/events.out.tfevents.1621929656.Beaver.142785.100", "runs/May25_10-00-56_Beaver/1621929656.8529081/events.out.tfevents.1621929656.Beaver.142785.102", "runs/May25_10-00-56_Beaver/1621929656.8827977/events.out.tfevents.1621929656.Beaver.142785.104", "runs/May25_10-00-56_Beaver/1621929656.9086256/events.out.tfevents.1621929656.Beaver.142785.106", "runs/May25_10-00-56_Beaver/1621929656.9346185/events.out.tfevents.1621929656.Beaver.142785.108", "runs/May25_10-00-56_Beaver/1621929656.963316/events.out.tfevents.1621929656.Beaver.142785.110", "runs/May25_10-00-56_Beaver/1621929656.9926782/events.out.tfevents.1621929656.Beaver.142785.112", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.113", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.115", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.117", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.119", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.121", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.123", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.125", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.127", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.129", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.131", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.133", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.135", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.137", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.139", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.141", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.143", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.145", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.147", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.149", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.151", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.153", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.155", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.157", "runs/May25_10-00-57_Beaver/events.out.tfevents.1621929657.Beaver.142785.159", "runs/May25_10-00-57_Beaver/1621929657.0256495/events.out.tfevents.1621929657.Beaver.142785.114", "runs/May25_10-00-57_Beaver/1621929657.0594552/events.out.tfevents.1621929657.Beaver.142785.116", "runs/May25_10-00-57_Beaver/1621929657.0896955/events.out.tfevents.1621929657.Beaver.142785.118", "runs/May25_10-00-57_Beaver/1621929657.1046095/events.out.tfevents.1621929657.Beaver.142785.120", 
"runs/May25_10-00-57_Beaver/1621929657.1207852/events.out.tfevents.1621929657.Beaver.142785.122", "runs/May25_10-00-57_Beaver/1621929657.152045/events.out.tfevents.1621929657.Beaver.142785.124", "runs/May25_10-00-57_Beaver/1621929657.1835978/events.out.tfevents.1621929657.Beaver.142785.126", "runs/May25_10-00-57_Beaver/1621929657.2131639/events.out.tfevents.1621929657.Beaver.142785.128", "runs/May25_10-00-57_Beaver/1621929657.2309308/events.out.tfevents.1621929657.Beaver.142785.130", "runs/May25_10-00-57_Beaver/1621929657.2479937/events.out.tfevents.1621929657.Beaver.142785.132", "runs/May25_10-00-57_Beaver/1621929657.2793288/events.out.tfevents.1621929657.Beaver.142785.134", "runs/May25_10-00-57_Beaver/1621929657.3396373/events.out.tfevents.1621929657.Beaver.142785.136", "runs/May25_10-00-57_Beaver/1621929657.3714027/events.out.tfevents.1621929657.Beaver.142785.138", "runs/May25_10-00-57_Beaver/1621929657.4159045/events.out.tfevents.1621929657.Beaver.142785.140", "runs/May25_10-00-57_Beaver/1621929657.449367/events.out.tfevents.1621929657.Beaver.142785.142", "runs/May25_10-00-57_Beaver/1621929657.4801264/events.out.tfevents.1621929657.Beaver.142785.144", "runs/May25_10-00-57_Beaver/1621929657.5108728/events.out.tfevents.1621929657.Beaver.142785.146", "runs/May25_10-00-57_Beaver/1621929657.5423715/events.out.tfevents.1621929657.Beaver.142785.148", "runs/May25_10-00-57_Beaver/1621929657.5736563/events.out.tfevents.1621929657.Beaver.142785.150", "runs/May25_10-00-57_Beaver/1621929657.6037364/events.out.tfevents.1621929657.Beaver.142785.152", "runs/May25_10-00-57_Beaver/1621929657.7471054/events.out.tfevents.1621929657.Beaver.142785.154", "runs/May25_10-00-57_Beaver/1621929657.8854764/events.out.tfevents.1621929657.Beaver.142785.156", "runs/May25_10-00-57_Beaver/1621929657.918847/events.out.tfevents.1621929657.Beaver.142785.158", "runs/May25_10-00-57_Beaver/1621929657.9529428/events.out.tfevents.1621929657.Beaver.142785.160", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.161", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.163", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.165", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.167", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.169", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.171", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.173", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.175", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.177", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.179", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.181", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.183", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.185", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.187", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.189", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.191", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.193", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.195", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.197", "runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.199", 
"runs/May25_10-00-58_Beaver/events.out.tfevents.1621929658.Beaver.142785.200", "runs/May25_10-00-58_Beaver/1621929658.0503018/events.out.tfevents.1621929658.Beaver.142785.162", "runs/May25_10-00-58_Beaver/1621929658.145187/events.out.tfevents.1621929658.Beaver.142785.164", "runs/May25_10-00-58_Beaver/1621929658.1766331/events.out.tfevents.1621929658.Beaver.142785.166", "runs/May25_10-00-58_Beaver/1621929658.2087781/events.out.tfevents.1621929658.Beaver.142785.168", "runs/May25_10-00-58_Beaver/1621929658.286852/events.out.tfevents.1621929658.Beaver.142785.170", "runs/May25_10-00-58_Beaver/1621929658.3468313/events.out.tfevents.1621929658.Beaver.142785.172", "runs/May25_10-00-58_Beaver/1621929658.3797822/events.out.tfevents.1621929658.Beaver.142785.174", "runs/May25_10-00-58_Beaver/1621929658.4120576/events.out.tfevents.1621929658.Beaver.142785.176", "runs/May25_10-00-58_Beaver/1621929658.4520016/events.out.tfevents.1621929658.Beaver.142785.178", "runs/May25_10-00-58_Beaver/1621929658.4944272/events.out.tfevents.1621929658.Beaver.142785.180", "runs/May25_10-00-58_Beaver/1621929658.524788/events.out.tfevents.1621929658.Beaver.142785.182", "runs/May25_10-00-58_Beaver/1621929658.561759/events.out.tfevents.1621929658.Beaver.142785.184", "runs/May25_10-00-58_Beaver/1621929658.6087508/events.out.tfevents.1621929658.Beaver.142785.186", "runs/May25_10-00-58_Beaver/1621929658.6411138/events.out.tfevents.1621929658.Beaver.142785.188", "runs/May25_10-00-58_Beaver/1621929658.6782937/events.out.tfevents.1621929658.Beaver.142785.190", "runs/May25_10-00-58_Beaver/1621929658.85667/events.out.tfevents.1621929658.Beaver.142785.192", "runs/May25_10-00-58_Beaver/1621929658.890611/events.out.tfevents.1621929658.Beaver.142785.194", "runs/May25_10-00-58_Beaver/1621929658.92037/events.out.tfevents.1621929658.Beaver.142785.196", "runs/May25_10-00-58_Beaver/1621929658.9512928/events.out.tfevents.1621929658.Beaver.142785.198", "runs/May25_10-00-58_Beaver/1621929658.993777/events.out.tfevents.1621929658.Beaver.142785.201", "runs/May25_10-00-59_Beaver/events.out.tfevents.1621929659.Beaver.142785.202", "runs/May25_10-00-59_Beaver/events.out.tfevents.1621929659.Beaver.142785.204", "runs/May25_10-00-59_Beaver/events.out.tfevents.1621929659.Beaver.142785.206", "runs/May25_10-00-59_Beaver/events.out.tfevents.1621929659.Beaver.142785.208", "runs/May25_10-00-59_Beaver/events.out.tfevents.1621929659.Beaver.142785.210", "runs/May25_10-00-59_Beaver/1621929659.0242562/events.out.tfevents.1621929659.Beaver.142785.203", "runs/May25_10-00-59_Beaver/1621929659.0552833/events.out.tfevents.1621929659.Beaver.142785.205", "runs/May25_10-00-59_Beaver/1621929659.0882845/events.out.tfevents.1621929659.Beaver.142785.207", "runs/May25_10-00-59_Beaver/1621929659.1174583/events.out.tfevents.1621929659.Beaver.142785.209", "runs/May25_10-00-59_Beaver/1621929659.1484091/events.out.tfevents.1621929659.Beaver.142785.211", "sagemaker/README.md", "sagemaker/__init__.py", "sagemaker/conftest.py", "sagemaker/test_multi_node_data_parallel.py", "sagemaker/test_multi_node_model_parallel.py", "sagemaker/test_single_node_gpu.py", "sagemaker/__pycache__/__init__.cpython-38.pyc", "sagemaker/__pycache__/conftest.cpython-38-pytest-6.2.2.pyc", "sagemaker/__pycache__/conftest.cpython-38-pytest-6.2.4.pyc", "sagemaker/__pycache__/test_multi_node_data_parallel.cpython-38-pytest-6.2.2.pyc", "sagemaker/__pycache__/test_multi_node_data_parallel.cpython-38-pytest-6.2.4.pyc", "sagemaker/__pycache__/test_multi_node_model_parallel.cpython-38-pytest-6.2.2.pyc", 
"sagemaker/__pycache__/test_multi_node_model_parallel.cpython-38-pytest-6.2.4.pyc", "sagemaker/__pycache__/test_single_node_gpu.cpython-38-pytest-6.2.2.pyc", "sagemaker/__pycache__/test_single_node_gpu.cpython-38-pytest-6.2.4.pyc", "sagemaker/scripts/pytorch/requirements.txt", "sagemaker/scripts/pytorch/run_ddp.py", "sagemaker/scripts/pytorch/run_glue_model_parallelism.py", "sagemaker/scripts/tensorflow/requirements.txt", "sagemaker/scripts/tensorflow/run_tf.py", "sagemaker/scripts/tensorflow/run_tf_dist.py" ]
lysandre
0
transformers
lysandre/tiny-bert-random
2020-12-14T19:28:41.000Z
[ "pytorch", "tf", "jax", "bert", "pretraining", "transformers" ]
[ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
lysandre
4,433
transformers
lysandre/tiny-distil
2021-06-17T07:47:21.000Z
[ "pytorch", "distilbert", "transformers" ]
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
lysandre
0
transformers
okkk
lysandre/tiny-electra-random
2020-10-06T21:06:11.000Z
[ "tf", "electra", "transformers" ]
[ ".gitattributes", "config.json", "tf_model.h5" ]
lysandre
73
transformers
lysandre/tiny-longformer-random
2020-10-06T21:12:54.000Z
[ "tf", "longformer", "transformers" ]
[ ".gitattributes", "config.json", "tf_model.h5" ]
lysandre
72
transformers
lysandre/tiny-roberta-random
2021-05-20T17:40:39.000Z
[ "tf", "roberta", "transformers" ]
[ ".gitattributes", "config.json", "tf_model.h5" ]
lysandre
11
transformers
lysandre/tiny-tapas-random-sqa
2020-12-14T23:23:58.000Z
[ "pytorch", "tapas", "table-question-answering", "transformers" ]
table-question-answering
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
lysandre
23,708
transformers
lysandre/tiny-tapas-random-wtq
2020-12-15T04:19:58.000Z
[ "pytorch", "tapas", "table-question-answering", "transformers" ]
table-question-answering
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
lysandre
28,538
transformers
lysandre/tiny-vit-random
2021-05-05T14:04:37.000Z
[ "pytorch", "vit", "transformers" ]
[ ".gitattributes", "config.json", "preprocessor_config.json", "pytorch_model.bin" ]
lysandre
13,512
transformers
lysandre/xlnet-base-cased
2021-04-07T16:10:54.000Z
[]
[ ".gitattributes", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "tokenizer_v4-5-0.json" ]
lysandre
0
lysoladmin/test-model
2021-02-18T13:33:14.000Z
[]
[ ".gitattributes" ]
lysoladmin
0
lysoladmin/textmodel
2021-05-27T18:25:31.000Z
[]
[ ".gitattributes" ]
lysoladmin
0
m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0
2021-05-19T22:20:54.000Z
[ "pytorch", "tf", "jax", "bert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.txt", "config.json", "flax_model.msgpack", "pytorch_model.bin", "tf_model.h5", "tokenizer.py", "vocab.txt" ]
m-polignano-uniba
840
transformers
:::::README:::::: AlBERTo: the first Italian BERT model for Twitter language understanding Recent scientific studies on natural language processing (NLP) report the outstanding effectiveness observed in the use of context-dependent and task-free language understanding models such as ELMo, GPT, and BERT. Specifically, they have proved to achieve state-of-the-art performance in numerous complex NLP tasks such as question answering and sentiment analysis in the English language. Following the great popularity and effectiveness that these models are gaining in the scientific community, we trained a BERT language understanding model for the Italian language (AlBERTo). In particular, AlBERTo is focused on the language used in social networks, specifically on Twitter. To demonstrate its robustness, we evaluated AlBERTo on the EVALITA 2016 task SENTIPOLC (SENTIment POLarity Classification), obtaining state-of-the-art results in subjectivity, polarity and irony detection on Italian tweets. The pre-trained AlBERTo model will be publicly distributed through the GitHub platform at the following web address: https://github.com/marcopoli/AlBERTo-it in order to facilitate future research. http://ceur-ws.org/Vol-2481/paper57.pdf Please cite: @InProceedings{PolignanoEtAlCLIC2019, author = {Marco Polignano and Pierpaolo Basile and Marco de Gemmis and Giovanni Semeraro and Valerio Basile}, title = {{AlBERTo: Italian BERT Language Understanding Model for NLP Challenging Tasks Based on Tweets}}, booktitle = {Proceedings of the Sixth Italian Conference on Computational Linguistics (CLiC-it 2019)}, year = {2019}, publisher = {CEUR}, journal={CEUR Workshop Proceedings}, volume={2481}, url={https://www.scopus.com/inward/record.uri?eid=2-s2.0-85074851349&partnerID=40&md5=7abed946e06f76b3825ae5e294ffac14}, document_type={Conference Paper}, source={Scopus} } ::::CREDITS:::: Authors: Marco Polignano, Pierpaolo Basile, Marco de Gemmis, Giovanni Semeraro (University of Bari Aldo Moro); Valerio Basile (University of Turin). Thanks to: Angelo Basile, Junior Research Scientist at Symanto - Profiling AI, for the TensorFlow and PyTorch models compatible with the huggingface.co Transformers library. ::::COPYRIGHTS:::: # Copyright 2019 Marco Polignano # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
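The card above stops short of a usage example, so here is a minimal fill-mask sketch for this checkpoint; it assumes the hub-hosted vocabulary loads through the standard pipeline (the repo ships its own `tokenizer.py`, so stock tokenizer behaviour is an assumption), and the Italian sentence and its predictions are illustrative only:

```python
from transformers import pipeline

# Model id taken from this record; predicted tokens are not guaranteed.
fill_mask = pipeline(
    "fill-mask",
    model="m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0",
)

# The model is uncased, so the input is lowercased: "rome is the [MASK] of italy."
for prediction in fill_mask(f"roma è la {fill_mask.tokenizer.mask_token} d'italia."):
    print(prediction["token_str"], round(prediction["score"], 4))
```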
m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alberto
2020-04-24T16:02:31.000Z
[ "pytorch", "tf", "albert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.txt", "config.json", "pytorch_model.bin", "tf_model.h5", "tokenizer.py", "vocab.txt" ]
m-polignano-uniba
47
transformers
:::::README:::::: AlBERTo: the first Italian BERT model for Twitter language understanding Recent scientific studies on natural language processing (NLP) report the outstanding effectiveness observed in the use of context-dependent and task-free language understanding models such as ELMo, GPT, and BERT. Specifically, they have proved to achieve state-of-the-art performance in numerous complex NLP tasks such as question answering and sentiment analysis in the English language. Following the great popularity and effectiveness that these models are gaining in the scientific community, we trained a BERT language understanding model for the Italian language (AlBERTo). In particular, AlBERTo is focused on the language used in social networks, specifically on Twitter. To demonstrate its robustness, we evaluated AlBERTo on the EVALITA 2016 task SENTIPOLC (SENTIment POLarity Classification), obtaining state-of-the-art results in subjectivity, polarity and irony detection on Italian tweets. The pre-trained AlBERTo model will be publicly distributed through the GitHub platform at the following web address: https://github.com/marcopoli/AlBERTo-it in order to facilitate future research. http://ceur-ws.org/Vol-2481/paper57.pdf Please cite: @InProceedings{PolignanoEtAlCLIC2019, author = {Marco Polignano and Pierpaolo Basile and Marco de Gemmis and Giovanni Semeraro and Valerio Basile}, title = {{AlBERTo: Italian BERT Language Understanding Model for NLP Challenging Tasks Based on Tweets}}, booktitle = {Proceedings of the Sixth Italian Conference on Computational Linguistics (CLiC-it 2019)}, year = {2019}, publisher = {CEUR}, journal={CEUR Workshop Proceedings}, volume={2481}, url={https://www.scopus.com/inward/record.uri?eid=2-s2.0-85074851349&partnerID=40&md5=7abed946e06f76b3825ae5e294ffac14}, document_type={Conference Paper}, source={Scopus} } ::::CREDITS:::: Authors: Marco Polignano, Pierpaolo Basile, Marco de Gemmis, Giovanni Semeraro (University of Bari Aldo Moro); Valerio Basile (University of Turin). Thanks to: Angelo Basile, Junior Research Scientist at Symanto - Profiling AI, for the TensorFlow and PyTorch models compatible with the huggingface.co Transformers library. ::::COPYRIGHTS:::: # Copyright 2019 Marco Polignano # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
m3hrdadfi/albert-fa-base-v2-clf-digimag
2020-12-26T08:28:59.000Z
[ "pytorch", "tf", "albert", "text-classification", "fa", "transformers", "license:apache-2.0" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "spiece.vocab", "test_predictions.txt", "test_results.txt", "tf_model.h5", "tokenizer_config.json", "training_args.bin" ]
m3hrdadfi
34
transformers
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach we used for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Text Classification [DigiMag, Persian News] The task is to label texts in a supervised manner using the two existing datasets `DigiMag` and `Persian News`. ### DigiMag A total of 8,515 articles scraped from [Digikala Online Magazine](https://www.digikala.com/mag/). This dataset includes seven different classes. 1. Video Games 2. Shopping Guide 3. Health Beauty 4. Science Technology 5. General 6. Art Cinema 7. Books Literature | Label | # | |:------------------:|:----:| | Video Games | 1967 | | Shopping Guide | 125 | | Health Beauty | 1610 | | Science Technology | 2772 | | General | 120 | | Art Cinema | 1667 | | Books Literature | 254 | **Download** You can download the dataset from [here](https://drive.google.com/uc?id=1YgrCYY-Z0h2z0-PfWVfOGt1Tv0JDI-qz) ## Results The following table summarizes the F1 scores obtained by ALBERT-fa-base-v2 as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | |:-----------------:|:-----------------:|:-----------:|:-----:| | Digikala Magazine | 92.33 | 93.59 | 90.72 | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
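As a usage illustration for the classifier above (not part of the original card), here is a minimal sketch with the Transformers text-classification pipeline; the returned label strings depend on the `id2label` mapping stored in the checkpoint's config, and the Persian input is illustrative:

```python
from transformers import pipeline

# ALBERT tokenizers require sentencepiece: pip install sentencepiece
classifier = pipeline(
    "text-classification",
    model="m3hrdadfi/albert-fa-base-v2-clf-digimag",
)

# Illustrative Persian input (roughly: "This new video game has a great story and graphics").
print(classifier("این بازی ویدیویی جدید داستان و گرافیک فوق‌العاده‌ای دارد"))
# Expected shape: [{'label': <one of the seven DigiMag classes>, 'score': ...}]
```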
m3hrdadfi/albert-fa-base-v2-clf-persiannews
2020-12-26T08:36:46.000Z
[ "pytorch", "tf", "albert", "text-classification", "fa", "transformers", "license:apache-2.0" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "spiece.vocab", "test_predictions.txt", "test_results.txt", "tf_model.h5", "tokenizer_config.json", "training_args.bin" ]
m3hrdadfi
87
transformers
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach we used for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Text Classification [DigiMag, Persian News] The task is to label texts in a supervised manner using the two existing datasets `DigiMag` and `Persian News`. ### Persian News A dataset of various news articles scraped from different online news agencies' websites. The total number of articles is 16,438, spread over eight different classes. 1. Social 2. Economic 3. International 4. Political 5. Science Technology 6. Cultural Art 7. Sport 8. Medical | Label | # | |:------------------:|:----:| | Social | 2170 | | Economic | 1564 | | International | 1975 | | Political | 2269 | | Science Technology | 2436 | | Cultural Art | 2558 | | Sport | 1381 | | Medical | 2085 | **Download** You can download the dataset from [here](https://drive.google.com/uc?id=1B6xotfXCcW9xS1mYSBQos7OCg0ratzKC) ## Results The following table summarizes the F1 scores obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | |:-----------------:|:-----------------:|:-----------:|:-----:| | Persian News | 97.01 | 97.19 | 95.79 | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
m3hrdadfi/albert-fa-base-v2-ner-arman
2020-12-26T08:36:57.000Z
[ "pytorch", "tf", "albert", "token-classification", "fa", "transformers", "license:apache-2.0" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "spiece.vocab", "test_predictions.txt", "test_results.txt", "tf_model.h5", "tokenizer_config.json", "training_args.bin" ]
m3hrdadfi
17
transformers
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach we used for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian NER [ARMAN, PEYMA] This task aims to extract named entities from text, such as names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences marked in `IOB` format. In this format, tokens that are not part of an entity are tagged as `"O"`, the `"B"` tag corresponds to the first word of an entity, and the `"I"` tag corresponds to the remaining words of the same entity. Both `"B"` and `"I"` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens when fed raw text. There are two primary datasets used in Persian NER, `ARMAN` and `PEYMA`. ### ARMAN The ARMAN dataset holds 7,682 sentences with 250,015 tokens tagged over six different classes. 1. Organization 2. Location 3. Facility 4. Event 5. Product 6. Person | Label | # | |:------------:|:-----:| | Organization | 30108 | | Location | 12924 | | Facility | 4458 | | Event | 7557 | | Product | 4389 | | Person | 15645 | **Download** You can download the dataset from [here](https://github.com/HaniehP/PersianNER) ## Results The following table summarizes the F1 scores obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF | |:-------:|:-----------------:|:-----------:|:-----:|:----------:|:------------:|:--------:|:--------------:|:----------:| | ARMAN | 97.43 | 98.79 | 95.89 | 89.9 | 84.03 | 86.55 | - | 77.45 | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
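To make the IOB scheme described above concrete, here is a minimal token-classification sketch for this checkpoint (not from the original card); the exact tag strings (e.g. `B-person` vs. `B-PER`) come from the model config, so the printed labels are an assumption:

```python
from transformers import pipeline

ner = pipeline("ner", model="m3hrdadfi/albert-fa-base-v2-ner-arman")

# Illustrative Persian sentence (roughly: "Mehrdad works at the University of Tehran").
for token in ner("مهرداد در دانشگاه تهران کار می‌کند"):
    # Each item carries the subword, its IOB tag, and the confidence score.
    print(token["word"], token["entity"], round(token["score"], 3))
```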
m3hrdadfi/albert-fa-base-v2-ner-peyma
2020-12-26T08:36:20.000Z
[ "pytorch", "tf", "albert", "token-classification", "fa", "transformers", "license:apache-2.0" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "spiece.vocab", "test_predictions.txt", "test_results.txt", "tf_model.h5", "tokenizer_config.json", "training_args.bin" ]
m3hrdadfi
18
transformers
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach we used for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian NER [ARMAN, PEYMA] This task aims to extract named entities from text, such as names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences marked in `IOB` format. In this format, tokens that are not part of an entity are tagged as `"O"`, the `"B"` tag corresponds to the first word of an entity, and the `"I"` tag corresponds to the remaining words of the same entity. Both `"B"` and `"I"` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens when fed raw text. There are two primary datasets used in Persian NER, `ARMAN` and `PEYMA`. ### PEYMA The PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens, of which 41,148 are tagged with seven different classes. 1. Organization 2. Money 3. Location 4. Date 5. Time 6. Person 7. Percent | Label | # | |:------------:|:-----:| | Organization | 16964 | | Money | 2037 | | Location | 8782 | | Date | 4259 | | Time | 732 | | Person | 7675 | | Percent | 699 | **Download** You can download the dataset from [here](http://nsurl.org/tasks/task-7-named-entity-recognition-ner-for-farsi/) ## Results The following table summarizes the F1 scores obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF | |:-------:|:-----------------:|:-----------:|:-----:|:----------:|:------------:|:--------:|:--------------:|:----------:| | PEYMA | 88.99 | 93.10 | 86.64 | - | 90.59 | - | 84.00 | - | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
m3hrdadfi/albert-fa-base-v2-sentiment-binary
2020-12-26T08:46:58.000Z
[ "pytorch", "tf", "albert", "text-classification", "fa", "transformers", "license:apache-2.0" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "eval_results_alpbert-sentiment.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "test_predictions.txt", "test_results.txt", "tf_model.h5", "tokenizer_config.json", "training_args.bin" ]
m3hrdadfi
65
transformers
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach we used for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Sentiment [Digikala, SnappFood, DeepSentiPers] This task aims to classify text, such as comments, based on its emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, in both binary and multi-class forms. ## Results The model obtained an F1 score of 87.56% on a composition of all three datasets with the binary labels `Negative` and `Positive`. ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-binary
2020-12-26T08:42:08.000Z
[ "pytorch", "tf", "albert", "text-classification", "fa", "transformers", "license:apache-2.0" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "spiece.vocab", "test_predictions.txt", "test_results.txt", "tf_model.h5", "tokenizer_config.json", "training_args.bin" ]
m3hrdadfi
68
transformers
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach we used for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Sentiment [Digikala, SnappFood, DeepSentiPers] This task aims to classify text, such as comments, based on its emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, in both binary and multi-class forms. ### DeepSentiPers DeepSentiPers, a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes: two positive (i.e., happy and delighted), two negative (i.e., furious and angry), and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset. **Binary:** 1. Negative (Furious + Angry) 2. Positive (Happy + Delighted) **Multi:** 1. Furious 2. Angry 3. Neutral 4. Happy 5. Delighted | Label | # | |:---------:|:----:| | Furious | 236 | | Angry | 1357 | | Neutral | 2874 | | Happy | 2848 | | Delighted | 2516 | **Download** You can download the dataset from: - [SentiPers](https://github.com/phosseini/sentipers) - [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers) ## Results The following table summarizes the F1 scores obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:| | SentiPers (Multi Class) | 66.12 | 71.11 | - | 69.33 | | SentiPers (Binary Class) | 91.09 | 92.13 | - | 91.98 | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi
2020-12-26T08:42:15.000Z
[ "pytorch", "tf", "albert", "text-classification", "fa", "transformers", "license:apache-2.0" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "spiece.vocab", "test_predictions.txt", "test_results.txt", "tf_model.h5", "tokenizer_config.json", "training_args.bin" ]
m3hrdadfi
58
transformers
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach we used for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Sentiment [Digikala, SnappFood, DeepSentiPers] This task aims to classify text, such as comments, based on its emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, in both binary and multi-class forms. ### DeepSentiPers DeepSentiPers, a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes: two positive (i.e., happy and delighted), two negative (i.e., furious and angry), and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset. **Binary:** 1. Negative (Furious + Angry) 2. Positive (Happy + Delighted) **Multi:** 1. Furious 2. Angry 3. Neutral 4. Happy 5. Delighted | Label | # | |:---------:|:----:| | Furious | 236 | | Angry | 1357 | | Neutral | 2874 | | Happy | 2848 | | Delighted | 2516 | **Download** You can download the dataset from: - [SentiPers](https://github.com/phosseini/sentipers) - [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers) ## Results The following table summarizes the F1 scores obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:| | SentiPers (Multi Class) | 66.12 | 71.11 | - | 69.33 | | SentiPers (Binary Class) | 91.09 | 92.13 | - | 91.98 | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
m3hrdadfi/albert-fa-base-v2-sentiment-digikala
2020-12-26T08:48:33.000Z
[ "pytorch", "tf", "albert", "text-classification", "fa", "transformers", "license:apache-2.0" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "spiece.vocab", "test_predictions.txt", "test_results.txt", "tf_model.h5", "tokenizer_config.json", "training_args.bin" ]
m3hrdadfi
57
transformers
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach we used for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Sentiment [Digikala, SnappFood, DeepSentiPers] This task aims to classify text, such as comments, based on its emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, in both binary and multi-class forms. ### Digikala Digikala user comments provided by [Open Data Mining Program (ODMP)](https://www.digikala.com/opendata/). This dataset contains 62,321 user comments with three labels: | Label | # | |:---------------:|:------:| | no_idea | 10394 | | not_recommended | 15885 | | recommended | 36042 | **Download** You can download the dataset from [here](https://www.digikala.com/opendata/) ## Results The following table summarizes the F1 scores obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:| | Digikala User Comments | 81.12 | 81.74 | 80.74 | - | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
m3hrdadfi/albert-fa-base-v2-sentiment-multi
2020-12-26T08:46:20.000Z
[ "pytorch", "tf", "albert", "text-classification", "fa", "transformers", "license:apache-2.0" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "eval_results_alpbert-sentiment.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "test_predictions.txt", "test_results.txt", "tf_model.h5", "tokenizer_config.json", "training_args.bin" ]
m3hrdadfi
64
transformers
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach we used for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Sentiment [Digikala, SnappFood, DeepSentiPers] This task aims to classify text, such as comments, based on its emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, in both binary and multi-class forms. ## Results The model obtained an F1 score of 70.72% on a composition of all three datasets with the multi-class labels `Negative`, `Neutral` and `Positive`. ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
m3hrdadfi/albert-fa-base-v2-sentiment-snappfood
2020-12-26T08:49:28.000Z
[ "pytorch", "tf", "albert", "text-classification", "fa", "transformers", "license:apache-2.0" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "test_predictions.txt", "test_results.txt", "tf_model.h5", "tokenizer_config.json", "training_args.bin" ]
m3hrdadfi
68
transformers
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach we used for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Sentiment [Digikala, SnappFood, DeepSentiPers] This task aims to classify text, such as comments, based on its emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, in both binary and multi-class forms. ### SnappFood [Snappfood](https://snappfood.ir/) (an online food delivery company) user comments: a dataset of 70,000 comments with two labels (i.e., polarity classification): 1. Happy 2. Sad | Label | # | |:--------:|:-----:| | Negative | 35000 | | Positive | 35000 | **Download** You can download the dataset from [here](https://drive.google.com/uc?id=15J4zPN1BD7Q_ZIQ39VeFquwSoW8qTxgu) ## Results The following table summarizes the F1 scores obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:| | SnappFood User Comments | 85.79 | 88.12 | 87.87 | - | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
m3hrdadfi/albert-fa-base-v2
2020-12-26T08:26:26.000Z
[ "pytorch", "albert", "masked-lm", "fa", "transformers", "albert-persian", "persian-lm", "license:apache-2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "spiece.model", "spiece.vocab" ]
m3hrdadfi
28
transformers
--- language: fa tags: - albert-persian - persian-lm license: apache-2.0 --- # ALBERT-Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو ## Introduction ALBERT-Persian was trained on a massive amount of public corpora ([Persian Wikidumps](https://dumps.wikimedia.org/fawiki/), [MirasText](https://github.com/miras-tech/MirasText)) and six other manually crawled text corpora from various types of websites ([BigBang Page](https://bigbangpage.com/) `scientific`, [Chetor](https://www.chetor.com/) `lifestyle`, [Eligasht](https://www.eligasht.com/Blog/) `itinerary`, [Digikala](https://www.digikala.com/mag/) `digital magazine`, [Ted Talks](https://www.ted.com/talks) `general conversational`, Books `novels, storybooks, short stories from old to the contemporary era`). Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Intended uses & limitations You can use the raw model for either masked language modeling or sentence-order prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=albert-fa) to look for fine-tuned versions on a task that interests you. ### How to use - To use any kind of ALBERT model you have to install sentencepiece. - Run this in your notebook: ``` !pip install -q sentencepiece ``` #### TensorFlow 2.0 ```python from transformers import AutoConfig, AutoTokenizer, TFAutoModel config = AutoConfig.from_pretrained("m3hrdadfi/albert-fa-base-v2") tokenizer = AutoTokenizer.from_pretrained("m3hrdadfi/albert-fa-base-v2") model = TFAutoModel.from_pretrained("m3hrdadfi/albert-fa-base-v2") text = "ما در هوشواره معتقدیم با انتقال صحیح دانش و آگاهی، همه افراد می‌توانند از ابزارهای هوشمند استفاده کنند. شعار ما هوش مصنوعی برای همه است." tokenizer.tokenize(text) >>> ['▁ما', '▁در', '▁هوش', 'واره', '▁معتقد', 'یم', '▁با', '▁انتقال', '▁صحیح', '▁دانش', '▁و', '▁اگاه', 'ی', '،', '▁همه', '▁افراد', '▁می', '▁توانند', '▁از', '▁ابزارهای', '▁هوشمند', '▁استفاده', '▁کنند', '.', '▁شعار', '▁ما', '▁هوش', '▁مصنوعی', '▁برای', '▁همه', '▁است', '.'] ``` #### Pytorch ```python from transformers import AutoConfig, AutoTokenizer, AutoModel config = AutoConfig.from_pretrained("m3hrdadfi/albert-fa-base-v2") tokenizer = AutoTokenizer.from_pretrained("m3hrdadfi/albert-fa-base-v2") model = AutoModel.from_pretrained("m3hrdadfi/albert-fa-base-v2") ``` ## Training ALBERT-Persian is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than `3.9M` documents, `73M` sentences, and `1.3B` words, following the same approach we used for [ParsBERT](https://github.com/hooshvare/parsbert). ## Goals Objective values during training (after 140K steps) are as below. 
``` bash ***** Eval results ***** global_step = 140000 loss = 2.0080082 masked_lm_accuracy = 0.6141017 masked_lm_loss = 1.9963315 sentence_order_accuracy = 0.985 sentence_order_loss = 0.06908702 ``` ## Derivative models ### Base Config #### Albert Model - [m3hrdadfi/albert-fa-base-v2](https://huggingface.co/m3hrdadfi/albert-fa-base-v2) #### Albert Sentiment Analysis - [m3hrdadfi/albert-fa-base-v2-sentiment-digikala](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-digikala) - [m3hrdadfi/albert-fa-base-v2-sentiment-snappfood](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-snappfood) - [m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-binary](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-binary) - [m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi) - [m3hrdadfi/albert-fa-base-v2-sentiment-binary](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-binary) - [m3hrdadfi/albert-fa-base-v2-sentiment-multi](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-multi) #### Albert Text Classification - [m3hrdadfi/albert-fa-base-v2-clf-digimag](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-clf-digimag) - [m3hrdadfi/albert-fa-base-v2-clf-persiannews](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-clf-persiannews) #### Albert NER - [m3hrdadfi/albert-fa-base-v2-ner](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-ner) - [m3hrdadfi/albert-fa-base-v2-ner-arman](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-ner-arman) - [m3hrdadfi/albert-fa-base-v2-ner-peyma](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-ner-peyma) ## Eval results The following tables summarize the F1 scores obtained by ALBERT-Persian as compared to other models and architectures. 
### Sentiment Analysis (SA) Task | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:| | Digikala User Comments | 81.12 | 81.74 | 80.74 | - | | SnappFood User Comments | 85.79 | 88.12 | 87.87 | - | | SentiPers (Multi Class) | 66.12 | 71.11 | - | 69.33 | | SentiPers (Binary Class) | 91.09 | 92.13 | - | 91.98 | ### Text Classification (TC) Task | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | |:-----------------:|:-----------------:|:-----------:|:-----:| | Digikala Magazine | 92.33 | 93.59 | 90.72 | | Persian News | 97.01 | 97.19 | 95.79 | ### Named Entity Recognition (NER) Task | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF | |:-------:|:-----------------:|:-----------:|:-----:|:----------:|:------------:|:--------:|:--------------:|:----------:| | PEYMA | 88.99 | 93.10 | 86.64 | - | 90.59 | - | 84.00 | - | | ARMAN | 97.43 | 98.79 | 95.89 | 89.9 | 84.03 | 86.55 | - | 77.45 | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERT-Persian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
m3hrdadfi/bert-fa-base-uncased-farstail-mean-tokens
2021-05-28T06:03:42.000Z
[ "pytorch", "jax", "bert", "fa", "transformers", "license:apache-2.0" ]
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "sentence_bert_config.json", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
m3hrdadfi
32
transformers
--- language: fa license: apache-2.0 --- # FarsTail + ParsBERT Please follow the [FarsTail](https://github.com/dml-qom/FarsTail) repo for the latest information about the dataset. To access the models built on this dataset, check out the [Sentence-Transformer](https://github.com/m3hrdadfi/sentence-transformers) repo. ```bibtex @article{amirkhani2020farstail, title={FarsTail: A Persian Natural Language Inference Dataset}, author={Hossein Amirkhani, Mohammad Azari Jafari, Azadeh Amirak, Zohreh Pourjafari, Soroush Faridan Jahromi, and Zeinab Kouhkan}, journal={arXiv preprint arXiv:2009.08820}, year={2020} } ```
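Since the card does not show how the `-mean-tokens` checkpoints are meant to be consumed, here is a minimal mean-pooling sketch in plain Transformers; that this checkpoint was exported for the sentence-transformers mean-pooling recipe is an inference from its name, and the Persian sentences are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "m3hrdadfi/bert-fa-base-uncased-farstail-mean-tokens"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["جمله اول", "جمله دوم"]  # illustrative Persian sentences
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, hidden)

# Average over real (non-padding) tokens, as the "-mean-tokens" suffix suggests.
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embeddings.shape)  # torch.Size([2, 768])
```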
m3hrdadfi/bert-fa-base-uncased-farstail
2021-05-28T06:02:52.000Z
[ "pytorch", "jax", "bert", "text-classification", "fa", "transformers", "license:apache-2.0" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
m3hrdadfi
32
transformers
--- language: fa license: apache-2.0 --- # FarsTail + ParsBERT Please follow the [FarsTail](https://github.com/dml-qom/FarsTail) repo for the latest information about the dataset. To access the models built on this dataset, check out the [Sentence-Transformer](https://github.com/m3hrdadfi/sentence-transformers) repo. ```bibtex @article{amirkhani2020farstail, title={FarsTail: A Persian Natural Language Inference Dataset}, author={Hossein Amirkhani, Mohammad Azari Jafari, Azadeh Amirak, Zohreh Pourjafari, Soroush Faridan Jahromi, and Zeinab Kouhkan}, journal={arXiv preprint arXiv:2009.08820}, year={2020} } ```
m3hrdadfi/bert-fa-base-uncased-wikinli-mean-tokens
2021-05-28T06:00:37.000Z
[ "pytorch", "jax", "bert", "fa", "transformers", "license:apache-2.0" ]
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "sentence_bert_config.json", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
m3hrdadfi
246
transformers
--- language: fa license: apache-2.0 --- # ParsBERT + Sentence Transformers Please follow the [Sentence-Transformer](https://github.com/m3hrdadfi/sentence-transformers) repo for the latest information about previous and current models. ```bibtex @misc{SentenceTransformerWiki, author = {Mehrdad Farahani}, title = {Sentence Embeddings with ParsBERT}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {https://github.com/m3hrdadfi/sentence-transformers}, } ```
m3hrdadfi/bert-fa-base-uncased-wikinli
2021-05-28T06:01:35.000Z
[ "pytorch", "jax", "bert", "text-classification", "fa", "transformers", "license:apache-2.0" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
m3hrdadfi
40
transformers
--- language: fa license: apache-2.0 --- # ParsBERT + Sentence Transformers Please follow the [Sentence-Transformer](https://github.com/m3hrdadfi/sentence-transformers) repo for the latest information about previous and current models. ```bibtex @misc{SentenceTransformerWiki, author = {Mehrdad Farahani}, title = {Sentence Embeddings with ParsBERT}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {https://github.com/m3hrdadfi/sentence-transformers}, } ```
m3hrdadfi/bert-fa-base-uncased-wikitriplet-mean-tokens
2021-05-28T06:02:17.000Z
[ "pytorch", "jax", "bert", "fa", "transformers", "license:apache-2.0" ]
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "sentence_bert_config.json", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
m3hrdadfi
461
transformers
--- language: fa license: apache-2.0 --- # ParsBERT + Sentence Transformers Please follow the [Sentence-Transformer](https://github.com/m3hrdadfi/sentence-transformers) repo for the latest information about previous and current models. ```bibtex @misc{SentenceTransformerWiki, author = {Mehrdad Farahani}, title = {Sentence Embeddings with ParsBERT}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {https://github.com/m3hrdadfi/sentence-transformers}, } ```
m3hrdadfi/bert2bert-fa-news-headline
2020-12-11T21:50:16.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "fa", "transformers", "license:apache-2.0", "summarization", "text2text-generation" ]
summarization
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "vocab.txt" ]
m3hrdadfi
79
transformers
--- language: fa license: apache-2.0 tags: - summarization --- A Bert2Bert model trained on the VoA Persian Corpus (a medium-sized corpus of 7.9 million words, 2003–2008) that generates headlines. The model achieved a 25.30 ROUGE-2 score. For more detail, please follow the [News Headline Generation](https://github.com/m3hrdadfi/news-headline-generation) repo. ## Eval results The following table summarizes the ROUGE scores obtained by the Bert2Bert model. | % | Precision | Recall | F-Measure | |:-------:|:---------:|:------:|:--------:| | ROUGE-1 | 43.78 | 45.52 | 43.54 | | ROUGE-2 | 24.50 | 25.30* | 24.24 | | ROUGE-L | 41.20 | 42.22 | 40.76 | ## Questions? Post a Github issue on the [News Headline Generation](https://github.com/hooshvare/news-headline-generation/issues) repo.
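A minimal generation sketch for this headline model (not from the original card); loading it as an `EncoderDecoderModel` with a BERT tokenizer is an assumption based on the `encoder-decoder` tag and the `vocab.txt` shipped in this repo, and it presumes the saved config carries the decoder start token needed for generation:

```python
from transformers import BertTokenizer, EncoderDecoderModel

model_id = "m3hrdadfi/bert2bert-fa-news-headline"
tokenizer = BertTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

article = "..."  # placeholder: the body of a Persian news article goes here
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
headline_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=32,  # headlines are short
    num_beams=4,
    early_stopping=True,
)
print(tokenizer.decode(headline_ids[0], skip_special_tokens=True))
```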
m3hrdadfi/bert2bert-fa-wiki-summary
2020-12-11T21:50:20.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "fa", "transformers", "license:apache-2.0", "summarization", "text2text-generation" ]
summarization
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
m3hrdadfi
81
transformers
--- language: fa license: apache-2.0 tags: - summarization --- A Bert2Bert model trained on the Wiki Summary dataset to summarize articles. The model achieved an 8.47 ROUGE-2 score. For more detail, please follow the [Wiki Summary](https://github.com/m3hrdadfi/wiki-summary) repo. ## Eval results The following table summarizes the ROUGE scores obtained by the Bert2Bert model. | % | Precision | Recall | F-Measure | |:-------:|:---------:|:------:|:--------:| | ROUGE-1 | 28.14 | 30.86 | 27.34 | | ROUGE-2 | 07.12 | 08.47* | 07.10 | | ROUGE-L | 28.49 | 25.87 | 25.50 | ## Questions? Post a Github issue on the [Wiki Summary](https://github.com/m3hrdadfi/wiki-summary/issues) repo.
m3hrdadfi/hubert-base-greek-speech-emotion-recognition
2021-06-17T16:05:44.000Z
[ "pytorch", "hubert", "el", "dataset:aesdd", "transformers", "audio", "speech", "speech-emotion-recognition", "license:apache-2.0" ]
[ ".gitattributes", "README.md", "config.json", "preprocessor_config.json", "pytorch_model.bin", "test.csv", "trainer_state.json" ]
m3hrdadfi
5
transformers
m3hrdadfi/hubert-large-greek-speech-emotion-recognition
2021-06-17T16:06:03.000Z
[ "pytorch", "hubert", "el", "dataset:aesdd", "transformers", "audio", "speech", "speech-emotion-recognition", "license:apache-2.0" ]
[ ".gitattributes", "README.md", "config.json", "preprocessor_config.json", "pytorch_model.bin", "test.csv", "trainer_state.json" ]
m3hrdadfi
0
transformers
m3hrdadfi/icelandic-ner-bert
2021-05-27T17:14:13.000Z
[ "pytorch", "tf", "bert", "token-classification", "is", "transformers", "license:apache-2.0" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
m3hrdadfi
22
transformers
--- language: is license: apache-2.0 widget: - text: "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ." - text: "Til hvers að kjósa flokk , sem þykist vera Jafnaðarmannaflokkur rétt fyrir kosningar , þegar að það er hægt að kjósa sannnan jafnaðarmannaflokk , sjálfan Jafnaðarmannaflokk Íslands - Samfylkinguna ." - text: "Það sannaðist svo eftirminnilega á plötunni Það þarf fólk eins og þig sem kom út fyrir þremur árum , en á henni hann Fálka úr Keflavík og Gáluna , son sinn , til að útsetja lög hans og spila inn ." - text: "Lögin hafa áður komið út sem aukalög á smáskífum af Hail to the Thief , en á disknum er líka myndband og fleira efni fyrir tölvur ." - text: "Britney gerði honum viðvart og hann ók henni á UCLA-sjúkrahúsið í Santa Monica en það er í nágrenni hljóðversins ." --- # IcelandicNER BERT This model was fine-tuned on the MIM-GOLD-NER dataset for the Icelandic language. The [MIM-GOLD-NER](http://hdl.handle.net/20.500.12537/42) corpus was developed at [Reykjavik University](https://en.ru.is/) in 2018–2020 and covers eight types of entities: - Date - Location - Miscellaneous - Money - Organization - Percent - Person - Time ## Dataset Information | | Records | B-Date | B-Location | B-Miscellaneous | B-Money | B-Organization | B-Percent | B-Person | B-Time | I-Date | I-Location | I-Miscellaneous | I-Money | I-Organization | I-Percent | I-Person | I-Time | |:------|----------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:| | Train | 39988 | 3409 | 5980 | 4351 | 729 | 5754 | 502 | 11719 | 868 | 2112 | 516 | 3036 | 770 | 2382 | 50 | 5478 | 790 | | Valid | 7063 | 570 | 1034 | 787 | 100 | 1078 | 103 | 2106 | 147 | 409 | 76 | 560 | 104 | 458 | 7 | 998 | 136 | | Test | 8299 | 779 | 1319 | 935 | 153 | 1315 | 108 | 2247 | 172 | 483 | 104 | 660 | 167 | 617 | 10 | 1089 | 158 | ## Evaluation The following table summarizes the scores obtained by the model, overall and per class. | entity | precision | recall | f1-score | support | |:-------------:|:---------:|:--------:|:--------:|:-------:| | Date | 0.969466 | 0.978177 | 0.973802 | 779.0 | | Location | 0.955201 | 0.953753 | 0.954476 | 1319.0 | | Miscellaneous | 0.867033 | 0.843850 | 0.855285 | 935.0 | | Money | 0.979730 | 0.947712 | 0.963455 | 153.0 | | Organization | 0.893939 | 0.897338 | 0.895636 | 1315.0 | | Percent | 1.000000 | 1.000000 | 1.000000 | 108.0 | | Person | 0.963028 | 0.973743 | 0.968356 | 2247.0 | | Time | 0.976879 | 0.982558 | 0.979710 | 172.0 | | micro avg | 0.938158 | 0.938958 | 0.938558 | 7028.0 | | macro avg | 0.950659 | 0.947141 | 0.948840 | 7028.0 | | weighted avg | 0.937845 | 0.938958 | 0.938363 | 7028.0 | ## How To Use You can use this model with the Transformers pipeline for NER. 
### Installing requirements ```bash pip install transformers ``` ### How to predict using pipeline ```python from transformers import AutoTokenizer from transformers import AutoModelForTokenClassification # for pytorch from transformers import TFAutoModelForTokenClassification # for tensorflow from transformers import pipeline model_name_or_path = "m3hrdadfi/icelandic-ner-bert" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch # model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ." ner_results = nlp(example) print(ner_results) ``` ## Questions? Post a Github issue on the [IcelandicNER Issues](https://github.com/m3hrdadfi/icelandic-ner/issues) repo.
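The pipeline snippet above emits one prediction per subword token. To merge consecutive pieces of the same entity into whole spans, `pipeline` also accepts `grouped_entities=True` (later transformers releases renamed this to `aggregation_strategy="simple"`); a minimal sketch:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_name_or_path = "m3hrdadfi/icelandic-ner-bert"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path)

# grouped_entities=True merges consecutive B-/I- subword predictions into
# single entity spans with an aggregated score.
nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
print(nlp("Britney gerði honum viðvart og hann ók henni á UCLA-sjúkrahúsið í Santa Monica ."))
```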
m3hrdadfi/icelandic-ner-distilbert
2021-05-27T17:17:28.000Z
[ "pytorch", "tf", "distilbert", "token-classification", "is", "transformers", "license:apache-2.0" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
m3hrdadfi
25
transformers
---
language: is
license: apache-2.0
widget:
- text: "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ."
- text: "Til hvers að kjósa flokk , sem þykist vera Jafnaðarmannaflokkur rétt fyrir kosningar , þegar að það er hægt að kjósa sannnan jafnaðarmannaflokk , sjálfan Jafnaðarmannaflokk Íslands - Samfylkinguna ."
- text: "Það sannaðist svo eftirminnilega á plötunni Það þarf fólk eins og þig sem kom út fyrir þremur árum , en á henni hann Fálka úr Keflavík og Gáluna , son sinn , til að útsetja lög hans og spila inn ."
- text: "Lögin hafa áður komið út sem aukalög á smáskífum af Hail to the Thief , en á disknum er líka myndband og fleira efni fyrir tölvur ."
- text: "Britney gerði honum viðvart og hann ók henni á UCLA-sjúkrahúsið í Santa Monica en það er í nágrenni hljóðversins ."
---

# IcelandicNER DistilBERT

This model was fine-tuned on the MIM-GOLD-NER dataset for the Icelandic language. The [MIM-GOLD-NER](http://hdl.handle.net/20.500.12537/42) corpus was developed at [Reykjavik University](https://en.ru.is/) between 2018 and 2020 and covers eight entity types:

- Date
- Location
- Miscellaneous
- Money
- Organization
- Percent
- Person
- Time

## Dataset Information

|       |   Records |   B-Date |   B-Location |   B-Miscellaneous |   B-Money |   B-Organization |   B-Percent |   B-Person |   B-Time |   I-Date |   I-Location |   I-Miscellaneous |   I-Money |   I-Organization |   I-Percent |   I-Person |   I-Time |
|:------|----------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:|
| Train |     39988 |     3409 |         5980 |              4351 |       729 |             5754 |         502 |      11719 |      868 |     2112 |          516 |              3036 |       770 |             2382 |          50 |       5478 |      790 |
| Valid |      7063 |      570 |         1034 |               787 |       100 |             1078 |         103 |       2106 |      147 |      409 |           76 |               560 |       104 |              458 |           7 |        998 |      136 |
| Test  |      8299 |      779 |         1319 |               935 |       153 |             1315 |         108 |       2247 |      172 |      483 |          104 |               660 |       167 |              617 |          10 |       1089 |      158 |

## Evaluation

The following table summarizes the scores obtained by the model overall and for each class.

|     entity    | precision |  recall  | f1-score | support |
|:-------------:|:---------:|:--------:|:--------:|:-------:|
|      Date     |  0.969309 | 0.973042 | 0.971172 |  779.0  |
|    Location   |  0.941221 | 0.946929 | 0.944067 |  1319.0 |
| Miscellaneous |  0.848283 | 0.819251 | 0.833515 |  935.0  |
|     Money     |  0.928571 | 0.934641 | 0.931596 |  153.0  |
|  Organization |  0.874147 | 0.876806 | 0.875475 |  1315.0 |
|    Percent    |  1.000000 | 1.000000 | 1.000000 |  108.0  |
|     Person    |  0.956674 | 0.972853 | 0.964695 |  2247.0 |
|      Time     |  0.965318 | 0.970930 | 0.968116 |  172.0  |
|   micro avg   |  0.926110 | 0.929141 | 0.927623 |  7028.0 |
|   macro avg   |  0.935441 | 0.936807 | 0.936079 |  7028.0 |
|  weighted avg |  0.925578 | 0.929141 | 0.927301 |  7028.0 |

## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements ```bash pip install transformers ``` ### How to predict using pipeline ```python from transformers import AutoTokenizer from transformers import AutoModelForTokenClassification # for pytorch from transformers import TFAutoModelForTokenClassification # for tensorflow from transformers import pipeline model_name_or_path = "m3hrdadfi/icelandic-ner-distilbert" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch # model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ." ner_results = nlp(example) print(ner_results) ``` ## Questions? Post a Github issue on the [IcelandicNER Issues](https://github.com/m3hrdadfi/icelandic-ner/issues) repo.
m3hrdadfi/icelandic-ner-roberta
2021-05-27T17:13:07.000Z
[ "pytorch", "tf", "roberta", "token-classification", "is", "transformers", "license:apache-2.0" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
m3hrdadfi
19
transformers
---
language: is
license: apache-2.0
widget:
- text: "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ."
- text: "Til hvers að kjósa flokk , sem þykist vera Jafnaðarmannaflokkur rétt fyrir kosningar , þegar að það er hægt að kjósa sannnan jafnaðarmannaflokk , sjálfan Jafnaðarmannaflokk Íslands - Samfylkinguna ."
- text: "Það sannaðist svo eftirminnilega á plötunni Það þarf fólk eins og þig sem kom út fyrir þremur árum , en á henni hann Fálka úr Keflavík og Gáluna , son sinn , til að útsetja lög hans og spila inn ."
- text: "Lögin hafa áður komið út sem aukalög á smáskífum af Hail to the Thief , en á disknum er líka myndband og fleira efni fyrir tölvur ."
- text: "Britney gerði honum viðvart og hann ók henni á UCLA-sjúkrahúsið í Santa Monica en það er í nágrenni hljóðversins ."
---

# IcelandicNER RoBERTa

This model was fine-tuned on the MIM-GOLD-NER dataset for the Icelandic language. The [MIM-GOLD-NER](http://hdl.handle.net/20.500.12537/42) corpus was developed at [Reykjavik University](https://en.ru.is/) between 2018 and 2020 and covers eight entity types:

- Date
- Location
- Miscellaneous
- Money
- Organization
- Percent
- Person
- Time

## Dataset Information

|       |   Records |   B-Date |   B-Location |   B-Miscellaneous |   B-Money |   B-Organization |   B-Percent |   B-Person |   B-Time |   I-Date |   I-Location |   I-Miscellaneous |   I-Money |   I-Organization |   I-Percent |   I-Person |   I-Time |
|:------|----------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:|
| Train |     39988 |     3409 |         5980 |              4351 |       729 |             5754 |         502 |      11719 |      868 |     2112 |          516 |              3036 |       770 |             2382 |          50 |       5478 |      790 |
| Valid |      7063 |      570 |         1034 |               787 |       100 |             1078 |         103 |       2106 |      147 |      409 |           76 |               560 |       104 |              458 |           7 |        998 |      136 |
| Test  |      8299 |      779 |         1319 |               935 |       153 |             1315 |         108 |       2247 |      172 |      483 |          104 |               660 |       167 |              617 |          10 |       1089 |      158 |

## Evaluation

The following table summarizes the scores obtained by the model overall and for each class.

|     entity    | precision |  recall  | f1-score | support |
|:-------------:|:---------:|:--------:|:--------:|:-------:|
|      Date     |  0.961881 | 0.971759 | 0.966794 |  779.0  |
|    Location   |  0.963047 | 0.968158 | 0.965595 |  1319.0 |
| Miscellaneous |  0.884946 | 0.880214 | 0.882574 |  935.0  |
|     Money     |  0.980132 | 0.967320 | 0.973684 |  153.0  |
|  Organization |  0.924300 | 0.928517 | 0.926404 |  1315.0 |
|    Percent    |  1.000000 | 1.000000 | 1.000000 |  108.0  |
|     Person    |  0.978591 | 0.976413 | 0.977501 |  2247.0 |
|      Time     |  0.965116 | 0.965116 | 0.965116 |  172.0  |
|   micro avg   |  0.951258 | 0.952476 | 0.951866 |  7028.0 |
|   macro avg   |  0.957252 | 0.957187 | 0.957209 |  7028.0 |
|  weighted avg |  0.951237 | 0.952476 | 0.951849 |  7028.0 |

## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements ```bash pip install transformers ``` ### How to predict using pipeline ```python from transformers import AutoTokenizer from transformers import AutoModelForTokenClassification # for pytorch from transformers import TFAutoModelForTokenClassification # for tensorflow from transformers import pipeline model_name_or_path = "m3hrdadfi/icelandic-ner-roberta" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch # model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ." ner_results = nlp(example) print(ner_results) ``` ## Questions? Post a Github issue on the [IcelandicNER Issues](https://github.com/m3hrdadfi/icelandic-ner/issues) repo.
m3hrdadfi/typo-detector-distilbert-en
2021-06-16T16:14:20.000Z
[ "pytorch", "tf", "distilbert", "token-classification", "en", "transformers" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
m3hrdadfi
44
transformers
m3hrdadfi/typo-detector-distilbert-fa
2021-06-17T07:42:51.000Z
[ "pytorch", "tf", "distilbert", "token-classification", "fa", "transformers" ]
token-classification
[ ".gitattributes", "README.md", "added_tokens.json", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
m3hrdadfi
70
transformers
m3hrdadfi/wav2vec2-base-100k-eating-sound-collection
2021-06-12T07:14:32.000Z
[ "pytorch", "wav2vec2", "transformers", "audio", "automatic-speech-recognition", "audio-classification" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "all_results.json", "config.json", "eval_results.json", "predict_results.txt", "preprocessor_config.json", "pytorch_model.bin", "test.csv", "train_results.json", "trainer_state.json", "training_args.bin" ]
m3hrdadfi
24
transformers
---
tags:
- audio
- automatic-speech-recognition
- audio-classification
---

# Eating Sound Classification using Wav2Vec 2.0

## How to use

### Requirements

```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```

### Prediction

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor

import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/wav2vec2-base-100k-eating-sound-collection"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate

# Note: Wav2Vec2ForSpeechClassification is a custom classification head, not part
# of transformers; it ships with the soxan repo (https://github.com/m3hrdadfi/soxan).
model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```

```python
def speech_file_to_array_fn(path, sampling_rate):
    speech_array, _sampling_rate = torchaudio.load(path)
    # Resample to 16 kHz (torchaudio's Resample defaults to new_freq=16000)
    resampler = torchaudio.transforms.Resample(_sampling_rate)
    speech = resampler(speech_array).squeeze().numpy()
    return speech


def predict(path, sampling_rate):
    speech = speech_file_to_array_fn(path, sampling_rate)
    inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
    inputs = {key: inputs[key].to(device) for key in inputs}

    with torch.no_grad():
        logits = model(**inputs).logits

    scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
    outputs = [{"Label": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
    return outputs
```

```python
path = "clips_rd/gummies/gummies_6_04.wav"
outputs = predict(path, sampling_rate)
```

```bash
[
 {'Label': 'aloe', 'Score': '0.0%'},
 {'Label': 'burger', 'Score': '0.0%'},
 {'Label': 'cabbage', 'Score': '0.0%'},
 {'Label': 'candied_fruits', 'Score': '0.0%'},
 {'Label': 'carrots', 'Score': '0.0%'},
 {'Label': 'chips', 'Score': '0.0%'},
 {'Label': 'chocolate', 'Score': '0.0%'},
 {'Label': 'drinks', 'Score': '0.0%'},
 {'Label': 'fries', 'Score': '0.0%'},
 {'Label': 'grapes', 'Score': '0.0%'},
 {'Label': 'gummies', 'Score': '99.8%'},
 {'Label': 'ice-cream', 'Score': '0.0%'},
 {'Label': 'jelly', 'Score': '0.1%'},
 {'Label': 'noodles', 'Score': '0.0%'},
 {'Label': 'pickles', 'Score': '0.0%'},
 {'Label': 'pizza', 'Score': '0.0%'},
 {'Label': 'ribs', 'Score': '0.0%'},
 {'Label': 'salmon', 'Score': '0.0%'},
 {'Label': 'soup', 'Score': '0.0%'},
 {'Label': 'wings', 'Score': '0.0%'}
]
```

## Evaluation
The following table summarizes the scores obtained by the model overall and for each class.
| label | precision | recall | f1-score | support | |:--------------:|:---------:|:------:|:--------:|:-------:| | aloe | 0.989 | 0.807 | 0.889 | 109 | | burger | 1.000 | 0.471 | 0.640 | 119 | | cabbage | 0.907 | 0.970 | 0.937 | 100 | | candied_fruits | 0.952 | 0.988 | 0.970 | 161 | | carrots | 0.970 | 0.992 | 0.981 | 132 | | chips | 0.993 | 0.951 | 0.972 | 144 | | chocolate | 0.828 | 0.914 | 0.869 | 58 | | drinks | 0.982 | 0.948 | 0.965 | 58 | | fries | 0.935 | 0.783 | 0.852 | 129 | | grapes | 0.965 | 0.940 | 0.952 | 116 | | gummies | 0.880 | 0.971 | 0.923 | 136 | | ice-cream | 0.953 | 0.972 | 0.962 | 145 | | jelly | 0.906 | 0.875 | 0.890 | 88 | | noodles | 0.817 | 0.817 | 0.817 | 82 | | pickles | 0.933 | 0.960 | 0.946 | 174 | | pizza | 0.704 | 0.934 | 0.803 | 122 | | ribs | 0.796 | 0.755 | 0.775 | 98 | | salmon | 0.647 | 0.970 | 0.776 | 100 | | soup | 0.941 | 0.857 | 0.897 | 56 | | wings | 0.842 | 0.792 | 0.816 | 101 | | accuracy | 0.890 | 0.890 | 0.890 | 0 | | macro avg | 0.897 | 0.883 | 0.882 | 2228 | | weighted avg | 0.903 | 0.890 | 0.888 | 2228 | ## Questions? Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues).
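Since `Wav2Vec2ForSpeechClassification` is not importable from transformers, the prediction snippet above needs the soxan code linked under Questions. As a rough sketch of what such a head looks like (assuming mean pooling over the encoder output followed by a single linear layer; the real soxan class may differ, so load the checkpoint with that implementation for faithful results):

```python
import torch.nn as nn
from transformers import Wav2Vec2Model, Wav2Vec2PreTrainedModel
from transformers.modeling_outputs import SequenceClassifierOutput


class Wav2Vec2ForSpeechClassification(Wav2Vec2PreTrainedModel):
    """Sketch of a wav2vec2 clip classifier: encoder, mean pooling, linear head."""

    def __init__(self, config):
        super().__init__(config)
        self.wav2vec2 = Wav2Vec2Model(config)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_values, attention_mask=None):
        # (batch, time, hidden) frame-level features from the wav2vec2 encoder
        hidden_states = self.wav2vec2(input_values, attention_mask=attention_mask).last_hidden_state
        # Mean-pool over the time axis to get one vector per clip
        pooled = hidden_states.mean(dim=1)
        logits = self.classifier(pooled)
        return SequenceClassifierOutput(logits=logits)
```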
m3hrdadfi/wav2vec2-base-100k-gtzan-music-genres
2021-06-12T07:14:16.000Z
[ "pytorch", "wav2vec2", "transformers", "audio", "automatic-speech-recognition", "audio-classification" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "all_results.json", "config.json", "eval_results.json", "predict_results.txt", "preprocessor_config.json", "pytorch_model.bin", "test.csv", "train_results.json", "trainer_state.json", "training_args.bin" ]
m3hrdadfi
20
transformers
---
tags:
- audio
- automatic-speech-recognition
- audio-classification
---

# Music Genre Classification using Wav2Vec 2.0

## How to use

### Requirements

```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```

### Prediction

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor

import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/wav2vec2-base-100k-gtzan-music-genres"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate

# Note: Wav2Vec2ForSpeechClassification is a custom classification head, not part
# of transformers; it ships with the soxan repo (https://github.com/m3hrdadfi/soxan).
model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```

```python
def speech_file_to_array_fn(path, sampling_rate):
    speech_array, _sampling_rate = torchaudio.load(path)
    # Resample to 16 kHz (torchaudio's Resample defaults to new_freq=16000)
    resampler = torchaudio.transforms.Resample(_sampling_rate)
    speech = resampler(speech_array).squeeze().numpy()
    return speech


def predict(path, sampling_rate):
    speech = speech_file_to_array_fn(path, sampling_rate)
    inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
    inputs = {key: inputs[key].to(device) for key in inputs}

    with torch.no_grad():
        logits = model(**inputs).logits

    scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
    outputs = [{"Label": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
    return outputs
```

```python
path = "genres_original/disco/disco.00067.wav"
outputs = predict(path, sampling_rate)
```

```bash
[
 {'Label': 'blues', 'Score': '0.0%'},
 {'Label': 'classical', 'Score': '0.0%'},
 {'Label': 'country', 'Score': '0.0%'},
 {'Label': 'disco', 'Score': '99.8%'},
 {'Label': 'hiphop', 'Score': '0.0%'},
 {'Label': 'jazz', 'Score': '0.0%'},
 {'Label': 'metal', 'Score': '0.0%'},
 {'Label': 'pop', 'Score': '0.0%'},
 {'Label': 'reggae', 'Score': '0.0%'},
 {'Label': 'rock', 'Score': '0.0%'}
]
```

## Evaluation
The following table summarizes the scores obtained by the model overall and for each class.

|    label     | precision | recall | f1-score | support |
|:------------:|:---------:|:------:|:--------:|:-------:|
|    blues     |   0.792   | 0.950  |  0.864   |   20    |
|  classical   |   0.864   | 0.950  |  0.905   |   20    |
|   country    |   0.812   | 0.650  |  0.722   |   20    |
|    disco     |   0.778   | 0.700  |  0.737   |   20    |
|    hiphop    |   0.933   | 0.700  |  0.800   |   20    |
|     jazz     |   1.000   | 0.850  |  0.919   |   20    |
|    metal     |   0.783   | 0.900  |  0.837   |   20    |
|     pop      |   0.917   | 0.550  |  0.687   |   20    |
|    reggae    |   0.543   | 0.950  |  0.691   |   20    |
|     rock     |   0.611   | 0.550  |  0.579   |   20    |
|   accuracy   |   0.775   | 0.775  |  0.775   |    0    |
|  macro avg   |   0.803   | 0.775  |  0.774   |   200   |
| weighted avg |   0.803   | 0.775  |  0.774   |   200   |

## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues).
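`predict` returns one label/score pair per genre, so a small helper (hypothetical, assuming the output format shown above, where scores are percentage strings) extracts the top prediction:

```python
# Output of predict() as shown above, abbreviated for illustration.
outputs = [
    {"Label": "disco", "Score": "99.8%"},
    {"Label": "rock", "Score": "0.1%"},
]

def top_label(outputs):
    # Strip the trailing "%" so the scores compare numerically.
    return max(outputs, key=lambda o: float(o["Score"].rstrip("%")))

print(top_label(outputs))  # {'Label': 'disco', 'Score': '99.8%'}
```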
m3hrdadfi/wav2vec2-large-xlsr-estonian
2021-03-28T20:44:40.000Z
[ "pytorch", "wav2vec2", "et", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "all_results.json", "config.json", "eval_results.json", "predictions.csv", "preprocessor_config.json", "pytorch_model.bin", "result.bin", "sample1123.flac", "sample910.flac", "special_tokens_map.json", "tokenizer_config.json", "train_results.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
m3hrdadfi
18
transformers
--- language: et datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 widget: - label: Common Voice sample 1123 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-estonian/resolve/main/sample1123.flac - label: Common Voice sample 910 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-estonian/resolve/main/sample910.flac model-index: - name: XLSR Wav2Vec2 Estonian by Mehrdad Farahani results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice et type: common_voice args: et metrics: - name: Test WER type: wer value: 33.93 --- # Wav2Vec2-Large-XLSR-53-Estonian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Estonian using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: **Requirements** ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset import numpy as np import re import string import IPython.display as ipd chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"', "“", "%", "‘", "�", "–", "…", "_", "”", '“', '„' ] chars_to_mapping = { "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = text.replace("\u0307", " ").strip() text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian").to(device) dataset = load_dataset("common_voice", "et", split="test[:1%]") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": 
chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 10).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: õhulossid lagunevad ning ees ootab maapind predicted: õhulassid lagunevad ning ees ootab maapind --- reference: milliseks kiievisse pääsemise nimel võistlev muusik soome muusikamaastiku hetkeseisu hindab ning kas ta ka ennast sellel tulevikus tegutsemas näeb kuuled videost predicted: milliseks gievisse pääsemise nimel võitlev muusiks soome muusikama aastiku hetke seisu hindab ning kas ta ennast selle tulevikus tegutsemast näeb kuulad videost --- reference: näiteks kui pool seina on tehtud tekib tunne et tahaks tegelikult natuke teistsugust ja hakkame otsast peale predicted: näiteks kui pool seine on tehtud tekib tunnetahaks tegelikult matuka teistsugust jahappanna otsast peane --- reference: neuroesteetilised katsed näitavad et just nägude vaatlemine aktiveerib inimese aju esteetilist keskust predicted: neuroaisteetiliselt katsed näitaval et just nägude vaatlemine aptiveerid inimese aju est eedilist keskust --- reference: paljud inimesed kindlasti kadestavad teid kuid ei julge samamoodi vabalt võtta predicted: paljud inimesed kindlasti kadestavadteid kuid ei julge sama moodi vabalt võtta --- reference: parem on otsida pileteid inkognito veebi kaudu predicted: parem on otsida pileteid ning kognitu veebikaudu --- reference: ja vot siin ma jäin vaikseks predicted: ja vat siisma ja invaikseks --- reference: mida sa iseendale juubeli puhul soovid predicted: mida saise endale jubeli puhul soovid --- reference: kuumuse ja kõrge temperatuuri tõttu kuivas tühjadel karjamaadel rohi mis muutus kergesti süttivaks predicted: kuumuse ja kõrge temperatuuri tõttu kuivast ühjadal karjamaadel rohi mis muutus kergesti süttivaks --- reference: ilmselt on inimesi kelle jaoks on see hea lahendus predicted: ilmselt on inimesi kelle jaoks on see hea lahendus --- ``` ## Evaluation The model can be evaluated as follows on the Estonian test data of Common Voice. 
```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset, load_metric import numpy as np import re import string chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"', "“", "%", "‘", "�", "–", "…", "_", "”", '“', '„' ] chars_to_mapping = { "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = text.replace("\u0307", " ").strip() text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian").to(device) dataset = load_dataset("common_voice", "et", split="test") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) wer = load_metric("wer") print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"]))) ``` **Test Result**: - WER: 33.93% ## Training & Report The Common Voice `train`, `validation` datasets were used for training. You can see the training states [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_estonian/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Estonian--Vmlldzo1NjA1MTI?accessToken=k2b2g3a2i12m1sdwf13q8b226pplmmyw12joxo6vk38eb4djellfzmn9fp2725fw) The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Estonian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
m3hrdadfi/wav2vec2-large-xlsr-georgian
2021-04-09T05:03:55.000Z
[ "pytorch", "wav2vec2", "ka", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "normalizer.py", "predictions.csv", "preprocessor_config.json", "pytorch_model.bin", "sample566.flac", "sample95.flac", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
m3hrdadfi
56
transformers
--- language: ka datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 widget: - label: Common Voice sample 566 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-georgian/resolve/main/sample566.flac - label: Common Voice sample 95 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-georgian/resolve/main/sample95.flac model-index: - name: XLSR Wav2Vec2 Georgian by Mehrdad Farahani results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ka type: common_voice args: ka metrics: - name: Test WER type: wer value: 43.86 --- # Wav2Vec2-Large-XLSR-53-Georgian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Georgian using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: **Requirements** ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer ``` **Normalizer** ```bash !wget -O normalizer.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-lithuanian/raw/main/normalizer.py ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset import numpy as np import re import string import IPython.display as ipd from normalizer import normalizer def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian").to(device) dataset = load_dataset("common_voice", "ka", split="test[:1%]") dataset = dataset.map( normalizer, fn_kwargs={"remove_extra_space": True}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 20).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: პრეზიდენტობისას ბუში საქართველოს და უკრაინის დემოკრატიულ მოძრაობების და ნატოში გაწევრიანების აქტიური მხარდამჭერი იყო predicted: პრეზიდენტო ვისას ბუში საქართველოს და უკრაინის დემოკრატიულ მოძრაობების და ნატიში დაწევრიანების აქტიური მხარდამჭერი იყო --- reference: შესაძლებელია მისი დამონება და მსახურ დემონად გადაქცევა predicted: შესაძლებელია მისი დამონებათ და მსახურდემანად გადაქცევა --- 
reference: ეს გამოსახულებები აღბეჭდილი იყო მოსკოვის დიდი მთავრებისა და მეფეების ბეჭდებზე predicted: ეს გამოსახულებები აღბეჭდილი იყო მოსკოვის დიდი მთავრებისა და მეფეების ბეჭდებზე --- reference: ჯოლიმ ოქროს გლობუსისა და კინომსახიობთა გილდიის ნომინაციები მიიღო predicted: ჯოლი მოქროს გლობუსისა და კინამსახიობთა გილდიის ნომინაციები მიიღო --- reference: შემდგომში საქალაქო ბიბლიოთეკა სარაიონო ბიბლიოთეკად გადაკეთდა გაიზარდა წიგნადი ფონდი predicted: შემდღომში საქალაქო ბიბლიოთეკა სარაიონო ბიბლიოთეკად გადაკეთა გაიზარდა წიგნადი ფოვდი --- reference: აბრამსი დაუკავშირდა მირანდას და ორი თვის განმავლობაში ისინი მუშაობდნენ აღნიშნული სცენის თანმხლებ მელოდიაზე predicted: აბრამში და უკავშირდა მირანდეს და ორითვის განმავლობაში ისინი მუშაობდნენა აღნიშნულის ჩენის მთამხლევით მელოდიაში --- reference: ამჟამად თემთა პალატის ოპოზიციის ლიდერია ლეიბორისტული პარტიის ლიდერი ჯერემი კორბინი predicted: ამჟამად თემთა პალატის ოპოზიციის ლიდერია ლეიბურისტული პარტიის ლიდერი ჯერემი კორვინი --- reference: ორი predicted: ორი --- reference: მას შემდეგ იგი კოლექტივის მუდმივი წევრია predicted: მას შემდეგ იგი კოლექტივის ფუდ მივი წევრია --- reference: აზერბაიჯანულ ფილოსოფიას შეიძლება მივაკუთვნოთ რუსეთის საზოგადო მოღვაწე ჰეიდარ ჯემალი predicted: აზერგვოიჯანალ ფილოსოფიას შეიძლება მივაკუთვნოთ რუსეთის საზოგადო მოღვაწე ჰეიდარ ჯემალი --- reference: ბრონქსში ჯერომის ავენიუ ჰყოფს გამჭოლ ქუჩებს აღმოსავლეთ და დასავლეთ ნაწილებად predicted: რონგში დერომიწ ავენილ პოფს გამ დოლფურქებს აღმოსავლეთ და დასავლეთ ნაწილებად --- reference: ჰაერი არის ჟანგბადის ის ძირითადი წყარო რომელსაც საჭიროებს ყველა ცოცხალი ორგანიზმი predicted: არი არის ჯამუბადესის ძირითადი წყარო რომელსაც საჭიროოებს ყველა ცოცხალი ორგანიზმი --- reference: ჯგუფი უმეტესწილად ასრულებს პოპმუსიკის ჟანრის სიმღერებს predicted: ჯგუფიუმეტესწევად ასრულებს პოპნუსიკის ჟანრის სიმრერებს --- reference: ბაბილინა მუდმივად ცდილობდა შესაძლებლობების ფარგლებში მიეღო ცოდნა და ახალი ინფორმაცია predicted: ბაბილინა მუდმივა ცდილობდა შესაძლებლობების ფარგლებში მიიღო ცოტნა და ახალი ინფორმაცია --- reference: მრევლის რწმენით რომელი ჯგუფიც გაიმარჯვებდა მთელი წლის მანძილზე სიუხვე და ბარაქა არ მოაკლდებოდა predicted: მრევრის რწმენით რომელიჯგუფის გაიმარჯვებდა მთელიჭლის მანძილზა სიუყვეტაბარაქა არ მოაკლდებოდა --- reference: ნინო ჩხეიძეს განსაკუთრებული ღვაწლი მიუძღვის ქუთაისისა და რუსთაველის თეატრების შემოქმედებით ცხოვრებაში predicted: მინო ჩხეიძეს განსაკუთრებული ღოვაწლი მიოცხვის ქუთაისისა და რუსთაველის თეატრების შემოქმედებით ცხოვრებაში --- reference: იგი სამი დიალექტისგან შედგება predicted: იგი სამი დიალეთის გან შედგება --- reference: ფორმით სირაქლემებს წააგვანან predicted: ომიცი რაქლემებს ააგვანამ --- reference: დანი დაიბადა კოლუმბუსში ოჰაიოში predicted: დონი დაიბაოდა კოლუმბუსში ოხვაიოში --- reference: მშენებლობისათვის გამოიყო ადგილი ყოფილი აეროპორტის რაიონში predicted: შენებლობისათვის გამოიყო ადგილი ყოფილი აეროპორტის რაიონში --- ``` ## Evaluation The model can be evaluated as follows on the Georgian test data of Common Voice. 
```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset, load_metric import numpy as np import re import string from normalizer import normalizer def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian").to(device) dataset = load_dataset("common_voice", "ka", split="test") dataset = dataset.map( normalizer, fn_kwargs={"remove_extra_space": True}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) wer = load_metric("wer") print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"]))) ``` **Test Result**: - WER: 43.86% ## Training & Report The Common Voice `train`, `validation` datasets were used for training. You can see the training states [here](https://wandb.ai/m3hrdadfi/wav2vec2_large_xlsr_ka/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Georgian--Vmlldzo1OTQyMzk?accessToken=ytf7jseje66a3byuheh68o6a7215thjviscv5k2ewl5hgq9yqr50yxbko0bnf1d3) The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Georgian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb) ## Questions? Post a Github issue on the [Wav2Vec](https://github.com/m3hrdadfi/wav2vec) repo.
m3hrdadfi/wav2vec2-large-xlsr-icelandic
2021-04-22T03:57:41.000Z
[ "pytorch", "wav2vec2", "is", "dataset:malromur", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "malromur_test.csv", "normalizer.py", "predictions.csv", "preprocessor_config.json", "pytorch_model.bin", "sample1608.flac", "sample3860.flac", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.json", "num2words/__init__.py", "num2words/base.py", "num2words/compat.py", "num2words/currency.py", "num2words/lang_EU.py", "num2words/lang_IS.py", "num2words/utils.py" ]
m3hrdadfi
16
transformers
---
language: is
datasets:
- malromur
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
widget:
- label: Malromur sample 1608
  src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/resolve/main/sample1608.flac
- label: Malromur sample 3860
  src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/resolve/main/sample3860.flac
model-index:
- name: XLSR Wav2Vec2 Icelandic by Mehrdad Farahani
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Malromur is
      type: malromur
      args: is
    metrics:
    - name: Test WER
      type: wer
      value: 09.21
---

# Wav2Vec2-Large-XLSR-53-Icelandic

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Icelandic using [Malromur](https://clarin.is/en/resources/malromur/). When using this model, make sure that your speech input is sampled at 16kHz.

## Usage
The model can be used directly (without a language model) as follows:

**Requirements**
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
!pip install num2words
```

**Normalizer**
```bash
# num2word packages
# Original source: https://github.com/savoirfairelinux/num2words
!mkdir -p ./num2words
!wget -O num2words/__init__.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/num2words/__init__.py
!wget -O num2words/base.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/num2words/base.py
!wget -O num2words/compat.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/num2words/compat.py
!wget -O num2words/currency.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/num2words/currency.py
!wget -O num2words/lang_EU.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/num2words/lang_EU.py
!wget -O num2words/lang_IS.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/num2words/lang_IS.py
!wget -O num2words/utils.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/num2words/utils.py

# Malromur_test selected based on gender and age
!wget -O malromur_test.csv https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/malromur_test.csv

# Normalizer
!wget -O normalizer.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/normalizer.py
```

**Prediction**
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset

import numpy as np
import re
import string

import IPython.display as ipd

from normalizer import Normalizer

normalizer = Normalizer(lang="is")


def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    speech_array = speech_array.squeeze().numpy()
    speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
    batch["speech"] = speech_array
    return batch


def predict(batch):
    features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)

    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    return batch


device = 
torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-icelandic") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-icelandic").to(device) dataset = load_dataset("csv", data_files={"test": "./malromur_test.csv"})["test"] dataset = dataset.map( normalizer, fn_kwargs={"do_lastspace_removing": True, "text_key_name": "cleaned_sentence"}, remove_columns=list(set(dataset.column_names) - set(['cleaned_sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict, batched=True, batch_size=8) max_items = np.random.randint(0, len(result), 20).tolist() for i in max_items: reference, predicted = result["cleaned_sentence"][i], result["predicted"][i] print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: eða eitthvað annað dýr predicted: eða eitthvað annað dýr --- reference: oddgerður predicted: oddgerður --- reference: eiðný predicted: eiðný --- reference: löndum predicted: löndum --- reference: tileinkaði bróður sínum markið predicted: tileinkaði bróður sínum markið --- reference: þetta er svo mikill hégómi predicted: þetta er svo mikill hégómi --- reference: timarit is predicted: timarit is --- reference: stefna strax upp aftur predicted: stefna strax upp aftur --- reference: brekkuflöt predicted: brekkuflöt --- reference: áætlunarferð frestað vegna veðurs predicted: áætluna ferð frestað vegna veðurs --- reference: sagði af sér vegna kláms predicted: sagði af sér vegni kláms --- reference: grímúlfur predicted: grímúlgur --- reference: lýsti sig saklausan predicted: lýsti sig saklausan --- reference: belgingur is predicted: belgingur is --- reference: sambía predicted: sambía --- reference: geirastöðum predicted: geirastöðum --- reference: varð tvisvar fyrir eigin bíl predicted: var tvisvar fyrir eigin bíl --- reference: reykjavöllum predicted: reykjavöllum --- reference: miklir menn eru þeir þremenningar predicted: miklir menn eru þeir þremenningar --- reference: handverkoghonnun is predicted: handverkoghonnun is --- ``` ## Evaluation The model can be evaluated as follows on the test data of Malromur. 
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric

import numpy as np
import re
import string

from normalizer import Normalizer

normalizer = Normalizer(lang="is")


def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    speech_array = speech_array.squeeze().numpy()
    speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
    batch["speech"] = speech_array
    return batch


def predict(batch):
    features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)

    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    return batch


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-icelandic")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-icelandic").to(device)

dataset = load_dataset("csv", data_files={"test": "./malromur_test.csv"})["test"]
dataset = dataset.map(
    normalizer,
    fn_kwargs={"do_lastspace_removing": True, "text_key_name": "cleaned_sentence"},
    remove_columns=list(set(dataset.column_names) - set(['cleaned_sentence', 'path']))
)

dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict, batched=True, batch_size=8)

wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["cleaned_sentence"])))
```

**Test Result**:
- WER: 09.21%

## Training & Report
The Malromur `train` and `validation` splits were used for training.

You can see the training states [here](https://wandb.ai/m3hrdadfi/wav2vec2_large_xlsr_is/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-Icelandic--Vmlldzo2Mjk3ODc?accessToken=j7neoz71mce1fkzt0bch4j0l50witnmme07xe90nvs769kjjtbwneu2wfz3oip16)

The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Icelandic_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)

## Questions?
Post a Github issue on the [Wav2Vec](https://github.com/m3hrdadfi/wav2vec) repo.
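`load_metric("wer")` computes the corpus-level word error rate over all predictions. As a quick per-utterance sanity check, `jiwer` (already installed in the requirements above) gives the same metric directly; using one reference/predicted pair from the output shown earlier:

```python
from jiwer import wer

# One substitution ("varð" -> "var") in a five-word reference gives WER = 0.2
print(wer("varð tvisvar fyrir eigin bíl", "var tvisvar fyrir eigin bíl"))  # 0.2
```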
m3hrdadfi/wav2vec2-large-xlsr-lithuanian
2021-04-09T04:50:56.000Z
[ "pytorch", "wav2vec2", "lt", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "normalizer.py", "predictions.csv", "preprocessor_config.json", "pytorch_model.bin", "sample11.flac", "sample74.flac", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
m3hrdadfi
27
transformers
--- language: lt datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 widget: - label: Common Voice sample 11 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-lithuanian/resolve/main/sample11.flac - label: Common Voice sample 74 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-lithuanian/resolve/main/sample74.flac model-index: - name: XLSR Wav2Vec2 Lithuanian by Mehrdad Farahani results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice lt type: common_voice args: lt metrics: - name: Test WER type: wer value: 34.66 --- # Wav2Vec2-Large-XLSR-53-Lithuanian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Lithuanian using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: **Requirements** ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer ``` **Normalizer** ```bash !wget -O normalizer.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-lithuanian/raw/main/normalizer.py ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset import numpy as np import re import string import IPython.display as ipd from normalizer import normalizer def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-lithuanian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-lithuanian").to(device) dataset = load_dataset("common_voice", "lt", split="test[:1%]") dataset = dataset.map( normalizer, fn_kwargs={"remove_extra_space": True}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 20).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: jos tikslas buvo rasti kelią į ramųjį vandenyną šiaurės amerikoje predicted: jos tikstas buvo rasikelia į ramų į vandenyna šiaurės amerikoje --- reference: pietrytinėje dalyje likusių katalikų kapinių teritorija po antrojo pasaulinio karo dar padidėjo predicted: pietrytinė daljelikusių gatalikų kapinių teritoriją pontro pasaulnio karo dar padidėjo --- reference: koplyčioje 
pakabintas aušros vartų marijos paveikslas predicted: koplyčioje pakagintas aušos fortų marijos paveikslas --- reference: yra politinių debatų vedėjas predicted: yra politinių debatų vedėjas --- reference: žmogui taip pat gali būti mirtinai pavojingi predicted: žmogui taip pat gali būti mirtinai pavojingi --- reference: tuo pačiu metu kijeve nuverstas netekęs vokietijos paramos skoropadskis predicted: tuo pačiu metu kiei venų verstas netekės vokietijos paramos kropadskis --- reference: visos dvylika komandų tarpusavyje sužaidžia po dvi rungtynes predicted: visos dvylika komandų tarpuso vysų žaidžia po dvi rungtynės --- reference: kaukazo regioną sudaro kaukazo kalnai ir gretimos žemumos predicted: kau kazo regioną sudaro kaukazo kalnai ir gretimos žemumus --- reference: tarptautinių ir rusiškų šaškių kandidatas į sporto meistrus predicted: tarptautinio ir rusiškos šaškių kandidatus į sporto meistrus --- reference: prasideda putorano plynaukštės pietiniame pakraštyje predicted: prasideda futorano prynaukštės pietiniame pakraštyje --- reference: miestas skirstomas į senamiestį ir naujamiestį predicted: miestas skirstomas į senamėsti ir naujamiestė --- reference: tais pačiais metais pelnė bronzą pasaulio taurės kolumbijos etape komandinio sprinto rungtyje predicted: tais pačiais metais pelnį mronsa pasaulio taurės kolumbijos etape komandinio sprento rungtyje --- reference: prasideda putorano plynaukštės pietiniame pakraštyje predicted: prasideda futorano prynaukštės pietiniame pakraštyje --- reference: moterų tarptautinės meistrės vardas yra viena pakopa žemesnis už moterų tarptautinės korespondencinių šachmatų didmeistrės predicted: moterų tarptautinės meistrės vardas yra gana pakopo žymesnis už moterų tarptautinės kūrespondencinių šachmatų didmesčias --- reference: teritoriją dengia tropinės džiunglės predicted: teritorija dengia tropinės žiunglės --- reference: pastaroji dažnai pereina į nimcovičiaus gynybą arba bogoliubovo gynybą predicted: pastaruoji dažnai pereina nimcovičiaus gynyba arba bogalių buvo gymyba --- reference: už tai buvo suimtas ir tris mėnesius sėdėjo butyrkų kalėjime predicted: užtai buvo sujumtas ir tris mėne susiedėjo butirkų kalėjime --- reference: tai didžiausias pagal gyventojų skaičių regionas predicted: tai didžiausias pagal gyventojų skaičių redionus --- reference: vilkyškių miške taip pat auga raganų eglė predicted: vilkiškimiškė taip pat auga ragano eglė --- reference: kitas gavo skaraitiškės dvarą su palivarkais predicted: kitas gavos karaitiškės dvarą spolivarkais --- ``` ## Evaluation The model can be evaluated as follows on the test data of Common Voice. 
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric

import numpy as np
import re
import string

from normalizer import normalizer


def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    speech_array = speech_array.squeeze().numpy()
    speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
    batch["speech"] = speech_array
    return batch


def predict(batch):
    features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)

    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)[0]
    return batch


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-lithuanian").to(device)

dataset = load_dataset("common_voice", "lt", split="test")
dataset = dataset.map(
    normalizer,
    fn_kwargs={"remove_extra_space": True},
    remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)

dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)

wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```

**Test Result**:
- WER: 34.66%

## Training & Report
The Common Voice `train`, `validation` datasets were used for training.

You can see the training states [here](https://wandb.ai/m3hrdadfi/wav2vec2_large_xlsr_lt/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Lithuanian--Vmlldzo1OTM1MTU?accessToken=kdkpara4hcmjvrlpbfsnu4s8cdk3a0xeyrb84ycpr4k701n13hzr9q7s60b00swx)

The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Lithuanian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)

## Questions?
Post a Github issue on the [Wav2Vec](https://github.com/m3hrdadfi/wav2vec) repo.
m3hrdadfi/wav2vec2-large-xlsr-persian-shemo
2021-03-29T13:24:56.000Z
[ "pytorch", "wav2vec2", "fa", "dataset:shemo", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "all_results.json", "config.json", "eval_results.json", "predictions.csv", "preprocessor_config.json", "pytorch_model.bin", "result.bin", "sample250.flac", "sample52.flac", "special_tokens_map.json", "tokenizer_config.json", "train_results.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
m3hrdadfi
78
transformers
--- language: fa datasets: - shemo tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 widget: - label: ShEMO sample 250 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-shemo/resolve/main/sample250.flac - label: ShEMO sample 52 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-shemo/resolve/main/sample52.flac model-index: - name: XLSR Wav2Vec2 Persian (Farsi) ShEMO by Mehrdad Farahani results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: ShEMO fa type: shemo args: fa metrics: - name: Test WER type: wer value: 30.00 --- # Wav2Vec2-Large-XLSR-53-Persian ShEMO Fine-tuned [Wav2Vec2-Large-XLSR-53-Persian V2](https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v2) in Persian (Farsi) using [ShEMO](https://www.kaggle.com/mansourehk/shemo-persian-speech-emotion-detection-database). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: **Requirements** ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer !pip install hazm !pip install num2fawords ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset from num2fawords import words, ordinal_words import numpy as np import hazm import re import string import IPython.display as ipd _normalizer = hazm.Normalizer() chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "؟", "?", "«", "»", "،", "(", ")", "؛", "'ٔ", "٬",'ٔ', ",", "?", ".", "!", "-", ";", ":",'"',"“", "%", "‘", "”", "�", "–", "…", "_", "”", '“', '„', 'ā', 'š', # "ء", ] # In case of farsi chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits) chars_to_mapping = { 'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی', 'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی", "ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع", "ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", 'ﺍ': "ا", 'ة': "ه", 'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش", 'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ", # "ها": " ها", "ئ": "ی", "a": " ای ", "b": " بی ", "c": " سی ", "d": " دی ", "e": " ایی ", "f": " اف ", "g": " جی ", "h": " اچ ", "i": " آی ", "j": " جی ", "k": " کی ", "l": " ال ", "m": " ام ", "n": " ان ", "o": " او ", "p": " پی ", "q": " کیو ", "r": " آر ", "s": " اس ", "t": " تی ", "u": " یو ", "v": " وی ", "w": " دبلیو ", "x": " اکس ", "y": " وای ", "z": " زد ", "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = _normalizer.normalize(text) text = multiple_replace(text, chars_to_mapping) text = 
remove_special_characters(text, chars_to_ignore_regex) text = re.sub(" +", " ", text) _text = [] for word in text.split(): try: word = int(word) _text.append(words(word)) except: _text.append(word) text = " ".join(_text) + " " text = text.strip() + " " batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo").to(device) dataset = load_dataset("csv", data_files={"test": "/content/fa/dataset/test.csv"}, delimiter="\t")["test"] dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 20).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: همون شبی که قسم خوردی منو از جونت بیشتر دوست داری و تا آخر عمر کنار من می مونی همون شبی که به من وعده دادی بزرگترین جشن های ازدواج رو برام بگیری predicted: همون شبی که قسم خوردی منو از جونت بیشتر دوستاری و تا آخر عمر کنار من می مونیمو یبی که به من وعض دادین بزرگترین جشن های ازدواج و برام بگیری --- reference: خودتون دم به ساعت فحشش می دین کتکش می زنین بس نیست predicted: خودتون دم به ساعت فشش می دیم کتاکش می زنیم بس نیست --- reference: خونه predicted: خونه --- reference: شلوغش نکن predicted: شلوغش نکن --- reference: برای بقیه سوییت هایی در نظر گرفتم predicted: برای بقی سویید هایی در نظر گرفتم --- reference: برو گمشو برو گمشو برو بیرون predicted: برو گمشو برو گمشو برو بیرون --- reference: فقط یک سال بعد از خاتمه جنگ بود که حقیقت رو فهمیدی predicted: فقط یک سال بعد از خاتمه جنگ بود که حقیقت و فهمیدید --- reference: غیر از اون دو نفری که اینجا خوابیدند کسان دیگه ای از دوستانشو به تو معرفی نکرده predicted: غیر از اون دو نفری که اینجا خوابیدند کسانه دیگه ای از دوستانشو به تو معرفی نکرده --- reference: من می دونم اینجایی درو واز کن کویی کوئک predicted: من می دونم این جایی د رو واز کن کوری فکر --- reference: نویسنده باید چهار تا چشم داشته باشه چهار تا گوش predicted: نویسند باید چهار تا چشم داشته باشه و چهار تا گوش --- reference: غیر از اون دو نفری که اینجا خوابیدند کسان دیگه ای از دوستانشو به تو معرفی نکرده predicted: غیر از اون دو نفری که اینجا خوابیدند کسانه دیگه ای از دوستانشو به تو معرفی نکرده --- reference: پس همراهان من چه می کنن چه می کنن که این سرکرده کولی ها تونسته خودشو اینجا برسونه predicted: پس همرا حال من چه می کنن چه می کنن که این سرکرده کلی ها تونسته خودش رو اینجا برسونه --- reference: گوش بدید مادمازل حقیقت اینه که من دلم می خواد به شما کمک کنم 
زیبایی و جوانی شما دل منو به رحم میاره به من اعتماد کنید دلم می خواد بتونم شما رو از مرگ نجات بدم predicted: هوش بدید مادماز حقیقت اینه که من دلم می خواد به شما کمک کنم زیبای و جوانی شما دل منو به رحم می آره به من اعتماد کنید دلم می خواد بتونم شما رو از مرگ نجات بدم --- reference: قربان به نظر می رسه شما نه تنها به مرگ رونالد دریو بلکه به مرگ خانم مونرو هم مشکوکید predicted: قربان به نظر می رسه شما نه تن ها به مرگ رونال گریو بلکه به مرگ خانم مونرا مشکوکین --- reference: برای اینکه شما رو دوست دارم predicted: برای اینکه شما رو دوست دارم --- reference: مرتبه اول دنبال جسدی می گشتن که انداخته بودن کنار خیابون predicted: حر تبه اول دنبال جسدی می گشتند که انداخته بودن کنار خیابون --- reference: خونه predicted: خونه --- reference: کدبانوی جدید این طبقه هستم predicted: کدبانوی جدید این طبقه هستم --- reference: و این برات خیلی گرون تموم شد predicted: و این برات خیلی گرون تموم شد --- reference: خب چرا نمی دین به خودشون predicted: خبچرا نمی تون به خودشون ``` ## Evaluation The model can be evaluated as follows on the Persian (Farsi) test data of Common Voice. ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset, load_metric from num2fawords import words, ordinal_words import numpy as np import hazm import re import string _normalizer = hazm.Normalizer() chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "؟", "?", "«", "»", "،", "(", ")", "؛", "'ٔ", "٬",'ٔ', ",", "?", ".", "!", "-", ";", ":",'"',"“", "%", "‘", "”", "�", "–", "…", "_", "”", '“', '„', 'ā', 'š', # "ء", ] # In case of farsi chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits) chars_to_mapping = { 'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی', 'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی", "ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع", "ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", 'ﺍ': "ا", 'ة': "ه", 'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش", 'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ", # "ها": " ها", "ئ": "ی", "a": " ای ", "b": " بی ", "c": " سی ", "d": " دی ", "e": " ایی ", "f": " اف ", "g": " جی ", "h": " اچ ", "i": " آی ", "j": " جی ", "k": " کی ", "l": " ال ", "m": " ام ", "n": " ان ", "o": " او ", "p": " پی ", "q": " کیو ", "r": " آر ", "s": " اس ", "t": " تی ", "u": " یو ", "v": " وی ", "w": " دبلیو ", "x": " اکس ", "y": " وای ", "z": " زد ", "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = _normalizer.normalize(text) text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) text = re.sub(" +", " ", text) _text = [] for word in text.split(): try: word = int(word) _text.append(words(word)) except: _text.append(word) text = " ".join(_text) + " " text = text.strip() + " " batch["sentence"] = text return batch def speech_file_to_array_fn(batch): 
speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo").to(device) dataset = load_dataset("csv", data_files={"test": "/content/fa/dataset/test.csv"}, delimiter="\t")["test"] dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) wer = load_metric("wer") print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"]))) ``` **Test Result:** - WER: 31.00% ## Training The ShEMO `train`, `validation` datasets were used for training. The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Persian_ShEMO_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
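For quick checks outside the CSV-based evaluation flow above, a single recording can be transcribed directly. This is a minimal sketch that reuses only the loading calls shown in the card; `sample.flac` is a placeholder path.

```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo").to(device)

# "sample.flac" is a placeholder path; the model expects 16kHz input.
speech_array, sampling_rate = torchaudio.load("sample.flac")
speech = librosa.resample(speech_array.squeeze().numpy(), sampling_rate, 16_000)

features = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(features.input_values.to(device),
                   attention_mask=features.attention_mask.to(device)).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```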
m3hrdadfi/wav2vec2-large-xlsr-persian-v2
2021-04-21T11:37:33.000Z
[ "pytorch", "wav2vec2", "fa", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "all_results.json", "config.json", "eval_results.json", "predictions.csv", "preprocessor_config.json", "pytorch_model.bin", "pytorch_model_mob.bin", "sample4024.flac", "sample4084.flac", "special_tokens_map.json", "tokenizer_config.json", "train_results.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
m3hrdadfi
76
transformers
--- language: fa datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 widget: - label: Common Voice sample 4024 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v2/resolve/main/sample4024.flac - label: Common Voice sample 4084 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v2/resolve/main/sample4084.flac model-index: - name: XLSR Wav2Vec2 Persian (Farsi) V2 by Mehrdad Farahani results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice fa type: common_voice args: fa metrics: - name: Test WER type: wer value: 31.92 --- # Wav2Vec2-Large-XLSR-53-Persian V2 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Persian (Farsi) using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: **Requirements** ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer !pip install hazm ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset import numpy as np import hazm import re import string import IPython.display as ipd _normalizer = hazm.Normalizer() chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "؟", "?", "«", "»", "،", "(", ")", "؛", "'ٔ", "٬",'ٔ', ",", "?", ".", "!", "-", ";", ":",'"',"“", "%", "‘", "”", "�", "–", "…", "_", "”", '“', '„', 'ā', 'š', # "ء", ] # In case of farsi chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits) chars_to_mapping = { 'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی', 'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی", "ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع", "ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", 'ﺍ': "ا", 'ة': "ه", 'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش", 'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ", # "ها": " ها", "ئ": "ی", "a": " ای ", "b": " بی ", "c": " سی ", "d": " دی ", "e": " ایی ", "f": " اف ", "g": " جی ", "h": " اچ ", "i": " آی ", "j": " جی ", "k": " کی ", "l": " ال ", "m": " ام ", "n": " ان ", "o": " او ", "p": " پی ", "q": " کیو ", "r": " آر ", "s": " اس ", "t": " تی ", "u": " یو ", "v": " وی ", "w": " دبلیو ", "x": " اکس ", "y": " وای ", "z": " زد ", "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = _normalizer.normalize(text) text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) text = re.sub(" +", " ", text) text = 
text.strip() + " " batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2").to(device) dataset = load_dataset("common_voice", "fa", split="test[:1%]") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 20).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: عجم زنده کردم بدین پارسی predicted: عجم زنده کردم بدین پارسی --- reference: لباس هایم کی آماده خواهند شد predicted: لباس خایم کی آماده خواهند شد --- reference: با مهان همنشین شدم predicted: با مهان همنشین شدم --- reference: یکی از بهترین فیلم هایی بود که در این سال ها دیدم predicted: یکی از بهترین فیلمهایی بود که در این سالها دیدم --- reference: اون خیلی بد ماساژ میده predicted: اون خیلی بد ماساژ میده --- reference: هنوزم بزرگترین دستاورد دولت روحانی اینه که رییسی رییسجمهور نشد predicted: هنوزم بزرگترین دستآوردار دولت روانیاینه که ریسی ریسیومرو نشد --- reference: واسه بدنسازی آماده ای predicted: واسه بعدنسافی آماده ای --- reference: خدای من شماها سالمین predicted: خدای من شما ها سالمین --- reference: بهشون ثابت میشه که دروغ نگفتم predicted: بهشون ثابت میشه که دروغ مگفتم --- reference: آیا ممکن است یک پتو برای من بیاورید predicted: سف کمیتخ لظا --- reference: نزدیک جلو predicted: رزیک جلو --- reference: شایعه پراکن دربارهاش دروغ و شایعه می سازد predicted: شایه پراکن دربارهاش دروغ و شایعه می سازد --- reference: وقتی نیاز است که یک چهره دوستانه بیابند predicted: وقتی نیاز است یک چهره دوستانه بیابند --- reference: ممکنه رادیواکتیوی چیزی باشه predicted: ممکنه به آدیوتیوی چیزی باشه --- reference: دهنتون رو ببندید predicted: دهن جن رو ببندید --- reference: پاشیم بریم قند و شکر و روغنمون رو بگیریم تا تموم نشده predicted: پاشین بریم قند و شکر و روغنمون رو بگیریم تا تموم نشده --- reference: اما قبل از تمام کردن بحث تاریخی باید ذکری هم از ناپیکس بکنیم predicted: اما قبل از تمام کردن بحث تاریخی باید ذکری هم از نایپکس بکنیم --- reference: لطفا کپی امضا شده قرارداد را بازگردانید predicted: لطفا کپی امضال شده قرار داد را باز گردانید --- reference: خیلی هم چیز مهمی نیست predicted: خیلی هم چیز مهمی نیست --- reference: شایعه پراکن دربارهاش دروغ و شایعه می سازد predicted: شایه پراکن دربارهاش دروغ و شایعه می سازد --- ``` ## Evaluation The model can be evaluated as follows on the Persian (Farsi) test data of Common Voice. 
```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset, load_metric import numpy as np import hazm import re import string _normalizer = hazm.Normalizer() chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "؟", "?", "«", "»", "،", "(", ")", "؛", "'ٔ", "٬",'ٔ', ",", "?", ".", "!", "-", ";", ":",'"',"“", "%", "‘", "”", "�", "–", "…", "_", "”", '“', '„', 'ā', 'š', # "ء", ] # In case of farsi chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits) chars_to_mapping = { 'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی', 'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی", "ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع", "ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", 'ﺍ': "ا", 'ة': "ه", 'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش", 'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ", # "ها": " ها", "ئ": "ی", "a": " ای ", "b": " بی ", "c": " سی ", "d": " دی ", "e": " ایی ", "f": " اف ", "g": " جی ", "h": " اچ ", "i": " آی ", "j": " جی ", "k": " کی ", "l": " ال ", "m": " ام ", "n": " ان ", "o": " او ", "p": " پی ", "q": " کیو ", "r": " آر ", "s": " اس ", "t": " تی ", "u": " یو ", "v": " وی ", "w": " دبلیو ", "x": " اکس ", "y": " وای ", "z": " زد ", "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = _normalizer.normalize(text) text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) text = re.sub(" +", " ", text) text = text.strip() + " " batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2").to(device) dataset = load_dataset("common_voice", "fa", split="test") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) wer = 
load_metric("wer") print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"]))) ``` **Test Result:** - WER: 31.92% ## Training The Common Voice `train`, `validation` datasets were used for training. You can see the training states [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_persian/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Persian--Vmlldzo1NjY1NjU?accessToken=pspukt0liicopnwe93wo1ipetqk0gzkuv8669g00wc6hcesk1fh0rfkbd0h46unk) The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Persian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
m3hrdadfi/wav2vec2-large-xlsr-persian
2021-03-29T18:08:19.000Z
[ "pytorch", "wav2vec2", "fa", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "preprocessor_config.json", "pytorch_model.bin", "sample1671.flac", "sample687.flac", "special_tokens_map.json", "test_predicted.csv", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
m3hrdadfi
626
transformers
--- language: fa datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 widget: - label: Common Voice sample 687 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian/resolve/main/sample687.flac - label: Common Voice sample 1671 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian/resolve/main/sample1671.flac model-index: - name: XLSR Wav2Vec2 Persian (Farsi) by Mehrdad Farahani results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice fa type: common_voice args: fa metrics: - name: Test WER type: wer value: 32.20 --- # Wav2Vec2-Large-XLSR-53-Persian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Persian (Farsi) using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: **Requirements** ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer !pip install hazm ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset import numpy as np import hazm import re import string import IPython.display as ipd _normalizer = hazm.Normalizer() chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "؟", "?", "«", "»", "ء", "،", "(", ")", "؛", "'ٔ", "٬",'ٔ', ",", "?", ".", "!", "-", ";", ":",'"',"“", "%", "‘", "”", "�", "–", "…", "_", "”", '“', '„' ] # In case of farsi chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits) chars_to_mapping = { 'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی', 'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی", "ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع", "ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", "ئ": "ی", 'ﺍ': "ا", 'ة': "ه", 'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش", 'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ", "\\u200c": " ", "\\u200d": " ", "\\u200e": " ", "\\u200f": " ", "\\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = _normalizer.normalize(text) text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) 
input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian").to(device) dataset = load_dataset("common_voice", "fa", split="test[:1%]") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 20).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: اطلاعات مسری است predicted: اطلاعات مسری است --- reference: نه منظورم اینه که وقتی که ساکته چه کاریه خودمونه بندازیم زحمت predicted: نه منظورم اینه که وقتی که ساکت چی کاریه خودمونو بندازیم زحمت --- reference: من آب پرتقال می خورم لطفا predicted: من آپ ارتغال می خورم لطفا --- reference: وقت آن رسیده آنها را که قدم پیش میگذارند بزرگ بداریم predicted: وقت آ رسیده آنها را که قدم پیش میگذارند بزرگ بداریم --- reference: سیم باتری دارید predicted: سیم باتری دارید --- reference: این بهتره تا اینکه به بهونه درس و مشق هر روز بره خونه شون predicted: این بهتره تا اینکه به بهمونه درسومش خرروز بره خونه اشون --- reference: ژاکت تنگ است predicted: ژاکت تنگ است --- reference: آت و اشغال های خیابان predicted: آت و اشغال های خیابان --- reference: من به این روند اعتراض دارم predicted: من به این لوند تراج دارم --- reference: کرایه این مکان چند است predicted: کرایه این مکان چند است --- reference: ولی این فرصت این سهم جوانی اعطا نشده است predicted: ولی این فرصت این سحم جوانی اتان نشده است --- reference: متوجه فاجعهای محیطی میشوم predicted: متوجه فاجایهای محیطی میشوم --- reference: ترافیک شدیدیم بود و دیدن نور ماشینا و چراغا و لامپهای مراکز تجاری حس خوبی بهم میدادن predicted: ترافیک شدید ی هم بودا دیدن نور ماشینا و چراغ لامپهای مراکز تجاری حس خولی بهم میدادن --- reference: این مورد عمل ها مربوط به تخصص شما می شود predicted: این مورد عملها مربوط به تخصص شما میشود --- reference: انرژی خیلی کمی دارم predicted: انرژی خیلی کمی دارم --- reference: زیادی خوبی کردنم تهش داستانه predicted: زیادی خوبی کردنم ترش داستانه --- reference: بردهای که پادشاه شود predicted: برده ای که پاده شاه شود --- reference: یونسکو predicted: یونسکو --- reference: شما اخراج هستید predicted: شما اخراج هستید --- reference: من سفر کردن را دوست دارم predicted: من سفر کردم را دوست دارم ``` ## Evaluation The model can be evaluated as follows on the Persian (Farsi) test data of Common Voice. 
```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset, load_metric import numpy as np import hazm import re import string _normalizer = hazm.Normalizer() chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "؟", "?", "«", "»", "ء", "،", "(", ")", "؛", "'ٔ", "٬",'ٔ', ",", "?", ".", "!", "-", ";", ":",'"',"“", "%", "‘", "”", "�", "–", "…", "_", "”", '“', '„' ] # In case of farsi chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits) chars_to_mapping = { 'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی', 'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی", "ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع", "ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", "ئ": "ی", 'ﺍ': "ا", 'ة': "ه", 'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش", 'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ", "\\u200c": " ", "\\u200d": " ", "\\u200e": " ", "\\u200f": " ", "\\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = _normalizer.normalize(text) text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian").to(device) dataset = load_dataset("common_voice", "fa", split="test") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) wer = load_metric("wer") print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"]))) ``` **Test Result:** - WER: 32.20% ## Training The Common Voice `train`, `validation` datasets were used for training. 
The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Persian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
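The repository also ships a `test_predicted.csv`; a comparable file can be exported from the evaluation output. A short sketch, assuming `result` is the mapped dataset from the evaluation snippet above:

```python
import pandas as pd

# Save reference/prediction pairs for offline inspection.
pd.DataFrame({
    "sentence": result["sentence"],
    "predicted": result["predicted"],
}).to_csv("test_predicted.csv", index=False)
```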
m3hrdadfi/wav2vec2-large-xlsr-turkish
2021-03-29T07:59:09.000Z
[ "pytorch", "wav2vec2", "tr", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "all_results.json", "config.json", "eval_results.json", "predictions.csv", "preprocessor_config.json", "pytorch_model.bin", "result.bin", "sample1378.flac", "sample1589.flac", "special_tokens_map.json", "tokenizer_config.json", "train_results.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
m3hrdadfi
65
transformers
--- language: tr datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 widget: - label: Common Voice sample 1378 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-turkish/resolve/main/sample1378.flac - label: Common Voice sample 1589 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-turkish/resolve/main/sample1589.flac model-index: - name: XLSR Wav2Vec2 Turkish by Mehrdad Farahani results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice tr type: common_voice args: tr metrics: - name: Test WER type: wer value: 27.51 --- # Wav2Vec2-Large-XLSR-53-Turkish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Turkish using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: **Requirements** ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset import numpy as np import re import string import IPython.display as ipd chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"', "“", "%", "‘", "�", "–", "…", "_", "”", '“', '„' ] chars_to_mapping = { "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = text.replace("\u0307", " ").strip() text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish").to(device) dataset = load_dataset("common_voice", "tr", split="test[:1%]") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, 
"chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 10).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: ülke şu anda iki federasyona üye predicted: ülke şu anda iki federasyona üye --- reference: foruma dört yüzde fazla kişi katıldı predicted: soruma dört yüzden fazla kişi katıldı --- reference: mobi altmış üç çalışanları da mutsuz predicted: mobia haltmış üç çalışanları da mutsur --- reference: kentin mali esnekliğinin düşük olduğu bildirildi predicted: kentin mali esnekleğinin düşük olduğu bildirildi --- reference: fouere iki ülkeyi sorunu abartmamaya çağırdı predicted: foor iki ülkeyi soruna abartmamaya çanayordı --- reference: o ülkeden herhangi bir tepki geldi mi predicted: o ülkeden herhayın bir tepki geldi mi --- reference: bunlara asla sırtımızı dönmeyeceğiz predicted: bunlara asla sırtımızı dönmeyeceğiz --- reference: sizi ayakta tutan nedir predicted: sizi ayakta tutan nedir --- reference: artık insanlar daha bireysel yaşıyor predicted: artık insanlar daha bir eyselli yaşıyor --- reference: her ikisi de diyaloga hazır olduğunu söylüyor predicted: her ikisi de diyaloğa hazır olduğunu söylüyor --- reference: merkez bankasının başlıca amacı düşük enflasyon predicted: merkez bankasının başlrıca anatı güşükyen flasyon --- reference: firefox predicted: fair foks --- reference: ülke halkı çok misafirsever ve dışa dönük predicted: ülke halktı çok isatirtever ve dışa dönük --- reference: ancak kamuoyu bu durumu pek de affetmiyor predicted: ancak kamuonyulgukirmu pek deafıf etmiyor --- reference: i ki madende iki bin beş yüzden fazla kişi çalışıyor predicted: i ki madende iki bin beş yüzden fazla kişi çalışıyor --- reference: sunnyside park dışarıdan oldukça iyi görünüyor predicted: sani sahip park dışarıdan oldukça iyi görünüyor --- reference: büyük ödül on beş bin avro predicted: büyük ödül on beş bin avro --- reference: köyümdeki camiler depoya dönüştürüldü predicted: küyümdeki camiler depoya dönüştürüldü --- reference: maç oldukça diplomatik bir sonuçla birbir bitti predicted: maç oldukça diplomatik bir sonuçla bir birbitti --- reference: kuşların ikisi de karantinada öldüler predicted: kuşların ikiste karantinada özdüler --- ``` ## Evaluation The model can be evaluated as follows on the Turkish test data of Common Voice. 
```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset, load_metric import numpy as np import re import string chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"', "“", "%", "‘", "�", "–", "…", "_", "”", '“', '„' ] chars_to_mapping = { "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", "\u0307": " " } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = text.replace("\u0307", " ").strip() text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) text = re.sub(" +", " ", text) text = text.strip() + " " batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish").to(device) dataset = load_dataset("common_voice", "tr", split="test") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) wer = load_metric("wer") print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"]))) ``` **Test Result**: - WER: 27.51% ## Training & Report The Common Voice `train`, `validation` datasets were used for training. You can see the training states [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_turkish/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Turkish--Vmlldzo1Njc1MDc?accessToken=02vm5cwbi7d342vyt7h9w9859zex0enltdmjoreyjt3bd5qwv0vs0g3u93iv92q0) The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
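Mapping `predict` one example at a time leaves GPU throughput on the table; since the processor already pads lists of arrays, the evaluation above can be batched. A sketch, with the batch size as an assumption:

```python
def predict_batched(batch):
    # `batch["speech"]` is a list of 1-D numpy arrays when batched=True.
    features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(features.input_values.to(device),
                       attention_mask=features.attention_mask.to(device)).logits
    batch["predicted"] = processor.batch_decode(torch.argmax(logits, dim=-1))
    return batch

result = dataset.map(predict_batched, batched=True, batch_size=8)
```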
m3hrdadfi/wav2vec2-xlsr-greek-speech-emotion-recognition
2021-06-12T07:13:27.000Z
[ "pytorch", "wav2vec2", "el", "dataset:aesdd", "transformers", "audio", "automatic-speech-recognition", "speech", "speech-emotion-recognition", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "added_tokens.json", "all_results.json", "config.json", "eval_results.json", "predict_results.txt", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "test.csv", "tokenizer_config.json", "train_results.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
m3hrdadfi
139
transformers
--- language: el datasets: - aesdd tags: - audio - automatic-speech-recognition - speech - speech-emotion-recognition license: apache-2.0 --- # Emotion Recognition in Greek (el) Speech using Wav2Vec 2.0 ## How to use ### Requirements ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa ``` ### Prediction Note: `Wav2Vec2ForSpeechClassification` is not part of `transformers`; it is defined in the author's [soxan](https://github.com/m3hrdadfi/soxan) repository linked at the bottom of this card. ```python import torch import torch.nn as nn import torch.nn.functional as F import torchaudio from transformers import AutoConfig, Wav2Vec2FeatureExtractor import librosa import IPython.display as ipd import numpy as np import pandas as pd ``` ```python device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model_name_or_path = "m3hrdadfi/wav2vec2-xlsr-greek-speech-emotion-recognition" config = AutoConfig.from_pretrained(model_name_or_path) feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path) sampling_rate = feature_extractor.sampling_rate model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device) ``` ```python def speech_file_to_array_fn(path, sampling_rate): speech_array, _sampling_rate = torchaudio.load(path) resampler = torchaudio.transforms.Resample(_sampling_rate) speech = resampler(speech_array).squeeze().numpy() return speech def predict(path, sampling_rate): speech = speech_file_to_array_fn(path, sampling_rate) inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True) inputs = {key: inputs[key].to(device) for key in inputs} with torch.no_grad(): logits = model(**inputs).logits scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0] outputs = [{"Emotion": config.id2label[i], "Score": f"{score * 100:.1f}%"} for i, score in enumerate(scores)] return outputs ``` ```python path = "/path/to/disgust.wav" outputs = predict(path, sampling_rate) ``` ```bash [ {'Emotion': 'anger', 'Score': '0.0%'}, {'Emotion': 'disgust', 'Score': '99.2%'}, {'Emotion': 'fear', 'Score': '0.1%'}, {'Emotion': 'happiness', 'Score': '0.3%'}, {'Emotion': 'sadness', 'Score': '0.5%'} ] ``` ## Evaluation The following table summarizes the scores obtained by the model per class and overall.

| Emotions | precision | recall | f1-score | accuracy |
|-----------|-----------|--------|----------|----------|
| anger | 0.92 | 1.00 | 0.96 | |
| disgust | 0.85 | 0.96 | 0.90 | |
| fear | 0.88 | 0.88 | 0.88 | |
| happiness | 0.94 | 0.71 | 0.81 | |
| sadness | 0.96 | 1.00 | 0.98 | |
| Overall | | | | 0.91 |

## Questions? Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues).
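Since the snippet above depends on the external `Wav2Vec2ForSpeechClassification` class, a minimal sketch of what such a head can look like is given below: mean-pooled wav2vec 2.0 hidden states feeding a linear classifier. This is an illustrative assumption for readers who cannot pull in soxan, not necessarily the exact architecture used there.

```python
import torch.nn as nn
from transformers import Wav2Vec2Model, Wav2Vec2PreTrainedModel
from transformers.modeling_outputs import SequenceClassifierOutput

class Wav2Vec2ForSpeechClassification(Wav2Vec2PreTrainedModel):
    """Sketch: pool wav2vec 2.0 features over time, then classify emotions."""

    def __init__(self, config):
        super().__init__(config)
        self.wav2vec2 = Wav2Vec2Model(config)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_values, attention_mask=None):
        hidden_states = self.wav2vec2(input_values, attention_mask=attention_mask).last_hidden_state
        pooled = hidden_states.mean(dim=1)  # average over the time axis
        return SequenceClassifierOutput(logits=self.classifier(pooled))
```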
m3tafl0ps/autonlp-NLPIsFun-251844
2021-06-05T17:15:23.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:m3tafl0ps/autonlp-data-NLPIsFun", "transformers", "autonlp" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "sample_input.pkl", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
m3tafl0ps
32
transformers
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - m3tafl0ps/autonlp-data-NLPIsFun --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 251844 ## Validation Metrics - Loss: 0.38616305589675903 - Accuracy: 0.8356545961002786 - Precision: 0.8253968253968254 - Recall: 0.8571428571428571 - AUC: 0.9222387781709815 - F1: 0.8409703504043127 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/m3tafl0ps/autonlp-NLPIsFun-251844 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("m3tafl0ps/autonlp-NLPIsFun-251844", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("m3tafl0ps/autonlp-NLPIsFun-251844", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
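The same Inference API call can also be made from Python; this mirrors the cURL request above, with `YOUR_API_KEY` as a placeholder.

```python
import requests

response = requests.post(
    "https://api-inference.huggingface.co/models/m3tafl0ps/autonlp-NLPIsFun-251844",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"inputs": "I love AutoNLP"},
)
print(response.json())
```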
macedonizer/al-roberta-base
2021-05-21T04:08:25.000Z
[ "pytorch", "roberta", "masked-lm", "al", "dataset:wiki-sh", "transformers", "license:apache 2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "lets-talk-about-nlp-sh.jpg", "merges.txt", "pytorch_model.bin", "training_args.bin", "vocab.json" ]
macedonizer
772
transformers
--- language: - al thumbnail: https://huggingface.co/macedonizer/al-roberta-base/lets-talk-about-nlp-al.jpg tags: - masked-lm license: Apache 2.0 datasets: - wiki-sh --- # AL-RoBERTa base model Pretrained model on the Albanian language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between tirana and Tirana. # Model description RoBERTa is a transformers model pre-trained on a large corpus of Albanian text data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Albanian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the RoBERTa model as inputs. # Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2. # How to use You can use this model directly with a pipeline for masked language modeling: \ from transformers import pipeline \ unmasker = pipeline('fill-mask', model='macedonizer/al-roberta-base') \ unmasker("Tirana është \\<mask\\> i Shqipërisë.") \ [{'score': 0.9426872134208679, 'sequence': 'Tirana është kryeqyteti i Shqipërisë', 'token': 7901, 'token_str': ' kryeqyteti'}, {'score': 0.03112833760678768, 'sequence': 'Tirana është kryeqytet i Shqipërisë', 'token': 7439, 'token_str': ' kryeqytet'}, {'score': 0.0022084848023951054, 'sequence': 'Tirana është qytet i Shqipërisë', 'token': 2246, 'token_str': ' qytet'}, {'score': 0.0016222079284489155, 'sequence': 'Tirana është qyteti i Shqipërisë', 'token': 2784, 'token_str': ' qyteti'}, {'score': 0.0008979254635050893, 'sequence': 'Tirana është Kryeqytet i Shqipërisë', 'token': 37653, 'token_str': ' Kryeqytet'}] Here is how to use this model to get the features of a given text in PyTorch: from transformers import RobertaTokenizer, RobertaModel \ tokenizer = RobertaTokenizer.from_pretrained('macedonizer/al-roberta-base') \ model = RobertaModel.from_pretrained('macedonizer/al-roberta-base') \ text = "Replace me by any text you'd like." \ encoded_input = tokenizer(text, return_tensors='pt') \ output = model(**encoded_input)
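Building on the feature-extraction example above, one common way to turn the token-level features into a single sentence vector is mean pooling; this is an assumed recipe for illustration, not one prescribed by the model author.

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/al-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/al-roberta-base')

encoded_input = tokenizer("Tirana është kryeqyteti i Shqipërisë.", return_tensors='pt')
with torch.no_grad():
    output = model(**encoded_input)

# Average the token representations into one sentence embedding.
sentence_embedding = output.last_hidden_state.mean(dim=1)  # shape: (1, hidden_size)
```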
macedonizer/ba-roberta-base
2021-05-22T05:54:30.000Z
[ "pytorch", "roberta", "masked-lm", "ba", "dataset:wiki-bs", "transformers", "license:apache 2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "abdulah-sidran.jpg", "config.json", "merges.txt", "pytorch_model.bin", "training_args.bin", "vocab.json" ]
macedonizer
19
transformers
--- language: - ba thumbnail: https://huggingface.co/macedonizer/ba-roberta-base/abdulah-sidran.jpg tags: - masked-lm license: Apache 2.0 datasets: - wiki-bs --- # BA-RoBERTa base model Pretrained model on the Bosnian language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between sarajevo and Sarajevo. # Model description RoBERTa is a transformers model pre-trained on a large corpus of Bosnian texts in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Bosnian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the RoBERTa model as inputs. # Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2. # How to use You can use this model directly with a pipeline for masked language modeling: \ from transformers import pipeline \ unmasker = pipeline('fill-mask', model='macedonizer/ba-roberta-base') \ unmasker("Sarajevo je \\<mask\\> grad Bosne i Hercegovine.") \ [{'score': 0.6210788488388062, \ 'sequence': 'Sarajevo je glavni grad Bosne i Hercegovine', \ 'token': 2006, \ 'token_str': ' glavni'}, \ {'score': 0.19640550017356873, \ 'sequence': 'Sarajevo je najveći grad Bosne i Hercegovine', \ 'token': 1707, \ 'token_str': ' najveći'}, \ {'score': 0.0210184995085001, \ 'sequence': 'Sarajevo je srednjovjekovni grad Bosne i Hercegovine', \ 'token': 22596, \ 'token_str': ' srednjovjekovni'}, \ {'score': 0.010822420939803123, \ 'sequence': 'Sarajevo je najmnogoljudniji grad Bosne i Hercegovine', \ 'token': 40186, \ 'token_str': ' najmnogoljudniji'}, \ {'score': 0.006114463787525892, \ 'sequence': 'Sarajevo je službeni grad Bosne i Hercegovine', \ 'token': 8546, \ 'token_str': ' službeni'}] \ Here is how to use this model to get the features of a given text in PyTorch: from transformers import RobertaTokenizer, RobertaModel \ tokenizer = RobertaTokenizer.from_pretrained('macedonizer/ba-roberta-base') \ model = RobertaModel.from_pretrained('macedonizer/ba-roberta-base') \ text = "Replace me by any text you'd like." \ encoded_input = tokenizer(text, return_tensors='pt') \ output = model(**encoded_input)
macedonizer/gr-roberta-base
2021-06-03T17:35:39.000Z
[ "pytorch", "roberta", "masked-lm", "gr", "dataset:wiki-gr", "transformers", "license:apache 2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "lets-talk-about-nlp-gr.jpg", "merges.txt", "pytorch_model.bin", "training_args.bin", "vocab.json" ]
macedonizer
20
transformers
--- language: - gr thumbnail: https://huggingface.co/macedonizer/gr-roberta-base/lets-talk-about-nlp-gr.jpg tags: - masked-lm license: Apache 2.0 datasets: - wiki-gr --- # GR-RoBERTa base model Pretrained model on the Greek language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between Athens and athens. # Model description RoBERTa is a transformers model pre-trained on a large corpus of Greek data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Greek language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the RoBERTa model as inputs. # Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2. # How to use You can use this model directly with a pipeline for masked language modeling: from transformers import pipeline \ unmasker = pipeline('fill-mask', model='macedonizer/gr-roberta-base') \ unmasker("Η Αθήνα είναι η \<mask\> της Ελλάδας") \ [{'score': 0.8832866549491882, \ 'sequence': 'Η Αθήνα είναι η πρωτεύουσα της Ελλάδας', \ 'token': 2788, \ 'token_str': ' πρωτεύουσα'}, \ {'score': 0.018105432391166687, \ 'sequence': 'Η Αθήνα είναι η μεγαλύτερη της Ελλάδας', \ 'token': 2363, \ 'token_str': ' μεγαλύτερη'}, \ {'score': 0.015836946666240692, \ 'sequence': 'Η Αθήνα είναι η έδρα της Ελλάδας', \ 'token': 1950, \ 'token_str': ' έδρα'}, \ {'score': 0.015673324465751648, \ 'sequence': 'Η Αθήνα είναι η μόνη της Ελλάδας', \ 'token': 6548, \ 'token_str': ' μόνη'}, \ {'score': 0.01375910360366106, \ 'sequence': 'Η Αθήνα είναι η πόλη της Ελλάδας', \ 'token': 825, \ 'token_str': ' πόλη'}] Here is how to use this model to get the features of a given text in PyTorch: from transformers import RobertaTokenizer, RobertaModel \ tokenizer = RobertaTokenizer.from_pretrained('macedonizer/gr-roberta-base') \ model = RobertaModel.from_pretrained('macedonizer/gr-roberta-base') \ text = "Replace me by any text you'd like." \ encoded_input = tokenizer(text, return_tensors='pt') \ output = model(**encoded_input)
macedonizer/hr-roberta-base
2021-05-20T17:41:13.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "hr", "dataset:wiki-hr", "transformers", "license:apache 2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "lets-talk-about-nlp-hr.jpg", "merges.txt", "pytorch_model.bin", "vocab.json" ]
macedonizer
36
transformers
---
language:
- hr
thumbnail: https://huggingface.co/macedonizer/hr-roberta-base/lets-talk-about-nlp-hr.jpg
tags:
- masked-lm
license: Apache 2.0
datasets:
- wiki-hr
---

# HR-RoBERTa base model

Pretrained model on the Croatian language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between zagreb and Zagreb.

# Model description

RoBERTa is a transformers model pre-trained on a large corpus of Croatian data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.

More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.

This way, the model learns an inner representation of the Croatian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the model as inputs.

# Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.

# How to use

You can use this model directly with a pipeline for masked language modeling:

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='macedonizer/hr-roberta-base')
unmasker("Zagreb je <mask> grad Hrvatske.")

[{'sequence': 'Zagreb je glavni grad Hrvatske.',
  'score': 0.8750431537628174,
  'token': 2026,
  'token_str': ' glavni'},
 {'sequence': 'Zagreb je najveći grad Hrvatske.',
  'score': 0.060711536556482315,
  'token': 2474,
  'token_str': ' najveći'},
 {'sequence': 'Zagreb je prvi grad Hrvatske.',
  'score': 0.005241130944341421,
  'token': 780,
  'token_str': ' prvi'},
 {'sequence': 'Zagreb je jedini grad Hrvatske.',
  'score': 0.004663003608584404,
  'token': 3280,
  'token_str': ' jedini'},
 {'sequence': 'Zagreb je treći grad Hrvatske.',
  'score': 0.003771631745621562,
  'token': 3236,
  'token_str': ' treći'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/hr-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/hr-roberta-base')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
macedonizer/mk-roberta-base
2021-05-20T17:41:59.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "mk", "dataset:wiki-mk", "dataset:time-mk-news-2010-2015", "transformers", "license:apache 2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "blaze-koneski.jpg", "config.json", "flax_model.msgpack", "lets-talk-about-nlp-blaze-koneski-2.jpg", "lets-talk-about-nlp-blaze-koneski.jpg", "merges.txt", "pytorch_model.bin", "scheduler.pt", "trainer_state.json", "training_args.bin", "vocab.json" ]
macedonizer
39
transformers
---
language:
- mk
thumbnail: https://huggingface.co/macedonizer/mk-roberta-base/blaze-koneski.jpg
tags:
- masked-lm
license: Apache 2.0
datasets:
- wiki-mk
- time-mk-news-2010-2015
---

# MK-RoBERTa base model

Pretrained model on the Macedonian language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between скопје and Скопје.

# Model description

RoBERTa is a transformers model pre-trained on a large corpus of Macedonian data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.

More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.

This way, the model learns an inner representation of the Macedonian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the model as inputs.

# Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.

# How to use

You can use this model directly with a pipeline for masked language modeling:

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='macedonizer/mk-roberta-base')
unmasker("Скопје е <mask> град на Македонија.")

[{'sequence': 'Скопје е главен град на Македонија.',
  'score': 0.5900368094444275,
  'token': 2782,
  'token_str': ' главен'},
 {'sequence': 'Скопје е главниот град на Македонија.',
  'score': 0.1789761781692505,
  'token': 3177,
  'token_str': ' главниот'},
 {'sequence': 'Скопје е административен град на Македонија.',
  'score': 0.01679774932563305,
  'token': 9563,
  'token_str': ' административен'},
 {'sequence': 'Скопје е мал град на Македонија.',
  'score': 0.016263898462057114,
  'token': 2473,
  'token_str': ' мал'},
 {'sequence': 'Скопје е најголемиот град на Македонија.',
  'score': 0.01312252413481474,
  'token': 4271,
  'token_str': ' најголемиот'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/mk-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/mk-roberta-base')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
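As a small aside (not from the original card; it assumes only the standard `transformers` pipeline API), the mask token can be taken from the pipeline's tokenizer rather than hard-coded as a literal string, which keeps the prompt correct across models whose mask tokens differ:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="macedonizer/mk-roberta-base")

# Use the tokenizer's own mask token instead of a hard-coded "<mask>".
prompt = f"Скопје е {unmasker.tokenizer.mask_token} град на Македонија."
for prediction in unmasker(prompt, top_k=3):
    print(f"{prediction['score']:.3f}  {prediction['sequence']}")
```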
macedonizer/sl-roberta-base
2021-05-20T17:42:53.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "sl", "dataset:wiki-sl", "transformers", "license:apache 2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "desktop.ini", "flax_model.msgpack", "ivan-cankar.jpg", "merges.txt", "pytorch_model.bin", "training_args.bin", "vocab.json" ]
macedonizer
26
transformers
---
language:
- sl
thumbnail: https://huggingface.co/macedonizer/sl-roberta-base/ivan-cankar.jpg
tags:
- masked-lm
license: Apache 2.0
datasets:
- wiki-sl
---

# SL-RoBERTa base model

Pretrained model on the Slovenian language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between ljubljana and Ljubljana.

# Model description

RoBERTa is a transformers model pre-trained on a large corpus of Slovenian data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.

More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.

This way, the model learns an inner representation of the Slovenian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the model as inputs.

# Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.

# How to use

You can use this model directly with a pipeline for masked language modeling:

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='macedonizer/sl-roberta-base')
unmasker("Ljubljana je <mask> mesto Slovenije.")
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/sl-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/sl-roberta-base')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
macedonizer/sr-roberta-base
2021-05-20T23:53:23.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "sr", "dataset:wiki-sr", "transformers", "license:apache 2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "lets-talk-about-nlp-sr.jpg", "merges.txt", "pytorch_model.bin", "training_args.bin", "vocab.json" ]
macedonizer
80
transformers
---
language:
- sr
thumbnail: https://huggingface.co/macedonizer/sr-roberta-base/lets-talk-about-nlp-sr.jpg
tags:
- masked-lm
license: Apache 2.0
datasets:
- wiki-sr
---

# SR-RoBERTa base model

Pretrained model on the Serbian language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between београд and Београд.

# Model description

RoBERTa is a transformers model pre-trained on a large corpus of Serbian data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.

More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.

This way, the model learns an inner representation of the Serbian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the model as inputs.

# Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.

# How to use

You can use this model directly with a pipeline for masked language modeling:

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='macedonizer/sr-roberta-base')
unmasker("Београд је <mask> град Србије.")

[{'score': 0.7834128141403198,
  'sequence': 'Београд је главни град Србије',
  'token': 3087,
  'token_str': ' главни'},
 {'score': 0.15424974262714386,
  'sequence': 'Београд је највећи град Србије',
  'token': 3916,
  'token_str': ' највећи'},
 {'score': 0.0035441946238279343,
  'sequence': 'Београд је најважнији град Србије',
  'token': 18577,
  'token_str': ' најважнији'},
 {'score': 0.003132033161818981,
  'sequence': 'Београд је велики град Србије',
  'token': 2063,
  'token_str': ' велики'},
 {'score': 0.0030417360831052065,
  'sequence': 'Београд је важан град Србије',
  'token': 9463,
  'token_str': ' важан'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/sr-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/sr-roberta-base')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
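The "features for downstream tasks" idea mentioned in these cards usually means turning the encoder's hidden states into one fixed-size vector per sentence. A minimal sketch (not part of the original card; it assumes only standard `transformers` and `torch` APIs, and the example sentences are illustrative) using attention-mask-aware mean pooling:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("macedonizer/sr-roberta-base")
model = AutoModel.from_pretrained("macedonizer/sr-roberta-base")

sentences = ["Београд је главни град Србије.", "Нови Сад је град у Србији."]
encoded = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**encoded).last_hidden_state      # (batch, seq_len, dim)

# Average only over real tokens, not padding.
mask = encoded.attention_mask.unsqueeze(-1)          # (batch, seq_len, 1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # e.g. torch.Size([2, 768]) for a base-size model
```

These vectors can then be fed to any standard classifier (logistic regression, a small MLP, and so on).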
machinelord/bert_esa_ep4
2021-05-19T22:30:33.000Z
[ "pytorch", "bert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".DS_Store", ".gitattributes", "config.json", "pytorch_model.bin", "tokenizer.json" ]
machinelord
13
transformers
madisonh/madi
2021-01-27T01:04:15.000Z
[]
[ ".gitattributes" ]
madisonh
0
madlag/albert-base-v2-squad
2021-05-05T13:54:33.000Z
[ "pytorch", "albert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "data_args.json", "eval_metrics.json", "evaluate_timing.json", "model_args.json", "predictions.json", "pytorch_model.bin", "scheduler.pt", "sparse_args.json", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "trainer_state.json", "training_args.bin" ]
madlag
15
transformers
ALBERT base v2 fine-tuned on SQuAD v1. Trained using the [nn_pruning](https://github.com/huggingface/nn_pruning/tree/main/examples/question_answering) script, with pruning disabled.

[Original results](https://github.com/google-research/albert) are F1=90.2, EM=83.2; we improved them to:

```json
{
  "exact_match": 83.74645222327341,
  "f1": 90.78776054621733
}
```
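A minimal usage sketch (not part of the original card), assuming the standard `transformers` question-answering pipeline applies, as it does for the fine-tuned SQuAD models below:

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/albert-base-v2-squad",
    tokenizer="madlag/albert-base-v2-squad",
)

prediction = qa_pipeline({
    "context": "The Eiffel Tower is a wrought-iron lattice tower on the "
               "Champ de Mars in Paris, France.",
    "question": "Where is the Eiffel Tower located?",
})
print(prediction)
```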
madlag/bert-base-uncased-squad-v1-sparse0.25
2021-05-19T22:31:23.000Z
[ "pytorch", "tf", "jax", "bert", "question-answering", "en", "dataset:squad", "arxiv:2005.07683", "transformers", "license:mit", "bert-base" ]
question-answering
[ ".gitattributes", ".gitignore", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "training_args.json", "vocab.txt" ]
madlag
21
transformers
---
language: en
thumbnail:
license: mit
tags:
- question-answering
- bert
- bert-base
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## BERT-base uncased model fine-tuned on SQuAD v1

This model is [block-sparse](https://github.com/huggingface/pytorch_block_sparse). That means that with the right runtime it can run roughly 3x faster than a dense network, with 25% of the original weights. This of course has some impact on the accuracy (see below).

It uses a modified version of Victor Sanh's [Movement Pruning](https://arxiv.org/abs/2005.07683) method.

This model was fine-tuned from the HuggingFace [BERT](https://www.aclweb.org/anthology/N19-1423/) base uncased checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer). This model is case-insensitive: it does not make a difference between english and English.

## Details

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K     |
| SQuAD1.1 | eval  | 11.1K     |

### Fine-tuning

- Python: `3.8.5`
- Machine specs:

```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```

### Results

**Model size**: `418M`

| Metric | # Value   | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf)) |
| ------ | --------- | --------- |
| **EM** | **74.82** | **80.8**  |
| **F1** | **83.7**  | **88.5**  |

Note that the above results didn't involve any hyperparameter search.

## Example Usage

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squad-v1-sparse0.25",
    tokenizer="madlag/bert-base-uncased-squad-v1-sparse0.25"
)

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})

print(predictions)
```
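The "25% of the original weights" claim can be checked directly, assuming the checkpoint stores the pruned weights as dense tensors with zeros. A quick sketch (not part of the original card; plain PyTorch, no sparse runtime needed) that counts the non-zero weights left in the linear layers:

```python
import torch
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained(
    "madlag/bert-base-uncased-squad-v1-sparse0.25"
)

# Count non-zero entries in every linear layer's weight matrix.
nonzero, total = 0, 0
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        nonzero += int((module.weight != 0).sum())
        total += module.weight.numel()

print(f"non-zero linear weights: {nonzero / total:.1%}")
```

Note that this measures the linear layers only; the embeddings are dense, so the overall parameter count is higher.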
madlag/bert-base-uncased-squad1.1-block-sparse-0.07-v1
2021-05-19T22:31:59.000Z
[ "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "arxiv:2005.07683", "transformers", "license:mit", "bert-base" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "model_meta.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt", "model_card/density.js", "model_card/pruning.svg", "model_card/layer_images/layer_0_attention_output_dense.png", "model_card/layer_images/layer_0_attention_self_key.png", "model_card/layer_images/layer_0_attention_self_query.png", "model_card/layer_images/layer_0_attention_self_value.png", "model_card/layer_images/layer_0_intermediate_dense.png", "model_card/layer_images/layer_0_output_dense.png", "model_card/layer_images/layer_10_attention_output_dense.png", "model_card/layer_images/layer_10_attention_self_key.png", "model_card/layer_images/layer_10_attention_self_query.png", "model_card/layer_images/layer_10_attention_self_value.png", "model_card/layer_images/layer_10_intermediate_dense.png", "model_card/layer_images/layer_10_output_dense.png", "model_card/layer_images/layer_11_attention_output_dense.png", "model_card/layer_images/layer_11_attention_self_key.png", "model_card/layer_images/layer_11_attention_self_query.png", "model_card/layer_images/layer_11_attention_self_value.png", "model_card/layer_images/layer_11_intermediate_dense.png", "model_card/layer_images/layer_11_output_dense.png", "model_card/layer_images/layer_1_attention_output_dense.png", "model_card/layer_images/layer_1_attention_self_key.png", "model_card/layer_images/layer_1_attention_self_query.png", "model_card/layer_images/layer_1_attention_self_value.png", "model_card/layer_images/layer_1_intermediate_dense.png", "model_card/layer_images/layer_1_output_dense.png", "model_card/layer_images/layer_2_attention_output_dense.png", "model_card/layer_images/layer_2_attention_self_key.png", "model_card/layer_images/layer_2_attention_self_query.png", "model_card/layer_images/layer_2_attention_self_value.png", "model_card/layer_images/layer_2_intermediate_dense.png", "model_card/layer_images/layer_2_output_dense.png", "model_card/layer_images/layer_3_attention_output_dense.png", "model_card/layer_images/layer_3_attention_self_key.png", "model_card/layer_images/layer_3_attention_self_query.png", "model_card/layer_images/layer_3_attention_self_value.png", "model_card/layer_images/layer_3_intermediate_dense.png", "model_card/layer_images/layer_3_output_dense.png", "model_card/layer_images/layer_4_attention_output_dense.png", "model_card/layer_images/layer_4_attention_self_key.png", "model_card/layer_images/layer_4_attention_self_query.png", "model_card/layer_images/layer_4_attention_self_value.png", "model_card/layer_images/layer_4_intermediate_dense.png", "model_card/layer_images/layer_4_output_dense.png", "model_card/layer_images/layer_5_attention_output_dense.png", "model_card/layer_images/layer_5_attention_self_key.png", "model_card/layer_images/layer_5_attention_self_query.png", "model_card/layer_images/layer_5_attention_self_value.png", "model_card/layer_images/layer_5_intermediate_dense.png", "model_card/layer_images/layer_5_output_dense.png", "model_card/layer_images/layer_6_attention_output_dense.png", "model_card/layer_images/layer_6_attention_self_key.png", "model_card/layer_images/layer_6_attention_self_query.png", "model_card/layer_images/layer_6_attention_self_value.png", "model_card/layer_images/layer_6_intermediate_dense.png", "model_card/layer_images/layer_6_output_dense.png", "model_card/layer_images/layer_7_attention_output_dense.png", "model_card/layer_images/layer_7_attention_self_key.png", 
"model_card/layer_images/layer_7_attention_self_query.png", "model_card/layer_images/layer_7_attention_self_value.png", "model_card/layer_images/layer_7_intermediate_dense.png", "model_card/layer_images/layer_7_output_dense.png", "model_card/layer_images/layer_8_attention_output_dense.png", "model_card/layer_images/layer_8_attention_self_key.png", "model_card/layer_images/layer_8_attention_self_query.png", "model_card/layer_images/layer_8_attention_self_value.png", "model_card/layer_images/layer_8_intermediate_dense.png", "model_card/layer_images/layer_8_output_dense.png", "model_card/layer_images/layer_9_attention_output_dense.png", "model_card/layer_images/layer_9_attention_self_key.png", "model_card/layer_images/layer_9_attention_self_query.png", "model_card/layer_images/layer_9_attention_self_value.png", "model_card/layer_images/layer_9_intermediate_dense.png", "model_card/layer_images/layer_9_output_dense.png" ]
madlag
23
transformers
---
language: en
thumbnail:
license: mit
tags:
- question-answering
- bert
- bert-base
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## BERT-base uncased model fine-tuned on SQuAD v1

This model is block sparse: the **linear** layers contain **7.5%** of the original weights. The model contains **28.2%** of the original weights **overall**.

The training uses a modified version of Victor Sanh's [Movement Pruning](https://arxiv.org/abs/2005.07683) method. That means that with the [block-sparse](https://github.com/huggingface/pytorch_block_sparse) runtime it ran **1.92x** faster than a dense network on the evaluation, at the price of some impact on the accuracy (see below).

This model was fine-tuned from the HuggingFace [BERT](https://www.aclweb.org/anthology/N19-1423/) base uncased checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the equivalent model [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1).

This model is case-insensitive: it does not make a difference between english and English.

## Pruning details

A side-effect of the block pruning is that some of the attention heads are completely removed: 106 heads were removed out of a total of 144 (73.6%). Here is a detailed view of how the remaining heads are distributed in the network after pruning.

![Pruning details](https://huggingface.co/madlag/bert-base-uncased-squad1.1-block-sparse-0.07-v1/raw/main/model_card/pruning.svg)

## Density plot

<script src="/madlag/bert-base-uncased-squad1.1-block-sparse-0.07-v1/raw/main/model_card/density.js" id="9301e950-59b1-497b-a2c5-25c24e07b3a0"></script>

## Details

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K     |
| SQuAD1.1 | eval  | 11.1K     |

### Fine-tuning

- Python: `3.8.5`
- Machine specs:

```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```

### Results

**Pytorch model file size**: `335M` (original BERT: `438M`)

| Metric | # Value   | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf)) |
| ------ | --------- | --------- |
| **EM** | **71.88** | **80.8**  |
| **F1** | **81.36** | **88.5**  |

## Example Usage

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squad1.1-block-sparse-0.07-v1",
    tokenizer="madlag/bert-base-uncased-squad1.1-block-sparse-0.07-v1"
)

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})

print(predictions)
```
madlag/bert-base-uncased-squad1.1-block-sparse-0.13-v1
2021-05-19T22:32:43.000Z
[ "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "arxiv:2005.07683", "transformers", "license:mit", "bert-base" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "model_meta.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt", "model_card/density.js", "model_card/pruning.svg", "model_card/layer_images/layer_0_attention_output_dense.png", "model_card/layer_images/layer_0_attention_self_key.png", "model_card/layer_images/layer_0_attention_self_query.png", "model_card/layer_images/layer_0_attention_self_value.png", "model_card/layer_images/layer_0_intermediate_dense.png", "model_card/layer_images/layer_0_output_dense.png", "model_card/layer_images/layer_10_attention_output_dense.png", "model_card/layer_images/layer_10_attention_self_key.png", "model_card/layer_images/layer_10_attention_self_query.png", "model_card/layer_images/layer_10_attention_self_value.png", "model_card/layer_images/layer_10_intermediate_dense.png", "model_card/layer_images/layer_10_output_dense.png", "model_card/layer_images/layer_11_attention_output_dense.png", "model_card/layer_images/layer_11_attention_self_key.png", "model_card/layer_images/layer_11_attention_self_query.png", "model_card/layer_images/layer_11_attention_self_value.png", "model_card/layer_images/layer_11_intermediate_dense.png", "model_card/layer_images/layer_11_output_dense.png", "model_card/layer_images/layer_1_attention_output_dense.png", "model_card/layer_images/layer_1_attention_self_key.png", "model_card/layer_images/layer_1_attention_self_query.png", "model_card/layer_images/layer_1_attention_self_value.png", "model_card/layer_images/layer_1_intermediate_dense.png", "model_card/layer_images/layer_1_output_dense.png", "model_card/layer_images/layer_2_attention_output_dense.png", "model_card/layer_images/layer_2_attention_self_key.png", "model_card/layer_images/layer_2_attention_self_query.png", "model_card/layer_images/layer_2_attention_self_value.png", "model_card/layer_images/layer_2_intermediate_dense.png", "model_card/layer_images/layer_2_output_dense.png", "model_card/layer_images/layer_3_attention_output_dense.png", "model_card/layer_images/layer_3_attention_self_key.png", "model_card/layer_images/layer_3_attention_self_query.png", "model_card/layer_images/layer_3_attention_self_value.png", "model_card/layer_images/layer_3_intermediate_dense.png", "model_card/layer_images/layer_3_output_dense.png", "model_card/layer_images/layer_4_attention_output_dense.png", "model_card/layer_images/layer_4_attention_self_key.png", "model_card/layer_images/layer_4_attention_self_query.png", "model_card/layer_images/layer_4_attention_self_value.png", "model_card/layer_images/layer_4_intermediate_dense.png", "model_card/layer_images/layer_4_output_dense.png", "model_card/layer_images/layer_5_attention_output_dense.png", "model_card/layer_images/layer_5_attention_self_key.png", "model_card/layer_images/layer_5_attention_self_query.png", "model_card/layer_images/layer_5_attention_self_value.png", "model_card/layer_images/layer_5_intermediate_dense.png", "model_card/layer_images/layer_5_output_dense.png", "model_card/layer_images/layer_6_attention_output_dense.png", "model_card/layer_images/layer_6_attention_self_key.png", "model_card/layer_images/layer_6_attention_self_query.png", "model_card/layer_images/layer_6_attention_self_value.png", "model_card/layer_images/layer_6_intermediate_dense.png", "model_card/layer_images/layer_6_output_dense.png", "model_card/layer_images/layer_7_attention_output_dense.png", "model_card/layer_images/layer_7_attention_self_key.png", 
"model_card/layer_images/layer_7_attention_self_query.png", "model_card/layer_images/layer_7_attention_self_value.png", "model_card/layer_images/layer_7_intermediate_dense.png", "model_card/layer_images/layer_7_output_dense.png", "model_card/layer_images/layer_8_attention_output_dense.png", "model_card/layer_images/layer_8_attention_self_key.png", "model_card/layer_images/layer_8_attention_self_query.png", "model_card/layer_images/layer_8_attention_self_value.png", "model_card/layer_images/layer_8_intermediate_dense.png", "model_card/layer_images/layer_8_output_dense.png", "model_card/layer_images/layer_9_attention_output_dense.png", "model_card/layer_images/layer_9_attention_self_key.png", "model_card/layer_images/layer_9_attention_self_query.png", "model_card/layer_images/layer_9_attention_self_value.png", "model_card/layer_images/layer_9_intermediate_dense.png", "model_card/layer_images/layer_9_output_dense.png" ]
madlag
35
transformers
---
language: en
thumbnail:
license: mit
tags:
- question-answering
- bert
- bert-base
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## BERT-base uncased model fine-tuned on SQuAD v1

This model is block sparse: the **linear** layers contain **12.5%** of the original weights. The model contains **32.1%** of the original weights **overall**.

The training uses a modified version of Victor Sanh's [Movement Pruning](https://arxiv.org/abs/2005.07683) method. That means that with the [block-sparse](https://github.com/huggingface/pytorch_block_sparse) runtime it ran **1.65x** faster than a dense network on the evaluation, at the price of some impact on the accuracy (see below).

This model was fine-tuned from the HuggingFace [BERT](https://www.aclweb.org/anthology/N19-1423/) base uncased checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the equivalent model [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1).

This model is case-insensitive: it does not make a difference between english and English.

## Pruning details

A side-effect of the block pruning is that some of the attention heads are completely removed: 97 heads were removed out of a total of 144 (67.4%). Here is a detailed view of how the remaining heads are distributed in the network after pruning.

![Pruning details](https://huggingface.co/madlag/bert-base-uncased-squad1.1-block-sparse-0.13-v1/raw/main/model_card/pruning.svg)

## Density plot

<script src="/madlag/bert-base-uncased-squad1.1-block-sparse-0.13-v1/raw/main/model_card/density.js" id="34ede51e-2375-4d96-99dd-383de82a2d16"></script>

## Details

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K     |
| SQuAD1.1 | eval  | 11.1K     |

### Fine-tuning

- Python: `3.8.5`
- Machine specs:

```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```

### Results

**Pytorch model file size**: `342M` (original BERT: `438M`)

| Metric | # Value   | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf)) |
| ------ | --------- | --------- |
| **EM** | **74.39** | **80.8**  |
| **F1** | **83.26** | **88.5**  |

## Example Usage

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squad1.1-block-sparse-0.13-v1",
    tokenizer="madlag/bert-base-uncased-squad1.1-block-sparse-0.13-v1"
)

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})

print(predictions)
```
madlag/bert-base-uncased-squad1.1-block-sparse-0.20-v1
2021-05-19T22:33:15.000Z
[ "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "arxiv:2005.07683", "transformers", "license:mit", "bert-base" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "model_meta.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt", "model_card/density.js", "model_card/network.html", "model_card/pruning.svg", "model_card/layer_images/layer_0_attention_output_dense.png", "model_card/layer_images/layer_0_attention_self_key.png", "model_card/layer_images/layer_0_attention_self_query.png", "model_card/layer_images/layer_0_attention_self_value.png", "model_card/layer_images/layer_0_intermediate_dense.png", "model_card/layer_images/layer_0_output_dense.png", "model_card/layer_images/layer_10_attention_output_dense.png", "model_card/layer_images/layer_10_attention_self_key.png", "model_card/layer_images/layer_10_attention_self_query.png", "model_card/layer_images/layer_10_attention_self_value.png", "model_card/layer_images/layer_10_intermediate_dense.png", "model_card/layer_images/layer_10_output_dense.png", "model_card/layer_images/layer_11_attention_output_dense.png", "model_card/layer_images/layer_11_attention_self_key.png", "model_card/layer_images/layer_11_attention_self_query.png", "model_card/layer_images/layer_11_attention_self_value.png", "model_card/layer_images/layer_11_intermediate_dense.png", "model_card/layer_images/layer_11_output_dense.png", "model_card/layer_images/layer_1_attention_output_dense.png", "model_card/layer_images/layer_1_attention_self_key.png", "model_card/layer_images/layer_1_attention_self_query.png", "model_card/layer_images/layer_1_attention_self_value.png", "model_card/layer_images/layer_1_intermediate_dense.png", "model_card/layer_images/layer_1_output_dense.png", "model_card/layer_images/layer_2_attention_output_dense.png", "model_card/layer_images/layer_2_attention_self_key.png", "model_card/layer_images/layer_2_attention_self_query.png", "model_card/layer_images/layer_2_attention_self_value.png", "model_card/layer_images/layer_2_intermediate_dense.png", "model_card/layer_images/layer_2_output_dense.png", "model_card/layer_images/layer_3_attention_output_dense.png", "model_card/layer_images/layer_3_attention_self_key.png", "model_card/layer_images/layer_3_attention_self_query.png", "model_card/layer_images/layer_3_attention_self_value.png", "model_card/layer_images/layer_3_intermediate_dense.png", "model_card/layer_images/layer_3_output_dense.png", "model_card/layer_images/layer_4_attention_output_dense.png", "model_card/layer_images/layer_4_attention_self_key.png", "model_card/layer_images/layer_4_attention_self_query.png", "model_card/layer_images/layer_4_attention_self_value.png", "model_card/layer_images/layer_4_intermediate_dense.png", "model_card/layer_images/layer_4_output_dense.png", "model_card/layer_images/layer_5_attention_output_dense.png", "model_card/layer_images/layer_5_attention_self_key.png", "model_card/layer_images/layer_5_attention_self_query.png", "model_card/layer_images/layer_5_attention_self_value.png", "model_card/layer_images/layer_5_intermediate_dense.png", "model_card/layer_images/layer_5_output_dense.png", "model_card/layer_images/layer_6_attention_output_dense.png", "model_card/layer_images/layer_6_attention_self_key.png", "model_card/layer_images/layer_6_attention_self_query.png", "model_card/layer_images/layer_6_attention_self_value.png", "model_card/layer_images/layer_6_intermediate_dense.png", "model_card/layer_images/layer_6_output_dense.png", "model_card/layer_images/layer_7_attention_output_dense.png", "model_card/layer_images/layer_7_attention_self_key.png", 
"model_card/layer_images/layer_7_attention_self_query.png", "model_card/layer_images/layer_7_attention_self_value.png", "model_card/layer_images/layer_7_intermediate_dense.png", "model_card/layer_images/layer_7_output_dense.png", "model_card/layer_images/layer_8_attention_output_dense.png", "model_card/layer_images/layer_8_attention_self_key.png", "model_card/layer_images/layer_8_attention_self_query.png", "model_card/layer_images/layer_8_attention_self_value.png", "model_card/layer_images/layer_8_intermediate_dense.png", "model_card/layer_images/layer_8_output_dense.png", "model_card/layer_images/layer_9_attention_output_dense.png", "model_card/layer_images/layer_9_attention_self_key.png", "model_card/layer_images/layer_9_attention_self_query.png", "model_card/layer_images/layer_9_attention_self_value.png", "model_card/layer_images/layer_9_intermediate_dense.png", "model_card/layer_images/layer_9_output_dense.png" ]
madlag
46
transformers
---
language: en
thumbnail:
license: mit
tags:
- question-answering
- bert
- bert-base
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## BERT-base uncased model fine-tuned on SQuAD v1

This model is block sparse: the **linear** layers contain **20.2%** of the original weights. The model contains **38.1%** of the original weights **overall**.

The training uses a modified version of Victor Sanh's [Movement Pruning](https://arxiv.org/abs/2005.07683) method. That means that with the [block-sparse](https://github.com/huggingface/pytorch_block_sparse) runtime it ran **1.39x** faster than a dense network on the evaluation, at the price of some impact on the accuracy (see below).

This model was fine-tuned from the HuggingFace [BERT](https://www.aclweb.org/anthology/N19-1423/) base uncased checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the equivalent model [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1).

This model is case-insensitive: it does not make a difference between english and English.

## Pruning details

A side-effect of the block pruning is that some of the attention heads are completely removed: 90 heads were removed out of a total of 144 (62.5%). Here is a detailed view of how the remaining heads are distributed in the network after pruning.

![Pruning details](https://huggingface.co/madlag/bert-base-uncased-squad1.1-block-sparse-0.20-v1/raw/main/model_card/pruning.svg)

## Density plot

<script src="/madlag/bert-base-uncased-squad1.1-block-sparse-0.20-v1/raw/main/model_card/density.js" id="ddbad516-679a-400d-9e28-0182fd89b188"></script>

## Details

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K     |
| SQuAD1.1 | eval  | 11.1K     |

### Fine-tuning

- Python: `3.8.5`
- Machine specs:

```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```

### Results

**Pytorch model file size**: `347M` (original BERT: `438M`)

| Metric | # Value   | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf)) |
| ------ | --------- | --------- |
| **EM** | **76.98** | **80.8**  |
| **F1** | **85.45** | **88.5**  |

## Example Usage

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squad1.1-block-sparse-0.20-v1",
    tokenizer="madlag/bert-base-uncased-squad1.1-block-sparse-0.20-v1"
)

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})

print(predictions)
```
madlag/bert-base-uncased-squad1.1-block-sparse-0.32-v1
2021-05-19T22:33:45.000Z
[ "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "arxiv:2005.07683", "transformers", "license:mit", "bert-base" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "model_meta.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt", "model_card/density.js", "model_card/pruning.svg", "model_card/layer_images/layer_0_attention_output_dense.png", "model_card/layer_images/layer_0_attention_self_key.png", "model_card/layer_images/layer_0_attention_self_query.png", "model_card/layer_images/layer_0_attention_self_value.png", "model_card/layer_images/layer_0_intermediate_dense.png", "model_card/layer_images/layer_0_output_dense.png", "model_card/layer_images/layer_10_attention_output_dense.png", "model_card/layer_images/layer_10_attention_self_key.png", "model_card/layer_images/layer_10_attention_self_query.png", "model_card/layer_images/layer_10_attention_self_value.png", "model_card/layer_images/layer_10_intermediate_dense.png", "model_card/layer_images/layer_10_output_dense.png", "model_card/layer_images/layer_11_attention_output_dense.png", "model_card/layer_images/layer_11_attention_self_key.png", "model_card/layer_images/layer_11_attention_self_query.png", "model_card/layer_images/layer_11_attention_self_value.png", "model_card/layer_images/layer_11_intermediate_dense.png", "model_card/layer_images/layer_11_output_dense.png", "model_card/layer_images/layer_1_attention_output_dense.png", "model_card/layer_images/layer_1_attention_self_key.png", "model_card/layer_images/layer_1_attention_self_query.png", "model_card/layer_images/layer_1_attention_self_value.png", "model_card/layer_images/layer_1_intermediate_dense.png", "model_card/layer_images/layer_1_output_dense.png", "model_card/layer_images/layer_2_attention_output_dense.png", "model_card/layer_images/layer_2_attention_self_key.png", "model_card/layer_images/layer_2_attention_self_query.png", "model_card/layer_images/layer_2_attention_self_value.png", "model_card/layer_images/layer_2_intermediate_dense.png", "model_card/layer_images/layer_2_output_dense.png", "model_card/layer_images/layer_3_attention_output_dense.png", "model_card/layer_images/layer_3_attention_self_key.png", "model_card/layer_images/layer_3_attention_self_query.png", "model_card/layer_images/layer_3_attention_self_value.png", "model_card/layer_images/layer_3_intermediate_dense.png", "model_card/layer_images/layer_3_output_dense.png", "model_card/layer_images/layer_4_attention_output_dense.png", "model_card/layer_images/layer_4_attention_self_key.png", "model_card/layer_images/layer_4_attention_self_query.png", "model_card/layer_images/layer_4_attention_self_value.png", "model_card/layer_images/layer_4_intermediate_dense.png", "model_card/layer_images/layer_4_output_dense.png", "model_card/layer_images/layer_5_attention_output_dense.png", "model_card/layer_images/layer_5_attention_self_key.png", "model_card/layer_images/layer_5_attention_self_query.png", "model_card/layer_images/layer_5_attention_self_value.png", "model_card/layer_images/layer_5_intermediate_dense.png", "model_card/layer_images/layer_5_output_dense.png", "model_card/layer_images/layer_6_attention_output_dense.png", "model_card/layer_images/layer_6_attention_self_key.png", "model_card/layer_images/layer_6_attention_self_query.png", "model_card/layer_images/layer_6_attention_self_value.png", "model_card/layer_images/layer_6_intermediate_dense.png", "model_card/layer_images/layer_6_output_dense.png", "model_card/layer_images/layer_7_attention_output_dense.png", "model_card/layer_images/layer_7_attention_self_key.png", 
"model_card/layer_images/layer_7_attention_self_query.png", "model_card/layer_images/layer_7_attention_self_value.png", "model_card/layer_images/layer_7_intermediate_dense.png", "model_card/layer_images/layer_7_output_dense.png", "model_card/layer_images/layer_8_attention_output_dense.png", "model_card/layer_images/layer_8_attention_self_key.png", "model_card/layer_images/layer_8_attention_self_query.png", "model_card/layer_images/layer_8_attention_self_value.png", "model_card/layer_images/layer_8_intermediate_dense.png", "model_card/layer_images/layer_8_output_dense.png", "model_card/layer_images/layer_9_attention_output_dense.png", "model_card/layer_images/layer_9_attention_self_key.png", "model_card/layer_images/layer_9_attention_self_query.png", "model_card/layer_images/layer_9_attention_self_value.png", "model_card/layer_images/layer_9_intermediate_dense.png", "model_card/layer_images/layer_9_output_dense.png" ]
madlag
22
transformers
---
language: en
thumbnail:
license: mit
tags:
- question-answering
- bert
- bert-base
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## BERT-base uncased model fine-tuned on SQuAD v1

This model is block sparse: the **linear** layers contain **31.7%** of the original weights. The model contains **47.0%** of the original weights **overall**.

The training uses a modified version of Victor Sanh's [Movement Pruning](https://arxiv.org/abs/2005.07683) method. That means that with the [block-sparse](https://github.com/huggingface/pytorch_block_sparse) runtime it ran **1.12x** faster than a dense network on the evaluation, at the price of some impact on the accuracy (see below).

This model was fine-tuned from the HuggingFace [BERT](https://www.aclweb.org/anthology/N19-1423/) base uncased checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the equivalent model [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1).

This model is case-insensitive: it does not make a difference between english and English.

## Pruning details

A side-effect of the block pruning is that some of the attention heads are completely removed: 80 heads were removed out of a total of 144 (55.6%). Here is a detailed view of how the remaining heads are distributed in the network after pruning.

![Pruning details](https://huggingface.co/madlag/bert-base-uncased-squad1.1-block-sparse-0.32-v1/raw/main/model_card/pruning.svg)

## Density plot

<script src="/madlag/bert-base-uncased-squad1.1-block-sparse-0.32-v1/raw/main/model_card/density.js" id="79005f4a-723c-4bf8-bc7f-5ad11676be6c"></script>

## Details

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K     |
| SQuAD1.1 | eval  | 11.1K     |

### Fine-tuning

- Python: `3.8.5`
- Machine specs:

```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```

### Results

**Pytorch model file size**: `355M` (original BERT: `438M`)

| Metric | # Value   | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf)) |
| ------ | --------- | --------- |
| **EM** | **79.04** | **80.8**  |
| **F1** | **86.70** | **88.5**  |

## Example Usage

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squad1.1-block-sparse-0.32-v1",
    tokenizer="madlag/bert-base-uncased-squad1.1-block-sparse-0.32-v1"
)

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})

print(predictions)
```
madlag/bert-base-uncased-squad1.1-pruned-x3.2-v2
2021-05-19T22:34:32.000Z
[ "pytorch", "jax", "bert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "data_args.json", "eval_metrics.json", "evaluate_timing.json", "flax_model.msgpack", "model_args.json", "pytorch_model.bin", "sparse_args.json", "sparsity_report.json", "special_tokens_map.json", "speed_report.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
madlag
9
transformers
madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1
2021-06-16T15:03:46.000Z
[ "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "transformers", "license:mit" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "model_info.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt", "eval/eval_metrics.json", "eval/evaluate_timing.json", "eval/nbest_predictions.json.tgz", "eval/predictions.json", "eval/sparsity_report.json", "eval/speed_report.json", "model_card/density_info.js", "model_card/pruning_info.js", "model_card/images/layer_0_attention_output_dense.png", "model_card/images/layer_0_attention_self_key.png", "model_card/images/layer_0_attention_self_query.png", "model_card/images/layer_0_attention_self_value.png", "model_card/images/layer_0_intermediate_dense.png", "model_card/images/layer_0_output_dense.png", "model_card/images/layer_10_attention_output_dense.png", "model_card/images/layer_10_attention_self_key.png", "model_card/images/layer_10_attention_self_query.png", "model_card/images/layer_10_attention_self_value.png", "model_card/images/layer_10_intermediate_dense.png", "model_card/images/layer_10_output_dense.png", "model_card/images/layer_11_attention_output_dense.png", "model_card/images/layer_11_attention_self_key.png", "model_card/images/layer_11_attention_self_query.png", "model_card/images/layer_11_attention_self_value.png", "model_card/images/layer_11_intermediate_dense.png", "model_card/images/layer_11_output_dense.png", "model_card/images/layer_1_attention_output_dense.png", "model_card/images/layer_1_attention_self_key.png", "model_card/images/layer_1_attention_self_query.png", "model_card/images/layer_1_attention_self_value.png", "model_card/images/layer_1_intermediate_dense.png", "model_card/images/layer_1_output_dense.png", "model_card/images/layer_2_attention_output_dense.png", "model_card/images/layer_2_attention_self_key.png", "model_card/images/layer_2_attention_self_query.png", "model_card/images/layer_2_attention_self_value.png", "model_card/images/layer_2_intermediate_dense.png", "model_card/images/layer_2_output_dense.png", "model_card/images/layer_3_attention_output_dense.png", "model_card/images/layer_3_attention_self_key.png", "model_card/images/layer_3_attention_self_query.png", "model_card/images/layer_3_attention_self_value.png", "model_card/images/layer_3_intermediate_dense.png", "model_card/images/layer_3_output_dense.png", "model_card/images/layer_4_attention_output_dense.png", "model_card/images/layer_4_attention_self_key.png", "model_card/images/layer_4_attention_self_query.png", "model_card/images/layer_4_attention_self_value.png", "model_card/images/layer_4_intermediate_dense.png", "model_card/images/layer_4_output_dense.png", "model_card/images/layer_5_attention_output_dense.png", "model_card/images/layer_5_attention_self_key.png", "model_card/images/layer_5_attention_self_query.png", "model_card/images/layer_5_attention_self_value.png", "model_card/images/layer_5_intermediate_dense.png", "model_card/images/layer_5_output_dense.png", "model_card/images/layer_6_attention_output_dense.png", "model_card/images/layer_6_attention_self_key.png", "model_card/images/layer_6_attention_self_query.png", "model_card/images/layer_6_attention_self_value.png", "model_card/images/layer_6_intermediate_dense.png", "model_card/images/layer_6_output_dense.png", "model_card/images/layer_7_attention_output_dense.png", "model_card/images/layer_7_attention_self_key.png", "model_card/images/layer_7_attention_self_query.png", "model_card/images/layer_7_attention_self_value.png", "model_card/images/layer_7_intermediate_dense.png", "model_card/images/layer_7_output_dense.png", 
"model_card/images/layer_8_attention_output_dense.png", "model_card/images/layer_8_attention_self_key.png", "model_card/images/layer_8_attention_self_query.png", "model_card/images/layer_8_attention_self_value.png", "model_card/images/layer_8_intermediate_dense.png", "model_card/images/layer_8_output_dense.png", "model_card/images/layer_9_attention_output_dense.png", "model_card/images/layer_9_attention_self_key.png", "model_card/images/layer_9_attention_self_query.png", "model_card/images/layer_9_attention_self_value.png", "model_card/images/layer_9_intermediate_dense.png", "model_card/images/layer_9_output_dense.png", "training/data_args.json", "training/model_args.json", "training/sparse_args.json", "training/training_args.bin" ]
madlag
17
transformers
--- language: en thumbnail: license: mit tags: - question-answering - - datasets: - squad metrics: - squad widget: - text: "Where is the Eiffel Tower located?" context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower." - text: "Who is Frederic Chopin?" context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano." --- ## BERT-base uncased model fine-tuned on SQuAD v1 This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the **linear layers contains 8.0%** of the original weights. The model contains **28.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method). With a simple resizing of the linear matrices it ran **1.16x as fast as bert-base-uncased** on the evaluation. This is possible because the pruning method lead to structured matrices: to visualize them, hover below on the plot to see the non-zero/zero parts of each matrix. <div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1/raw/main/model_card/density_info.js" id="c60d09ec-81ff-4d6f-b616-c3ef09b2175d"></script></div> In terms of accuracy, its **F1 is 88.11**, compared with 88.5 for bert-base-uncased, a **F1 drop of 0.39**. ## Fine-Pruning details This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) This model is case-insensitive: it does not make a difference between english and English. A side-effect of the block pruning is that some of the attention heads are completely removed: 22 heads were removed on a total of 144 (15.3%). Here is a detailed view on how the remaining heads are distributed in the network after pruning. <div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1/raw/main/model_card/pruning_info.js" id="55528c8b-d5f5-46a5-a35a-dad93725f7e5"></script></div> ## Details of the SQuAD1.1 dataset | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD1.1 | train | 90.6K | | SQuAD1.1 | eval | 11.1k | ### Fine-tuning - Python: `3.8.5` - Machine specs: ```CPU: Intel(R) Core(TM) i7-6700K CPU Memory: 64 GiB GPUs: 1 GeForce GTX 3090, with 24GiB memory GPU driver: 455.23.05, CUDA: 11.1 ``` ### Results **Pytorch model file size**: `398MB` (original BERT: `420MB`) | Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| Variation | | ------ | --------- | --------- | --------- | | **EM** | **80.94** | **80.8** | **+0.14**| | **F1** | **88.11** | **88.5** | **-0.39**| ## Example Usage Install nn_pruning: it contains the optimization script, which just pack the linear layers into smaller ones by removing empty rows/columns. `pip install nn_pruning` Then you can use the `transformers library` almost as usual: you just have to call `optimize_model` when the pipeline has loaded. 
```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1",
    tokenizer="madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1"
)

print("bert-base-uncased parameters: 152.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")

qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")

print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})

print("Predictions", predictions)
```
madlag/bert-base-uncased-squadv1-x1.84-f88.7-d36-hybrid-filled-v1
2021-06-16T14:53:32.000Z
[ "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "transformers", "license:mit" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "model_info.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt", "eval/eval_metrics.json", "eval/evaluate_timing.json", "eval/nbest_predictions.json.tgz", "eval/predictions.json", "eval/sparsity_report.json", "eval/speed_report.json", "model_card/density_info.js", "model_card/pruning_info.js", "model_card/images/layer_0_attention_output_dense.png", "model_card/images/layer_0_attention_self_key.png", "model_card/images/layer_0_attention_self_query.png", "model_card/images/layer_0_attention_self_value.png", "model_card/images/layer_0_intermediate_dense.png", "model_card/images/layer_0_output_dense.png", "model_card/images/layer_10_attention_output_dense.png", "model_card/images/layer_10_attention_self_key.png", "model_card/images/layer_10_attention_self_query.png", "model_card/images/layer_10_attention_self_value.png", "model_card/images/layer_10_intermediate_dense.png", "model_card/images/layer_10_output_dense.png", "model_card/images/layer_11_attention_output_dense.png", "model_card/images/layer_11_attention_self_key.png", "model_card/images/layer_11_attention_self_query.png", "model_card/images/layer_11_attention_self_value.png", "model_card/images/layer_11_intermediate_dense.png", "model_card/images/layer_11_output_dense.png", "model_card/images/layer_1_attention_output_dense.png", "model_card/images/layer_1_attention_self_key.png", "model_card/images/layer_1_attention_self_query.png", "model_card/images/layer_1_attention_self_value.png", "model_card/images/layer_1_intermediate_dense.png", "model_card/images/layer_1_output_dense.png", "model_card/images/layer_2_attention_output_dense.png", "model_card/images/layer_2_attention_self_key.png", "model_card/images/layer_2_attention_self_query.png", "model_card/images/layer_2_attention_self_value.png", "model_card/images/layer_2_intermediate_dense.png", "model_card/images/layer_2_output_dense.png", "model_card/images/layer_3_attention_output_dense.png", "model_card/images/layer_3_attention_self_key.png", "model_card/images/layer_3_attention_self_query.png", "model_card/images/layer_3_attention_self_value.png", "model_card/images/layer_3_intermediate_dense.png", "model_card/images/layer_3_output_dense.png", "model_card/images/layer_4_attention_output_dense.png", "model_card/images/layer_4_attention_self_key.png", "model_card/images/layer_4_attention_self_query.png", "model_card/images/layer_4_attention_self_value.png", "model_card/images/layer_4_intermediate_dense.png", "model_card/images/layer_4_output_dense.png", "model_card/images/layer_5_attention_output_dense.png", "model_card/images/layer_5_attention_self_key.png", "model_card/images/layer_5_attention_self_query.png", "model_card/images/layer_5_attention_self_value.png", "model_card/images/layer_5_intermediate_dense.png", "model_card/images/layer_5_output_dense.png", "model_card/images/layer_6_attention_output_dense.png", "model_card/images/layer_6_attention_self_key.png", "model_card/images/layer_6_attention_self_query.png", "model_card/images/layer_6_attention_self_value.png", "model_card/images/layer_6_intermediate_dense.png", "model_card/images/layer_6_output_dense.png", "model_card/images/layer_7_attention_output_dense.png", "model_card/images/layer_7_attention_self_key.png", "model_card/images/layer_7_attention_self_query.png", "model_card/images/layer_7_attention_self_value.png", "model_card/images/layer_7_intermediate_dense.png", "model_card/images/layer_7_output_dense.png", 
"model_card/images/layer_8_attention_output_dense.png", "model_card/images/layer_8_attention_self_key.png", "model_card/images/layer_8_attention_self_query.png", "model_card/images/layer_8_attention_self_value.png", "model_card/images/layer_8_intermediate_dense.png", "model_card/images/layer_8_output_dense.png", "model_card/images/layer_9_attention_output_dense.png", "model_card/images/layer_9_attention_self_key.png", "model_card/images/layer_9_attention_self_query.png", "model_card/images/layer_9_attention_self_value.png", "model_card/images/layer_9_intermediate_dense.png", "model_card/images/layer_9_output_dense.png", "training/data_args.json", "training/model_args.json", "training/sparse_args.json", "training/training_args.bin" ]
madlag
22
transformers
---
language: en
thumbnail:
license: mit
tags:
- question-answering
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## BERT-base uncased model fine-tuned on SQuAD v1

This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) Python library: the **linear layers contain 36.0%** of the original weights.

The model contains **50.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).

With a simple resizing of the linear matrices it ran **1.84x as fast as bert-base-uncased** on the evaluation. This is possible because the pruning method leads to structured matrices: to visualize them, hover over the plot below to see the non-zero/zero parts of each matrix.

<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x1.84-f88.7-d36-hybrid-filled-v1/raw/main/model_card/density_info.js" id="3aca15eb-8def-482c-800a-d9f8a6e8cea5"></script></div>

In terms of accuracy, its **F1 is 88.72**, compared with 88.5 for bert-base-uncased, an **F1 gain of 0.22**.

## Fine-Pruning details

This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1).

This model is case-insensitive: it does not make a difference between english and English.

A side-effect of the block pruning is that some of the attention heads are completely removed: 48 heads were removed out of a total of 144 (33.3%). Here is a detailed view of how the remaining heads are distributed in the network after pruning.

<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x1.84-f88.7-d36-hybrid-filled-v1/raw/main/model_card/pruning_info.js" id="95fe9d1f-98f7-40e1-a28f-b90d0da0f1a8"></script></div>

## Details of the SQuAD1.1 dataset

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6k     |
| SQuAD1.1 | eval  | 11.1k     |

### Fine-tuning

- Python: `3.8.5`
- Machine specs:

```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce RTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```

### Results

**Pytorch model file size**: `379MB` (original BERT: `420MB`)

| Metric | # Value   | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf)) | Variation |
| ------ | --------- | --------- | --------- |
| **EM** | **81.69** | **80.8**  | **+0.89** |
| **F1** | **88.72** | **88.5**  | **+0.22** |

## Example Usage

Install nn_pruning: it contains the optimization script, which just packs the linear layers into smaller ones by removing empty rows/columns.

`pip install nn_pruning`

Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` once the pipeline has loaded.
```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squadv1-x1.84-f88.7-d36-hybrid-filled-v1",
    tokenizer="madlag/bert-base-uncased-squadv1-x1.84-f88.7-d36-hybrid-filled-v1"
)

print("bert-base-uncased parameters: 218.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")

qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")

print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})

print("Predictions", predictions)
```
madlag/bert-base-uncased-squadv1-x1.96-f88.3-d27-hybrid-filled-opt-v1
2021-06-16T14:54:10.000Z
[ "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "transformers", "license:mit" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "model_info.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt", "eval/eval_metrics.json", "eval/evaluate_timing.json", "eval/nbest_predictions.json.tgz", "eval/predictions.json", "eval/sparsity_report.json", "eval/speed_report.json", "model_card/density_info.js", "model_card/pruning_info.js", "model_card/images/layer_0_attention_output_dense.png", "model_card/images/layer_0_attention_self_key.png", "model_card/images/layer_0_attention_self_query.png", "model_card/images/layer_0_attention_self_value.png", "model_card/images/layer_0_intermediate_dense.png", "model_card/images/layer_0_output_dense.png", "model_card/images/layer_10_attention_output_dense.png", "model_card/images/layer_10_attention_self_key.png", "model_card/images/layer_10_attention_self_query.png", "model_card/images/layer_10_attention_self_value.png", "model_card/images/layer_10_intermediate_dense.png", "model_card/images/layer_10_output_dense.png", "model_card/images/layer_11_attention_output_dense.png", "model_card/images/layer_11_attention_self_key.png", "model_card/images/layer_11_attention_self_query.png", "model_card/images/layer_11_attention_self_value.png", "model_card/images/layer_11_intermediate_dense.png", "model_card/images/layer_11_output_dense.png", "model_card/images/layer_1_attention_output_dense.png", "model_card/images/layer_1_attention_self_key.png", "model_card/images/layer_1_attention_self_query.png", "model_card/images/layer_1_attention_self_value.png", "model_card/images/layer_1_intermediate_dense.png", "model_card/images/layer_1_output_dense.png", "model_card/images/layer_2_attention_output_dense.png", "model_card/images/layer_2_attention_self_key.png", "model_card/images/layer_2_attention_self_query.png", "model_card/images/layer_2_attention_self_value.png", "model_card/images/layer_2_intermediate_dense.png", "model_card/images/layer_2_output_dense.png", "model_card/images/layer_3_attention_output_dense.png", "model_card/images/layer_3_attention_self_key.png", "model_card/images/layer_3_attention_self_query.png", "model_card/images/layer_3_attention_self_value.png", "model_card/images/layer_3_intermediate_dense.png", "model_card/images/layer_3_output_dense.png", "model_card/images/layer_4_attention_output_dense.png", "model_card/images/layer_4_attention_self_key.png", "model_card/images/layer_4_attention_self_query.png", "model_card/images/layer_4_attention_self_value.png", "model_card/images/layer_4_intermediate_dense.png", "model_card/images/layer_4_output_dense.png", "model_card/images/layer_5_attention_output_dense.png", "model_card/images/layer_5_attention_self_key.png", "model_card/images/layer_5_attention_self_query.png", "model_card/images/layer_5_attention_self_value.png", "model_card/images/layer_5_intermediate_dense.png", "model_card/images/layer_5_output_dense.png", "model_card/images/layer_6_attention_output_dense.png", "model_card/images/layer_6_attention_self_key.png", "model_card/images/layer_6_attention_self_query.png", "model_card/images/layer_6_attention_self_value.png", "model_card/images/layer_6_intermediate_dense.png", "model_card/images/layer_6_output_dense.png", "model_card/images/layer_7_attention_output_dense.png", "model_card/images/layer_7_attention_self_key.png", "model_card/images/layer_7_attention_self_query.png", "model_card/images/layer_7_attention_self_value.png", "model_card/images/layer_7_intermediate_dense.png", "model_card/images/layer_7_output_dense.png", 
"model_card/images/layer_8_attention_output_dense.png", "model_card/images/layer_8_attention_self_key.png", "model_card/images/layer_8_attention_self_query.png", "model_card/images/layer_8_attention_self_value.png", "model_card/images/layer_8_intermediate_dense.png", "model_card/images/layer_8_output_dense.png", "model_card/images/layer_9_attention_output_dense.png", "model_card/images/layer_9_attention_self_key.png", "model_card/images/layer_9_attention_self_query.png", "model_card/images/layer_9_attention_self_value.png", "model_card/images/layer_9_intermediate_dense.png", "model_card/images/layer_9_output_dense.png", "training/data_args.json", "training/model_args.json", "training/sparse_args.json", "training/training_args.bin" ]
madlag
22
transformers
---
language: en
thumbnail:
license: mit
tags:
- question-answering
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## BERT-base uncased model fine-tuned on SQuAD v1

This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) Python library: the **linear layers contain 27.0%** of the original weights.

This model **CANNOT be used without the nn_pruning `optimize_model` function**, as it uses NoNorms instead of LayerNorms, which is not currently supported by the Transformers library.

It uses ReLUs instead of GeLUs as in the initial BERT network, to speed up inference. This needs no special handling, as it is supported by the Transformers library and flagged in the model config by the `"hidden_act": "relu"` entry.

The model contains **43.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).

With a simple resizing of the linear matrices it ran **1.96x as fast as bert-base-uncased** on the evaluation. This is possible because the pruning method leads to structured matrices: to visualize them, hover over the plot below to see the non-zero/zero parts of each matrix.

<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x1.96-f88.3-d27-hybrid-filled-opt-v1/raw/main/model_card/density_info.js" id="aa996a95-2c09-4974-ae46-778cf5b50833"></script></div>

In terms of accuracy, its **F1 is 88.33**, compared with 88.5 for bert-base-uncased, an **F1 drop of 0.17**.

## Fine-Pruning details

This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad).

This model is case-insensitive: it does not make a difference between english and English.

A side-effect of the block pruning is that some of the attention heads are completely removed: 55 heads were removed out of a total of 144 (38.2%). Here is a detailed view of how the remaining heads are distributed in the network after pruning.

<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x1.96-f88.3-d27-hybrid-filled-opt-v1/raw/main/model_card/pruning_info.js" id="d74872e0-a89c-4ce0-b0fa-1c5709b67cd9"></script></div>

## Details of the SQuAD1.1 dataset

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6k     |
| SQuAD1.1 | eval  | 11.1k     |

### Fine-tuning

- Python: `3.8.5`
- Machine specs:

```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce RTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```

### Results

**Pytorch model file size**: `374MB` (original BERT: `420MB`)

| Metric | # Value   | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf)) | Variation |
| ------ | --------- | --------- | --------- |
| **EM** | **81.31** | **80.8**  | **+0.51** |
| **F1** | **88.33** | **88.5**  | **-0.17** |

## Example Usage

Install nn_pruning: it contains the optimization script, which just packs the linear layers into smaller ones by removing empty rows/columns.

`pip install nn_pruning`

Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` once the pipeline has loaded.

```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squadv1-x1.96-f88.3-d27-hybrid-filled-opt-v1",
    tokenizer="madlag/bert-base-uncased-squadv1-x1.96-f88.3-d27-hybrid-filled-opt-v1"
)

print("bert-base-uncased parameters: 191.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")

qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")

print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})

print("Predictions", predictions)
```
madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1
2021-06-16T15:02:14.000Z
[ "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "transformers", "license:mit" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "model_info.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt", "eval/eval_metrics.json", "eval/evaluate_timing.json", "eval/nbest_predictions.json.tgz", "eval/predictions.json", "eval/sparsity_report.json", "eval/speed_report.json", "model_card/density_info.js", "model_card/pruning_info.js", "model_card/images/layer_0_attention_output_dense.png", "model_card/images/layer_0_attention_self_key.png", "model_card/images/layer_0_attention_self_query.png", "model_card/images/layer_0_attention_self_value.png", "model_card/images/layer_0_intermediate_dense.png", "model_card/images/layer_0_output_dense.png", "model_card/images/layer_10_attention_output_dense.png", "model_card/images/layer_10_attention_self_key.png", "model_card/images/layer_10_attention_self_query.png", "model_card/images/layer_10_attention_self_value.png", "model_card/images/layer_10_intermediate_dense.png", "model_card/images/layer_10_output_dense.png", "model_card/images/layer_11_attention_output_dense.png", "model_card/images/layer_11_attention_self_key.png", "model_card/images/layer_11_attention_self_query.png", "model_card/images/layer_11_attention_self_value.png", "model_card/images/layer_11_intermediate_dense.png", "model_card/images/layer_11_output_dense.png", "model_card/images/layer_1_attention_output_dense.png", "model_card/images/layer_1_attention_self_key.png", "model_card/images/layer_1_attention_self_query.png", "model_card/images/layer_1_attention_self_value.png", "model_card/images/layer_1_intermediate_dense.png", "model_card/images/layer_1_output_dense.png", "model_card/images/layer_2_attention_output_dense.png", "model_card/images/layer_2_attention_self_key.png", "model_card/images/layer_2_attention_self_query.png", "model_card/images/layer_2_attention_self_value.png", "model_card/images/layer_2_intermediate_dense.png", "model_card/images/layer_2_output_dense.png", "model_card/images/layer_3_attention_output_dense.png", "model_card/images/layer_3_attention_self_key.png", "model_card/images/layer_3_attention_self_query.png", "model_card/images/layer_3_attention_self_value.png", "model_card/images/layer_3_intermediate_dense.png", "model_card/images/layer_3_output_dense.png", "model_card/images/layer_4_attention_output_dense.png", "model_card/images/layer_4_attention_self_key.png", "model_card/images/layer_4_attention_self_query.png", "model_card/images/layer_4_attention_self_value.png", "model_card/images/layer_4_intermediate_dense.png", "model_card/images/layer_4_output_dense.png", "model_card/images/layer_5_attention_output_dense.png", "model_card/images/layer_5_attention_self_key.png", "model_card/images/layer_5_attention_self_query.png", "model_card/images/layer_5_attention_self_value.png", "model_card/images/layer_5_intermediate_dense.png", "model_card/images/layer_5_output_dense.png", "model_card/images/layer_6_attention_output_dense.png", "model_card/images/layer_6_attention_self_key.png", "model_card/images/layer_6_attention_self_query.png", "model_card/images/layer_6_attention_self_value.png", "model_card/images/layer_6_intermediate_dense.png", "model_card/images/layer_6_output_dense.png", "model_card/images/layer_7_attention_output_dense.png", "model_card/images/layer_7_attention_self_key.png", "model_card/images/layer_7_attention_self_query.png", "model_card/images/layer_7_attention_self_value.png", "model_card/images/layer_7_intermediate_dense.png", "model_card/images/layer_7_output_dense.png", 
"model_card/images/layer_8_attention_output_dense.png", "model_card/images/layer_8_attention_self_key.png", "model_card/images/layer_8_attention_self_query.png", "model_card/images/layer_8_attention_self_value.png", "model_card/images/layer_8_intermediate_dense.png", "model_card/images/layer_8_output_dense.png", "model_card/images/layer_9_attention_output_dense.png", "model_card/images/layer_9_attention_self_key.png", "model_card/images/layer_9_attention_self_query.png", "model_card/images/layer_9_attention_self_value.png", "model_card/images/layer_9_intermediate_dense.png", "model_card/images/layer_9_output_dense.png", "training/data_args.json", "training/model_args.json", "training/sparse_args.json", "training/training_args.bin" ]
madlag
69
transformers
---
language: en
thumbnail:
license: mit
tags:
- question-answering
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## BERT-base uncased model fine-tuned on SQuAD v1

This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) Python library: the **linear layers contain 30.0%** of the original weights.

This model **CANNOT be used without the nn_pruning `optimize_model` function**, as it uses NoNorms instead of LayerNorms, which is not currently supported by the Transformers library.

It uses ReLUs instead of GeLUs as in the initial BERT network, to speed up inference. This needs no special handling, as it is supported by the Transformers library and flagged in the model config by the `"hidden_act": "relu"` entry.

The model contains **45.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).

With a simple resizing of the linear matrices it ran **2.01x as fast as bert-base-uncased** on the evaluation. This is possible because the pruning method leads to structured matrices: to visualize them, hover over the plot below to see the non-zero/zero parts of each matrix.

<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1/raw/main/model_card/density_info.js" id="c3b978cc-6d18-4fd0-a24b-e4369569d64d"></script></div>

In terms of accuracy, its **F1 is 89.19**, compared with 88.5 for bert-base-uncased, an **F1 gain of 0.69**.

## Fine-Pruning details

This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad).

This model is case-insensitive: it does not make a difference between english and English.

A side-effect of the block pruning is that some of the attention heads are completely removed: 55 heads were removed out of a total of 144 (38.2%). Here is a detailed view of how the remaining heads are distributed in the network after pruning.

<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1/raw/main/model_card/pruning_info.js" id="7de38b6d-774c-4313-a5a4-8e32f554d9ec"></script></div>

## Details of the SQuAD1.1 dataset

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6k     |
| SQuAD1.1 | eval  | 11.1k     |

### Fine-tuning

- Python: `3.8.5`
- Machine specs:

```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce RTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```

### Results

**Pytorch model file size**: `374MB` (original BERT: `420MB`)

| Metric | # Value   | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf)) | Variation |
| ------ | --------- | --------- | --------- |
| **EM** | **82.21** | **80.8**  | **+1.41** |
| **F1** | **89.19** | **88.5**  | **+0.69** |

## Example Usage

Install nn_pruning: it contains the optimization script, which just packs the linear layers into smaller ones by removing empty rows/columns.

`pip install nn_pruning`

Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` once the pipeline has loaded.

```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1",
    tokenizer="madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1"
)

print("bert-base-uncased parameters: 200.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")

qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")

print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})

print("Predictions", predictions)
```
madlag/bert-base-uncased-squadv1-x2.32-f86.6-d15-hybrid-v1
2021-06-16T15:06:42.000Z
[ "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "transformers", "license:mit" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "model_info.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt", "eval/eval_metrics.json", "eval/evaluate_timing.json", "eval/nbest_predictions.json.tgz", "eval/predictions.json", "eval/sparsity_report.json", "eval/speed_report.json", "model_card/density_info.js", "model_card/pruning_info.js", "model_card/images/layer_0_attention_output_dense.png", "model_card/images/layer_0_attention_self_key.png", "model_card/images/layer_0_attention_self_query.png", "model_card/images/layer_0_attention_self_value.png", "model_card/images/layer_0_intermediate_dense.png", "model_card/images/layer_0_output_dense.png", "model_card/images/layer_10_attention_output_dense.png", "model_card/images/layer_10_attention_self_key.png", "model_card/images/layer_10_attention_self_query.png", "model_card/images/layer_10_attention_self_value.png", "model_card/images/layer_10_intermediate_dense.png", "model_card/images/layer_10_output_dense.png", "model_card/images/layer_11_attention_output_dense.png", "model_card/images/layer_11_attention_self_key.png", "model_card/images/layer_11_attention_self_query.png", "model_card/images/layer_11_attention_self_value.png", "model_card/images/layer_11_intermediate_dense.png", "model_card/images/layer_11_output_dense.png", "model_card/images/layer_1_attention_output_dense.png", "model_card/images/layer_1_attention_self_key.png", "model_card/images/layer_1_attention_self_query.png", "model_card/images/layer_1_attention_self_value.png", "model_card/images/layer_1_intermediate_dense.png", "model_card/images/layer_1_output_dense.png", "model_card/images/layer_2_attention_output_dense.png", "model_card/images/layer_2_attention_self_key.png", "model_card/images/layer_2_attention_self_query.png", "model_card/images/layer_2_attention_self_value.png", "model_card/images/layer_2_intermediate_dense.png", "model_card/images/layer_2_output_dense.png", "model_card/images/layer_3_attention_output_dense.png", "model_card/images/layer_3_attention_self_key.png", "model_card/images/layer_3_attention_self_query.png", "model_card/images/layer_3_attention_self_value.png", "model_card/images/layer_3_intermediate_dense.png", "model_card/images/layer_3_output_dense.png", "model_card/images/layer_4_attention_output_dense.png", "model_card/images/layer_4_attention_self_key.png", "model_card/images/layer_4_attention_self_query.png", "model_card/images/layer_4_attention_self_value.png", "model_card/images/layer_4_intermediate_dense.png", "model_card/images/layer_4_output_dense.png", "model_card/images/layer_5_attention_output_dense.png", "model_card/images/layer_5_attention_self_key.png", "model_card/images/layer_5_attention_self_query.png", "model_card/images/layer_5_attention_self_value.png", "model_card/images/layer_5_intermediate_dense.png", "model_card/images/layer_5_output_dense.png", "model_card/images/layer_6_attention_output_dense.png", "model_card/images/layer_6_attention_self_key.png", "model_card/images/layer_6_attention_self_query.png", "model_card/images/layer_6_attention_self_value.png", "model_card/images/layer_6_intermediate_dense.png", "model_card/images/layer_6_output_dense.png", "model_card/images/layer_7_attention_output_dense.png", "model_card/images/layer_7_attention_self_key.png", "model_card/images/layer_7_attention_self_query.png", "model_card/images/layer_7_attention_self_value.png", "model_card/images/layer_7_intermediate_dense.png", "model_card/images/layer_7_output_dense.png", 
"model_card/images/layer_8_attention_output_dense.png", "model_card/images/layer_8_attention_self_key.png", "model_card/images/layer_8_attention_self_query.png", "model_card/images/layer_8_attention_self_value.png", "model_card/images/layer_8_intermediate_dense.png", "model_card/images/layer_8_output_dense.png", "model_card/images/layer_9_attention_output_dense.png", "model_card/images/layer_9_attention_self_key.png", "model_card/images/layer_9_attention_self_query.png", "model_card/images/layer_9_attention_self_value.png", "model_card/images/layer_9_intermediate_dense.png", "model_card/images/layer_9_output_dense.png", "training/data_args.json", "training/model_args.json", "training/sparse_args.json", "training/training_args.bin" ]
madlag
51
transformers
---
language: en
thumbnail:
license: mit
tags:
- question-answering
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## BERT-base uncased model fine-tuned on SQuAD v1

This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) Python library: the **linear layers contain 15.0%** of the original weights.

The model contains **34.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).

With a simple resizing of the linear matrices it ran **2.32x as fast as bert-base-uncased** on the evaluation. This is possible because the pruning method leads to structured matrices: to visualize them, hover over the plot below to see the non-zero/zero parts of each matrix.

<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x2.32-f86.6-d15-hybrid-v1/raw/main/model_card/density_info.js" id="1ff1ba08-69d3-4a20-9f29-494033c72860"></script></div>

In terms of accuracy, its **F1 is 86.64**, compared with 88.5 for bert-base-uncased, an **F1 drop of 1.86**.

## Fine-Pruning details

This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad).

This model is case-insensitive: it does not make a difference between english and English.

A side-effect of the block pruning is that some of the attention heads are completely removed: 63 heads were removed out of a total of 144 (43.8%). Here is a detailed view of how the remaining heads are distributed in the network after pruning.

<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x2.32-f86.6-d15-hybrid-v1/raw/main/model_card/pruning_info.js" id="e092ee84-28af-4821-8127-11914f68e306"></script></div>

## Details of the SQuAD1.1 dataset

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6k     |
| SQuAD1.1 | eval  | 11.1k     |

### Fine-tuning

- Python: `3.8.5`
- Machine specs:

```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce RTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```

### Results

**Pytorch model file size**: `368MB` (original BERT: `420MB`)

| Metric | # Value   | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf)) | Variation |
| ------ | --------- | --------- | --------- |
| **EM** | **78.77** | **80.8**  | **-2.03** |
| **F1** | **86.64** | **88.5**  | **-1.86** |

## Example Usage

Install nn_pruning: it contains the optimization script, which just packs the linear layers into smaller ones by removing empty rows/columns.

`pip install nn_pruning`

Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` once the pipeline has loaded.
```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squadv1-x2.32-f86.6-d15-hybrid-v1",
    tokenizer="madlag/bert-base-uncased-squadv1-x2.32-f86.6-d15-hybrid-v1"
)

print("bert-base-uncased parameters: 165.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")

qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")

print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})

print("Predictions", predictions)
```
madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1
2021-06-16T14:53:51.000Z
[ "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "transformers", "license:mit" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "model_info.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt", "eval/eval_metrics.json", "eval/evaluate_timing.json", "eval/nbest_predictions.json.tgz", "eval/predictions.json", "eval/sparsity_report.json", "eval/speed_report.json", "model_card/density_info.js", "model_card/pruning_info.js", "model_card/images/layer_0_attention_output_dense.png", "model_card/images/layer_0_attention_self_key.png", "model_card/images/layer_0_attention_self_query.png", "model_card/images/layer_0_attention_self_value.png", "model_card/images/layer_0_intermediate_dense.png", "model_card/images/layer_0_output_dense.png", "model_card/images/layer_10_attention_output_dense.png", "model_card/images/layer_10_attention_self_key.png", "model_card/images/layer_10_attention_self_query.png", "model_card/images/layer_10_attention_self_value.png", "model_card/images/layer_10_intermediate_dense.png", "model_card/images/layer_10_output_dense.png", "model_card/images/layer_11_attention_output_dense.png", "model_card/images/layer_11_attention_self_key.png", "model_card/images/layer_11_attention_self_query.png", "model_card/images/layer_11_attention_self_value.png", "model_card/images/layer_11_intermediate_dense.png", "model_card/images/layer_11_output_dense.png", "model_card/images/layer_1_attention_output_dense.png", "model_card/images/layer_1_attention_self_key.png", "model_card/images/layer_1_attention_self_query.png", "model_card/images/layer_1_attention_self_value.png", "model_card/images/layer_1_intermediate_dense.png", "model_card/images/layer_1_output_dense.png", "model_card/images/layer_2_attention_output_dense.png", "model_card/images/layer_2_attention_self_key.png", "model_card/images/layer_2_attention_self_query.png", "model_card/images/layer_2_attention_self_value.png", "model_card/images/layer_2_intermediate_dense.png", "model_card/images/layer_2_output_dense.png", "model_card/images/layer_3_attention_output_dense.png", "model_card/images/layer_3_attention_self_key.png", "model_card/images/layer_3_attention_self_query.png", "model_card/images/layer_3_attention_self_value.png", "model_card/images/layer_3_intermediate_dense.png", "model_card/images/layer_3_output_dense.png", "model_card/images/layer_4_attention_output_dense.png", "model_card/images/layer_4_attention_self_key.png", "model_card/images/layer_4_attention_self_query.png", "model_card/images/layer_4_attention_self_value.png", "model_card/images/layer_4_intermediate_dense.png", "model_card/images/layer_4_output_dense.png", "model_card/images/layer_5_attention_output_dense.png", "model_card/images/layer_5_attention_self_key.png", "model_card/images/layer_5_attention_self_query.png", "model_card/images/layer_5_attention_self_value.png", "model_card/images/layer_5_intermediate_dense.png", "model_card/images/layer_5_output_dense.png", "model_card/images/layer_6_attention_output_dense.png", "model_card/images/layer_6_attention_self_key.png", "model_card/images/layer_6_attention_self_query.png", "model_card/images/layer_6_attention_self_value.png", "model_card/images/layer_6_intermediate_dense.png", "model_card/images/layer_6_output_dense.png", "model_card/images/layer_7_attention_output_dense.png", "model_card/images/layer_7_attention_self_key.png", "model_card/images/layer_7_attention_self_query.png", "model_card/images/layer_7_attention_self_value.png", "model_card/images/layer_7_intermediate_dense.png", "model_card/images/layer_7_output_dense.png", 
"model_card/images/layer_8_attention_output_dense.png", "model_card/images/layer_8_attention_self_key.png", "model_card/images/layer_8_attention_self_query.png", "model_card/images/layer_8_attention_self_value.png", "model_card/images/layer_8_intermediate_dense.png", "model_card/images/layer_8_output_dense.png", "model_card/images/layer_9_attention_output_dense.png", "model_card/images/layer_9_attention_self_key.png", "model_card/images/layer_9_attention_self_query.png", "model_card/images/layer_9_attention_self_value.png", "model_card/images/layer_9_intermediate_dense.png", "model_card/images/layer_9_output_dense.png", "training/data_args.json", "training/model_args.json", "training/sparse_args.json", "training/training_args.bin" ]
madlag
27
transformers
---
language: en
thumbnail:
license: mit
tags:
- question-answering
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## BERT-base uncased model fine-tuned on SQuAD v1

This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) Python library: the **linear layers contain 26.0%** of the original weights.

The model contains **42.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).

With a simple resizing of the linear matrices it ran **2.44x as fast as bert-base-uncased** on the evaluation. This is possible because the pruning method leads to structured matrices: to visualize them, hover over the plot below to see the non-zero/zero parts of each matrix.

<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1/raw/main/model_card/density_info.js" id="d5d1b3e9-73f5-4cfc-8e33-3745054bc7d0"></script></div>

In terms of accuracy, its **F1 is 87.71**, compared with 88.5 for bert-base-uncased, an **F1 drop of 0.79**.

## Fine-Pruning details

This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1).

This model is case-insensitive: it does not make a difference between english and English.

A side-effect of the block pruning is that some of the attention heads are completely removed: 80 heads were removed out of a total of 144 (55.6%). Here is a detailed view of how the remaining heads are distributed in the network after pruning.

<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1/raw/main/model_card/pruning_info.js" id="ccef8803-4310-4434-997e-c9dc158cabdb"></script></div>

## Details of the SQuAD1.1 dataset

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6k     |
| SQuAD1.1 | eval  | 11.1k     |

### Fine-tuning

- Python: `3.8.5`
- Machine specs:

```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce RTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```

### Results

**Pytorch model file size**: `355MB` (original BERT: `420MB`)

| Metric | # Value   | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf)) | Variation |
| ------ | --------- | --------- | --------- |
| **EM** | **80.03** | **80.8**  | **-0.77** |
| **F1** | **87.71** | **88.5**  | **-0.79** |

## Example Usage

Install nn_pruning: it contains the optimization script, which just packs the linear layers into smaller ones by removing empty rows/columns.

`pip install nn_pruning`

Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` once the pipeline has loaded.
```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1",
    tokenizer="madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1"
)

print("bert-base-uncased parameters: 189.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")

qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")

print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})

print("Predictions", predictions)
```
madlag/bert-large-uncased-mnli
2021-05-19T22:40:43.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
madlag
107
transformers
## BERT-large fine-tuned on MNLI

The [reference fine-tuned model](https://github.com/google-research/bert) has an accuracy of 86.05; we get 86.7:

```
{'eval_loss': 0.3984006643295288, 'eval_accuracy': 0.8667345899133979}
```
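Below is a minimal inference sketch (not part of the original card): it scores a premise/hypothesis pair with this checkpoint. The example sentences are placeholders, and the label names come from the checkpoint's config, so read `model.config.id2label` rather than assuming a fixed order.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "madlag/bert-large-uncased-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the sentence pair the way BERT expects: [CLS] premise [SEP] hypothesis [SEP]
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the argmax back to a label name via the config's id2label table.
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```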
madlag/bert-large-uncased-squadv2
2021-05-19T22:43:07.000Z
[ "pytorch", "jax", "bert", "question-answering", "arxiv:1810.04805", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "eval_results.json", "flax_model.msgpack", "pytorch_model.bin", "run.sh", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
madlag
55
transformers
## BERT-large fine-tuned on SQuAD v2

F1 on dev [from the paper](https://arxiv.org/pdf/1810.04805v2.pdf) is 81.9; we reach 81.58.

```
{'exact': 78.6321906847469,
 'f1': 81.5816656803201,
 'total': 11873,
 'HasAns_exact': 73.73481781376518,
 'HasAns_f1': 79.64222615088413,
 'HasAns_total': 5928,
 'NoAns_exact': 83.51555929352396,
 'NoAns_f1': 83.51555929352396,
 'NoAns_total': 5945,
 'best_exact': 78.6321906847469,
 'best_exact_thresh': 0.0,
 'best_f1': 81.58166568032006,
 'best_f1_thresh': 0.0,
 'epoch': 1.59}
```

```
python run_qa.py \
  --model_name_or_path bert-large-uncased \
  --dataset_name squad_v2 \
  --do_train \
  --do_eval \
  --save_steps 2500 \
  --eval_steps 2500 \
  --evaluation_strategy steps \
  --per_device_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir bert-large-uncased-squadv2 \
  --version_2_with_negative 1
```
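As an illustration (not from the original card), here is a minimal sketch of querying the model through the standard question-answering pipeline; passing `handle_impossible_answer=True` lets it return an empty answer for unanswerable SQuAD v2-style questions. The question/context pair is a made-up example.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="madlag/bert-large-uncased-squadv2",
    tokenizer="madlag/bert-large-uncased-squadv2",
)

# handle_impossible_answer=True allows an empty string to be returned
# when the context does not contain the answer, matching SQuAD v2 semantics.
prediction = qa(
    question="Who designed the Eiffel Tower?",
    context="The Eiffel Tower was designed by the company of Gustave Eiffel.",
    handle_impossible_answer=True,
)
print(prediction)
```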
madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2
2021-05-19T22:45:40.000Z
[ "pytorch", "jax", "bert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "all_results.json", "config.json", "eval_results.json", "flax_model.msgpack", "pytorch_model.bin", "run.sh", "special_tokens_map.json", "tokenizer_config.json", "train_results.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
madlag
49
transformers
This model was trained with [run.sh](https://huggingface.co/madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2/blob/main/run.sh), using the question-answering example code from `transformers`.

Evaluation results: F1 = 85.85, a much better result than the original 81.9 from the BERT paper, due to the use of the "whole-word-masking" variation.

```
{
  "HasAns_exact": 80.58367071524967,
  "HasAns_f1": 86.64594807945029,
  "HasAns_total": 5928,
  "NoAns_exact": 85.06307821698907,
  "NoAns_f1": 85.06307821698907,
  "NoAns_total": 5945,
  "best_exact": 82.82658131895899,
  "best_exact_thresh": 0.0,
  "best_f1": 85.85337995578023,
  "best_f1_thresh": 0.0,
  "epoch": 2.0,
  "eval_samples": 12134,
  "exact": 82.82658131895899,
  "f1": 85.85337995578037,
  "total": 11873
}
```
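For reference, a minimal inference sketch (not part of the original card) using the raw model API: the answer span is recovered by taking the argmax of the start and end logits. The question/context pair is a made-up example.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "Who designed the Eiffel Tower?"
context = "The Eiffel Tower was designed by the company of Gustave Eiffel."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Greedy span extraction: most likely start and end token positions.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```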
madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1
2021-06-16T17:10:27.000Z
[ "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad_v2", "transformers", "license:mit" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "model_info.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt", "eval/eval_metrics.json", "eval/evaluate_timing.json", "eval/nbest_predictions.json.tgz", "eval/null_odds.json", "eval/predictions.json", "eval/sparsity_report.json", "eval/speed_report.json", "model_card/density_info.js", "model_card/pruning_info.js", "model_card/images/layer_0_attention_output_dense.png", "model_card/images/layer_0_attention_self_key.png", "model_card/images/layer_0_attention_self_query.png", "model_card/images/layer_0_attention_self_value.png", "model_card/images/layer_0_intermediate_dense.png", "model_card/images/layer_0_output_dense.png", "model_card/images/layer_10_attention_output_dense.png", "model_card/images/layer_10_attention_self_key.png", "model_card/images/layer_10_attention_self_query.png", "model_card/images/layer_10_attention_self_value.png", "model_card/images/layer_10_intermediate_dense.png", "model_card/images/layer_10_output_dense.png", "model_card/images/layer_11_attention_output_dense.png", "model_card/images/layer_11_attention_self_key.png", "model_card/images/layer_11_attention_self_query.png", "model_card/images/layer_11_attention_self_value.png", "model_card/images/layer_11_intermediate_dense.png", "model_card/images/layer_11_output_dense.png", "model_card/images/layer_12_attention_output_dense.png", "model_card/images/layer_12_attention_self_key.png", "model_card/images/layer_12_attention_self_query.png", "model_card/images/layer_12_attention_self_value.png", "model_card/images/layer_12_intermediate_dense.png", "model_card/images/layer_12_output_dense.png", "model_card/images/layer_13_attention_output_dense.png", "model_card/images/layer_13_attention_self_key.png", "model_card/images/layer_13_attention_self_query.png", "model_card/images/layer_13_attention_self_value.png", "model_card/images/layer_13_intermediate_dense.png", "model_card/images/layer_13_output_dense.png", "model_card/images/layer_14_attention_output_dense.png", "model_card/images/layer_14_attention_self_key.png", "model_card/images/layer_14_attention_self_query.png", "model_card/images/layer_14_attention_self_value.png", "model_card/images/layer_14_intermediate_dense.png", "model_card/images/layer_14_output_dense.png", "model_card/images/layer_15_attention_output_dense.png", "model_card/images/layer_15_attention_self_key.png", "model_card/images/layer_15_attention_self_query.png", "model_card/images/layer_15_attention_self_value.png", "model_card/images/layer_15_intermediate_dense.png", "model_card/images/layer_15_output_dense.png", "model_card/images/layer_16_attention_output_dense.png", "model_card/images/layer_16_attention_self_key.png", "model_card/images/layer_16_attention_self_query.png", "model_card/images/layer_16_attention_self_value.png", "model_card/images/layer_16_intermediate_dense.png", "model_card/images/layer_16_output_dense.png", "model_card/images/layer_17_attention_output_dense.png", "model_card/images/layer_17_attention_self_key.png", "model_card/images/layer_17_attention_self_query.png", "model_card/images/layer_17_attention_self_value.png", "model_card/images/layer_17_intermediate_dense.png", "model_card/images/layer_17_output_dense.png", "model_card/images/layer_18_attention_output_dense.png", "model_card/images/layer_18_attention_self_key.png", "model_card/images/layer_18_attention_self_query.png", "model_card/images/layer_18_attention_self_value.png", 
"model_card/images/layer_18_intermediate_dense.png", "model_card/images/layer_18_output_dense.png", "model_card/images/layer_19_attention_output_dense.png", "model_card/images/layer_19_attention_self_key.png", "model_card/images/layer_19_attention_self_query.png", "model_card/images/layer_19_attention_self_value.png", "model_card/images/layer_19_intermediate_dense.png", "model_card/images/layer_19_output_dense.png", "model_card/images/layer_1_attention_output_dense.png", "model_card/images/layer_1_attention_self_key.png", "model_card/images/layer_1_attention_self_query.png", "model_card/images/layer_1_attention_self_value.png", "model_card/images/layer_1_intermediate_dense.png", "model_card/images/layer_1_output_dense.png", "model_card/images/layer_20_attention_output_dense.png", "model_card/images/layer_20_attention_self_key.png", "model_card/images/layer_20_attention_self_query.png", "model_card/images/layer_20_attention_self_value.png", "model_card/images/layer_20_intermediate_dense.png", "model_card/images/layer_20_output_dense.png", "model_card/images/layer_21_attention_output_dense.png", "model_card/images/layer_21_attention_self_key.png", "model_card/images/layer_21_attention_self_query.png", "model_card/images/layer_21_attention_self_value.png", "model_card/images/layer_21_intermediate_dense.png", "model_card/images/layer_21_output_dense.png", "model_card/images/layer_22_attention_output_dense.png", "model_card/images/layer_22_attention_self_key.png", "model_card/images/layer_22_attention_self_query.png", "model_card/images/layer_22_attention_self_value.png", "model_card/images/layer_22_intermediate_dense.png", "model_card/images/layer_22_output_dense.png", "model_card/images/layer_23_attention_output_dense.png", "model_card/images/layer_23_attention_self_key.png", "model_card/images/layer_23_attention_self_query.png", "model_card/images/layer_23_attention_self_value.png", "model_card/images/layer_23_intermediate_dense.png", "model_card/images/layer_23_output_dense.png", "model_card/images/layer_2_attention_output_dense.png", "model_card/images/layer_2_attention_self_key.png", "model_card/images/layer_2_attention_self_query.png", "model_card/images/layer_2_attention_self_value.png", "model_card/images/layer_2_intermediate_dense.png", "model_card/images/layer_2_output_dense.png", "model_card/images/layer_3_attention_output_dense.png", "model_card/images/layer_3_attention_self_key.png", "model_card/images/layer_3_attention_self_query.png", "model_card/images/layer_3_attention_self_value.png", "model_card/images/layer_3_intermediate_dense.png", "model_card/images/layer_3_output_dense.png", "model_card/images/layer_4_attention_output_dense.png", "model_card/images/layer_4_attention_self_key.png", "model_card/images/layer_4_attention_self_query.png", "model_card/images/layer_4_attention_self_value.png", "model_card/images/layer_4_intermediate_dense.png", "model_card/images/layer_4_output_dense.png", "model_card/images/layer_5_attention_output_dense.png", "model_card/images/layer_5_attention_self_key.png", "model_card/images/layer_5_attention_self_query.png", "model_card/images/layer_5_attention_self_value.png", "model_card/images/layer_5_intermediate_dense.png", "model_card/images/layer_5_output_dense.png", "model_card/images/layer_6_attention_output_dense.png", "model_card/images/layer_6_attention_self_key.png", "model_card/images/layer_6_attention_self_query.png", "model_card/images/layer_6_attention_self_value.png", "model_card/images/layer_6_intermediate_dense.png", 
"model_card/images/layer_6_output_dense.png", "model_card/images/layer_7_attention_output_dense.png", "model_card/images/layer_7_attention_self_key.png", "model_card/images/layer_7_attention_self_query.png", "model_card/images/layer_7_attention_self_value.png", "model_card/images/layer_7_intermediate_dense.png", "model_card/images/layer_7_output_dense.png", "model_card/images/layer_8_attention_output_dense.png", "model_card/images/layer_8_attention_self_key.png", "model_card/images/layer_8_attention_self_query.png", "model_card/images/layer_8_attention_self_value.png", "model_card/images/layer_8_intermediate_dense.png", "model_card/images/layer_8_output_dense.png", "model_card/images/layer_9_attention_output_dense.png", "model_card/images/layer_9_attention_self_key.png", "model_card/images/layer_9_attention_self_query.png", "model_card/images/layer_9_attention_self_value.png", "model_card/images/layer_9_intermediate_dense.png", "model_card/images/layer_9_output_dense.png", "training/data_args.json", "training/model_args.json", "training/sparse_args.json", "training/training_args.bin" ]
madlag
21
transformers
---
language: en
thumbnail:
license: mit
tags:
- question-answering
datasets:
- squad_v2
metrics:
- squad_v2
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## bert-large-uncased-whole-word-masking model fine-tuned on SQuAD v2

This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) Python library: the **linear layers contain 25.0%** of the original weights.

The model contains **32.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).

With a simple resizing of the linear matrices, it ran **2.15x as fast as bert-large-uncased-whole-word-masking** on the evaluation. This is possible because the pruning method leads to structured matrices: to visualize them, hover over the plot below to see the non-zero/zero parts of each matrix.

<div class="graph"><script src="/madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1/raw/main/model_card/density_info.js" id="d55f6096-07eb-4cc1-b284-90ec6ced516c"></script></div>

In terms of accuracy, its **F1 is 83.22**, compared with 85.85 for bert-large-uncased-whole-word-masking, an **F1 drop of 2.63**.

## Fine-Pruning details

This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-large-uncased-whole-word-masking) checkpoint on [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2](https://huggingface.co/madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2). This model is case-insensitive: it does not make a difference between english and English.

A side-effect of the block pruning is that some of the attention heads are completely removed: 155 heads were removed out of a total of 384 (40.4%). Here is a detailed view of how the remaining heads are distributed in the network after pruning.
<div class="graph"><script src="/madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1/raw/main/model_card/pruning_info.js" id="a474f11e-7e05-495e-bb21-4af0edfb6661"></script></div> ## Details of the SQuAD1.1 dataset | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD 2.0 | train | 130.0K | | SQuAD 2.0 | eval | 11.9k | ### Fine-tuning - Python: `3.8.5` - Machine specs: ```CPU: Intel(R) Core(TM) i7-6700K CPU Memory: 64 GiB GPUs: 1 GeForce GTX 3090, with 24GiB memory GPU driver: 455.23.05, CUDA: 11.1 ``` ### Results **Pytorch model file size**: `1119MB` (original BERT: `1228.0MB`) | Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| Variation | | ------ | --------- | --------- | --------- | | **EM** | **80.19** | **82.83** | **-3.64**| | **F1** | **83.22** | **85.85** | **-2.63**| ``` { "HasAns_exact": 76.48448043184885, "HasAns_f1": 82.55514100819374, "HasAns_total": 5928, "NoAns_exact": 83.8856181665265, "NoAns_f1": 83.8856181665265, "NoAns_total": 5945, "best_exact": 80.19034784805862, "best_exact_thresh": 0.0, "best_f1": 83.22133208932635, "best_f1_thresh": 0.0, "exact": 80.19034784805862, "f1": 83.22133208932645, "total": 11873 } ``` ## Example Usage Install nn_pruning: it contains the optimization script, which just pack the linear layers into smaller ones by removing empty rows/columns. `pip install nn_pruning` Then you can use the `transformers library` almost as usual: you just have to call `optimize_model` when the pipeline has loaded. ```python from transformers import pipeline from nn_pruning.inference_model_patcher import optimize_model qa_pipeline = pipeline( "question-answering", model="madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1", tokenizer="madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1" ) print("bert-large-uncased-whole-word-masking parameters: 497.0M") print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M") qa_pipeline.model = optimize_model(qa_pipeline.model, "dense") print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M") predictions = qa_pipeline({ 'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.", 'question': "Who is Frederic Chopin?", }) print("Predictions", predictions) ```
madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1
2021-06-16T17:12:46.000Z
[ "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad_v2", "transformers", "license:mit" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "model_info.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt", "eval/eval_metrics.json", "eval/evaluate_timing.json", "eval/nbest_predictions.json.tgz", "eval/null_odds.json", "eval/predictions.json", "eval/sparsity_report.json", "eval/speed_report.json", "model_card/density_info.js", "model_card/pruning_info.js", "model_card/images/layer_0_attention_output_dense.png", "model_card/images/layer_0_attention_self_key.png", "model_card/images/layer_0_attention_self_query.png", "model_card/images/layer_0_attention_self_value.png", "model_card/images/layer_0_intermediate_dense.png", "model_card/images/layer_0_output_dense.png", "model_card/images/layer_10_attention_output_dense.png", "model_card/images/layer_10_attention_self_key.png", "model_card/images/layer_10_attention_self_query.png", "model_card/images/layer_10_attention_self_value.png", "model_card/images/layer_10_intermediate_dense.png", "model_card/images/layer_10_output_dense.png", "model_card/images/layer_11_attention_output_dense.png", "model_card/images/layer_11_attention_self_key.png", "model_card/images/layer_11_attention_self_query.png", "model_card/images/layer_11_attention_self_value.png", "model_card/images/layer_11_intermediate_dense.png", "model_card/images/layer_11_output_dense.png", "model_card/images/layer_12_attention_output_dense.png", "model_card/images/layer_12_attention_self_key.png", "model_card/images/layer_12_attention_self_query.png", "model_card/images/layer_12_attention_self_value.png", "model_card/images/layer_12_intermediate_dense.png", "model_card/images/layer_12_output_dense.png", "model_card/images/layer_13_attention_output_dense.png", "model_card/images/layer_13_attention_self_key.png", "model_card/images/layer_13_attention_self_query.png", "model_card/images/layer_13_attention_self_value.png", "model_card/images/layer_13_intermediate_dense.png", "model_card/images/layer_13_output_dense.png", "model_card/images/layer_14_attention_output_dense.png", "model_card/images/layer_14_attention_self_key.png", "model_card/images/layer_14_attention_self_query.png", "model_card/images/layer_14_attention_self_value.png", "model_card/images/layer_14_intermediate_dense.png", "model_card/images/layer_14_output_dense.png", "model_card/images/layer_15_attention_output_dense.png", "model_card/images/layer_15_attention_self_key.png", "model_card/images/layer_15_attention_self_query.png", "model_card/images/layer_15_attention_self_value.png", "model_card/images/layer_15_intermediate_dense.png", "model_card/images/layer_15_output_dense.png", "model_card/images/layer_16_attention_output_dense.png", "model_card/images/layer_16_attention_self_key.png", "model_card/images/layer_16_attention_self_query.png", "model_card/images/layer_16_attention_self_value.png", "model_card/images/layer_16_intermediate_dense.png", "model_card/images/layer_16_output_dense.png", "model_card/images/layer_17_attention_output_dense.png", "model_card/images/layer_17_attention_self_key.png", "model_card/images/layer_17_attention_self_query.png", "model_card/images/layer_17_attention_self_value.png", "model_card/images/layer_17_intermediate_dense.png", "model_card/images/layer_17_output_dense.png", "model_card/images/layer_18_attention_output_dense.png", "model_card/images/layer_18_attention_self_key.png", "model_card/images/layer_18_attention_self_query.png", "model_card/images/layer_18_attention_self_value.png", 
"model_card/images/layer_18_intermediate_dense.png", "model_card/images/layer_18_output_dense.png", "model_card/images/layer_19_attention_output_dense.png", "model_card/images/layer_19_attention_self_key.png", "model_card/images/layer_19_attention_self_query.png", "model_card/images/layer_19_attention_self_value.png", "model_card/images/layer_19_intermediate_dense.png", "model_card/images/layer_19_output_dense.png", "model_card/images/layer_1_attention_output_dense.png", "model_card/images/layer_1_attention_self_key.png", "model_card/images/layer_1_attention_self_query.png", "model_card/images/layer_1_attention_self_value.png", "model_card/images/layer_1_intermediate_dense.png", "model_card/images/layer_1_output_dense.png", "model_card/images/layer_20_attention_output_dense.png", "model_card/images/layer_20_attention_self_key.png", "model_card/images/layer_20_attention_self_query.png", "model_card/images/layer_20_attention_self_value.png", "model_card/images/layer_20_intermediate_dense.png", "model_card/images/layer_20_output_dense.png", "model_card/images/layer_21_attention_output_dense.png", "model_card/images/layer_21_attention_self_key.png", "model_card/images/layer_21_attention_self_query.png", "model_card/images/layer_21_attention_self_value.png", "model_card/images/layer_21_intermediate_dense.png", "model_card/images/layer_21_output_dense.png", "model_card/images/layer_22_attention_output_dense.png", "model_card/images/layer_22_attention_self_key.png", "model_card/images/layer_22_attention_self_query.png", "model_card/images/layer_22_attention_self_value.png", "model_card/images/layer_22_intermediate_dense.png", "model_card/images/layer_22_output_dense.png", "model_card/images/layer_23_attention_output_dense.png", "model_card/images/layer_23_attention_self_key.png", "model_card/images/layer_23_attention_self_query.png", "model_card/images/layer_23_attention_self_value.png", "model_card/images/layer_23_intermediate_dense.png", "model_card/images/layer_23_output_dense.png", "model_card/images/layer_2_attention_output_dense.png", "model_card/images/layer_2_attention_self_key.png", "model_card/images/layer_2_attention_self_query.png", "model_card/images/layer_2_attention_self_value.png", "model_card/images/layer_2_intermediate_dense.png", "model_card/images/layer_2_output_dense.png", "model_card/images/layer_3_attention_output_dense.png", "model_card/images/layer_3_attention_self_key.png", "model_card/images/layer_3_attention_self_query.png", "model_card/images/layer_3_attention_self_value.png", "model_card/images/layer_3_intermediate_dense.png", "model_card/images/layer_3_output_dense.png", "model_card/images/layer_4_attention_output_dense.png", "model_card/images/layer_4_attention_self_key.png", "model_card/images/layer_4_attention_self_query.png", "model_card/images/layer_4_attention_self_value.png", "model_card/images/layer_4_intermediate_dense.png", "model_card/images/layer_4_output_dense.png", "model_card/images/layer_5_attention_output_dense.png", "model_card/images/layer_5_attention_self_key.png", "model_card/images/layer_5_attention_self_query.png", "model_card/images/layer_5_attention_self_value.png", "model_card/images/layer_5_intermediate_dense.png", "model_card/images/layer_5_output_dense.png", "model_card/images/layer_6_attention_output_dense.png", "model_card/images/layer_6_attention_self_key.png", "model_card/images/layer_6_attention_self_query.png", "model_card/images/layer_6_attention_self_value.png", "model_card/images/layer_6_intermediate_dense.png", 
"model_card/images/layer_6_output_dense.png", "model_card/images/layer_7_attention_output_dense.png", "model_card/images/layer_7_attention_self_key.png", "model_card/images/layer_7_attention_self_query.png", "model_card/images/layer_7_attention_self_value.png", "model_card/images/layer_7_intermediate_dense.png", "model_card/images/layer_7_output_dense.png", "model_card/images/layer_8_attention_output_dense.png", "model_card/images/layer_8_attention_self_key.png", "model_card/images/layer_8_attention_self_query.png", "model_card/images/layer_8_attention_self_value.png", "model_card/images/layer_8_intermediate_dense.png", "model_card/images/layer_8_output_dense.png", "model_card/images/layer_9_attention_output_dense.png", "model_card/images/layer_9_attention_self_key.png", "model_card/images/layer_9_attention_self_query.png", "model_card/images/layer_9_attention_self_value.png", "model_card/images/layer_9_intermediate_dense.png", "model_card/images/layer_9_output_dense.png", "training/data_args.json", "training/model_args.json", "training/sparse_args.json", "training/training_args.bin" ]
madlag
9
transformers
---
language: en
thumbnail:
license: mit
tags:
- question-answering
datasets:
- squad_v2
metrics:
- squad_v2
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## bert-large-uncased-whole-word-masking model fine-tuned on SQuAD v2

This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) Python library: the **linear layers contain 16.0%** of the original weights.

The model contains **24.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).

With a simple resizing of the linear matrices, it ran **2.63x as fast as bert-large-uncased-whole-word-masking** on the evaluation. This is possible because the pruning method leads to structured matrices: to visualize them, hover over the plot below to see the non-zero/zero parts of each matrix.

<div class="graph"><script src="/madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1/raw/main/model_card/density_info.js" id="0e65059e-a61d-4561-947e-b8f47b818bb8"></script></div>

In terms of accuracy, its **F1 is 82.57**, compared with 85.85 for bert-large-uncased-whole-word-masking, an **F1 drop of 3.28**.

## Fine-Pruning details

This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-large-uncased-whole-word-masking) checkpoint on [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2](https://huggingface.co/madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2). This model is case-insensitive: it does not make a difference between english and English.

A side-effect of the block pruning is that some of the attention heads are completely removed: 190 heads were removed out of a total of 384 (49.5%). Here is a detailed view of how the remaining heads are distributed in the network after pruning.
<div class="graph"><script src="/madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1/raw/main/model_card/pruning_info.js" id="f7ae9ec9-d050-46d0-b237-3025165e9504"></script></div> ## Details of the SQuAD1.1 dataset | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD 2.0 | train | 130.0K | | SQuAD 2.0 | eval | 11.9k | ### Fine-tuning - Python: `3.8.5` - Machine specs: ```CPU: Intel(R) Core(TM) i7-6700K CPU Memory: 64 GiB GPUs: 1 GeForce GTX 3090, with 24GiB memory GPU driver: 455.23.05, CUDA: 11.1 ``` ### Results **Pytorch model file size**: `1084MB` (original BERT: `1228.0MB`) | Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| Variation | | ------ | --------- | --------- | --------- | | **EM** | **79.70** | **82.83** | **-4.13**| | **F1** | **82.57** | **85.85** | **-3.28**| ``` { "HasAns_exact": 74.8144399460189, "HasAns_f1": 80.555306012496, "HasAns_total": 5928, "NoAns_exact": 84.57527333894029, "NoAns_f1": 84.57527333894029, "NoAns_total": 5945, "best_exact": 79.70184452118251, "best_exact_thresh": 0.0, "best_f1": 82.56816761071966, "best_f1_thresh": 0.0, "exact": 79.70184452118251, "f1": 82.56816761071981, "total": 11873 } ``` ## Example Usage Install nn_pruning: it contains the optimization script, which just pack the linear layers into smaller ones by removing empty rows/columns. `pip install nn_pruning` Then you can use the `transformers library` almost as usual: you just have to call `optimize_model` when the pipeline has loaded. ```python from transformers import pipeline from nn_pruning.inference_model_patcher import optimize_model qa_pipeline = pipeline( "question-answering", model="madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1", tokenizer="madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1" ) print("bert-large-uncased-whole-word-masking parameters: 445.0M") print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M") qa_pipeline.model = optimize_model(qa_pipeline.model, "dense") print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M") predictions = qa_pipeline({ 'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.", 'question': "Who is Frederic Chopin?", }) print("Predictions", predictions) ```
maelfabien/marcel_customer_service
2021-04-13T15:43:17.000Z
[ "pytorch", "camembert", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
maelfabien
9
transformers
maelfabien/marcel_customer_service_large
2021-04-13T23:23:56.000Z
[ "pytorch", "camembert", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
maelfabien
13
transformers
maelfabien/marcel_customer_service_medium
2021-04-13T23:42:16.000Z
[ "pytorch", "camembert", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
maelfabien
7
transformers
maelfabien/marcel_customer_service_medium_masked
2021-04-14T13:27:45.000Z
[ "pytorch", "camembert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
maelfabien
12
transformers
maelfabien/marcel_customer_service_xlarge
2021-04-14T12:42:05.000Z
[ "pytorch", "camembert", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
maelfabien
7
transformers
maelfabien/marcel_customer_service_xlarge_masked
2021-04-14T13:21:49.000Z
[ "pytorch", "camembert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
maelfabien
21
transformers