| column | dtype | range |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | listlengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
fill-mask
transformers
{}
asifm43/bert-bn
null
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
asini/wav2vec2-base-timit-demo-colab
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-timit-demo This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4847 - Wer: 0.3462 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.487 | 4.0 | 500 | 1.3466 | 1.0153 | | 0.6134 | 8.0 | 1000 | 0.4807 | 0.4538 | | 0.2214 | 12.0 | 1500 | 0.4684 | 0.3984 | | 0.1233 | 16.0 | 2000 | 0.5070 | 0.3779 | | 0.0847 | 20.0 | 2500 | 0.4965 | 0.3705 | | 0.0611 | 24.0 | 3000 | 0.4881 | 0.3535 | | 0.0464 | 28.0 | 3500 | 0.4847 | 0.3462 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-timit-demo", "results": []}]}
asini/wav2vec2-timit-demo
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
{}
asini/wav2vec_tuto
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
aslijaniya/vit-base-patch16-224-in21k-euroSat
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
assansanogo/AS
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
assasin/DialoGPT-small-Joshua
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# BERT-Large-Uncased for Sentiment Analysis This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) originally released in ["BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"](https://arxiv.org/abs/1810.04805) and trained on the [Stanford Sentiment Treebank v2 (SST2)](https://nlp.stanford.edu/sentiment/); part of the [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com) benchmark. This model was fine-tuned by the team at [AssemblyAI](https://www.assemblyai.com) and is released with the [corresponding blog post](). ## Usage To download and utilize this model for sentiment analysis please execute the following: ```python import torch.nn.functional as F from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("assemblyai/bert-large-uncased-sst2") model = AutoModelForSequenceClassification.from_pretrained("assemblyai/bert-large-uncased-sst2") tokenized_segments = tokenizer(["AssemblyAI is the best speech-to-text API for modern developers with performance being second to none!"], return_tensors="pt", padding=True, truncation=True) tokenized_segments_input_ids, tokenized_segments_attention_mask = tokenized_segments.input_ids, tokenized_segments.attention_mask model_predictions = F.softmax(model(input_ids=tokenized_segments_input_ids, attention_mask=tokenized_segments_attention_mask)['logits'], dim=1) print("Positive probability: "+str(model_predictions[0][1].item()*100)+"%") print("Negative probability: "+str(model_predictions[0][0].item()*100)+"%") ``` For questions about how to use this model feel free to contact the team at [AssemblyAI](https://www.assemblyai.com)!
{}
assemblyai/bert-large-uncased-sst2
null
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# DistilBERT-Base-Uncased for Duplicate Question Detection This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) originally released in ["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108) and trained on the [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) dataset; part of the [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com) benchmark. This model was fine-tuned by the team at [AssemblyAI](https://www.assemblyai.com) and is released with the [corresponding blog post](). ## Usage To download and utilize this model for duplicate question detection please execute the following: ```python import torch.nn.functional as F from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("assemblyai/distilbert-base-uncased-qqp") model = AutoModelForSequenceClassification.from_pretrained("assemblyai/distilbert-base-uncased-qqp") tokenized_segments = tokenizer(["How many hours does it take to fly from California to New York?"], ["What is the flight time from New York to Seattle?"], return_tensors="pt", padding=True, truncation=True) tokenized_segments_input_ids, tokenized_segments_attention_mask = tokenized_segments.input_ids, tokenized_segments.attention_mask model_predictions = F.softmax(model(input_ids=tokenized_segments_input_ids, attention_mask=tokenized_segments_attention_mask)['logits'], dim=1) print("Duplicate probability: "+str(model_predictions[0][1].item()*100)+"%") print("Non-duplicate probability: "+str(model_predictions[0][0].item()*100)+"%") ``` For questions about how to use this model feel free to contact the team at [AssemblyAI](https://www.assemblyai.com)!
{}
assemblyai/distilbert-base-uncased-qqp
null
[ "transformers", "pytorch", "distilbert", "text-classification", "arxiv:1910.01108", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# DistilBERT-Base-Uncased for Sentiment Analysis This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) originally released in ["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108) and trained on the [Stanford Sentiment Treebank v2 (SST2)](https://nlp.stanford.edu/sentiment/); part of the [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com) benchmark. This model was fine-tuned by the team at [AssemblyAI](https://www.assemblyai.com) and is released with the [corresponding blog post](). ## Usage To download and utilize this model for sentiment analysis please execute the following: ```python import torch.nn.functional as F from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("assemblyai/distilbert-base-uncased-sst2") model = AutoModelForSequenceClassification.from_pretrained("assemblyai/distilbert-base-uncased-sst2") tokenized_segments = tokenizer(["AssemblyAI is the best speech-to-text API for modern developers with performance being second to none!"], return_tensors="pt", padding=True, truncation=True) tokenized_segments_input_ids, tokenized_segments_attention_mask = tokenized_segments.input_ids, tokenized_segments.attention_mask model_predictions = F.softmax(model(input_ids=tokenized_segments_input_ids, attention_mask=tokenized_segments_attention_mask)['logits'], dim=1) print("Positive probability: "+str(model_predictions[0][1].item()*100)+"%") print("Negative probability: "+str(model_predictions[0][0].item()*100)+"%") ``` For questions about how to use this model feel free to contact the team at [AssemblyAI](https://www.assemblyai.com)!
{}
assemblyai/distilbert-base-uncased-sst2
null
[ "transformers", "pytorch", "distilbert", "text-classification", "arxiv:1910.01108", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
{}
assij/wav2vec2-common_voice-tr-demo-dist
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
assij/wav2vec2-common_voice-tr-demo
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# Description This model takes a tweet with the word "jew" in it, and determines if it's antisemitic. Training data: This model was trained on 4k tweets, where ~50% were labeled as antisemitic. I labeled them myself based on personal experience and knowledge about common antisemitic tropes. Note: The goal for this model is not to be used as a final say on what is or is not antisemitic, but rather as a first pass on what might be antisemitic and should be reviewed by human experts. Please keep in mind that I'm not an expert on antisemitism or hatespeech. Whether something is antisemitic or not depends on the context, as for any hate speech, and everyone has a different definition for what is hate speech. If you would like to collaborate on antisemitism detection, please feel free to contact me at [email protected] This model is not ready for production, it needs more evaluation and more training data. # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 21194454 - CO2 Emissions (in grams): 2.0686690092905224 - Dataset: https://huggingface.co/datasets/astarostap/autonlp-data-antisemitism-2 ## Validation Metrics - Loss: 0.5291365385055542 - Accuracy: 0.7572692793931732 - Precision: 0.7126948775055679 - Recall: 0.835509138381201 - AUC: 0.8185826549941126 - F1: 0.7692307692307693 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/astarostap/autonlp-antisemitism-2-21194454 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("astarostap/autonlp-antisemitism-2-21194454", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("astarostap/autonlp-antisemitism-2-21194454", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["astarostap/autonlp-data-antisemitism-2"], "widget": [{"text": "the jews have a lot of power"}], "co2_eq_emissions": 2.0686690092905224}
astarostap/autonlp-antisemitism-2-21194454
null
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:astarostap/autonlp-data-antisemitism-2", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
This model takes a tweet with the word "jew" in it, and determines if it's antisemitic. *Training data:* This model was trained on 4k tweets, where ~50% were labeled as antisemitic. I labeled them myself based on personal experience and knowledge about common antisemitic tropes. *Note:* This model is not meant to be used as a final say on what is or is not antisemitic, but rather as a first pass on content that might be antisemitic and should be reviewed by human experts. Please keep in mind that I'm not an expert on antisemitism or hate speech. Whether something is antisemitic or not depends on the context, as with any hate speech, and everyone has a different definition of what counts as hate speech. If you would like to collaborate on antisemitism detection, please feel free to contact me at [email protected]. This model is not ready for production; it needs more evaluation and more training data.
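Since the card describes the classifier but includes no usage snippet, a minimal sketch follows; the repo id `astarostap/distilbert-cased-antisemitic-tweets` is taken from this row, and the label names depend on the checkpoint's config (an assumption here).

```python
# Hypothetical usage sketch, not part of the original model card.
# Label names (e.g. LABEL_0 / LABEL_1) come from the checkpoint config and may differ.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="astarostap/distilbert-cased-antisemitic-tweets",
)
print(clf("Jews run the world."))  # example text taken from the card's widget
```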
{"license": "mit", "widget": [{"text": "Jews run the world."}]}
astarostap/distilbert-cased-antisemitic-tweets
null
[ "transformers", "pytorch", "distilbert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# friendly_JA-Model (T5 fine-tuned model) MT model trained using the friendly_JA Corpus attempting to make Japanese easier/more accessible to occidental people by using the Latin/English derived katakana lexicon instead of the standard Sino-Japanese lexicon # Examples | input | output| |---|---| |最適化を応用した機械翻訳モデルは高精度だ|オプティマイゼーションを応用したマシントランスレーションモデルは高いアキュラシーだ| |彼は架空の世界に住んでいる|彼はイマジナリー世界に住んでいる| |新型コロナウイルスに感染してしまった|コロナウイルスにかかってしまった| |深層学習は難しい|ディープラーニングはむずかしい| |新たな概念を紹介する|新しいコンセプトを紹介する| |津波の警報が流れた|ツナミのアラートが流れた| |南海トラフの災害は震源地による|南海トラフのディザスターはエピセンターによる| |息子は際どい内容の本を読んでしまった|子どもはセンシティブなコンテンツの本を読んでしまった| |彼女は非現金決済で払った|彼女はキャッシュレスで払った| |係員は会議の予定を調整している|担当の人はアジェンダを調整している| |友人とカラオケに行く予定があったが、彼女はどうしても美術館に行きたかった|友だちとカラオケに行くスケジュールがあったが、彼女はどうしてもミュージアムに行きたかった| |国際会議に参加しました|インターナショナルコンファレンスに参加しました| |部長は今日の会議に参加できかねました|部長は今日のミーティングに参加できませんでした。| |新型コロナウイルスの予防接種による心膜炎が多数報告されている|コロナウイルスのワクチンによるペリカーダイティスがレポートされている| |私はジョジョの奇妙な冒険が好き|私はジョジョのビザールアドベンチャーが好き| |新型コロナウイルスウイルス オミクロン株 1人死亡 8249人感染|コロナウイルス オミクロンバリアント 1人死んだ 8249人インフェクション| |2021年10月4日から岸田文雄は日本の総理大臣として勤めている|2021年10月4日から岸田文雄は日本のプライムミニスターとして働いている| # References t5 japanese pre-trained model: sonoisa t5-base-japanese (https://huggingface.co/sonoisa/t5-base-japanese) # License Shield: [![CC BY 4.0][cc-by-shield]][cc-by] This work is licensed under a [Creative Commons Attribution 4.0 International License][cc-by]. [![CC BY 4.0][cc-by-image]][cc-by] [cc-by]: http://creativecommons.org/licenses/by/4.0/ [cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png [cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
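The card gives example inputs and outputs but no loading code; a minimal text2text-generation sketch follows, assuming the repo id `astremo/friendly_JA` from this row and that the `sentencepiece` package needed by T5 tokenizers is installed.

```python
# Hypothetical usage sketch, not part of the original model card.
# Assumes repo id "astremo/friendly_JA" and the sentencepiece dependency of T5 tokenizers.
from transformers import pipeline

friendly_ja = pipeline("text2text-generation", model="astremo/friendly_JA")
out = friendly_ja("最適化を応用した機械翻訳モデルは高精度だ", max_length=64)
print(out[0]["generated_text"])  # per the card's example table: オプティマイゼーションを応用した...
```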
{"language": ["ja"], "license": "cc-by-4.0", "tags": ["japanese", "easy-japanese", "friendly-japanese", "sino-japanese", "katakana"], "datasets": ["astremo/friendly_JA_corpus"], "metrics": ["bleu"]}
astremo/friendly_JA
null
[ "transformers", "pytorch", "t5", "text2text-generation", "japanese", "easy-japanese", "friendly-japanese", "sino-japanese", "katakana", "ja", "dataset:astremo/friendly_JA_corpus", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
astrobreazy/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
astrologer/solution
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
git clone https://github.com/saic-mdal/lama.git
{}
asyou20/1234
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
aszidon/distilbertcustom
null
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
aszidon/distilbertcustom2
null
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
aszidon/distilbertcustom3
null
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
aszidon/distilbertcustom4
null
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
aszidon/distilbertcustom5
null
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# LayoutLM ## Model description LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM achieves SOTA results on multiple datasets. For more details, please refer to our paper: [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, [KDD 2020](https://www.kdd.org/kdd2020/accepted-papers) ## Training data We pre-train LayoutLM on the IIT-CDIP Test Collection 1.0\* dataset with two settings. * LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters **(This Model)** * LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters ## Citation If you find LayoutLM useful in your research, please cite the following paper: ```latex @misc{xu2019layoutlm, title={LayoutLM: Pre-training of Text and Layout for Document Image Understanding}, author={Yiheng Xu and Minghao Li and Lei Cui and Shaohan Huang and Furu Wei and Ming Zhou}, year={2019}, eprint={1912.13318}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
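The card describes the pre-training but not how to load the TensorFlow weights; a minimal forward-pass sketch follows. The repo id `atahmasb/tf-layoutlm-base-uncased` is taken from this row; borrowing the tokenizer from `microsoft/layoutlm-base-uncased` and using all-zero bounding boxes as placeholders are assumptions for illustration only.

```python
# Hypothetical loading sketch, not part of the original model card.
# Assumes: TF weights at "atahmasb/tf-layoutlm-base-uncased" and a compatible tokenizer
# from "microsoft/layoutlm-base-uncased"; real usage needs one bounding box per token.
import tensorflow as tf
from transformers import AutoTokenizer, TFLayoutLMModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = TFLayoutLMModel.from_pretrained("atahmasb/tf-layoutlm-base-uncased")

enc = tokenizer("Hello world", return_tensors="tf")
bbox = tf.zeros((1, enc["input_ids"].shape[1], 4), dtype=tf.int32)  # placeholder boxes
outputs = model(input_ids=enc["input_ids"], bbox=bbox, attention_mask=enc["attention_mask"])
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```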
{}
atahmasb/tf-layoutlm-base-uncased
null
[ "transformers", "tf", "layoutlm", "arxiv:1912.13318", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# LayoutLM ## Model description LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM achieves SOTA results on multiple datasets. For more details, please refer to our paper: [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, [KDD 2020](https://www.kdd.org/kdd2020/accepted-papers) ## Training data We pre-train LayoutLM on the IIT-CDIP Test Collection 1.0\* dataset with two settings. * LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters * LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters **(This Model)** ## Citation If you find LayoutLM useful in your research, please cite the following paper: ```latex @misc{xu2019layoutlm, title={LayoutLM: Pre-training of Text and Layout for Document Image Understanding}, author={Yiheng Xu and Minghao Li and Lei Cui and Shaohan Huang and Furu Wei and Ming Zhou}, year={2019}, eprint={1912.13318}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{}
atahmasb/tf-layoutlm-large-uncased
null
[ "transformers", "tf", "layoutlm", "arxiv:1912.13318", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ataryes/snlp_project
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
atelders/popolibot
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
atf1998/distilbert-base-uncased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8508 - Matthews Correlation: 0.5452 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5221 | 1.0 | 535 | 0.5370 | 0.4246 | | 0.3462 | 2.0 | 1070 | 0.5157 | 0.5183 | | 0.2332 | 3.0 | 1605 | 0.6324 | 0.5166 | | 0.1661 | 4.0 | 2140 | 0.7616 | 0.5370 | | 0.1263 | 5.0 | 2675 | 0.8508 | 0.5452 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5451837431775948, "name": "Matthews Correlation"}]}]}]}
athar/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
athar/gpt2-wikitext2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
atharvamundada99/bert-large-question-answering-finetuned-legal
null
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
atharvapatil128/JakeBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
athipudr/huggy
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
atkh6673/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Trump DialoGPT Model
{"tags": ["conversational"]}
atkh6673/DialoGPT-small-trump
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
atob/bert-base-uncased-squad2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
atob/unqover-distilbert-base-uncased-newsqa
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
atomicx27/DialoGPT-small-ironman
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Dumbledore DialoGPT Model
{"tags": ["conversational"]}
atomsspawn/DialoGPT-small-dumbledore
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
A project for testing a Paddle server model
{}
atu/paddle_detection
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
atuo/distilbert-base-uncased-finetuned-event
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# AraELECTRA <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraELECTRA.png" width="100" align="left"/> **ELECTRA** is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). AraELECTRA achieves state-of-the-art results on Arabic QA dataset. For a detailed description, please refer to the AraELECTRA paper [AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding](https://arxiv.org/abs/2012.15516). ## How to use the discriminator in `transformers` ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("aubmindlab/araelectra-base-discriminator") tokenizer = ElectraTokenizerFast.from_pretrained("aubmindlab/araelectra-base-discriminator") sentence = "" fake_sentence = "" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % int(prediction), end="") for prediction in predictions.tolist()] ``` # Model Model | HuggingFace Model Name | Size (MB/Params)| ---|:---:|:---: AraELECTRA-base-generator | [araelectra-base-generator](https://huggingface.co/aubmindlab/araelectra-base-generator) | 227MB/60M | AraELECTRA-base-discriminator | [araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) | 516MB/135M | # Compute Model | Hardware | num of examples (seq len = 512) | Batch Size | Num of Steps | Time (in days) ---|:---:|:---:|:---:|:---:|:---: AraELECTRA-base | TPUv3-8 | - | 256 | 2M | 24 # Dataset The pretraining data used for the new **AraELECTRA** model is also used for **AraGPT2 and AraBERTv2**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Preprocessing It is recommended to apply our preprocessing function before training/testing on any dataset. 
**Install the arabert python package to segment text for AraBERT v1 & v2 or to clean your data `pip install arabert`** ```python from arabert.preprocess import ArabertPreprocessor model_name="araelectra-base" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) >>> output: ولن نبالغ إذا قلنا : إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري ``` # TensorFlow 1.x models **You can find the PyTorch, TF2 and TF1 models in HuggingFace's Transformer Library under the ```aubmindlab``` username** - `wget https://huggingface.co/aubmindlab/MODEL_NAME/resolve/main/tf1_model.tar.gz` where `MODEL_NAME` is any model under the `aubmindlab` name # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-araelectra, title = "{A}ra{ELECTRA}: Pre-Training Text Discriminators for {A}rabic Language Understanding", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.20", pages = "191--195", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"]}
aubmindlab/araelectra-base-discriminator
null
[ "transformers", "pytorch", "tf", "tensorboard", "electra", "pretraining", "ar", "arxiv:1406.2661", "arxiv:2012.15516", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# AraELECTRA <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraELECTRA.png" width="100" align="left"/> **ELECTRA** is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). AraELECTRA achieves state-of-the-art results on Arabic QA dataset. For a detailed description, please refer to the AraELECTRA paper [AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding](https://arxiv.org/abs/2012.15516). ## How to use the generator in `transformers` ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="aubmindlab/araelectra-base-generator", tokenizer="aubmindlab/araelectra-base-generator" ) print( fill_mask(" عاصمة لبنان هي [MASK] .) ) ``` # Preprocessing It is recommended to apply our preprocessing function before training/testing on any dataset. **Install the arabert python package to segment text for AraBERT v1 & v2 or to clean your data `pip install arabert`** ```python from arabert.preprocess import ArabertPreprocessor model_name="aubmindlab/araelectra-base" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) >>> output: ولن نبالغ إذا قلنا : إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري ``` # Model Model | HuggingFace Model Name | Size (MB/Params)| ---|:---:|:---: AraELECTRA-base-generator | [araelectra-base-generator](https://huggingface.co/aubmindlab/araelectra-base-generator) | 227MB/60M | AraELECTRA-base-discriminator | [araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) | 516MB/135M | # Compute Model | Hardware | num of examples (seq len = 512) | Batch Size | Num of Steps | Time (in days) ---|:---:|:---:|:---:|:---:|:---: AraELECTRA-base | TPUv3-8 | - | 256 | 2M | 24 # Dataset The pretraining data used for the new AraELECTRA model is also used for **AraGPT2 and AraELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. 
Huge thank you for Assafir for giving us the data # TensorFlow 1.x models **You can find the PyTorch, TF2 and TF1 models in HuggingFace's Transformer Library under the ```aubmindlab``` username** - `wget https://huggingface.co/aubmindlab/MODEL_NAME/resolve/main/tf1_model.tar.gz` where `MODEL_NAME` is any model under the `aubmindlab` name # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-araelectra, title = "{A}ra{ELECTRA}: Pre-Training Text Discriminators for {A}rabic Language Understanding", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.20", pages = "191--195", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
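The fill-mask snippet in the card above is missing a closing quote around the masked sentence; a runnable variant is sketched below, using the same checkpoint and the widget example from this row's metadata, with output handling that follows the standard `transformers` fill-mask pipeline (an assumption, not taken from the card).

```python
# Runnable variant of the card's fill-mask example (sketch, not from the original card).
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="aubmindlab/araelectra-base-generator",
    tokenizer="aubmindlab/araelectra-base-generator",
)
for prediction in fill_mask(" عاصمة لبنان هي [MASK] ."):
    print(prediction["token_str"], round(prediction["score"], 3))
```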
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "widget": [{"text": " \u0639\u0627\u0635\u0645\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/araelectra-base-generator
null
[ "transformers", "pytorch", "tf", "tensorboard", "safetensors", "electra", "fill-mask", "ar", "arxiv:1406.2661", "arxiv:2012.15516", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Arabic GPT2 <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/> You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520) The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the `gpt2` folder and can trains models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository. These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library. GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`). Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. # Usage ## Testing the model using `transformers`: ```python from transformers import GPT2TokenizerFast, pipeline #for base and medium from transformers import GPT2LMHeadModel #for large and mega # pip install arabert from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel from arabert.preprocess import ArabertPreprocessor MODEL_NAME='aubmindlab/aragpt2-base' arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME) text="" text_clean = arabert_prep.preprocess(text) model = GPT2LMHeadModel.from_pretrained(MODEL_NAME) tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME) generation_pipeline = pipeline("text-generation",model=model,tokenizer=tokenizer) #feel free to try different decoding settings generation_pipeline(text, pad_token_id=tokenizer.eos_token_id, num_beams=10, max_length=200, top_p=0.9, repetition_penalty = 3.0, no_repeat_ngram_size = 3)[0]['generated_text'] ``` ## Finetunning using `transformers`: Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed) ## Finetuning using our code with TF 1.15.4: Create the Training TFRecords: ```bash python create_pretraining_data.py --input_file=<RAW TEXT FILE with documents/article separated by an empty line> --output_file=<OUTPUT TFRecord> --tokenizer_dir=<Directory with the GPT2 Tokenizer files> ``` Finetuning: ```bash python3 run_pretraining.py \\r\n --input_file="gs://<GS_BUCKET>/pretraining_data/*" \\r\n --output_dir="gs://<GS_BUCKET>/pretraining_model/" \\r\n --config_file="config/small_hparams.json" \\r\n --batch_size=128 \\r\n --eval_batch_size=8 \\r\n --num_train_steps= \\r\n --num_warmup_steps= \\r\n --learning_rate= \\r\n --save_checkpoints_steps= \\r\n --max_seq_length=1024 \\r\n --max_eval_steps= \\r\n --optimizer="lamb" \\r\n --iterations_per_loop=5000 \\r\n --keep_checkpoint_max=10 \\r\n --use_tpu=True \\r\n --tpu_name=<TPU NAME> \\r\n --do_train=True \\r\n --do_eval=False ``` # Model Sizes Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params | ---|:---:|:---:|:---:|:---:|:---:|:---: AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB / 135M | AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M | AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 
2.98GB/792M | AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Compute Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days) ---|:---:|:---:|:---:|:---:|:---: AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5 AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5 AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3 AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9 # Dataset The pretraining data used for the new AraGPT2 model is also used for **AraBERTv2 and AraELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus after we thoroughly filter it, to the dataset used in AraBERTv1 but without the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Disclaimer The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-aragpt2, title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.21", pages = "196--207", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "widget": [{"text": "\u064a\u062d\u0643\u0649 \u0623\u0646 \u0645\u0632\u0627\u0631\u0639\u0627 \u0645\u062e\u0627\u062f\u0639\u0627 \u0642\u0627\u0645 \u0628\u0628\u064a\u0639 \u0628\u0626\u0631 \u0627\u0644\u0645\u0627\u0621 \u0627\u0644\u0645\u0648\u062c\u0648\u062f \u0641\u064a \u0623\u0631\u0636\u0647 \u0644\u062c\u0627\u0631\u0647 \u0645\u0642\u0627\u0628\u0644 \u0645\u0628\u0644\u063a \u0643\u0628\u064a\u0631 \u0645\u0646 \u0627\u0644\u0645\u0627\u0644"}, {"text": "\u0627\u0644\u0642\u062f\u0633 \u0645\u062f\u064a\u0646\u0629 \u062a\u0627\u0631\u064a\u062e\u064a\u0629\u060c \u0628\u0646\u0627\u0647\u0627 \u0627\u0644\u0643\u0646\u0639\u0627\u0646\u064a\u0648\u0646 \u0641\u064a"}, {"text": "\u0643\u0627\u0646 \u064a\u0627 \u0645\u0627 \u0643\u0627\u0646 \u0641\u064a \u0642\u062f\u064a\u0645 \u0627\u0644\u0632\u0645\u0627\u0646"}]}
aubmindlab/aragpt2-base
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "gpt2", "text-generation", "ar", "arxiv:2012.15520", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Arabic GPT2 <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/> You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520) The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the `gpt2` folder and can trains models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository. These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library. GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`). Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. # Usage ## Testing the model using `transformers`: ```python from transformers import GPT2TokenizerFast, pipeline #for base and medium from transformers import GPT2LMHeadModel #for large and mega # pip install arabert from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel from arabert.preprocess import ArabertPreprocessor MODEL_NAME='aubmindlab/aragpt2-large' arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME) text="" text_clean = arabert_prep.preprocess(text) model = GPT2LMHeadModel.from_pretrained(MODEL_NAME) tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME) generation_pipeline = pipeline("text-generation",model=model,tokenizer=tokenizer) #feel free to try different decoding settings generation_pipeline(text, pad_token_id=tokenizer.eos_token_id, num_beams=10, max_length=200, top_p=0.9, repetition_penalty = 3.0, no_repeat_ngram_size = 3)[0]['generated_text'] >>> ``` ## Finetunning using `transformers`: Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed) ## Finetuning using our code with TF 1.15.4: Create the Training TFRecords: ```bash python create_pretraining_data.py --input_file=<RAW TEXT FILE with documents/article separated by an empty line> --output_file=<OUTPUT TFRecord> --tokenizer_dir=<Directory with the GPT2 Tokenizer files> ``` Finetuning: ```bash python3 run_pretraining.py \\\r\n --input_file="gs://<GS_BUCKET>/pretraining_data/*" \\\r\n --output_dir="gs://<GS_BUCKET>/pretraining_model/" \\\r\n --config_file="config/small_hparams.json" \\\r\n --batch_size=128 \\\r\n --eval_batch_size=8 \\\r\n --num_train_steps= \\\r\n --num_warmup_steps= \\\r\n --learning_rate= \\\r\n --save_checkpoints_steps= \\\r\n --max_seq_length=1024 \\\r\n --max_eval_steps= \\\r\n --optimizer="lamb" \\\r\n --iterations_per_loop=5000 \\\r\n --keep_checkpoint_max=10 \\\r\n --use_tpu=True \\\r\n --tpu_name=<TPU NAME> \\\r\n --do_train=True \\\r\n --do_eval=False ``` # Model Sizes Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params | ---|:---:|:---:|:---:|:---:|:---:|:---: AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB/135M | AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 |1.38G/370M | AraGPT2-large | `adafactor` | 1024 
| 1280 | 20 | 36 | 2.98GB/792M | AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Compute For Dataset Source see the [Dataset Section](#Dataset) Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days) ---|:---:|:---:|:---:|:---:|:---: AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5 AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5 AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3 AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9 # Dataset The pretraining data used for the new AraBERT model is also used for **GPT2 and ELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Disclaimer The text generated by GPT2 Arabic is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by GPT2 Arabic should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-aragpt2, title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.21", pages = "196--207", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "inference": false, "widget": [{"text": "\u064a\u062d\u0643\u0649 \u0623\u0646 \u0645\u0632\u0627\u0631\u0639\u0627 \u0645\u062e\u0627\u062f\u0639\u0627 \u0642\u0627\u0645 \u0628\u0628\u064a\u0639 \u0628\u0626\u0631 \u0627\u0644\u0645\u0627\u0621 \u0627\u0644\u0645\u0648\u062c\u0648\u062f \u0641\u064a \u0623\u0631\u0636\u0647 \u0644\u062c\u0627\u0631\u0647 \u0645\u0642\u0627\u0628\u0644 \u0645\u0628\u0644\u063a \u0643\u0628\u064a\u0631 \u0645\u0646 \u0627\u0644\u0645\u0627\u0644"}, {"text": "\u0627\u0644\u0642\u062f\u0633 \u0645\u062f\u064a\u0646\u0629 \u062a\u0627\u0631\u064a\u062e\u064a\u0629\u060c \u0628\u0646\u0627\u0647\u0627 \u0627\u0644\u0643\u0646\u0639\u0627\u0646\u064a\u0648\u0646 \u0641\u064a"}, {"text": "\u0643\u0627\u0646 \u064a\u0627 \u0645\u0627 \u0643\u0627\u0646 \u0641\u064a \u0642\u062f\u064a\u0645 \u0627\u0644\u0632\u0645\u0627\u0646"}]}
aubmindlab/aragpt2-large
null
[ "transformers", "pytorch", "jax", "tensorboard", "safetensors", "gpt2", "text-generation", "ar", "arxiv:2012.15520", "autotrain_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Arabic GPT2 <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/> You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520) The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the `gpt2` folder and can trains models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository. These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library. GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`). Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. # Usage ## Testing the model using `transformers`: ```python from transformers import GPT2TokenizerFast, pipeline #for base and medium from transformers import GPT2LMHeadModel #for large and mega # pip install arabert from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel from arabert.preprocess import ArabertPreprocessor MODEL_NAME='aubmindlab/aragpt2-medium' arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME) text="" text_clean = arabert_prep.preprocess(text) model = GPT2LMHeadModel.from_pretrained(MODEL_NAME) tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME) generation_pipeline = pipeline("text-generation",model=model,tokenizer=tokenizer) #feel free to try different decoding settings generation_pipeline(text, pad_token_id=tokenizer.eos_token_id, num_beams=10, max_length=200, top_p=0.9, repetition_penalty = 3.0, no_repeat_ngram_size = 3)[0]['generated_text'] ``` ## Finetunning using `transformers`: Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed) ## Finetuning using our code with TF 1.15.4: Create the Training TFRecords: ```bash python create_pretraining_data.py --input_file=<RAW TEXT FILE with documents/article separated by an empty line> --output_file=<OUTPUT TFRecord> --tokenizer_dir=<Directory with the GPT2 Tokenizer files> ``` Finetuning: ```bash python3 run_pretraining.py \\\n --input_file="gs://<GS_BUCKET>/pretraining_data/*" \\\n --output_dir="gs://<GS_BUCKET>/pretraining_model/" \\\n --config_file="config/small_hparams.json" \\\n --batch_size=128 \\\n --eval_batch_size=8 \\\n --num_train_steps= \\\n --num_warmup_steps= \\\n --learning_rate= \\\n --save_checkpoints_steps= \\\n --max_seq_length=1024 \\\n --max_eval_steps= \\\n --optimizer="lamb" \\\n --iterations_per_loop=5000 \\\n --keep_checkpoint_max=10 \\\n --use_tpu=True \\\n --tpu_name=<TPU NAME> \\\n --do_train=True \\\n --do_eval=False ``` # Model Sizes Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params | ---|:---:|:---:|:---:|:---:|:---:|:---: AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB / 135M | AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M | AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M | 
AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Compute Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days) ---|:---:|:---:|:---:|:---:|:---: AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5 AraGPT2-medium | TPUv3-8 | 9.7M | 80 | 1M | 15 AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3 AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9 # Dataset The pretraining data used for the new AraGPT2 model is also used for **AraBERTv2 and AraELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Disclaimer The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-aragpt2, title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.21", pages = "196--207", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "widget": [{"text": "\u064a\u062d\u0643\u0649 \u0623\u0646 \u0645\u0632\u0627\u0631\u0639\u0627 \u0645\u062e\u0627\u062f\u0639\u0627 \u0642\u0627\u0645 \u0628\u0628\u064a\u0639 \u0628\u0626\u0631 \u0627\u0644\u0645\u0627\u0621 \u0627\u0644\u0645\u0648\u062c\u0648\u062f \u0641\u064a \u0623\u0631\u0636\u0647 \u0644\u062c\u0627\u0631\u0647 \u0645\u0642\u0627\u0628\u0644 \u0645\u0628\u0644\u063a \u0643\u0628\u064a\u0631 \u0645\u0646 \u0627\u0644\u0645\u0627\u0644"}, {"text": "\u0627\u0644\u0642\u062f\u0633 \u0645\u062f\u064a\u0646\u0629 \u062a\u0627\u0631\u064a\u062e\u064a\u0629\u060c \u0628\u0646\u0627\u0647\u0627 \u0627\u0644\u0643\u0646\u0639\u0627\u0646\u064a\u0648\u0646 \u0641\u064a"}, {"text": "\u0643\u0627\u0646 \u064a\u0627 \u0645\u0627 \u0643\u0627\u0646 \u0641\u064a \u0642\u062f\u064a\u0645 \u0627\u0644\u0632\u0645\u0627\u0646"}]}
aubmindlab/aragpt2-medium
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "gpt2", "text-generation", "ar", "arxiv:2012.15520", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# AraGPT2 Detector

Machine-generated text detection model from the [AraGPT2: Pre-Trained Transformer for Arabic Language Generation paper](https://arxiv.org/abs/2012.15520)

This model is trained on long text passages and achieves a 99.4% F1-score.

# How to use it:

```python
from transformers import pipeline
from arabert.preprocess import ArabertPreprocessor

processor = ArabertPreprocessor(model_name="aubmindlab/araelectra-base-discriminator")
pipe = pipeline("sentiment-analysis", model="aubmindlab/aragpt2-mega-detector-long")

text = " "
text_prep = processor.preprocess(text)
result = pipe(text_prep)
# [{'label': 'machine-generated', 'score': 0.9977743625640869}]
```

# If you used this model please cite us as:

```
@misc{antoun2020aragpt2,
      title={AraGPT2: Pre-Trained Transformer for Arabic Language Generation},
      author={Wissam Antoun and Fady Baly and Hazem Hajj},
      year={2020},
      eprint={2012.15520},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

# Contacts

**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "widget": [{"text": "\u0648\u0625\u0630\u0627 \u0643\u0627\u0646 \u0647\u0646\u0627\u0643 \u0645\u0646 \u0644\u0627 \u064a\u0632\u0627\u0644 \u064a\u0639\u062a\u0642\u062f \u0623\u0646 \u0644\u0628\u0646\u0627\u0646 \u0647\u0648 \u0633\u0648\u064a\u0633\u0631\u0627 \u0627\u0644\u0634\u0631\u0642 \u060c \u0641\u0647\u0648 \u0645\u062e\u0637\u0626 \u0625\u0644\u0649 \u062d\u062f \u0628\u0639\u064a\u062f . \u0641\u0644\u0628\u0646\u0627\u0646 \u0644\u064a\u0633 \u0633\u0648\u064a\u0633\u0631\u0627 \u060c \u0648\u0644\u0627 \u064a\u0645\u0643\u0646 \u0623\u0646 \u064a\u0643\u0648\u0646 \u0643\u0630\u0644\u0643 . \u0644\u0642\u062f \u0639\u0627\u0634 \u0627\u0644\u0644\u0628\u0646\u0627\u0646\u064a\u0648\u0646 \u0641\u064a \u0647\u0630\u0627 \u0627\u0644\u0628\u0644\u062f \u0645\u0646\u0630 \u0645\u0627 \u064a\u0632\u064a\u062f \u0639\u0646 \u0623\u0644\u0641 \u0648\u062e\u0645\u0633\u0645\u0626\u0629 \u0639\u0627\u0645 \u060c \u0623\u064a \u0645\u0646\u0630 \u062a\u0623\u0633\u064a\u0633 \u0627\u0644\u0625\u0645\u0627\u0631\u0629 \u0627\u0644\u0634\u0647\u0627\u0628\u064a\u0629 \u0627\u0644\u062a\u064a \u0623\u0633\u0633\u0647\u0627 \u0627\u0644\u0623\u0645\u064a\u0631 \u0641\u062e\u0631 \u0627\u0644\u062f\u064a\u0646 \u0627\u0644\u0645\u0639\u0646\u064a \u0627\u0644\u062b\u0627\u0646\u064a ( 1697 - 1742 )"}]}
aubmindlab/aragpt2-mega-detector-long
null
[ "transformers", "pytorch", "safetensors", "electra", "text-classification", "ar", "arxiv:2012.15520", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Arabic GPT2 <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/> You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520) The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the `gpt2` folder and can trains models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository. These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library. GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`). Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. # Usage ## Testing the model using `transformers`: You need to use the GPT2LMHeadModel from `arabert`: `pip install arabert` ```python from transformers import GPT2TokenizerFast, pipeline #for base and medium from transformers import GPT2LMHeadModel #for large and mega from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel from arabert.preprocess import ArabertPreprocessor MODEL_NAME='aubmindlab/aragpt2-mega' arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME) text="" text_clean = arabert_prep.preprocess(text) model = GPT2LMHeadModel.from_pretrained(MODEL_NAME) tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME) generation_pipeline = pipeline("text-generation",model=model,tokenizer=tokenizer) #feel free to try different decoding settings generation_pipeline(text, pad_token_id=tokenizer.eos_token_id, num_beams=10, max_length=200, top_p=0.9, repetition_penalty = 3.0, no_repeat_ngram_size = 3)[0]['generated_text'] >>> ``` ## Finetunning using `transformers`: Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed) ## Finetuning using our code with TF 1.15.4: Create the Training TFRecords: ```bash python create_pretraining_data.py --input_file=<RAW TEXT FILE with documents/article separated by an empty line> --output_file=<OUTPUT TFRecord> --tokenizer_dir=<Directory with the GPT2 Tokenizer files> ``` Finetuning: ```bash python3 run_pretraining.py \\r\n --input_file="gs://<GS_BUCKET>/pretraining_data/*" \\r\n --output_dir="gs://<GS_BUCKET>/pretraining_model/" \\r\n --config_file="config/small_hparams.json" \\r\n --batch_size=128 \\r\n --eval_batch_size=8 \\r\n --num_train_steps= \\r\n --num_warmup_steps= \\r\n --learning_rate= \\r\n --save_checkpoints_steps= \\r\n --max_seq_length=1024 \\r\n --max_eval_steps= \\r\n --optimizer="lamb" \\r\n --iterations_per_loop=5000 \\r\n --keep_checkpoint_max=10 \\r\n --use_tpu=True \\r\n --tpu_name=<TPU NAME> \\r\n --do_train=True \\r\n --do_eval=False ``` # Model Sizes Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params | ---|:---:|:---:|:---:|:---:|:---:|:---: AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB/135M | AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M | 
AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M | AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Compute For Dataset Source see the [Dataset Section](#Dataset) Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days) ---|:---:|:---:|:---:|:---:|:---: AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5 AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5 AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3 AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9 # Dataset The pretraining data used for the new AraBERT model is also used for **GPT2 and ELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Disclaimer The text generated by GPT2 Arabic is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by GPT2 Arabic should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-aragpt2, title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.21", pages = "196--207", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
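# Decoding with `generate()` (illustrative sketch)

As a complement to the pipeline example in the Usage section above, the sketch below calls `generate()` directly on the grover-based class shipped with the `arabert` package. The prompt (one of the widget examples) and the decoding settings are illustrative only, not recommended values, and this sketch assumes the grover port exposes the standard `generate()` interface.

```python
# Illustrative sketch: prompt and decoding settings are placeholders.
import torch
from transformers import GPT2TokenizerFast
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel
from arabert.preprocess import ArabertPreprocessor

MODEL_NAME = "aubmindlab/aragpt2-mega"
arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME)
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
model.eval()

prompt = arabert_prep.preprocess("كان يا ما كان في قديم الزمان")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    output = model.generate(input_ids,
                            do_sample=True,
                            top_p=0.9,
                            max_length=200,
                            repetition_penalty=3.0,
                            no_repeat_ngram_size=3,
                            pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```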
{"language": "ar", "license": "other", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "license_name": "custom", "license_link": "https://github.com/aub-mind/arabert/blob/master/aragpt2/LICENSE", "inference": false, "widget": [{"text": "\u064a\u062d\u0643\u0649 \u0623\u0646 \u0645\u0632\u0627\u0631\u0639\u0627 \u0645\u062e\u0627\u062f\u0639\u0627 \u0642\u0627\u0645 \u0628\u0628\u064a\u0639 \u0628\u0626\u0631 \u0627\u0644\u0645\u0627\u0621 \u0627\u0644\u0645\u0648\u062c\u0648\u062f \u0641\u064a \u0623\u0631\u0636\u0647 \u0644\u062c\u0627\u0631\u0647 \u0645\u0642\u0627\u0628\u0644 \u0645\u0628\u0644\u063a \u0643\u0628\u064a\u0631 \u0645\u0646 \u0627\u0644\u0645\u0627\u0644"}, {"text": "\u0627\u0644\u0642\u062f\u0633 \u0645\u062f\u064a\u0646\u0629 \u062a\u0627\u0631\u064a\u062e\u064a\u0629\u060c \u0628\u0646\u0627\u0647\u0627 \u0627\u0644\u0643\u0646\u0639\u0627\u0646\u064a\u0648\u0646 \u0641\u064a"}, {"text": "\u0643\u0627\u0646 \u064a\u0627 \u0645\u0627 \u0643\u0627\u0646 \u0641\u064a \u0642\u062f\u064a\u0645 \u0627\u0644\u0632\u0645\u0627\u0646"}]}
aubmindlab/aragpt2-mega
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "ar", "arxiv:2012.15520", "license:other", "autotrain_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# !!! A newer version of this model is available !!! [AraBERTv2](https://huggingface.co/aubmindlab/bert-base-arabertv2)

# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding

<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/>

**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup)

There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html).

We evaluate AraBERT models on different downstream tasks and compare them to [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md), and other state-of-the-art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL)

# AraBERTv2

## What's New!

AraBERT now comes in 4 new variants to replace the old v1 versions: More details in the AraBERT folder, in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md), and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2)

Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large | [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base | [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large | [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.1-base | [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base | [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |

All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.

## Better Pre-Processing and New Vocab

We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when the wordpiece vocab was learned. We now insert a space between numbers and characters and around punctuation characters.

The new vocabulary was learned using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library.

**P.S.**: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing function. **Please read the section on how to use the [preprocessing function](#Preprocessing)**

## Bigger Dataset and More Compute

We used ~3.5 times more data, and trained for longer. For Dataset Sources see the [Dataset Section](#Dataset)

Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384 / 2M | 3M | -
AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERTv2-base | TPUv3-8 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERT-base (v1/v0.1) | TPUv2-8 | - | 512 / 900K | 128 / 300K | 1.2M | 4 days

# Dataset

The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**.

The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)

For the new dataset we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the previous dataset used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for giving us the data

# Preprocessing

It is recommended to apply our preprocessing function before training/testing on any dataset.

**Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`**

```python
from arabert.preprocess import ArabertPreprocessor

model_name="bert-base-arabert"
arabert_prep = ArabertPreprocessor(model_name=model_name)

text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
>>>"و+ لن نبالغ إذا قل +نا إن هاتف أو كمبيوتر ال+ مكتب في زمن +نا هذا ضروري"
```

## Accepted_models
```
bert-base-arabertv01
bert-base-arabert
bert-base-arabertv02
bert-base-arabertv2
bert-large-arabertv02
bert-large-arabertv2
araelectra-base
aragpt2-base
aragpt2-medium
aragpt2-large
aragpt2-mega
```

# TensorFlow 1.x models

The TF1.x models are available in the HuggingFace models repo. You can download them as follows:

- via git-lfs: clone all the models in a repo
```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/aubmindlab/MODEL_NAME
tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz
```
where `MODEL_NAME` is any model under the `aubmindlab` name

- via `wget`:
  - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME.
  - copy the `oid sha256`
  - then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)

# If you used this model please cite us as:

Google Scholar has our Bibtex wrong (missing name), use this instead

```
@inproceedings{antoun2020arabert,
  title={AraBERT: Transformer-based Model for Arabic Language Understanding},
  author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
  booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
  pages={9}
}
```

# Acknowledgments

Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.

## Contacts

**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
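# Example: masked-token prediction (illustrative)

For completeness, here is a minimal masked-prediction sketch. This v1 checkpoint expects Farasa-segmented input, so the masked sentence below mirrors the segmented widget example; the sentence itself is only an illustration.

```python
# Illustrative sketch: the input must be Farasa-segmented, as produced by ArabertPreprocessor.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="aubmindlab/bert-base-arabert")
preds = fill_mask("عاصم +ة لبنان هي [MASK] .")
for p in preds:
    print(p["token_str"], round(p["score"], 3))
```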
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "widget": [{"text": " \u0639\u0627\u0635\u0645 +\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-base-arabert
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "ar", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# !!! A newer version of this model is available !!! [AraBERTv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) # AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/> **AraBERT** is an Arabic pretrained lanaguage model based on [Google's BERT architechture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup) There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were splitted using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html). We evalaute AraBERT models on different downstream tasks and compare them to [mBERT]((https://github.com/google-research/bert/blob/master/multilingual.md)), and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL) # AraBERTv2 ## What's New! AraBERT now comes in 4 new variants to replace the old v1 versions: More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2) Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) | ---|:---:|:---:|:---:|:---: AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B | AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G 371M | No | 200M / 77GB / 8.6B | AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB 136M | Yes | 200M / 77GB / 8.6B | AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G 371M | Yes | 200M / 77GB / 8.6B | AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB 136M | No | 77M / 23GB / 2.7B | AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB 136M | Yes | 77M / 23GB / 2.7B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Better Pre-Processing and New Vocab We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when learned the wordpiece vocab. We now insert a space between numbers and characters and around punctuation characters. The new vocabulary was learnt using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library. 
**P.S.**: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing dunction **Please read the section on how to use the [preprocessing function](#Preprocessing)** ## Bigger Dataset and More Compute We used ~3.5 times more data, and trained for longer. For Dataset Sources see the [Dataset Section](#Dataset) Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) | ---|:---:|:---:|:---:|:---:|:---:|:---: AraBERTv0.2-base | TPUv3-8 | 420M / 207M |2560 / 1M | 384/ 2M | 3M | - AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | - AraBERTv2-base | TPUv3-8 | 520M / 245M |13440 / 250K | 2056 / 300K | 550K | - AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | - AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4 days # Dataset The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Preprocessing It is recommended to apply our preprocessing function before training/testing on any dataset. **Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`** ```python from arabert.preprocess import ArabertPreprocessor model_name="bert-base-arabertv01" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) ``` ## Accepted_models ``` bert-base-arabertv01 bert-base-arabert bert-base-arabertv02 bert-base-arabertv2 bert-large-arabertv02 bert-large-arabertv2 araelectra-base aragpt2-base aragpt2-medium aragpt2-large aragpt2-mega ``` # TensorFlow 1.x models The TF1.x model are available in the HuggingFace models repo. You can download them as follows: - via git-lfs: clone all the models in a repo ```bash curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash sudo apt-get install git-lfs git lfs install git clone https://huggingface.co/aubmindlab/MODEL_NAME tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz ``` where `MODEL_NAME` is any model under the `aubmindlab` name - via `wget`: - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME. 
- copy the `oid sha256` - then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`) # If you used this model please cite us as : Google Scholar has our Bibtex wrong (missing name), use this instead ``` @inproceedings{antoun2020arabert, title={AraBERT: Transformer-based Model for Arabic Language Understanding}, author={Antoun, Wissam and Baly, Fady and Hajj, Hazem}, booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020}, pages={9} } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "OSIAN", "1.5B_Arabic_Corpus"], "widget": [{"text": " \u0639\u0627\u0635\u0645\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-base-arabertv01
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "ar", "dataset:wikipedia", "dataset:OSIAN", "dataset:1.5B_Arabic_Corpus", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="center"/> # AraBERTv0.2-Twitter AraBERTv0.2-Twitter-base/large are two new models for Arabic dialects and tweets, trained by continuing the pre-training using the MLM task on ~60M Arabic tweets (filtered from a collection on 100M). The two new models have had emojies added to their vocabulary in addition to common words that weren't at first present. The pre-training was done with a max sentence length of 64 only for 1 epoch. **AraBERT** is an Arabic pretrained language model based on [Google's BERT architechture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup) ## Other Models Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) | ---|:---:|:---:|:---:|:---: AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B | AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G / 371M | No | 200M / 77GB / 8.6B | AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B | AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G / 371M | Yes | 200M / 77GB / 8.6B | AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B | AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B | AraBERTv0.2-Twitter-base| [bert-base-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-base-arabertv02-twitter) | 543MB / 136M | No | Same as v02 + 60M Multi-Dialect Tweets| AraBERTv0.2-Twitter-large| [bert-large-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-large-arabertv02-twitter) | 1.38G / 371M | No | Same as v02 + 60M Multi-Dialect Tweets| # Preprocessing **The model is trained on a sequence length of 64, using max length beyond 64 might result in degraded performance** It is recommended to apply our preprocessing function before training/testing on any dataset. The preprocessor will keep and space out emojis when used with a "twitter" model. 
```python from arabert.preprocess import ArabertPreprocessor from transformers import AutoTokenizer, AutoModelForMaskedLM model_name="aubmindlab/bert-base-arabertv02-twitter" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv02-twitter") model = AutoModelForMaskedLM.from_pretrained("aubmindlab/bert-base-arabertv02-twitter") ``` # If you used this model please cite us as : Google Scholar has our Bibtex wrong (missing name), use this instead ``` @inproceedings{antoun2020arabert, title={AraBERT: Transformer-based Model for Arabic Language Understanding}, author={Antoun, Wissam and Baly, Fady and Hajj, Hazem}, booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020}, pages={9} } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
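# Example: masked-token prediction on a tweet (illustrative)

Building on the loading snippet above, here is a minimal, illustrative end-to-end sketch: preprocess a short tweet-like input (the preprocessor keeps and spaces out emojis), then mask a word and query the model. The sentence, the masked word, and the emoji are placeholders, not an example from the training data.

```python
# Illustrative sketch: the tweet below is a placeholder.
from transformers import pipeline
from arabert.preprocess import ArabertPreprocessor

model_name = "aubmindlab/bert-base-arabertv02-twitter"
arabert_prep = ArabertPreprocessor(model_name=model_name)
fill_mask = pipeline("fill-mask", model=model_name)

# preprocess first, then insert the mask token so it is not altered by the preprocessor
tweet = arabert_prep.preprocess("عاصمة لبنان هي بيروت 😍")
masked = tweet.replace("بيروت", fill_mask.tokenizer.mask_token)
print(fill_mask(masked)[0])
```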
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)", "Twitter(private)"], "widget": [{"text": " \u0639\u0627\u0635\u0645\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-base-arabertv02-twitter
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "fill-mask", "ar", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/> **AraBERT** is an Arabic pretrained language model based on [Google's BERT architechture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup) There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html). We evaluate AraBERT models on different downstream tasks and compare them to [mBERT]((https://github.com/google-research/bert/blob/master/multilingual.md)), and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL) # AraBERTv2 ## What's New! AraBERT now comes in 4 new variants to replace the old v1 versions: More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2) Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) | ---|:---:|:---:|:---:|:---: AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B | AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G 371M | No | 200M / 77GB / 8.6B | AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB 136M | Yes | 200M / 77GB / 8.6B | AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G 371M | Yes | 200M / 77GB / 8.6B | AraBERTv0.2-Twitter-base| [bert-base-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-base-arabertv02-twitter) | 543MB / 136M | No | Same as v02 + 60M Multi-Dialect Tweets| AraBERTv0.2-Twitter-large| [bert-large-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-large-arabertv02-twitter) | 1.38G / 371M | No | Same as v02 + 60M Multi-Dialect Tweets| AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB 136M | No | 77M / 23GB / 2.7B | AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB 136M | Yes | 77M / 23GB / 2.7B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Better Pre-Processing and New Vocab We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when learned the wordpiece vocab. 
We now insert a space between numbers and characters and around punctuation characters. The new vocabulary was learned using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library. **P.S.**: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing function **Please read the section on how to use the [preprocessing function](#Preprocessing)** ## Bigger Dataset and More Compute We used ~3.5 times more data, and trained for longer. For Dataset Sources see the [Dataset Section](#Dataset) Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) | ---|:---:|:---:|:---:|:---:|:---:|:---: AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | - AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | 7 AraBERTv2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | - AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | 7 AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4 # Dataset The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for providing us the data # Preprocessing It is recommended to apply our preprocessing function before training/testing on any dataset. **Install the arabert python package to segment text for AraBERT v1 & v2 or to clean your data `pip install arabert`** ```python from arabert.preprocess import ArabertPreprocessor model_name="aubmindlab/bert-large-arabertv02" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا: إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) >>> output: ولن نبالغ إذا قلنا : إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري ``` # TensorFlow 1.x models The TF1.x model are available in the HuggingFace models repo. You can download them as follows: - via git-lfs: clone all the models in a repo ```bash curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash sudo apt-get install git-lfs git lfs install git clone https://huggingface.co/aubmindlab/MODEL_NAME tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz ``` where `MODEL_NAME` is any model under the `aubmindlab` name - via `wget`: - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME. 
- copy the `oid sha256` - then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`) # If you used this model please cite us as : Google Scholar has our Bibtex wrong (missing name), use this instead ``` @inproceedings{antoun2020arabert, title={AraBERT: Transformer-based Model for Arabic Language Understanding}, author={Antoun, Wissam and Baly, Fady and Hajj, Hazem}, booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020}, pages={9} } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
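# Example: extracting contextual embeddings (illustrative)

As a small addition to the preprocessing example above, the sketch below shows one common way to pull contextual embeddings out of this checkpoint; the mean pooling at the end is just one simple choice, not a recommendation from the paper.

```python
# Illustrative sketch: mean pooling is only one possible sentence representation.
import torch
from transformers import AutoTokenizer, AutoModel
from arabert.preprocess import ArabertPreprocessor

model_name = "aubmindlab/bert-base-arabertv02"
prep = ArabertPreprocessor(model_name=model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

text = prep.preprocess("ولن نبالغ إذا قلنا: إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري")
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state     # (1, seq_len, 768)
sentence_vector = hidden.mean(dim=1)               # simple mean pooling
print(sentence_vector.shape)
```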
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "widget": [{"text": " \u0639\u0627\u0635\u0645\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-base-arabertv02
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "bert", "fill-mask", "ar", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/> **AraBERT** is an Arabic pretrained lanaguage model based on [Google's BERT architechture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup) There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were splitted using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html). We evalaute AraBERT models on different downstream tasks and compare them to [mBERT]((https://github.com/google-research/bert/blob/master/multilingual.md)), and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL) # AraBERTv2 ## What's New! AraBERT now comes in 4 new variants to replace the old v1 versions: More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2) Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) | ---|:---:|:---:|:---:|:---: AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B | AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G 371M | No | 200M / 77GB / 8.6B | AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB 136M | Yes | 200M / 77GB / 8.6B | AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G 371M | Yes | 200M / 77GB / 8.6B | AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB 136M | No | 77M / 23GB / 2.7B | AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB 136M | Yes | 77M / 23GB / 2.7B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Better Pre-Processing and New Vocab We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when learned the wordpiece vocab. We now insert a space between numbers and characters and around punctuation characters. The new vocabulary was learnt using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library. 
**P.S.**: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing dunction **Please read the section on how to use the [preprocessing function](#Preprocessing)** ## Bigger Dataset and More Compute We used ~3.5 times more data, and trained for longer. For Dataset Sources see the [Dataset Section](#Dataset) Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) | ---|:---:|:---:|:---:|:---:|:---:|:---: AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | - AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | 7 AraBERTv2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | - AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | 7 AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4 # Dataset The pretraining data used for the new AraBERT model is also used for Arabic **AraGPT2 and AraELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Preprocessing It is recommended to apply our preprocessing function before training/testing on any dataset. **Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`** ```python from arabert.preprocess import ArabertPreprocessor model_name="bert-base-arabertv2" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) >>>"و+ لن نبالغ إذا قل +نا إن هاتف أو كمبيوتر ال+ مكتب في زمن +نا هذا ضروري" ``` ## Accepted_models ``` bert-base-arabertv01 bert-base-arabert bert-base-arabertv02 bert-base-arabertv2 bert-large-arabertv02 bert-large-arabertv2 araelectra-base aragpt2-base aragpt2-medium aragpt2-large aragpt2-mega ``` # TensorFlow 1.x models The TF1.x model are available in the HuggingFace models repo. You can download them as follows: - via git-lfs: clone all the models in a repo ```bash curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash sudo apt-get install git-lfs git lfs install git clone https://huggingface.co/aubmindlab/MODEL_NAME tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz ``` where `MODEL_NAME` is any model under the `aubmindlab` name - via `wget`: - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME. 
- copy the `oid sha256` - then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`) # If you used this model please cite us as : Google Scholar has our Bibtex wrong (missing name), use this instead ``` @inproceedings{antoun2020arabert, title={AraBERT: Transformer-based Model for Arabic Language Understanding}, author={Antoun, Wissam and Baly, Fady and Hajj, Hazem}, booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020}, pages={9} } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
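# Example: loading a classification head (illustrative)

Since the benchmarks above include sentiment classification, here is a minimal, illustrative sketch of loading this checkpoint with a classification head for such a downstream fine-tune. The number of labels is a placeholder and the training loop is left out; these are not values from the paper.

```python
# Illustrative sketch: num_labels is a placeholder for your own task.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "aubmindlab/bert-base-arabertv2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# remember: v2 expects Farasa-segmented, preprocessed input (see the Preprocessing section),
# then fine-tune with the Trainer API or your own training loop on the preprocessed dataset
```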
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled"], "widget": [{"text": " \u0639\u0627\u0635\u0645 +\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-base-arabertv2
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "ar", "dataset:wikipedia", "dataset:Osian", "dataset:1.5B-Arabic-Corpus", "dataset:oscar-arabic-unshuffled", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="center"/> # AraBERTv0.2-Twitter AraBERTv0.2-Twitter-base/large are two new models for Arabic dialects and tweets, trained by continuing the pre-training using the MLM task on ~60M Arabic tweets (filtered from a collection on 100M). The two new models have had emojies added to their vocabulary in addition to common words that weren't at first present. The pre-training was done with a max sentence length of 64 only for 1 epoch. **AraBERT** is an Arabic pretrained language model based on [Google's BERT architechture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup) ## Other Models Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) | ---|:---:|:---:|:---:|:---: AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B | AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G / 371M | No | 200M / 77GB / 8.6B | AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B | AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G / 371M | Yes | 200M / 77GB / 8.6B | AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B | AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B | AraBERTv0.2-Twitter-base| [bert-base-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-base-arabertv02-twitter) | 543MB / 136M | No | Same as v02 + 60M Multi-Dialect Tweets| AraBERTv0.2-Twitter-large| [bert-large-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-large-arabertv02-twitter) | 1.38G / 371M | No | Same as v02 + 60M Multi-Dialect Tweets| # Preprocessing **The model is trained on a sequence length of 64, using max length beyond 64 might result in degraded performance** It is recommended to apply our preprocessing function before training/testing on any dataset. The preprocessor will keep and space out emojis when used with a "twitter" model. 
```python from arabert.preprocess import ArabertPreprocessor from transformers import AutoTokenizer, AutoModelForMaskedLM model_name="aubmindlab/bert-base-arabertv02-twitter" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv02-twitter") model = AutoModelForMaskedLM.from_pretrained("aubmindlab/bert-base-arabertv02-twitter") ``` # If you used this model please cite us as : Google Scholar has our Bibtex wrong (missing name), use this instead ``` @inproceedings{antoun2020arabert, title={AraBERT: Transformer-based Model for Arabic Language Understanding}, author={Antoun, Wissam and Baly, Fady and Hajj, Hazem}, booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020}, pages={9} } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)", "Twitter(private)"], "widget": [{"text": " \u0639\u0627\u0635\u0645\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-large-arabertv02-twitter
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "fill-mask", "ar", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding

<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/>

**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup)

There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html).

We evaluate AraBERT models on different downstream tasks and compare them to [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) and other state-of-the-art models (*to the extent of our knowledge*). The tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL)

# AraBERTv2

## What's New!

AraBERT now comes in 4 new variants to replace the old v1 versions. More detail is available in the AraBERT folder, in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2)

Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |

All models are available on the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.

## Better Pre-Processing and New Vocab

We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuation and numbers that were still attached to words when the wordpiece vocab was learned. We now insert a space between numbers and characters and around punctuation characters.

The new vocabulary was learned using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library.
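For example, the fast tokenizer can be loaded directly through `AutoTokenizer` (a minimal sketch; `use_fast=True` is the default in recent `transformers` versions, and the example sentence is the same one used in the preprocessing section below):

```python
from transformers import AutoTokenizer

# loads a BertTokenizerFast backed by the `tokenizers` library
tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-large-arabertv02", use_fast=True)

print(type(tokenizer).__name__)
print(tokenizer.tokenize("ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"))
```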
**P.S.**: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing function
**Please read the section on how to use the [preprocessing function](#Preprocessing)**

## Bigger Dataset and More Compute

We used ~3.5 times more data, and trained for longer. For Dataset Sources see the [Dataset Section](#Dataset)

Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384 / 2M | 3M | -
AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERTv2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384 / 2M | 3M | -
AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERT-base (v1/v0.1) | TPUv2-8 | - | 512 / 900K | 128 / 300K | 1.2M | 4

# Dataset

The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**.

The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)

For the new dataset we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the previous dataset used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. Huge thank you to Assafir for giving us the data

# Preprocessing

It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install farasapy to segment text for AraBERT v1 & v2: `pip install farasapy`**

```python
from arabert.preprocess import ArabertPreprocessor

model_name="bert-large-arabertv02"
arabert_prep = ArabertPreprocessor(model_name=model_name)

text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
```

## Accepted_models
```
bert-base-arabertv01
bert-base-arabert
bert-base-arabertv02
bert-base-arabertv2
bert-large-arabertv02
bert-large-arabertv2
araelectra-base
aragpt2-base
aragpt2-medium
aragpt2-large
aragpt2-mega
```

# TensorFlow 1.x models

The TF1.x models are available in the HuggingFace models repo.
You can download them as follows:
- via git-lfs: clone all the models in a repo
```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/aubmindlab/MODEL_NAME
tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz
```
where `MODEL_NAME` is any model under the `aubmindlab` name

- via `wget`:
  - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME.
  - copy the `oid sha256`
  - then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)

# If you used this model please cite us as :
Google Scholar has our Bibtex wrong (missing name), use this instead
```
@inproceedings{antoun2020arabert,
  title={AraBERT: Transformer-based Model for Arabic Language Understanding},
  author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
  booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
  pages={9}
}
```

# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.

# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled"], "widget": [{"text": " \u0639\u0627\u0635\u0645\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-large-arabertv02
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "bert", "fill-mask", "ar", "dataset:wikipedia", "dataset:Osian", "dataset:1.5B-Arabic-Corpus", "dataset:oscar-arabic-unshuffled", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding

<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/>

**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup)

There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html).

We evaluate AraBERT models on different downstream tasks and compare them to [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) and other state-of-the-art models (*to the extent of our knowledge*). The tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL)

# AraBERTv2

## What's New!

AraBERT now comes in 4 new variants to replace the old v1 versions. More detail is available in the AraBERT folder, in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2)

Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.2-Twitter-base| [bert-base-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-base-arabertv02-twitter) | 543MB / 136M | No | Same as v02 + 60M Multi-Dialect Tweets|
AraBERTv0.2-Twitter-large| [bert-large-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-large-arabertv02-twitter) | 1.38G / 371M | No | Same as v02 + 60M Multi-Dialect Tweets|
AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |

All models are available on the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.

## Better Pre-Processing and New Vocab

We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuation and numbers that were still attached to words when the wordpiece vocab was learned.
We now insert a space between numbers and characters and around punctuation characters.

The new vocabulary was learned using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library.

**P.S.**: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing function
**Please read the section on how to use the [preprocessing function](#Preprocessing)**

## Bigger Dataset and More Compute

We used ~3.5 times more data, and trained for longer. For Dataset Sources see the [Dataset Section](#Dataset)

Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384 / 2M | 3M | -
AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERTv2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384 / 2M | 3M | -
AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERT-base (v1/v0.1) | TPUv2-8 | - | 512 / 900K | 128 / 300K | 1.2M | 4

# Dataset

The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**.

The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)

For the new dataset we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the previous dataset used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. Huge thank you to Assafir for providing us with the data

# Preprocessing

It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install the arabert python package to segment text for AraBERT v1 & v2 or to clean your data: `pip install arabert`**

```python
from arabert.preprocess import ArabertPreprocessor

model_name="aubmindlab/bert-large-arabertv2"
arabert_prep = ArabertPreprocessor(model_name=model_name)

text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
>>>"و+ لن نبالغ إذا قل +نا إن هاتف أو كمبيوتر ال+ مكتب في زمن +نا هذا ضروري"
```

# TensorFlow 1.x models

The TF1.x models are available in the HuggingFace models repo.
You can download them as follows:
- via git-lfs: clone all the models in a repo
```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/aubmindlab/MODEL_NAME
tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz
```
where `MODEL_NAME` is any model under the `aubmindlab` name

- via `wget`:
  - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME.
  - copy the `oid sha256`
  - then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)

# If you used this model please cite us as :
Google Scholar has our Bibtex wrong (missing name), use this instead
```
@inproceedings{antoun2020arabert,
  title={AraBERT: Transformer-based Model for Arabic Language Understanding},
  author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
  booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
  pages={9}
}
```

# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.

# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "widget": [{"text": " \u0639\u0627\u0635\u0645 +\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-large-arabertv2
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "bert", "fill-mask", "ar", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
This folder contains a Google T5 Transformer fine-tuned to generate paraphrases using:
- Para_NMT_50M_Paraphrasing_train_small.csv: 134,337 sentence pairs, 19 MB
- Para_NMT_50M_Paraphrasing_val_small.csv: 14,928 sentence pairs, 2.0 MB

Training Start Time: Sun Mar 14 18:27:15 2021
Training End Time: Sun Mar 14 22:19:00 2021
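A minimal generation sketch for trying the model is shown below. The `"paraphrase: "` input prefix and the beam-search settings are assumptions for illustration; the exact input format used during fine-tuning is not documented here.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("auday/paraphraser_model1")
model = T5ForConditionalGeneration.from_pretrained("auday/paraphraser_model1")

# "paraphrase: " prefix is an assumption; adjust it to whatever prefix was used during fine-tuning
inputs = tokenizer("paraphrase: The meeting was postponed because of the storm.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=5, num_return_sequences=3, early_stopping=True)

for candidate in outputs:
    print(tokenizer.decode(candidate, skip_special_tokens=True))
```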
{}
auday/paraphraser_model1
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
This folder contains a Google T5 Transformer fine-tuned to generate paraphrases using:
- Quora_pair_train: 134,337 sentence pairs, 14 MB
- Quora_pair_val: 14,928 sentence pairs, 1.6 MB

Training epochs: 6
Start Time: Sun Mar 14 18:27:15 2021
End Time: Sun Mar 14 22:19:00 2021
{}
auday/paraphraser_model2
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
audreyakwenye/activity_matching
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
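A minimal chat sketch following the standard DialoGPT generation loop (the prompts are arbitrary):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("augustojaba/DialoGPT-small-harrypotter")
model = AutoModelForCausalLM.from_pretrained("augustojaba/DialoGPT-small-harrypotter")

chat_history_ids = None
for step in range(3):
    user_input = input(">> User: ")
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    # append the new user input to the running chat history
    bot_input_ids = torch.cat([chat_history_ids, new_ids], dim=-1) if chat_history_ids is not None else new_ids
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # decode only the newly generated tokens
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```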
{"tags": ["conversational"]}
augustojaba/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
augustoortiz/bert-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # augustoortiz/bert-finetuned-squad2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.2223 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11091, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 1.2223 | 0 | ### Framework versions - Transformers 4.17.0.dev0 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.0
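The card does not document intended usage; the sketch below assumes the model is an extractive question-answering model (SQuAD-style) and loads the TensorFlow weights stored in this repo. The question and context are arbitrary examples.

```python
from transformers import pipeline

# framework="tf" because this repo ships TensorFlow weights
qa = pipeline("question-answering", model="augustoortiz/bert-finetuned-squad2", framework="tf")

result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of bert-base-cased on a SQuAD-style dataset.",
)
print(result["answer"], result["score"])
```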
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "augustoortiz/bert-finetuned-squad2", "results": []}]}
augustoortiz/bert-finetuned-squad2
null
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
auravadima/sample
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# Austin MeDeBERTa This model was developed using further MLM pre-training on [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base), using a dataset of 1.1M clinical notes from the Austin Health EMR. The notes span discharge summaries, inpatient notes, radiology reports and histopathology reports. ## Model description This is the base version of the original DeBERTa model. The architecture and tokenizer are unchanged. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 9 - eval_batch_size: 9 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 0.9756 | 0.51 | 40000 | 0.9127 | | 0.8876 | 1.01 | 80000 | 0.8221 | | 0.818 | 1.52 | 120000 | 0.7786 | | 0.7836 | 2.03 | 160000 | 0.7438 | | 0.7672 | 2.54 | 200000 | 0.7165 | | 0.734 | 3.04 | 240000 | 0.6948 | | 0.7079 | 3.55 | 280000 | 0.6749 | | 0.6987 | 4.06 | 320000 | 0.6598 | | 0.6771 | 4.57 | 360000 | 0.6471 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu113 - Datasets 1.15.1 - Tokenizers 0.10.3
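A minimal fill-mask sketch (the clinical sentence below is an invented example, not taken from the training data):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="austin/Austin-MeDeBERTa")

# use the tokenizer's own mask token rather than hardcoding it
masked = f"The patient was admitted with chest {fill_mask.tokenizer.mask_token} and shortness of breath."
for prediction in fill_mask(masked):
    print(prediction["token_str"], round(prediction["score"], 3))
```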
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "deberta-pretrained-large", "results": []}]}
austin/Austin-MeDeBERTa
null
[ "transformers", "pytorch", "deberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # adr-ner This model is a fine-tuned version of [austin/Austin-MeDeBERTa](https://huggingface.co/austin/Austin-MeDeBERTa) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0434 - Precision: 0.7305 - Recall: 0.6934 - F1: 0.7115 - Accuracy: 0.9941 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 107 | 0.0630 | 0.0 | 0.0 | 0.0 | 0.9876 | | No log | 2.0 | 214 | 0.0308 | 0.4282 | 0.3467 | 0.3832 | 0.9900 | | No log | 3.0 | 321 | 0.0254 | 0.5544 | 0.5603 | 0.5573 | 0.9920 | | No log | 4.0 | 428 | 0.0280 | 0.6430 | 0.5751 | 0.6071 | 0.9929 | | 0.0465 | 5.0 | 535 | 0.0266 | 0.5348 | 0.7146 | 0.6118 | 0.9915 | | 0.0465 | 6.0 | 642 | 0.0423 | 0.7632 | 0.5793 | 0.6587 | 0.9939 | | 0.0465 | 7.0 | 749 | 0.0336 | 0.6957 | 0.6765 | 0.6860 | 0.9939 | | 0.0465 | 8.0 | 856 | 0.0370 | 0.6876 | 0.6702 | 0.6788 | 0.9936 | | 0.0465 | 9.0 | 963 | 0.0349 | 0.6555 | 0.7040 | 0.6789 | 0.9932 | | 0.0044 | 10.0 | 1070 | 0.0403 | 0.6910 | 0.6808 | 0.6858 | 0.9938 | | 0.0044 | 11.0 | 1177 | 0.0415 | 0.7140 | 0.6808 | 0.6970 | 0.9939 | | 0.0044 | 12.0 | 1284 | 0.0440 | 0.7349 | 0.6681 | 0.6999 | 0.9941 | | 0.0044 | 13.0 | 1391 | 0.0423 | 0.7097 | 0.6977 | 0.7036 | 0.9941 | | 0.0044 | 14.0 | 1498 | 0.0435 | 0.7174 | 0.6977 | 0.7074 | 0.9941 | | 0.0006 | 15.0 | 1605 | 0.0434 | 0.7305 | 0.6934 | 0.7115 | 0.9941 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
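A minimal usage sketch with the token-classification pipeline. The entity labels printed below come from the fine-tuned model's own config (the label scheme is not documented in this card), and the example sentence is invented:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="austin/adr-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

for entity in ner("Patient developed a rash and severe nausea after starting amoxicillin."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```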
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "adr-ner", "results": []}]}
austin/adr-ner
null
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
autonomousvision/Projected_GAN_Pokemon
null
[ "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
auychai/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
auychai/model1
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# ReadMe

This is the text content of the readme.
{"language": ["python"], "license": "mit", "tags": ["tag1", "tag2"], "datasets": ["dataset1", "dataset2"], "metrics": ["metric1", "metric2"], "thumbnail": "url to a thumbnail used in social sharing"}
avadesian/pg
null
[ "tag1", "tag2", "dataset:dataset1", "dataset:dataset2", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
avadhesh06/as
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
averyanalex/panorama-rugpt3large
null
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
avglinsky/bert-base-uncased-finetuned-ner
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
aviator-neural/bert-base-uncased-sst2
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-donald_trump This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 391 | 2.8721 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
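The card does not include a usage example; a minimal text-generation sketch (the prompt is arbitrary):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="aviator-neural/gpt2-donald_trump")

output = generator("The American people", max_length=40, do_sample=True, top_p=0.95, num_return_sequences=1)
print(output[0]["generated_text"])
```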
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-donald_trump", "results": []}]}
aviator-neural/gpt2-donald_trump
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mbart_jokes

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0282

## Model description

This model is trained on a jokes dataset, where you can ask a question and the model gives a funny answer.

## Intended uses & limitations

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3455 | 1.0 | 1914 | 3.0282 |

### Framework versions

- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
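A minimal question-to-joke sketch, assuming the model takes the question as plain input text (the question is arbitrary):

```python
from transformers import pipeline

joker = pipeline("text2text-generation", model="aviator-neural/mbart_jokes")

print(joker("Why did the chicken cross the road?", max_length=64)[0]["generated_text"])
```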
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "mbart_jokes", "results": []}]}
aviator-neural/mbart_jokes
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
aviator-neural/wav2vec2-large-xlsr-hindi-demo-colab
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
avichr/ar_hd
null
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
## HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition

HeBERT is a Hebrew pretrained language model. It is based on Google's BERT architecture and uses the BERT-Base config [(Devlin et al. 2018)](https://arxiv.org/abs/1810.04805). <br>

### HeBERT was trained on three datasets:
1. A Hebrew version of OSCAR [(Ortiz, 2019)](https://oscar-corpus.com/): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences.
2. A Hebrew dump of [Wikipedia](https://dumps.wikimedia.org/hewiki/latest/): ~650 MB of data, including over 63 million words and 3.8 million sentences
3. Emotion UGC data that was collected for the purpose of this study (described below).

We evaluated the model on emotion recognition and sentiment analysis, for downstream tasks.

### Emotion UGC Data Description
Our User-Generated Content (UGC) consists of comments written on articles collected from 3 major news sites between January 2020 and August 2020. Total data size is ~150 MB, including over 7 million words and 350K sentences.
4000 sentences were annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation, fear, happy, sadness, surprise and trust) and overall sentiment/polarity.<br>
In order to validate the annotation, we searched for agreement between raters on the emotion in each sentence using Krippendorff's alpha [(Krippendorff, 1970)](https://journals.sagepub.com/doi/pdf/10.1177/001316447003000105). We kept sentences with alpha > 0.7. Note that while we found general agreement between raters about emotions like happiness, trust and disgust, there are a few emotions with general disagreement, apparently given the complexity of finding them in the text (e.g. expectation and surprise).

## How to use
### For masked-LM model (can be fine-tuned to any downstream task)
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT")
model = AutoModel.from_pretrained("avichr/heBERT")

from transformers import pipeline
fill_mask = pipeline(
    "fill-mask",
    model="avichr/heBERT",
    tokenizer="avichr/heBERT"
)
fill_mask("הקורונה לקחה את [MASK] ולנו לא נשאר דבר.")
```

### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores = True
)

>>> sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
[[{'label': 'natural', 'score': 0.9978172183036804},
{'label': 'positive', 'score': 0.0014792329166084528},
{'label': 'negative', 'score': 0.0007035882445052266}]]

>>> sentiment_analysis('קפה זה טעים')
[[{'label': 'natural', 'score': 0.00047328314394690096},
{'label': 'possitive', 'score': 0.9994067549705505},
{'label': 'negetive', 'score': 0.00011996887042187154}]]

>>> sentiment_analysis('אני לא אוהב את העולם')
[[{'label': 'natural', 'score': 9.214012970915064e-05},
{'label': 'possitive', 'score': 8.876807987689972e-05},
{'label': 'negetive', 'score': 0.9998190999031067}]]
```

Our model is also available on AWS!
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda) ### For NER model: ``` from transformers import pipeline # how to use? NER = pipeline( "token-classification", model="avichr/heBERT_NER", tokenizer="avichr/heBERT_NER", ) NER('דויד לומד באוניברסיטה העברית שבירושלים') ``` ## Stay tuned! We are still working on our model and will edit this page as we progress.<br> Note that we have released only sentiment analysis (polarity) at this point, emotion detection will be released later on.<br> our git: https://github.com/avichaychriqui/HeBERT ## If you use this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
{}
avichr/heBERT
null
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
# HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">

HeBERT is a Hebrew pretrained language model. It is based on [Google's BERT](https://arxiv.org/abs/1810.04805) architecture and uses the BERT-Base config. <br>

HeBERT was trained on three datasets:
1. A Hebrew version of [OSCAR](https://oscar-corpus.com/): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences.
2. A Hebrew dump of [Wikipedia](https://dumps.wikimedia.org/): ~650 MB of data, including over 63 million words and 3.8 million sentences
3. Emotion User Generated Content (UGC) data that was collected for the purpose of this study (described below).

## Named-entity recognition (NER)
The model's ability to classify named entities in text, such as persons' names, organizations, and locations, was tested on a labeled dataset from [Ben Mordecai and M Elhadad (2005)](https://www.cs.bgu.ac.il/~elhadad/nlpproj/naama/) and evaluated with F1-score.

### How to use
```
from transformers import pipeline

# how to use?
NER = pipeline(
    "token-classification",
    model="avichr/heBERT_NER",
    tokenizer="avichr/heBERT_NER",
)
NER('דויד לומד באוניברסיטה העברית שבירושלים')
```

## Other tasks
[**Emotion Recognition Model**](https://huggingface.co/avichr/hebEMO_trust). An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as a [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) <br>
[**Sentiment Analysis**](https://huggingface.co/avichr/heBERT_sentiment_analysis). <br>
[**masked-LM model**](https://huggingface.co/avichr/heBERT) (can be fine-tuned to any downstream task).

## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>

## If you used this model please cite us as :
Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.
```
@article{chriqui2021hebert,
  title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
  author={Chriqui, Avihay and Yahav, Inbal},
  journal={arXiv preprint arXiv:2102.01909},
  year={2021}
}
```
[git](https://github.com/avichaychriqui/HeBERT)
{}
avichr/heBERT_NER
null
[ "transformers", "pytorch", "bert", "token-classification", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
## HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition

HeBERT is a Hebrew pre-trained language model. It is based on Google's BERT architecture and uses the BERT-Base config [(Devlin et al. 2018)](https://arxiv.org/abs/1810.04805). <br>

HeBERT was trained on three datasets:
1. A Hebrew version of OSCAR [(Ortiz, 2019)](https://oscar-corpus.com/): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences.
2. A Hebrew dump of Wikipedia: ~650 MB of data, including over 63 million words and 3.8 million sentences
3. Emotion UGC data that was collected for the purpose of this study (described below).

We evaluated the model on emotion recognition and sentiment analysis, for downstream tasks.

### Emotion UGC Data Description
Our User-Generated Content (UGC) consists of comments written on articles collected from 3 major news sites between January 2020 and August 2020. Total data size is ~150 MB, including over 7 million words and 350K sentences.
4000 sentences were annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation, fear, happy, sadness, surprise, and trust) and overall sentiment/polarity. <br>
In order to validate the annotation, we searched for agreement between raters on the emotion in each sentence using Krippendorff's alpha [(Krippendorff, 1970)](https://journals.sagepub.com/doi/pdf/10.1177/001316447003000105). We kept sentences with alpha > 0.7. Note that while we found general agreement between raters about emotions like happiness, trust, and disgust, there are a few emotions with general disagreement, apparently given the complexity of finding them in the text (e.g. expectation and surprise).

### Performance
#### sentiment analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| natural | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |

## How to use
### For masked-LM model (can be fine-tuned to any downstream task)
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT")
model = AutoModel.from_pretrained("avichr/heBERT")

from transformers import pipeline
fill_mask = pipeline(
    "fill-mask",
    model="avichr/heBERT",
    tokenizer="avichr/heBERT"
)
fill_mask("הקורונה לקחה את [MASK] ולנו לא נשאר דבר.")
```

### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores = True
)

>>> sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
[[{'label': 'natural', 'score': 0.9978172183036804},
{'label': 'positive', 'score': 0.0014792329166084528},
{'label': 'negative', 'score': 0.0007035882445052266}]]

>>> sentiment_analysis('קפה זה טעים')
[[{'label': 'natural', 'score': 0.00047328314394690096},
{'label': 'possitive', 'score': 0.9994067549705505},
{'label': 'negetive', 'score': 0.00011996887042187154}]]

>>> sentiment_analysis('אני לא אוהב את העולם')
[[{'label': 'natural', 'score': 9.214012970915064e-05},
{'label': 'possitive', 'score': 8.876807987689972e-05},
{'label': 'negetive', 'score': 0.9998190999031067}]]
```

Our model is also available on AWS! for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)

## Stay tuned!
We are still working on our model and will edit this page as we progress.<br>
Note that we have released only sentiment analysis (polarity) at this point; emotion detection will be released later on.<br>
our git: https://github.com/avichaychriqui/HeBERT

## If you used this model please cite us as :
Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.
```
@article{chriqui2021hebert,
  title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
  author={Chriqui, Avihay and Yahav, Inbal},
  journal={arXiv preprint arXiv:2102.01909},
  year={2021}
}
```
{}
avichr/heBERT_sentiment_analysis
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">

HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated. HebEMO yielded high performance, with a weighted average F1-score of 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.

## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.

| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |

## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |

*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*

### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |

*Sentiment (polarity) analysis model is also available on AWS!
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:[email protected]) <br> [Inbal yahav](mailto:[email protected]) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
{}
avichr/hebEMO_anger
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">

HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated. HebEMO yielded high performance, with a weighted average F1-score of 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.

## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.

| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |

## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |

*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*

### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |

*Sentiment (polarity) analysis model is also available on AWS!
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:[email protected]) <br> [Inbal yahav](mailto:[email protected]) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
{}
avichr/hebEMO_anticipation
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">

HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated. HebEMO yielded high performance, with a weighted average F1-score of 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.

## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.

| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |

## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |

*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*

### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |

*Sentiment (polarity) analysis model is also available on AWS!
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:[email protected]) <br> [Inbal yahav](mailto:[email protected]) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
{}
avichr/hebEMO_disgust
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated. HebEMO achieved a high weighted average F1-score of 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best previously reported performance, even when compared to English. ## Emotion UGC Data Description Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise, and trust. The percentage of sentences in which each emotion appeared is shown in the table below. | | anger | disgust | anticipation | fear | joy | sadness | surprise | trust | sentiment | |------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------| | **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 | ## Performance ### Emotion Recognition | emotion | f1-score | precision | recall | |-------------|----------|-----------|----------| | anger | 0.96 | 0.99 | 0.93 | | disgust | 0.97 | 0.98 | 0.96 | | anticipation | 0.82 | 0.80 | 0.87 | | fear | 0.79 | 0.88 | 0.72 | | joy | 0.90 | 0.97 | 0.84 | | sadness | 0.90 | 0.86 | 0.94 | | surprise | 0.40 | 0.44 | 0.37 | | trust | 0.83 | 0.86 | 0.80 | *The above metrics are for the positive class (meaning the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | neutral | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | *Sentiment (polarity) analysis model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:[email protected]) <br> [Inbal yahav](mailto:[email protected]) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
{}
avichr/hebEMO_fear
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated. HebEMO achieved a high weighted average F1-score of 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best previously reported performance, even when compared to English. ## Emotion UGC Data Description Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise, and trust. The percentage of sentences in which each emotion appeared is shown in the table below. | | anger | disgust | anticipation | fear | joy | sadness | surprise | trust | sentiment | |------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------| | **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 | ## Performance ### Emotion Recognition | emotion | f1-score | precision | recall | |-------------|----------|-----------|----------| | anger | 0.96 | 0.99 | 0.93 | | disgust | 0.97 | 0.98 | 0.96 | | anticipation | 0.82 | 0.80 | 0.87 | | fear | 0.79 | 0.88 | 0.72 | | joy | 0.90 | 0.97 | 0.84 | | sadness | 0.90 | 0.86 | 0.94 | | surprise | 0.40 | 0.44 | 0.37 | | trust | 0.83 | 0.86 | 0.80 | *The above metrics are for the positive class (meaning the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | neutral | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | *Sentiment (polarity) analysis model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:[email protected]) <br> [Inbal yahav](mailto:[email protected]) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={arXiv preprint arXiv:2102.01909}, year={2021} } ```
{}
avichr/hebEMO_joy
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated. HebEMO achieved a high weighted average F1-score of 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best previously reported performance, even when compared to English. ## Emotion UGC Data Description Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise, and trust. The percentage of sentences in which each emotion appeared is shown in the table below. | | anger | disgust | anticipation | fear | joy | sadness | surprise | trust | sentiment | |------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------| | **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 | ## Performance ### Emotion Recognition | emotion | f1-score | precision | recall | |-------------|----------|-----------|----------| | anger | 0.96 | 0.99 | 0.93 | | disgust | 0.97 | 0.98 | 0.96 | | anticipation | 0.82 | 0.80 | 0.87 | | fear | 0.79 | 0.88 | 0.72 | | joy | 0.90 | 0.97 | 0.84 | | sadness | 0.90 | 0.86 | 0.94 | | surprise | 0.40 | 0.44 | 0.37 | | trust | 0.83 | 0.86 | 0.80 | *The above metrics are for the positive class (meaning the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | neutral | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | *Sentiment (polarity) analysis model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:[email protected]) <br> [Inbal yahav](mailto:[email protected]) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
{}
avichr/hebEMO_sadness
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated. HebEMO achieved a high weighted average F1-score of 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best previously reported performance, even when compared to English. ## Emotion UGC Data Description Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise, and trust. The percentage of sentences in which each emotion appeared is shown in the table below. | | anger | disgust | anticipation | fear | joy | sadness | surprise | trust | sentiment | |------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------| | **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 | ## Performance ### Emotion Recognition | emotion | f1-score | precision | recall | |-------------|----------|-----------|----------| | anger | 0.96 | 0.99 | 0.93 | | disgust | 0.97 | 0.98 | 0.96 | | anticipation | 0.82 | 0.80 | 0.87 | | fear | 0.79 | 0.88 | 0.72 | | joy | 0.90 | 0.97 | 0.84 | | sadness | 0.90 | 0.86 | 0.94 | | surprise | 0.40 | 0.44 | 0.37 | | trust | 0.83 | 0.86 | 0.80 | *The above metrics are for the positive class (meaning the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | neutral | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | *Sentiment (polarity) analysis model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:[email protected]) <br> [Inbal yahav](mailto:[email protected]) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
{}
avichr/hebEMO_surprise
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated. HebEMO achieved a high weighted average F1-score of 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best previously reported performance, even when compared to English. ## Emotion UGC Data Description Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise, and trust. The percentage of sentences in which each emotion appeared is shown in the table below. | | anger | disgust | anticipation | fear | joy | sadness | surprise | trust | sentiment | |------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------| | **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 | ## Performance ### Emotion Recognition | emotion | f1-score | precision | recall | |-------------|----------|-----------|----------| | anger | 0.96 | 0.99 | 0.93 | | disgust | 0.97 | 0.98 | 0.96 | | anticipation | 0.82 | 0.80 | 0.87 | | fear | 0.79 | 0.88 | 0.72 | | joy | 0.90 | 0.97 | 0.84 | | sadness | 0.90 | 0.86 | 0.94 | | surprise | 0.40 | 0.44 | 0.37 | | trust | 0.83 | 0.86 | 0.80 | *The above metrics are for the positive class (meaning the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | neutral | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | *Sentiment (polarity) analysis model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:[email protected]) <br> [Inbal yahav](mailto:[email protected]) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
{}
avichr/hebEMO_trust
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
avihay/TestModel
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# rickbot Dialo-GPT
{"tags": ["conversational"]}
avinashshrangee/DialoGPT-small-Ricky
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
avioo1/bert-base-uncased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
avioo1/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
avioo1/bert-large-uncased-whole-word-masking-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
avioo1/distilbert-base-cased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2125 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2637 | 1.0 | 5533 | 1.2125 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
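The hyperparameters listed above map directly onto fields of `transformers.TrainingArguments`. The following sketch is illustrative only and is not taken from this card: the SQuAD preprocessing is omitted, and the `tokenized_squad` variable is assumed to be an already preprocessed Hugging Face Datasets object.

```python
# Rough sketch of a Trainer setup matching the listed hyperparameters.
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",
    learning_rate=2e-5,               # learning_rate: 2e-05
    per_device_train_batch_size=16,   # train_batch_size: 16
    per_device_eval_batch_size=16,    # eval_batch_size: 16
    num_train_epochs=1,               # num_epochs: 1
    seed=42,
    lr_scheduler_type="linear",       # linear schedule; Adam is the Trainer default
)

# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=tokenized_squad["train"],
#                   eval_dataset=tokenized_squad["validation"])
# trainer.train()
```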
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"]}
avioo1/distilbert-base-uncased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
avioo1/electra-base-squad2-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
avioo1/minilm-uncased-squad2-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-squad2-finetuned-squad This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.0220 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 74 | 1.7148 | | No log | 2.0 | 148 | 1.6994 | | No log | 3.0 | 222 | 1.7922 | | No log | 4.0 | 296 | 1.9947 | | No log | 5.0 | 370 | 2.0753 | | No log | 6.0 | 444 | 2.2096 | | 0.9547 | 7.0 | 518 | 2.3070 | | 0.9547 | 8.0 | 592 | 2.6947 | | 0.9547 | 9.0 | 666 | 2.7169 | | 0.9547 | 10.0 | 740 | 2.8503 | | 0.9547 | 11.0 | 814 | 3.1990 | | 0.9547 | 12.0 | 888 | 3.4931 | | 0.9547 | 13.0 | 962 | 3.6575 | | 0.3191 | 14.0 | 1036 | 3.1863 | | 0.3191 | 15.0 | 1110 | 3.7922 | | 0.3191 | 16.0 | 1184 | 3.6336 | | 0.3191 | 17.0 | 1258 | 4.1156 | | 0.3191 | 18.0 | 1332 | 4.1353 | | 0.3191 | 19.0 | 1406 | 3.9888 | | 0.3191 | 20.0 | 1480 | 4.4290 | | 0.1904 | 21.0 | 1554 | 4.0473 | | 0.1904 | 22.0 | 1628 | 4.5048 | | 0.1904 | 23.0 | 1702 | 4.4026 | | 0.1904 | 24.0 | 1776 | 4.2864 | | 0.1904 | 25.0 | 1850 | 4.3941 | | 0.1904 | 26.0 | 1924 | 4.4921 | | 0.1904 | 27.0 | 1998 | 4.9139 | | 0.1342 | 28.0 | 2072 | 4.8914 | | 0.1342 | 29.0 | 2146 | 5.0148 | | 0.1342 | 30.0 | 2220 | 5.0220 | ### Framework versions - Transformers 4.11.0 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
{"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "roberta-base-squad2-finetuned-squad", "results": []}]}
avioo1/roberta-base-squad2-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
avioo1/roberta-large-squad2-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4981 - Matthews Correlation: 0.4218 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5248 | 1.0 | 535 | 0.4981 | 0.4218 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model_index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metric": {"name": "Matthews Correlation", "type": "matthews_correlation", "value": 0.42176824452830747}}]}]}
avneet/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00