pipeline_tag: stringclasses, 48 values
library_name: stringclasses, 205 values
text: stringlengths, 0 to 18.3M
metadata: stringlengths, 2 to 1.07B
id: stringlengths, 5 to 122
last_modified: null
tags: listlengths, 1 to 1.84k
sha: null
created_at: stringlengths, 25 to 25
question-answering
transformers
### QA model trained on the MLQA dataset for the German language. The model used for fine-tuning is GBERT Large by deepset.ai. ## MLQA DEV (german) EM: 63.82 F1: 77.20 ## XQUAD TEST (german) EM: 65.96 F1: 80.85 ## Model inferencing: ```python !pip install -q transformers from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="Sahajtomar/GBERTQnA", tokenizer="Sahajtomar/GBERTQnA" ) qa_pipeline({ 'context': "Vor einigen Jahren haben Wissenschaftler ein wichtiges Mutagen identifiziert, das in unseren eigenen Zellen liegt: APOBEC, ein Protein, das normalerweise als Schutzmittel gegen Virusinfektionen fungiert. Heute hat ein Team von Schweizer und russischen Wissenschaftlern unter der Leitung von Sergey Nikolaev, Genetiker an der Universität Genf (UNIGE) in der Schweiz, entschlüsselt, wie APOBEC eine Schwäche unseres DNA-Replikationsprozesses ausnutzt, um Mutationen in unserem Genom zu induzieren.", 'question': "Welches Mutagen schützt vor Virusinfektionen?" }) # output {'answer': 'APOBEC', 'end': 121, 'score': 0.9815779328346252, 'start': 115} ## Even complex queries can be answered pretty well qa_pipeline({ "context": 'Im Juli 1944 befand sich die Rote Armee tief auf polnischem Gebiet und verfolgte die Deutschen in Richtung Warschau. In dem Wissen, dass Stalin der Idee eines unabhängigen Polens feindlich gegenüberstand, gab die polnische Exilregierung in London der unterirdischen Heimatarmee (AK) den Befehl, vor dem Eintreffen der Roten Armee zu versuchen, die Kontrolle über Warschau von den Deutschen zu übernehmen. So begann am 1. August 1944, als sich die Rote Armee der Stadt näherte, der Warschauer Aufstand. Der bewaffnete Kampf, der 48 Stunden dauern sollte, war teilweise erfolgreich, dauerte jedoch 63 Tage. Schließlich mussten die Kämpfer der Heimatarmee und die ihnen unterstützenden Zivilisten kapitulieren. Sie wurden in Kriegsgefangenenlager in Deutschland transportiert, während die gesamte Zivilbevölkerung ausgewiesen wurde. Die Zahl der polnischen Zivilisten wird auf 150.000 bis 200.000 geschätzt.', 'question': "Wer wurde nach Deutschland transportiert?" }) #output {'answer': 'die Kämpfer der Heimatarmee und die ihnen unterstützenden Zivilisten', 'end': 693, 'score': 0.23357819020748138, 'start': 625} ``` Try it on Colab: <a href="https://github.com/Sahajtomar/Question-Answering/blob/main/Sahajtomar_GBERTQnA.ipynb" target="_parent"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"></a>
{"language": "de", "tags": ["pytorch", "tf", "bert"], "datasets": ["mlqa"], "metrics": ["f1", "em"]}
Sahajtomar/GBERTQnA
null
[ "transformers", "pytorch", "tf", "jax", "bert", "question-answering", "de", "dataset:mlqa", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
### QA model trained on the MLQA dataset for the German language. The model used for fine-tuning is GELECTRA Large by deepset.ai. ## MLQA DEV (german) EM: 64.27 \ F1: 77.39 ## XQUAD TEST (german) EM: 66.38 \ F1: 82.25 ## Hyperparameters per_gpu_train_batch_size 4 \ per_gpu_eval_batch_size 32 \ gradient_accumulation_steps 8 \ learning_rate 3e-5 \ num_train_epochs 1.0 \ max_seq_length 384 \ doc_stride 128 ## Model inferencing: ```python !pip install -q transformers from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="Sahajtomar/GELECTRAQA", tokenizer="Sahajtomar/GELECTRAQA" ) qa_pipeline({ 'context': "Vor einigen Jahren haben Wissenschaftler ein wichtiges Mutagen identifiziert, das in unseren eigenen Zellen liegt: APOBEC, ein Protein, das normalerweise als Schutzmittel gegen Virusinfektionen fungiert. Heute hat ein Team von Schweizer und russischen Wissenschaftlern unter der Leitung von Sergey Nikolaev, Genetiker an der Universität Genf (UNIGE) in der Schweiz, entschlüsselt, wie APOBEC eine Schwäche unseres DNA-Replikationsprozesses ausnutzt, um Mutationen in unserem Genom zu induzieren.", 'question': "Welches Mutagen schützt vor Virusinfektionen?" }) # output {'answer': 'APOBEC', 'end': 121, 'score': 0.987, 'start': 115} ## Even complex queries can be answered pretty well qa_pipeline({ "context": "Es wird erwartet, dass sich schwarze Löcher mit Sternmasse bilden, wenn sehr massive Sterne am Ende ihres Lebenszyklus zusammenbrechen. Nachdem sich ein Schwarzes Loch gebildet hat, kann es weiter wachsen,indem es Masse aus seiner Umgebung absorbiert. Durch Absorption anderer Sterne und Verschmelzung mit anderen Schwarzen Löchern können sich supermassereiche Schwarze Löcher mit Millionen von Sonnenmassen (M☉) bilden. Es besteht Konsens darüber, dass in den Zentren der meisten Galaxien supermassereiche Schwarze Löcher existieren.", 'question': "Wie Sonnenmassen entstehen?" }) #output {'answer': 'Durch Absorption anderer Sterne und Verschmelzung mit anderen Schwarzen Löchern', 'end': 332, 'score': 0.23970196, 'start': 253} ```
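The hyperparameters listed above follow the flag names of the legacy `run_squad.py`-style fine-tuning scripts. As a rough, hedged sketch (not the author's actual training script), they map onto today's `TrainingArguments` roughly as follows; `max_seq_length` and `doc_stride` are preprocessing options of the QA example scripts rather than `TrainingArguments` fields:

```python
from transformers import TrainingArguments

# Hedged mapping of the listed flags onto modern argument names
# (the legacy per_gpu_* flags became per_device_* in newer transformers).
training_args = TrainingArguments(
    output_dir="gelectra-large-mlqa-de",  # hypothetical output directory
    per_device_train_batch_size=4,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=8,
    learning_rate=3e-5,
    num_train_epochs=1.0,
)
```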
{"language": "de", "tags": ["pytorch", "tf", "Gelectra"], "datasets": ["mlqa"], "metrics": ["f1", "em"]}
Sahajtomar/German-question-answer-Electra
null
[ "transformers", "pytorch", "tf", "electra", "question-answering", "Gelectra", "de", "dataset:mlqa", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
sentence-similarity
sentence-transformers
# German STS ## STS dev (german) 87.9% ## STS test (german) 84.3% #### STS pipeline ```python !pip install -U sentence-transformers from sentence_transformers import SentenceTransformer, util model = SentenceTransformer('..model_path..') sentences1 = ['Die Katze sitzt draußen', "Ein Mann spielt Gitarre", 'Der neue Film ist großartig'] sentences2 = ['Der Hund spielt im Garten', "Eine Frau sieht fern", 'Der neue Film ist so toll'] embeddings1 = model.encode(sentences1, convert_to_tensor=True) embeddings2 = model.encode(sentences2, convert_to_tensor=True) cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2) for i in range(len(sentences1)): for j in range(len(sentences2)): print(cosine_scores[i][j]) """ Die Katze sitzt draußen Der Hund spielt im Garten Score: 0.1259 Die Katze sitzt draußen Eine Frau sieht fern Score: 0.0567 Die Katze sitzt draußen Der neue Film ist so toll Score: 0.0557 Ein Mann spielt Gitarre Der Hund spielt im Garten Score: 0.1031 Ein Mann spielt Gitarre Eine Frau sieht fern Score: 0.0098 Ein Mann spielt Gitarre Der neue Film ist so toll Score: 0.0828 Der neue Film ist großartig Der Hund spielt im Garten Score: 0.1008 Der neue Film ist großartig Eine Frau sieht fern Score: 0.0674 """ ```
{"language": "de", "tags": ["semantic", "sentence-transformers", "sentence-similarity"], "datasets": ["sts"]}
Sahajtomar/German-semantic
null
[ "sentence-transformers", "bert", "semantic", "sentence-similarity", "de", "dataset:sts", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
zero-shot-classification
transformers
# German Zeroshot ## Model Description This model uses [GBERT Large](https://huggingface.co/deepset/gbert-large) as its base model and was fine-tuned on the German XNLI dataset. The default hypothesis template is in English: `This text is {}`. When using this model, change it to a German template such as "In deisem geht es um {}.". Inference through the Hugging Face API may give poor results because it uses the English template by default. Since the model is monolingual rather than multilingual, the hypothesis template needs to be adapted accordingly. ## XNLI DEV (german) Accuracy: 85.5 ## XNLI TEST (german) Accuracy: 83.6 #### Zero-shot classification pipeline ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="Sahajtomar/German_Zeroshot") sequence = "Letzte Woche gab es einen Selbstmord in einer nahe gelegenen kolonie" candidate_labels = ["Verbrechen","Tragödie","Stehlen"] hypothesis_template = "In deisem geht es um {}." # Since the model is monolingual, it is sensitive to the hypothesis template; feel free to experiment with it classifier(sequence, candidate_labels, hypothesis_template=hypothesis_template) """{'labels': ['Tragödie', 'Verbrechen', 'Stehlen'], 'scores': [0.8328856854438782, 0.10494536352157593, 0.06316883927583696], 'sequence': 'Letzte Woche gab es einen Selbstmord in einer nahe gelegenen Kolonie'}""" ```
{"language": "multilingual", "tags": ["text-classification", "pytorch", "nli", "xnli", "de"], "datasets": ["xnli"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "Letzte Woche gab es einen Selbstmord in einer nahe gelegenen kolonie", "candidate_labels": "Verbrechen,Trag\u00f6die,Stehlen", "hypothesis_template": "In deisem geht es um {}."}]}
Sahajtomar/German_Zeroshot
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "nli", "xnli", "de", "zero-shot-classification", "multilingual", "dataset:xnli", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
### NER model based on BERT. The model used for fine-tuning is GBERT Large by deepset.ai. ## Test Accuracy: 98 \ F1: 84.1 \ Precision: 82.7 \ Recall: 85.5 ## Model inferencing: ```python !pip install -q transformers from transformers import pipeline ner = pipeline( "ner", model="Sahajtomar/NER_legal_de", tokenizer="Sahajtomar/NER_legal_de") ner("Für eine Zuständigkeit des Verwaltungsgerichts Berlin nach § 52 Nr. 1 bis 4 VwGO hat der Antragsteller keine Anhaltspunkte vorgetragen .") ```
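The pipeline above returns one prediction per sub-word token. If whole entity spans are more convenient, a hedged sketch using the pipeline's `aggregation_strategy` option (available in recent `transformers` releases) is shown below:

```python
from transformers import pipeline

# Hedged sketch: aggregation_strategy="simple" merges sub-word tokens into
# complete entity spans (requires a reasonably recent transformers version).
ner_grouped = pipeline(
    "ner",
    model="Sahajtomar/NER_legal_de",
    tokenizer="Sahajtomar/NER_legal_de",
    aggregation_strategy="simple",
)
sentence = ("Für eine Zuständigkeit des Verwaltungsgerichts Berlin nach § 52 Nr. 1 bis 4 VwGO "
            "hat der Antragsteller keine Anhaltspunkte vorgetragen.")
for entity in ner_grouped(sentence):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```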
{"language": "de", "tags": ["pytorch", "tf", "bert", "NER"], "datasets": ["legal entity recognition"]}
Sahajtomar/NER_legal_de
null
[ "transformers", "pytorch", "tf", "jax", "bert", "token-classification", "NER", "de", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
sentence-similarity
sentence-transformers
# French STS ## STS dev (french) 87.4% ## STS test (french) 85.8% #### STS pipeline ```python !pip install -U sentence-transformers from sentence_transformers import SentenceTransformer, util model = SentenceTransformer('..model_path..') sentences1 = ["J'aime mon téléphone", "Mon téléphone n'est pas bon.", "Votre téléphone portable est superbe."] sentences2 = ["Est-ce qu'il neige demain?", "Récemment, de nombreux ouragans ont frappé les États-Unis", "Le réchauffement climatique est réel",] embeddings1 = model.encode(sentences1, convert_to_tensor=True) embeddings2 = model.encode(sentences2, convert_to_tensor=True) cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2) for i in range(len(sentences1)): for j in range(len(sentences2)): print(cosine_scores[i][j]) ```
{"language": "fr", "tags": ["semantic", "sentence-transformers", "sentence-similarity", "fr"], "datasets": ["sts"]}
Sahajtomar/french_semantic
null
[ "sentence-transformers", "semantic", "sentence-similarity", "fr", "dataset:sts", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sahgrada/Sah
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
Saifullah/sa_thread_summarization
null
[ "transformers", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Saiki/Real-ESRGAN-ANIME
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SaintLau/DialoGPT-medium-josh
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Saisreenath/NewsClassification
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Saitomar/wav2vec2-large-xls-r-300m-bengali-kaggle
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hindi-kaggle This model was trained from scratch on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
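Since the card above leaves the usage sections empty, here is a minimal, hedged inference sketch with the ASR pipeline; the audio path is a placeholder, and the clip should be 16 kHz mono, as is standard for wav2vec2-style checkpoints:

```python
from transformers import pipeline

# Hedged sketch: transcribe a Hindi audio clip with this checkpoint.
# "sample_hindi_clip.wav" is a placeholder path (16 kHz mono audio expected).
asr = pipeline(
    "automatic-speech-recognition",
    model="Saitomar/wav2vec2-large-xls-r-300m-hindi-kaggle",
)
print(asr("sample_hindi_clip.wav")["text"])
```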
{"language": ["hi"], "tags": ["generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hindi-kaggle", "results": []}]}
Saitomar/wav2vec2-large-xls-r-300m-hindi-kaggle
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "hi", "dataset:common_voice", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
### How to use #### Requirements Transformers require `transformers` and `sentencepiece`, both of which can be installed using `pip`. ```sh pip install transformers sentencepiece ``` #### Pipelines 🚀 In case you are not familiar with Transformers, you can use pipelines instead. Note that, pipelines can't have _no answer_ for the questions. ```python from transformers import pipeline model_name = "SajjadAyoubi/bert-base-fa-qa" qa_pipeline = pipeline("question-answering", model=model_name, tokenizer=model_name) text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم" questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"] for question in questions: print(qa_pipeline({"context": text, "question": question})) >>> {'score': 0.4839823544025421, 'start': 8, 'end': 18, 'answer': 'سجاد ایوبی'} >>> {'score': 0.3747948706150055, 'start': 24, 'end': 32, 'answer': '۲۰ سالمه'} >>> {'score': 0.5945395827293396, 'start': 38, 'end': 55, 'answer': 'پردازش زبان طبیعی'} ``` #### Manual approach 🔥 Using the Manual approach, it is possible to have _no answer_ with even better performance. - PyTorch ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering from src.utils import AnswerPredictor model_name = "SajjadAyoubi/bert-base-fa-qa" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForQuestionAnswering.from_pretrained(model_name) text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم" questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"] # this class is from src/utils.py and you can read more about it predictor = AnswerPredictor(model, tokenizer, device="cpu", n_best=10) preds = predictor(questions, [text] * 3, batch_size=3) for k, v in preds.items(): print(v) ``` Produces an output such below: ``` 100%|██████████| 1/1 [00:00<00:00, 3.56it/s] {'score': 8.040637016296387, 'text': 'سجاد ایوبی'} {'score': 9.901972770690918, 'text': '۲۰'} {'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'} ``` - TensorFlow 2.X ```python from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering from src.utils import TFAnswerPredictor model_name = "SajjadAyoubi/bert-base-fa-qa" tokenizer = AutoTokenizer.from_pretrained(model_name) model = TFAutoModelForQuestionAnswering.from_pretrained(model_name) text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم" questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"] # this class is from src/utils.py, you can read more about it predictor = TFAnswerPredictor(model, tokenizer, n_best=10) preds = predictor(questions, [text] * 3, batch_size=3) for k, v in preds.items(): print(v) ``` Produces an output such below: ```text 100%|██████████| 1/1 [00:00<00:00, 3.56it/s] {'score': 8.040637016296387, 'text': 'سجاد ایوبی'} {'score': 9.901972770690918, 'text': '۲۰'} {'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'} ``` Or you can access the whole demonstration using [HowToUse iPython Notebook on Google Colab](https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/HowToUse.ipynb)
{}
SajjadAyoubi/bert-base-fa-qa
null
[ "transformers", "pytorch", "tf", "jax", "bert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
# CLIPfa: Connecting Farsi Text and Images OpenAI released [`the paper Learning Transferable Visual Models From Natural Language Supervision`](https://arxiv.org/abs/2103.00020) in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images, by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a vision encoder and a text encoder. These were trained on 400 Million images and corresponding captions. We have trained a Farsi (Persian) version of OpenAI's CLIP on a dataset of 400,000 (image, text) pairs. We used [`Farahani's RoBERTa-fa`](https://huggingface.co/m3hrdadfi/roberta-zwnj-wnli-mean-tokens) as the text encoder and [‍‍`ViT‍`](https://huggingface.co/openai/clip-vit-base-patch32) as the vision encoder from Original CLIP and finetuned them. - It should be noted that only 400K pairs were used for this training, whereas 4 million pairs were used for the Original CLIP. Also, the training took 30 days across 592 GPUs powered by the V100 chip. ## How to use? Both models generate vectors with 768 dimensions. ```python from transformers import CLIPVisionModel, RobertaModel, AutoTokenizer, CLIPFeatureExtractor # download pre-trained models vision_encoder = CLIPVisionModel.from_pretrained('SajjadAyoubi/clip-fa-vision') preprocessor = CLIPFeatureExtractor.from_pretrained('SajjadAyoubi/clip-fa-vision') text_encoder = RobertaModel.from_pretrained('SajjadAyoubi/clip-fa-text') tokenizer = AutoTokenizer.from_pretrained('SajjadAyoubi/clip-fa-text') # define input image and input text text = 'something' image = PIL.Image.open('my_favorite_image.jpg') # compute embeddings text_embedding = text_encoder(**tokenizer(text, return_tensors='pt')).pooler_output image_embedding = vision_encoder(**preprocessor(image, return_tensors='pt')).pooler_output text_embedding.shape == image_embedding.shape ``` ## Demo: The followings are just some use cases of CLIPfa on 25K [`Unsplash images`](https://github.com/unsplash/datasets) - use `pip install -q git+https://github.com/sajjjadayobi/clipfa.git` ```python from clipfa import CLIPDemo demo = CLIPDemo(vision_encoder, text_encoder, tokenizer) demo.compute_text_embeddings(['گاو' ,'اسب' ,'ماهی']) demo.compute_image_embeddings(test_df.image_path.to_list()) ``` ## Online Demo: [CLIPfa at Huggingface🤗 spaces](https://huggingface.co/spaces/SajjadAyoubi/CLIPfa-Demo) We used a small set of images (25K) to keep this app almost real-time, but it's obvious that the quality of image search depends heavily on the size of the image database. > Made with ❤️ in my basement🤫
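Because both encoders map into the same 768-dimensional space, image search reduces to a cosine-similarity ranking of the embeddings. A minimal, hedged sketch (the image paths and the query are placeholders) follows:

```python
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPVisionModel, RobertaModel, AutoTokenizer, CLIPFeatureExtractor

# Hedged sketch: rank a few local images (placeholder paths) against a Farsi
# query by cosine similarity of CLIPfa text and vision embeddings.
vision_encoder = CLIPVisionModel.from_pretrained('SajjadAyoubi/clip-fa-vision')
preprocessor = CLIPFeatureExtractor.from_pretrained('SajjadAyoubi/clip-fa-vision')
text_encoder = RobertaModel.from_pretrained('SajjadAyoubi/clip-fa-text')
tokenizer = AutoTokenizer.from_pretrained('SajjadAyoubi/clip-fa-text')

query = 'اسب'                                       # "horse"
image_paths = ['img1.jpg', 'img2.jpg', 'img3.jpg']  # placeholder files

with torch.no_grad():
    text_emb = text_encoder(**tokenizer(query, return_tensors='pt')).pooler_output
    image_embs = torch.cat([
        vision_encoder(**preprocessor(Image.open(p), return_tensors='pt')).pooler_output
        for p in image_paths
    ])

scores = F.cosine_similarity(text_emb, image_embs)  # one score per image
best = scores.argmax().item()
print(image_paths[best], scores[best].item())
```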
{}
SajjadAyoubi/clip-fa-text
null
[ "transformers", "pytorch", "roberta", "feature-extraction", "arxiv:2103.00020", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
# CLIPfa: Connecting Farsi Text and Images OpenAI released [`the paper Learning Transferable Visual Models From Natural Language Supervision`](https://arxiv.org/abs/2103.00020) in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images, by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a vision encoder and a text encoder. These were trained on 400 Million images and corresponding captions. We have trained a Farsi (Persian) version of OpenAI's CLIP on a dataset of 400,000 (image, text) pairs. We used [`Farahani's RoBERTa-fa`](https://huggingface.co/m3hrdadfi/roberta-zwnj-wnli-mean-tokens) as the text encoder and [‍‍`ViT‍`](https://huggingface.co/openai/clip-vit-base-patch32) as the vision encoder from Original CLIP and finetuned them. - It should be noted that only 400K pairs were used for this training, whereas 4 million pairs were used for the Original CLIP. Also, the training took 30 days across 592 GPUs powered by the V100 chip. ## How to use? Both models generate vectors with 768 dimensions. ```python from transformers import CLIPVisionModel, RobertaModel, AutoTokenizer, CLIPFeatureExtractor # download pre-trained models vision_encoder = CLIPVisionModel.from_pretrained('SajjadAyoubi/clip-fa-vision') preprocessor = CLIPFeatureExtractor.from_pretrained('SajjadAyoubi/clip-fa-vision') text_encoder = RobertaModel.from_pretrained('SajjadAyoubi/clip-fa-text') tokenizer = AutoTokenizer.from_pretrained('SajjadAyoubi/clip-fa-text') # define input image and input text text = 'something' image = PIL.Image.open('my_favorite_image.jpg') # compute embeddings text_embedding = text_encoder(**tokenizer(text, return_tensors='pt')).pooler_output image_embedding = vision_encoder(**preprocessor(image, return_tensors='pt')).pooler_output text_embedding.shape == image_embedding.shape ``` ## Demo: The followings are just some use cases of CLIPfa on 25K [`Unsplash images`](https://github.com/unsplash/datasets) - use `pip install -q git+https://github.com/sajjjadayobi/clipfa.git` ```python from clipfa import CLIPDemo demo = CLIPDemo(vision_encoder, text_encoder, tokenizer) demo.compute_text_embeddings(['گاو' ,'اسب' ,'ماهی']) demo.compute_image_embeddings(test_df.image_path.to_list()) ``` ## Online Demo: [CLIPfa at Huggingface🤗 spaces](https://huggingface.co/spaces/SajjadAyoubi/CLIPfa-Demo) We used a small set of images (25K) to keep this app almost real-time, but it's obvious that the quality of image search depends heavily on the size of the image database. > Made with ❤️ in my basement🤫
{}
SajjadAyoubi/clip-fa-vision
null
[ "transformers", "pytorch", "clip_vision_model", "feature-extraction", "arxiv:2103.00020", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
<span align="center"> <a href="https://huggingface.co/SajjadAyoubi/"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=SajjadAyoubi&color=yellow"></a> <a href="https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/Demo.ipynb"><img src="https://img.shields.io/static/v1?label=Colab&message=Fine-tuning Example&logo=Google%20Colab&color=f9ab00"></a> </span> # ParsBigBird: Persian Bert For **Long-Range** Sequences The [Bert](https://arxiv.org/abs/1810.04805) and [ParsBert](https://arxiv.org/abs/2005.12515) algorithms can handle texts with token lengths of up to 512, however, many tasks such as summarizing and answering questions require longer texts. In our work, we have trained the [BigBird](https://arxiv.org/abs/2007.14062) model for the Persian language to process texts up to 4096 in the Farsi (Persian) language using sparse attention. ## Evaluation: 🌡️ We have evaluated the model on three tasks with different sequence lengths | Name | Params | SnappFood (F1) | Digikala Magazine(F1) | PersianQA (F1) | | :--------------------------------------------------------------: | :----: | :-----------------: | :---------------: | :--------------: | | [distil-bigbird-fa-zwnj](https://github.com/sajjjadayobi/ParsBigBird) | 78M | 85.43% | **94.05%** | **73.34%** | | [bert-base-fa](https://github.com/hooshvare/parsbert) | 118M | **87.98%** | 93.65% | 70.06% | - Despite being as big as distill-bert, the model performs equally well as ParsBert and is much better on PersianQA which requires much more context - This evaluation was based on `max_lentgh=2048` (It can be changed up to 4096) ## How to use❓ ### As Contextualized Word Embedding ```python from transformers import BigBirdModel, AutoTokenizer MODEL_NAME = "SajjadAyoubi/distil-bigbird-fa-zwnj" # by default its in `block_sparse` block_size=32 model = BigBirdModel.from_pretrained(MODEL_NAME, block_size=32) # you can use full attention like the following: use this when input isn't longer than 512 model = BigBirdModel.from_pretrained(MODEL_NAME, attention_type="original_full") text = "😃 امیدوارم مدل بدردبخوری باشه چون خیلی طول کشید تا ترین بشه" tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) tokens = tokenizer(text, return_tensors='pt') output = model(**tokens) # contextualized embedding ``` ### As Fill Blank ```python from transformers import pipeline MODEL_NAME = 'SajjadAyoubi/distil-bigbird-fa-zwnj' fill = pipeline('fill-mask', model=MODEL_NAME, tokenizer=MODEL_NAME) results = fill('تهران پایتخت [MASK] است.') print(results[0]['token_str']) >>> 'ایران' ``` ## Pretraining details: 🔭 This model was pretrained using a masked language model (MLM) objective on the Persian section of the Oscar dataset. Following the original BERT training, 15% of tokens were masked. This was first described in this [paper](https://arxiv.org/abs/2007.14062) and released in this [repository](https://github.com/google-research/bigbird). Documents longer than 4096 were split into multiple documents, while documents much smaller than 4096 were merged using the [SEP] token. Model is warm started from `distilbert-fa`’s [checkpoint](https://huggingface.co/HooshvareLab/distilbert-fa-zwnj-base). - For more details, you can take a look at config.json at the model card in 🤗 Model Hub ## Fine Tuning Recommendations: 🐤 Due to the model's memory requirements, `gradient_checkpointing` and `gradient_accumulation` should be used to maintain a reasonable batch size. 
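As a rough, hedged illustration of that memory advice (all values below are illustrative placeholders, not the author's settings):

```python
from transformers import BigBirdForSequenceClassification, TrainingArguments

# Hedged sketch: gradient checkpointing plus gradient accumulation keeps the
# memory footprint manageable for 4096-token inputs; values are placeholders.
model = BigBirdForSequenceClassification.from_pretrained(
    "SajjadAyoubi/distil-bigbird-fa-zwnj", num_labels=2
)
model.gradient_checkpointing_enable()

training_args = TrainingArguments(
    output_dir="bigbird-fa-finetuned",   # hypothetical output directory
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,      # effective batch size of 32
    learning_rate=2e-5,
    num_train_epochs=3,
    fp16=True,
)
```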
Considering this model isn't really big, it's a good idea to first fine-tune it on your dataset with the masked LM objective (also called intermediate fine-tuning) before training on the main task. In block_sparse mode, it doesn't matter how many tokens are input; the model just attends to 256 tokens. Furthermore, for sequence lengths up to 512, original_full attention should be used instead of block sparse. ### Fine Tuning Examples 👷‍♂️👷‍♀️ | Dataset | Fine Tuning Example | | ------------------------------------- | ------------------------------------------------------------ | | Digikala Magazine Text Classification | <a href="https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/Demo.ipynb"><img src="https://img.shields.io/static/v1?label=Colab&message=Fine-tuning Example&logo=Google%20Colab&color=f9ab00"></a> | ## Contact us: 🤝 If you have a technical question regarding the model, pretraining, code or publication, please create an issue in the repository. This is the fastest way to reach us. ## Citation: ↩️ We have not published a paper on this work; however, if you use it, please cite us with an entry like the one below. ```bibtex @misc{ParsBigBird, author = {Ayoubi, Sajjad}, title = {ParsBigBird: Persian Bert For Long-Range Sequences}, year = 2021, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/SajjjadAyobi/ParsBigBird}}, } ```
{}
SajjadAyoubi/distil-bigbird-fa-zwnj
null
[ "transformers", "pytorch", "big_bird", "fill-mask", "arxiv:1810.04805", "arxiv:2005.12515", "arxiv:2007.14062", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
### How to use #### Requirements Transformers require `transformers` and `sentencepiece`, both of which can be installed using `pip`. ```sh pip install transformers sentencepiece ``` #### Pipelines 🚀 In case you are not familiar with Transformers, you can use pipelines instead. Note that, pipelines can't have _no answer_ for the questions. ```python from transformers import pipeline model_name = "SajjadAyoubi/lm-roberta-large-fa-qa" qa_pipeline = pipeline("question-answering", model=model_name, tokenizer=model_name) text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم" questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"] for question in questions: print(qa_pipeline({"context": text, "question": question})) >>> {'score': 0.4839823544025421, 'start': 8, 'end': 18, 'answer': 'سجاد ایوبی'} >>> {'score': 0.3747948706150055, 'start': 24, 'end': 32, 'answer': '۲۰ سالمه'} >>> {'score': 0.5945395827293396, 'start': 38, 'end': 55, 'answer': 'پردازش زبان طبیعی'} ``` #### Manual approach 🔥 Using the Manual approach, it is possible to have _no answer_ with even better performance. - PyTorch ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering from src.utils import AnswerPredictor model_name = "SajjadAyoubi/lm-roberta-large-fa-qa" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForQuestionAnswering.from_pretrained(model_name) text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم" questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"] # this class is from src/utils.py and you can read more about it predictor = AnswerPredictor(model, tokenizer, device="cpu", n_best=10) preds = predictor(questions, [text] * 3, batch_size=3) for k, v in preds.items(): print(v) ``` Produces an output such below: ``` 100%|██████████| 1/1 [00:00<00:00, 3.56it/s] {'score': 8.040637016296387, 'text': 'سجاد ایوبی'} {'score': 9.901972770690918, 'text': '۲۰'} {'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'} ``` - TensorFlow 2.X ```python from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering from src.utils import TFAnswerPredictor model_name = "SajjadAyoubi/lm-roberta-large-fa-qa" tokenizer = AutoTokenizer.from_pretrained(model_name) model = TFAutoModelForQuestionAnswering.from_pretrained(model_name) text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم" questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"] # this class is from src/utils.py, you can read more about it predictor = TFAnswerPredictor(model, tokenizer, n_best=10) preds = predictor(questions, [text] * 3, batch_size=3) for k, v in preds.items(): print(v) ``` Produces an output such below: ```text 100%|██████████| 1/1 [00:00<00:00, 3.56it/s] {'score': 8.040637016296387, 'text': 'سجاد ایوبی'} {'score': 9.901972770690918, 'text': '۲۰'} {'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'} ``` Or you can access the whole demonstration using [HowToUse iPython Notebook on Google Colab](https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/HowToUse.ipynb)
{}
SajjadAyoubi/xlm-roberta-large-fa-qa
null
[ "transformers", "pytorch", "tf", "xlm-roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
* IMDB_URDUSENTIMENT_MODEL: I used the IMDB Urdu dataset to create a custom model with DistilBertForSequenceClassification.
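Since the card does not include usage code, here is a minimal, hedged inference sketch using the text-classification pipeline; the example sentence is an Urdu phrase meaning "I like you.":

```python
from transformers import pipeline

# Hedged sketch: load the published checkpoint and classify an Urdu sentence.
classifier = pipeline(
    "text-classification",
    model="Sakil/IMDB_URDUSENTIMENT_MODEL",
)
print(classifier("میں تمہیں پسند کرتا ہوں."))
```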
{"language": ["en"], "license": "apache-2.0", "tags": ["text Classification"], "widget": [{"text": "\u0645\u06cc\u06ba \u062a\u0645\u06c1\u06cc\u06ba \u067e\u0633\u0646\u062f \u06a9\u0631\u062a\u0627 \u06c1\u0648\u06ba. </s></s> \u0645\u06cc\u06ba \u062a\u0645 \u0633\u06d2 \u067e\u06cc\u0627\u0631 \u06a9\u0631\u062a\u0627 \u06c1\u0648\u06ba."}]}
Sakil/IMDB_URDUSENTIMENT_MODEL
null
[ "transformers", "pytorch", "safetensors", "distilbert", "text-classification", "text Classification", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# Dataset Collection: * The hate speech dataset was collected from different open sources such as Kaggle and social media platforms like Twitter. * The dataset has two classes: hate speech and non hate speech. * The class distribution is balanced. * Different strategies were followed during the data gathering phase. * The dataset was collected from relevant sources. # distilbert-base-uncased fine-tuned for Hate Speech Detection * The model is fine-tuned on this dataset. * It can be used to create labels for academic or industrial purposes. * It can also be used directly for inference. # Data Fields: **label**: 0 - hate speech, 1 - not hate speech # Application: * This model is useful for detecting hate speech in tweets. * In the many situations where tweet data exists but has no labels, this model can be used to create labels. * You can fine-tune this model for your particular use case. # Model Implementation ```python !pip install transformers[sentencepiece] from transformers import pipeline model_name = "Sakil/distilbert_lazylearner_hatespeech_detection" classifier = pipeline("text-classification", model=model_name) classifier("!!! RT @mayasolovely: As a woman you shouldn't complain about cleaning up your house. &amp; as a man you should always take the trash out...") ``` # Github: [Sakil Ansari](https://github.com/Sakil786/hate_speech_detection_pretrained_model)
{"language": "en", "license": "apache-2.0", "tags": ["hate", "speech"], "widget": [{"text": "RT @ShenikaRoberts: The shit you hear about me might be true or it might be faker than the bitch who told it to ya &#5736"}]}
Sakil/distilbert_lazylearner_hatespeech_detection
null
[ "transformers", "pytorch", "distilbert", "text-classification", "hate", "speech", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
* IMDBSentimentDistilBertModel: I used the IMDB movie review dataset to create a custom model with DistilBertForSequenceClassification. ```python from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments model = DistilBertForSequenceClassification.from_pretrained('./imdbsentdistilbertmodel') ```
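To pull the published checkpoint from the Hub instead of a local directory, a hedged sketch (the review text is a placeholder):

```python
from transformers import pipeline

# Hedged sketch: load the checkpoint by its Hub id and classify a movie review.
sentiment = pipeline("text-classification", model="Sakil/imdbsentdistilbertmodel")
print(sentiment("I really enjoyed this film; the performances were outstanding."))
```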
{"language": ["en"], "license": "apache-2.0", "tags": ["text Classification"], "widget": [{"text": "I like you. </s></s> I love you."}]}
Sakil/imdbsentdistilbertmodel
null
[ "transformers", "pytorch", "distilbert", "text-classification", "text Classification", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
test
{}
Sakil/testmodel
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# distilbert-base-nepali This model is pre-trained on [nepalitext](https://huggingface.co/datasets/Sakonii/nepalitext-language-model-dataset) dataset consisting of over 13 million Nepali text sequences using a masked language modeling (MLM) objective. Our approach trains a Sentence Piece Model (SPM) for text tokenization similar to [XLM-ROBERTa](https://arxiv.org/abs/1911.02116) and trains [distilbert model](https://arxiv.org/abs/1910.01108) for language modeling. Find more details in [this paper](https://aclanthology.org/2022.sigul-1.14/). It achieves the following results on the evaluation set: mlm probability|evaluation loss|evaluation perplexity --:|----:|-----:| 15%|2.349|10.479| 20%|2.605|13.351| ## Model description Refer to original [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) ## Intended uses & limitations This backbone model intends to be fine-tuned on Nepali language focused downstream task such as sequence classification, token classification or question answering. The language model being trained on a data with texts grouped to a block size of 512, it handles text sequence up to 512 tokens and may not perform satisfactorily on shorter sequences. ## Usage This model can be used directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Sakonii/distilbert-base-nepali') >>> unmasker("मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, <mask>, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।") [{'score': 0.04128897562623024, 'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, मौसम, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।', 'token': 2605, 'token_str': 'मौसम'}, {'score': 0.04100276157259941, 'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, प्रकृति, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।', 'token': 2792, 'token_str': 'प्रकृति'}, {'score': 0.026525357738137245, 'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, पानी, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।', 'token': 387, 'token_str': 'पानी'}, {'score': 0.02340106852352619, 'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, जल, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।', 'token': 1313, 'token_str': 'जल'}, {'score': 0.02055591531097889, 'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, वातावरण, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।', 'token': 790, 'token_str': 'वातावरण'}] ``` Here is how we can use the model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained('Sakonii/distilbert-base-nepali') model = AutoModelForMaskedLM.from_pretrained('Sakonii/distilbert-base-nepali') # prepare input text = "चाहिएको text यता राख्नु होला।" encoded_input = tokenizer(text, return_tensors='pt') # forward pass output = model(**encoded_input) ``` ## Training data This model is trained on [nepalitext](https://huggingface.co/datasets/Sakonii/nepalitext-language-model-dataset) language modeling dataset which combines the datasets: [OSCAR](https://huggingface.co/datasets/oscar) , 
[cc100](https://huggingface.co/datasets/cc100) and a set of scraped Nepali articles on Wikipedia. As for training the language model, the texts in the training set are grouped to a block of 512 tokens. ## Tokenization A Sentence Piece Model (SPM) is trained on a subset of [nepalitext](https://huggingface.co/datasets/Sakonii/nepalitext-language-model-dataset) dataset for text tokenization. The tokenizer trained with vocab-size=24576, min-frequency=4, limit-alphabet=1000 and model-max-length=512. ## Training procedure The model is trained with the same configuration as the original [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased); 512 tokens per instance, 28 instances per batch, and around 35.7K training steps. ### Training hyperparameters The following hyperparameters were used for training of the final epoch: [ Refer to the *Training results* table below for varying hyperparameters every epoch ] - learning_rate: 5e-05 - train_batch_size: 28 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results The model is trained for 4 epochs with varying hyperparameters: | Training Loss | Epoch | MLM Probability | Train Batch Size | Step | Validation Loss | Perplexity | |:-------------:|:-----:|:---------------:|:----------------:|:-----:|:---------------:|:----------:| | 3.4477 | 1.0 | 15 | 26 | 38864 | 3.3067 | 27.2949 | | 2.9451 | 2.0 | 15 | 28 | 35715 | 2.8238 | 16.8407 | | 2.866 | 3.0 | 20 | 28 | 35715 | 2.7431 | 15.5351 | | 2.7287 | 4.0 | 20 | 28 | 35715 | 2.6053 | 13.5353 | | 2.6412 | 5.0 | 20 | 28 | 35715 | 2.5161 | 12.3802 | Final model evaluated with MLM Probability of 15%: | Training Loss | Epoch | MLM Probability | Train Batch Size | Step | Validation Loss | Perplexity | |:-------------:|:-----:|:---------------:|:----------------:|:-----:|:---------------:|:----------:| | - | - | 15 | - | - | 2.3494 | 10.4791 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.3 - Tokenizers 0.10.3
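As the intended-use section suggests, this backbone is meant to be fine-tuned on downstream tasks; a hedged sketch of attaching a sequence-classification head (the label count is a placeholder) is shown below:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hedged sketch: attach a freshly initialised classification head to the
# pretrained backbone; num_labels=3 is a placeholder for your own task.
tokenizer = AutoTokenizer.from_pretrained("Sakonii/distilbert-base-nepali")
model = AutoModelForSequenceClassification.from_pretrained(
    "Sakonii/distilbert-base-nepali", num_labels=3
)

inputs = tokenizer(
    "चाहिएको text यता राख्नु होला।",
    return_tensors="pt", truncation=True, max_length=512,
)
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 3])
```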
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": "Sakonii/nepalitext-language-model-dataset", "mask_token": "<mask>", "widget": [{"text": "\u092e\u093e\u0928\u0935\u093f\u092f \u0917\u0924\u093f\u0935\u093f\u0927\u093f\u0932\u0947 \u092a\u094d\u0930\u093e\u0924\u0943\u0924\u093f\u0915 \u092a\u0930\u094d\u092f\u093e\u0935\u0930\u0928 \u092a\u094d\u0930\u0928\u093e\u0932\u0940\u0932\u093e\u0908 \u0905\u092a\u0930\u093f\u092e\u0947\u092f \u0915\u094d\u0937\u0924\u093f \u092a\u0941\u094d\u0930\u094d\u092f\u093e\u090f\u0915\u094b \u091b\u0964 \u092a\u0930\u093f\u0935\u0930\u094d\u0924\u0928\u0936\u093f\u0932 \u091c\u0932\u0935\u093e\u092f\u0941\u0932\u0947 \u0916\u093e\u0927, \u0938\u0941\u0930\u0915\u094d\u0937\u093e, <mask>, \u091c\u092e\u093f\u0928, \u092e\u094c\u0938\u092e\u0932\u0917\u093e\u092f\u0924\u0932\u093e\u0908 \u0905\u0938\u0902\u0916\u094d\u092f \u0924\u0930\u093f\u0915\u093e\u0932\u0947 \u092a\u094d\u0930\u092d\u093e\u0935\u093f\u0924 \u091b\u0964", "example_title": "Example 1"}, {"text": "\u0905\u091a\u0947\u0932 \u0935\u093f\u0926\u094d\u092f\u093e\u0932\u092f \u0930 \u0915\u0932\u0947\u091c\u0939\u0930\u0942\u0932\u0947 \u0938\u094d\u092e\u093e\u0930\u093f\u0915\u093e \u0915\u0924\u094d\u0924\u093f\u0915\u094b \u092a\u094d\u0930\u0915\u093e\u0936\u0928 \u0917\u0930\u094d\u091b\u0928\u094d, \u092f\u0915\u093f\u0928 \u091b\u0948\u0928\u202f\u0964 \u0915\u0947\u0939\u0940 \u0935\u0930\u094d\u0937\u092a\u0939\u093f\u0932\u0947\u0938\u092e\u094d\u092e \u0917\u093e\u0909\u0901\u0938\u0939\u0930\u0915\u093e \u0938\u093e\u0928\u093e\u0920\u0942\u0932\u093e <mask> \u0938\u0902\u0938\u094d\u0925\u093e\u0939\u0930\u0942\u092e\u093e \u092a\u0941\u0917\u094d\u0926\u093e \u0936\u093f\u0915\u094d\u0937\u0915 \u0935\u093e \u0915\u0930\u094d\u092e\u091a\u093e\u0930\u0940\u0932\u0947 \u0938\u0902\u0938\u094d\u0925\u093e\u092c\u093e\u091f \u092a\u094d\u0930\u0915\u093e\u0936\u093f\u0924 \u092a\u0924\u094d\u0930\u093f\u0915\u093e, \u0938\u094d\u092e\u093e\u0930\u093f\u0915\u093e \u0930 \u092a\u0941\u0938\u094d\u0924\u0915 \u0915\u094b\u0938\u0947\u0932\u0940\u0915\u093e \u0930\u0942\u092a\u092e\u093e \u0925\u092e\u093e\u0909\u0901\u0925\u0947\u202f\u0964", "example_title": "Example 2"}, {"text": "\u091c\u0932\u0935\u093f\u0926\u094d\u092f\u0941\u0924\u094d \u0935\u093f\u0915\u093e\u0938\u0915\u094b \u0967\u0967\u0966 \u0935\u0930\u094d\u0937\u0915\u094b \u0907\u0924\u093f\u0939\u093e\u0938 \u092c\u0928\u093e\u090f\u0915\u094b \u0928\u0947\u092a\u093e\u0932\u092e\u093e \u0939\u093e\u0932 \u0938\u0930\u0915\u093e\u0930\u0940 \u0930 \u0928\u093f\u091c\u0940 \u0915\u094d\u0937\u0947\u0924\u094d\u0930\u092c\u093e\u091f \u0917\u0930\u0940 \u0915\u0930\u093f\u092c \u0968 \u0939\u091c\u093e\u0930 \u092e\u0947\u0917\u093e\u0935\u093e\u091f <mask> \u0909\u0924\u094d\u092a\u093e\u0926\u0928 \u092d\u0907\u0930\u0939\u0947\u0915\u094b \u091b\u202f\u0964", "example_title": "Example 3"}], "model-index": [{"name": "distilbert-base-nepali", "results": []}]}
Sakonii/distilbert-base-nepali
null
[ "transformers", "pytorch", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "dataset:Sakonii/nepalitext-language-model-dataset", "arxiv:1911.02116", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Salesforce/bart-large-xsum-samsum
null
[ "transformers", "pytorch", "tf", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# CodeT5-base for Code Summarization [CodeT5-base](https://huggingface.co/Salesforce/codet5-base) model fine-tuned on CodeSearchNet data in a multi-lingual training setting ( Ruby/JavaScript/Go/Python/Java/PHP) for code summarization. It was introduced in this EMNLP 2021 paper [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation](https://arxiv.org/abs/2109.00859) by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi. Please check out more at [this repository](https://github.com/salesforce/CodeT5). ## How to use Here is how to use this model: ```python from transformers import RobertaTokenizer, T5ForConditionalGeneration if __name__ == '__main__': tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-base-multi-sum') model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-base-multi-sum') text = """def svg_to_image(string, size=None): if isinstance(string, unicode): string = string.encode('utf-8') renderer = QtSvg.QSvgRenderer(QtCore.QByteArray(string)) if not renderer.isValid(): raise ValueError('Invalid SVG data.') if size is None: size = renderer.defaultSize() image = QtGui.QImage(size, QtGui.QImage.Format_ARGB32) painter = QtGui.QPainter(image) renderer.render(painter) return image""" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=20) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) # this prints: "Convert a SVG string to a QImage." ``` ## Fine-tuning data We employ the filtered version of CodeSearchNet data [[Husain et al., 2019](https://arxiv.org/abs/1909.09436)] from [CodeXGLUE](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text) benchmark for fine-tuning on code summarization. The data is tokenized with our pre-trained code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer with the vocab files from [codet5-base](https://huggingface.co/Salesforce/codet5-base). ### Data statistic | Programming Language | Training | Dev | Test | | :------------------- | :------: | :----: | :----: | | Python | 251,820 | 13,914 | 14,918 | | PHP | 241,241 | 12,982 | 14,014 | | Go | 167,288 | 7,325 | 8,122 | | Java | 164,923 | 5,183 | 10,955 | | JavaScript | 58,025 | 3,885 | 3,291 | | Ruby | 24,927 | 1,400 | 1,261 | ## Training procedure We fine-tune codet5-base on these six programming languages (Ruby/JavaScript/Go/Python/Java/PHP) in the multi-task learning setting. We employ the balanced sampling to avoid biasing towards high-resource tasks. Please refer to the [paper](https://arxiv.org/abs/2109.00859) for more details. ## Evaluation results Unlike the paper allowing to select different best checkpoints for different programming languages (PLs), here we employ one checkpoint for all PLs. Besides, we remove the task control prefix to specify the PL in training and inference. 
The results on the test set are shown as below: | Model | Ruby | Javascript | Go | Python | Java | PHP | Overall | | ----------- | :-------: | :--------: | :-------: | :-------: | :-------: | :-------: | :-------: | | Seq2Seq | 9.64 | 10.21 | 13.98 | 15.93 | 15.09 | 21.08 | 14.32 | | Transformer | 11.18 | 11.59 | 16.38 | 15.81 | 16.26 | 22.12 | 15.56 | | [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) | 11.17 | 11.90 | 17.72 | 18.14 | 16.47 | 24.02 | 16.57 | | [CodeBERT](https://arxiv.org/pdf/2002.08155.pdf) | 12.16 | 14.90 | 18.07 | 19.06 | 17.65 | 25.16 | 17.83 | | [PLBART](https://aclanthology.org/2021.naacl-main.211.pdf) | 14.11 |15.56 | 18.91 | 19.30 | 18.45 | 23.58 | 18.32 | | [CodeT5-small](https://arxiv.org/abs/2109.00859) |14.87 | 15.32 | 19.25 | 20.04 | 19.92 | 25.46 | 19.14 | | [CodeT5-base](https://arxiv.org/abs/2109.00859) | **15.24** | 16.16 | 19.56 | 20.01 | **20.31** | 26.03 | 19.55 | | [CodeT5-base-multi-sum](https://arxiv.org/abs/2109.00859) | **15.24** | **16.18** | **19.95** | **20.42** | 20.26 | **26.10** | **19.69** | ## Citation ```bibtex @inproceedings{ wang2021codet5, title={CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation}, author={Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi}, booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021}, year={2021}, } ```
{"license": "bsd-3-clause", "tags": ["codet5"], "datasets": ["code_search_net"], "inference": true}
Salesforce/codet5-base-multi-sum
null
[ "transformers", "pytorch", "t5", "text2text-generation", "codet5", "dataset:code_search_net", "arxiv:2109.00859", "arxiv:1909.09436", "arxiv:1907.11692", "arxiv:2002.08155", "license:bsd-3-clause", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# CodeT5 (base-sized model) Pre-trained CodeT5 model. It was introduced in the paper [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation](https://arxiv.org/abs/2109.00859) by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in [this repository](https://github.com/salesforce/CodeT5). Disclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, [nielsr](https://huggingface.co/nielsr)). ## Model description From the abstract: "We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code." ## Intended uses & limitations This repository contains the pre-trained model only, so you can use this model for (among other tasks) masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as: * code summarization * code generation * code translation * code refinement * code defect detection * code clone detection. Supervised datasets for code can be found [here](https://huggingface.co/datasets?languages=languages:code). See the [model hub](https://huggingface.co/models?search=salesforce/codet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import RobertaTokenizer, T5ForConditionalGeneration tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-base') model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-base') text = "def greet(user): print(f'hello <extra_id_0>!')" input_ids = tokenizer(text, return_tensors="pt").input_ids # simply generate a single sequence generated_ids = model.generate(input_ids, max_length=8) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) # this prints "{user.username}" ``` ## Training data The CodeT5 model was pretrained on CodeSearchNet [Husain et al., 2019](https://arxiv.org/abs/1909.09436). Additionally, the authors collected two datasets of C/CSharp from [BigQuery1](https://console.cloud.google.com/marketplace/details/github/github-repos) to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining. ## Training procedure ### Preprocessing This model uses a code-specific BPE (Byte-Pair Encoding) tokenizer trained using the [HuggingFace Tokenizers](https://github.com/huggingface/tokenizers) library. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository. 
## Evaluation results For evaluation results on several downstream benchmarks, we refer to the paper. ### BibTeX entry and citation info ```bibtex @misc{wang2021codet5, title={CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation}, author={Yue Wang and Weishi Wang and Shafiq Joty and Steven C. H. Hoi}, year={2021}, eprint={2109.00859}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"license": "apache-2.0", "tags": ["codet5"], "datasets": ["code_search_net"], "inference": false}
Salesforce/codet5-base
null
[ "transformers", "pytorch", "t5", "text2text-generation", "codet5", "dataset:code_search_net", "arxiv:2109.00859", "arxiv:1909.09436", "license:apache-2.0", "autotrain_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# CodeT5 (small-sized model) Pre-trained CodeT5 model. It was introduced in the paper [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation](https://arxiv.org/abs/2109.00859) by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in [this repository](https://github.com/salesforce/CodeT5). Disclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, [nielsr](https://huggingface.co/nielsr)). ## Model description From the abstract: "We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code." ## Intended uses & limitations This repository contains the pre-trained model only, so you can use this model for masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as: * code summarization * code generation * code translation * code refinement * code defect detection * code clone detection. See the [model hub](https://huggingface.co/models?search=salesforce/codet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import RobertaTokenizer, T5ForConditionalGeneration tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-small') model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-small') text = "def greet(user): print(f'hello <extra_id_0>!')" input_ids = tokenizer(text, return_tensors="pt").input_ids # simply generate a single sequence generated_ids = model.generate(input_ids, max_length=10) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) # this prints "user: {user.name}" ``` ## Training data The CodeT5 model was pretrained on CodeSearchNet [Husain et al., 2019](https://arxiv.org/abs/1909.09436). Additionally, the authors collected two datasets of C/CSharp from [BigQuery1](https://console.cloud.google.com/marketplace/details/github/github-repos) to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining. ## Training procedure ### Preprocessing This model uses a code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository. ## Evaluation results For evaluation results on several downstream benchmarks, we refer to the paper. 
### BibTeX entry and citation info ```bibtex @misc{wang2021codet5, title={CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation}, author={Yue Wang and Weishi Wang and Shafiq Joty and Steven C. H. Hoi}, year={2021}, eprint={2109.00859}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"license": "apache-2.0", "tags": ["codet5"], "datasets": ["code_search_net"], "inference": false}
Salesforce/codet5-small
null
[ "transformers", "pytorch", "t5", "text2text-generation", "codet5", "dataset:code_search_net", "arxiv:2109.00859", "arxiv:1909.09436", "license:apache-2.0", "autotrain_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Salesforce/cods-bart-large-xsum-samsum
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
{}
Salesforce/grappa_large_jnt
null
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# MixQG (3b-sized model) MixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper [MixQG: Neural Question Generation with Mixed Answer Types](https://arxiv.org/abs/2110.08175) and the associated code is released in [this](https://github.com/salesforce/QGen) repository. ### How to use Using Huggingface pipeline abstraction: ``` from transformers import pipeline nlp = pipeline("text2text-generation", model='Salesforce/mixqg-3b', tokenizer='Salesforce/mixqg-3b') CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion." ANSWER = "Robert Boyle" def format_inputs(context: str, answer: str): return f"{answer} \\n {context}" text = format_inputs(CONTEXT, ANSWER) nlp(text) # should output [{'generated_text': 'Who proved that air is necessary for combustion?'}] ``` Using the pre-trained model directly: ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained('Salesforce/mixqg-3b') model = AutoModelForSeq2SeqLM.from_pretrained('Salesforce/mixqg-3b') CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion." ANSWER = "Robert Boyle" def format_inputs(context: str, answer: str): return f"{answer} \\n {context}" text = format_inputs(CONTEXT, ANSWER) input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=32, num_beams=4) output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) print(output) # should output "Who proved that air is necessary for combustion?" ``` ### Citation ``` @misc{murakhovska2021mixqg, title={MixQG: Neural Question Generation with Mixed Answer Types}, author={Lidiya Murakhovs'ka and Chien-Sheng Wu and Tong Niu and Wenhao Liu and Caiming Xiong}, year={2021}, eprint={2110.08175}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "widget": [{"text": "Robert Boyle \\\\n In the late 17th century, Robert Boyle proved that air is necessary for combustion."}]}
Salesforce/mixqg-3b
null
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "arxiv:2110.08175", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# MixQG (base-sized model) MixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper [MixQG: Neural Question Generation with Mixed Answer Types](https://arxiv.org/abs/2110.08175) and the associated code is released in [this](https://github.com/salesforce/QGen) repository. ### How to use Using Huggingface pipeline abstraction: ``` from transformers import pipeline nlp = pipeline("text2text-generation", model='Salesforce/mixqg-base', tokenizer='Salesforce/mixqg-base') CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion." ANSWER = "Robert Boyle" def format_inputs(context: str, answer: str): return f"{answer} \\n {context}" text = format_inputs(CONTEXT, ANSWER) nlp(text) # should output [{'generated_text': 'Who proved that air is necessary for combustion?'}] ``` Using the pre-trained model directly: ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained('Salesforce/mixqg-base') model = AutoModelForSeq2SeqLM.from_pretrained('Salesforce/mixqg-base') CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion." ANSWER = "Robert Boyle" def format_inputs(context: str, answer: str): return f"{answer} \\n {context}" text = format_inputs(CONTEXT, ANSWER) input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=32, num_beams=4) output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) print(output) # should output "Who proved that air is necessary for combustion?" ``` ### Citation ``` @misc{murakhovska2021mixqg, title={MixQG: Neural Question Generation with Mixed Answer Types}, author={Lidiya Murakhovs'ka and Chien-Sheng Wu and Tong Niu and Wenhao Liu and Caiming Xiong}, year={2021}, eprint={2110.08175}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "widget": [{"text": "Robert Boyle \\\\n In the late 17th century, Robert Boyle proved that air is necessary for combustion."}]}
Salesforce/mixqg-base
null
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "arxiv:2110.08175", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# MixQG (large-sized model) MixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper [MixQG: Neural Question Generation with Mixed Answer Types](https://arxiv.org/abs/2110.08175) and the associated code is released in [this](https://github.com/salesforce/QGen) repository. ### How to use Using Huggingface pipeline abstraction: ``` from transformers import pipeline nlp = pipeline("text2text-generation", model='Salesforce/mixqg-large', tokenizer='Salesforce/mixqg-large') CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion." ANSWER = "Robert Boyle" def format_inputs(context: str, answer: str): return f"{answer} \\n {context}" text = format_inputs(CONTEXT, ANSWER) nlp(text) # should output [{'generated_text': 'Who proved that air is necessary for combustion?'}] ``` Using the pre-trained model directly: ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained('Salesforce/mixqg-large') model = AutoModelForSeq2SeqLM.from_pretrained('Salesforce/mixqg-large') CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion." ANSWER = "Robert Boyle" def format_inputs(context: str, answer: str): return f"{answer} \\n {context}" text = format_inputs(CONTEXT, ANSWER) input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=32, num_beams=4) output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) print(output) # should output "Who proved that air is necessary for combustion?" ``` ### Citation ``` @misc{murakhovska2021mixqg, title={MixQG: Neural Question Generation with Mixed Answer Types}, author={Lidiya Murakhovs'ka and Chien-Sheng Wu and Tong Niu and Wenhao Liu and Caiming Xiong}, year={2021}, eprint={2110.08175}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "widget": [{"text": "Robert Boyle \\\\n In the late 17th century, Robert Boyle proved that air is necessary for combustion."}]}
Salesforce/mixqg-large
null
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "arxiv:2110.08175", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
Salesforce/qaconv-bert-large-uncased-whole-word-masking-squad2
null
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
Salesforce/qaconv-roberta-large-squad2
null
[ "transformers", "pytorch", "roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Salesforce/qaconv-unifiedqa-t5-3b
null
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Salesforce/qaconv-unifiedqa-t5-base
null
[ "transformers", "pytorch", "tf", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Salesforce/qaconv-unifiedqa-t5-large
null
[ "transformers", "pytorch", "tf", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Salma-2/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SalmanMo/ALBERT_QA_1e
null
[ "transformers", "pytorch", "albert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Saltyiron/model_hello
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
Sam2021/xlm_rober_base_finetuned_squd_v1
null
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
object-detection
keras
# YOLOv4 YOLO, for "You Only Look Once", is a real-time object detection system, introduced in [this paper](https://arxiv.org/abs/2004.10934), that recognizes various objects in a single pass over an image. It identifies objects more rapidly and more precisely than other recognition systems. The work is credited to three authors: Alexey Bochkovskiy, the Russian developer who built the YOLO Windows version, Chien-Yao Wang, and Hong-Yuan Mark Liao. The entire code is available on [Github](https://github.com/AlexeyAB/darknet). This YOLOv4 library is inspired by previous YOLOv3 implementations: * [Yolov3 tensorflow](https://github.com/YunYang1994/tensorflow-yolov3) * [Yolov3 tf2](https://github.com/zzh8829/yolov3-tf2) It uses TensorFlow 2.0 and is available on [Github](https://github.com/hunglc007/tensorflow-yolov4-tflite). ### Limitations and biases Object-recognition technology has improved drastically in the past few years across the industry, and it is now part of a huge variety of products and services that millions of people worldwide use. However, errors in object-recognition algorithms can stem from training data that is geographically constrained and/or fails to capture cultural differences. The COCO dataset used to train yolov4-tflite has been found to have annotation errors on more than 20% of images. Such errors include captions describing people differently based on skin tone and gender expression. This serves as a reminder to be cognizant that these biases already exist and a warning to be careful about the increasing bias that is likely to come with advancements in image captioning technology. ### How to use YOLOv4tflite You can use this model to detect objects in an image of your choice. Follow the scripts below to try it yourself! ```bash # install git lfs git lfs install # if presented with the error "git: 'lfs' is not a git command. See 'git --help'", try running these linux commands: curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash # change directory to base cd .. # install git-lfs sudo apt-get install git-lfs # for message "Git LFS initialized" git lfs install # change directory to yolo_v4_tflite cd ./yolo_v4_tflite # clone this repo into your notebook git clone https://huggingface.co/SamMorgan/yolo_v4_tflite # Run the TensorFlow demo for an example of how this model works python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --image ./data/kite.jpg --output ./test.jpg # Try with your own image python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --image <insert path to image of choice> --output <insert path to output location of choice> ``` ### Evaluate on COCO 2017 Dataset ```bash # run script in /script/get_coco_dataset_2017.sh to download COCO 2017 Dataset # preprocess coco dataset cd data mkdir dataset cd .. cd scripts python coco_convert.py --input ./coco/annotations/instances_val2017.json --output val2017.pkl python coco_annotation.py --coco_path ./coco cd .. # evaluate yolov4 model python evaluate.py --weights ./data/yolov4.weights cd mAP/extra python remove_space.py cd .. 
python main.py --output results_yolov4_tf ``` #### mAP50 on COCO 2017 Dataset | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 | 55.43 | 52.32 | | | YoloV4 | 61.96 | 57.33 | | ### Benchmark ```bash python benchmarks.py --size 416 --model yolov4 --weights ./data/yolov4.weights ``` #### TensorRT performance | YoloV4 416 images/s | FP32 | FP16 | INT8 | |---------------------|----------|----------|----------| | Batch size 1 | 55 | 116 | | | Batch size 8 | 70 | 152 | | #### Tesla P100 | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | 40.6 | 49.4 | 61.3 | | YoloV4 FPS | 33.4 | 41.7 | 50.0 | #### Tesla K80 | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | 10.8 | 12.9 | 17.6 | | YoloV4 FPS | 9.6 | 11.7 | 16.0 | #### Tesla T4 | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | 27.6 | 32.3 | 45.1 | | YoloV4 FPS | 24.0 | 30.3 | 40.1 | #### Tesla P4 | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | 20.2 | 24.2 | 31.2 | | YoloV4 FPS | 16.2 | 20.2 | 26.5 | #### Macbook Pro 15 (2.3GHz i7) | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | | | | | YoloV4 FPS | | | | ### Training your own model ```bash # Prepare your dataset # If you want to train from scratch: In config.py set FISRT_STAGE_EPOCHS=0 # Run script: python train.py # Transfer learning: python train.py --weights ./data/yolov4.weights ``` The training performance has not been fully reproduced yet, so it is recommended to use Alex's [Darknet](https://github.com/AlexeyAB/darknet) to train on your own data, then convert the .weights file to TensorFlow or TFLite. ### References * YOLOv4: Optimal Speed and Accuracy of Object Detection [YOLOv4](https://arxiv.org/abs/2004.10934). * [darknet](https://github.com/AlexeyAB/darknet)
{"language": "en", "license": "mit", "tags": ["object detection", "computer vision", "darknet", "yolo"], "datasets": ["coco", "imagenette"], "thumbnail": "https://github.com/hunglc007/tensorflow-yolov4-tflite", "pipeline_tag": "object-detection"}
SamMorgan/yolo_v4_tflite
null
[ "keras", "tflite", "object detection", "computer vision", "darknet", "yolo", "object-detection", "en", "dataset:coco", "dataset:imagenette", "arxiv:2004.10934", "license:mit", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Samaule666/DialoGPT-small-Kirito
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Peter from Your Boyfriend Game.
{"tags": ["conversational"]}
Sammigooof/Peterbot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sammith/hs
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Samsun121/model_name
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SanaaChomsky/Anime
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sanan/Object_Detection_YOLOv5
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SanayCo/model_output
null
[ "transformers", "pytorch", "jax", "bert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-fi-to-en This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt19 dataset. It achieves the following results on the evaluation set: - Loss: 3.5185 - Bleu: 1.2541 - Gen Len: 17.395 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 3.413 | 1.0 | 6250 | 3.5378 | 1.2291 | 17.4057 | | 3.342 | 2.0 | 12500 | 3.5185 | 1.2541 | 17.395 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
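Below is a minimal usage sketch for trying this checkpoint on a Finnish sentence. The `translate Finnish to English:` task prefix is an assumption based on the usual T5 translation recipe and is not documented in this card, so results may vary without it.

```python
from transformers import pipeline

# Minimal sketch; the task prefix is an assumption, not confirmed by this card.
translator = pipeline("text2text-generation", model="Sancha/t5-small-finetuned-fi-to-en")

text = "translate Finnish to English: Hyvää huomenta, mitä kuuluu?"
print(translator(text, max_length=64)[0]["generated_text"])
```

Given the modest BLEU score reported above, translations are likely to be rough; further fine-tuning would probably be needed before practical use.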
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt19"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-fi-to-en", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt19", "type": "wmt19", "args": "fi-en"}, "metrics": [{"type": "bleu", "value": 1.2541, "name": "Bleu"}]}]}]}
Sancha/t5-small-finetuned-fi-to-en
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt19", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sand/DialoGPT-small-eren
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sandalfon/hugging_first_try
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sandbox/bot1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SanderGi/DialoGPT-small-Claire
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sanghee/distilbert-base-uncased-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sanghee/distilbert-base-uncased-finetuned-re
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sangheon/bert-base-uncased-finetuned-swag
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sanjay/semantic_analysis
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sanjaygowda/XLSR_Wav2Vec2_on_Kannada
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-lar-xlsr-es-col This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-spanish](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0947 - Wer: 0.1884 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.8446 | 8.51 | 400 | 2.8174 | 0.9854 | | 0.5146 | 17.02 | 800 | 0.1022 | 0.2020 | | 0.0706 | 25.53 | 1200 | 0.0947 | 0.1884 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
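Since the card does not include an inference example, here is a minimal transcription sketch under the usual wav2vec2 assumptions: 16 kHz mono input, with `audio.wav` as a placeholder path for a Spanish recording.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Santiagot1105/wav2vec2-lar-xlsr-es-col"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "audio.wav" is a placeholder; XLSR models expect 16 kHz mono audio.
speech, _ = librosa.load("audio.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```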
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-lar-xlsr-es-col", "results": []}]}
Santiagot1105/wav2vec2-lar-xlsr-es-col
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-lar-xlsr-finetune-es-col This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1669 - Wer: 0.2595 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.1108 | 8.51 | 400 | 0.5936 | 0.6085 | | 0.3015 | 17.02 | 800 | 0.2071 | 0.2941 | | 0.0989 | 25.53 | 1200 | 0.1669 | 0.2595 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-lar-xlsr-finetune-es-col", "results": []}]}
Santiagot1105/wav2vec2-lar-xlsr-finetune-es-col
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Santiagot1105/wav2vec2-large-xlsr-finetune-es-2-Test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Santiagot1105/wav2vec2-large-xlsr-finetune-es-3-Test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-finetune-es-col This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.6514 - Wer: 0.9874 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.9709 | 3.25 | 400 | 2.9673 | 1.0 | | 2.9488 | 6.5 | 800 | 2.9075 | 0.9973 | | 2.907 | 9.76 | 1200 | 2.8772 | 0.9688 | | 2.886 | 13.01 | 1600 | 2.8245 | 0.9484 | | 2.8043 | 16.26 | 2000 | 2.7134 | 0.9874 | | 2.7288 | 19.51 | 2400 | 2.6750 | 0.9874 | | 2.7072 | 22.76 | 2800 | 2.6651 | 0.9874 | | 2.6892 | 26.02 | 3200 | 2.6573 | 0.9874 | | 2.683 | 29.27 | 3600 | 2.6514 | 0.9874 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-large-xlsr-finetune-es-col", "results": []}]}
Santiagot1105/wav2vec2-large-xlsr-finetune-es-col
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Santiagot1105/wav2vec2-large-xlsr-finetune-spanish-col-demo-colab
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-finetune-spanish-col This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-spanish](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.7105 - Wer: 0.9824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 7.2829 | 3.25 | 400 | 2.9632 | 1.0 | | 2.9664 | 6.5 | 800 | 2.8494 | 1.0542 | | 2.8353 | 9.76 | 1200 | 2.8352 | 1.0101 | | 2.7863 | 13.01 | 1600 | 2.7421 | 0.9837 | | 2.762 | 16.26 | 2000 | 2.7254 | 0.9861 | | 2.7483 | 19.51 | 2400 | 2.7228 | 0.9874 | | 2.7482 | 22.76 | 2800 | 2.7228 | 0.9999 | | 2.7373 | 26.02 | 3200 | 2.7163 | 0.9824 | | 2.7328 | 29.27 | 3600 | 2.7105 | 0.9824 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-large-xlsr-finetune-spanish-col", "results": []}]}
Santiagot1105/wav2vec2-large-xlsr-finetune-spanish-col
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Santiagot1105/wav2vec2-large-xlsr-spanish-demo-colab
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Santiagot1105/wav2vec2-large-xlsr-spanish-small-demo-colab
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sara/nlp
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SaraBisbe/model_name
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Ally DialoGPT Model
{"tags": ["conversational"]}
SarahhhUwU/DialoGPT-small-ally
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sarahliu186/ASR
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
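No usage example is included above; as a quick sanity check, the checkpoint could be run through the ASR pipeline. This is only a sketch: `sample.wav` is a placeholder for a 16 kHz English recording, and transcription quality after a single training epoch may be limited.

```python
from transformers import pipeline

# "sample.wav" is a hypothetical file path for a 16 kHz English recording.
asr = pipeline("automatic-speech-recognition", model="Sarahliu186/wav2vec2-base-timit-demo-colab")
print(asr("sample.wav")["text"])
```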
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
Sarahliu186/wav2vec2-base-timit-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Sarim24/Sarim24
null
[ "transformers", "tf", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
<h1>Hugging Face model</h1>
{}
Sarim24/TransformerModel
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sarmad/d_base_qa
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sarmad/emotion_all
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sarmad/model1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sarmad/model_all
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sarmad/model_emotion_all
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
Sarmad/projectmodel-bert
null
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sarmad/projectmodel
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sarmad/your-model-name
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Saruchipapa/Ola
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
null
# Rick DialoGPT Model
{"tags": ["conversational"]}
Sarumomo/DialoGPT-small-test
null
[ "conversational", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Saswata23/DialoGPT-medium-Luke
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sauce328/StyleTransferPDA
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SaulLu/Test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
# [WIP] Albert Bengali - dev version ## Model description For the moment, only the tokenizer is available. The tokenizer is based on [SentencePiece](https://github.com/google/sentencepiece) with the Unigram language model segmentation algorithm. Taking into account certain characteristics of the language, we chose that: - the tokenizer lowercases all text, because Bengali is a unicameral script (there is no distinction between capital and lower case letters); - sentence pieces cannot cross word boundaries, because words are separated by white space in Bengali. ## Intended uses & limitations This tokenizer is adapted to the Bengali language. You can use it to pre-train an Albert model on Bengali text. #### How to use To tokenize: ```python from transformers import AlbertTokenizer tokenizer = AlbertTokenizer.from_pretrained('SaulLu/albert-bn-dev') text = "পোকেমন জাপানী ভিডিও গেম কোম্পানি নিনটেন্ডো কর্তৃক প্রকাশিত একটি মিডিয়া ফ্র‍্যাঞ্চাইজি।" encoded_input = tokenizer(text, return_tensors='pt') ``` #### Limitations and bias Provide examples of latent issues and potential remediations. ## Training data The tokenizer was trained on a random subset of 4M sentences of Bengali Oscar and Bengali Wikipedia. ## Training procedure ### Tokenizer The tokenizer was trained with [SentencePiece](https://github.com/google/sentencepiece) on 8 x Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz with 16GB RAM and 36GB of swap. ```python import sentencepiece as spm config = { "input": "./dataset/oscar_bn.txt,./dataset/wikipedia_bn.txt", "input_format": "text", "model_type": "unigram", "vocab_size": 32000, "self_test_sample_size": 0, "character_coverage": 0.9995, "shuffle_input_sentence": True, "seed_sentencepiece_size": 1000000, "shrinking_factor": 0.75, "num_threads": 8, "num_sub_iterations": 2, "max_sentencepiece_length": 16, "max_sentence_length": 4192, "split_by_unicode_script": True, "split_by_number": True, "split_digits": True, "control_symbols": "[MASK]", "byte_fallback": False, "vocabulary_output_piece_score": True, "normalization_rule_name": "nmt_nfkc_cf", "add_dummy_prefix": True, "remove_extra_whitespaces": True, "hard_vocab_limit": True, "unk_id": 1, "bos_id": 2, "eos_id": 3, "pad_id": 0, "bos_piece": "[CLS]", "eos_piece": "[SEP]", "train_extremely_large_corpus": True, "split_by_whitespace": True, "model_prefix": "./spiece", "input_sentence_size": 4000000, "user_defined_symbols": "(,),-,.,–,£,।", } spm.SentencePieceTrainer.train(**config) ``` <!-- ## Eval results ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2020} } ``` -->
{"language": ["bn"], "license": "apache-2.0", "tags": [], "datasets": ["oscar", "wikipedia"], "metrics": []}
SaulLu/albert-bn-dev
null
[ "bn", "dataset:oscar", "dataset:wikipedia", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SaulLu/bengali-tokenizer-v2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SaulLu/bengali-tokenizer-v3
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SaulLu/bengali-tokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
zero-shot-image-classification
transformers
# Model Card: CLIP Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md). ## Model Details The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within. ### Model Date January 2021 ### Model Type The base model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer. ### Model Version Initially, we’ve released one CLIP model based on the Vision Transformer architecture equivalent to ViT-B/32, along with the RN50 model, using the architecture equivalent to ResNet-50. *This port does not include the ResNet model.* Please see the paper linked below for further details about their specification. ### Documents - [Blog Post](https://openai.com/blog/clip/) - [CLIP Paper](https://arxiv.org/abs/2103.00020) ### Use with Transformers ```python3 from PIL import Image import requests from transformers import CLIPProcessor, CLIPModel model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities ``` ## Model Use ### Intended Use The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. #### Primary intended uses The primary intended users of these models are AI researchers. We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models. ### Out-of-Scope Use Cases **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. 
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. ## Data The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users. ### Data Mission Statement Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset. ## Performance and Limitations ### Performance We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets: - Food101 - CIFAR10 - CIFAR100 - Birdsnap - SUN397 - Stanford Cars - FGVC Aircraft - VOC2007 - DTD - Oxford-IIIT Pet dataset - Caltech101 - Flowers102 - MNIST - SVHN - IIIT5K - Hateful Memes - SST-2 - UCF101 - Kinetics700 - Country211 - CLEVR Counting - KITTI Distance - STL-10 - RareAct - Flickr30 - MSCOCO - ImageNet - ImageNet-A - ImageNet-R - ImageNet Sketch - ObjectNet (ImageNet Overlap) - Youtube-BB - ImageNet-Vid ## Limitations CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance. ### Bias and Fairness We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper). 
We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks. ## Feedback ### Where to send questions or comments about the model Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
{"tags": ["vision"]}
SaulLu/clip-vit-base-patch32
null
[ "transformers", "pytorch", "tf", "jax", "clip", "zero-shot-image-classification", "vision", "arxiv:2103.00020", "arxiv:1908.04913", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SaulLu/codex-like-tokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# CodeT5 (small-sized model) Pre-trained CodeT5 model. It was introduced in the paper [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation](https://arxiv.org/abs/2109.00859) by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in [this repository](https://github.com/salesforce/CodeT5). Disclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, [nielsr](https://huggingface.co/nielsr)). ## Model description From the abstract: "We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code." ## Intended uses & limitations This repository contains the pre-trained model only, so you can use this model for masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as: * code summarization * code generation * code translation * code refinement * code defect detection * code clone detection. See the [model hub](https://huggingface.co/models?search=salesforce/codet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import RobertaTokenizer, T5ForConditionalGeneration tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-small') model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-small') text = "def greet(user): print(f'hello <extra_id_0>!')" input_ids = tokenizer(text, return_tensors="pt").input_ids # simply generate a single sequence generated_ids = model.generate(input_ids, max_length=10) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) # this prints "user: {user.name}" ``` ## Training data The CodeT5 model was pretrained on CodeSearchNet [Husain et al., 2019](https://arxiv.org/abs/1909.09436). Additionally, the authors collected two datasets of C/CSharp from [BigQuery1](https://console.cloud.google.com/marketplace/details/github/github-repos) to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining. ## Training procedure ### Preprocessing This model uses a code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository. ## Evaluation results For evaluation results on several downstream benchmarks, we refer to the paper. 
### BibTeX entry and citation info ```bibtex @misc{wang2021codet5, title={CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation}, author={Yue Wang and Weishi Wang and Shafiq Joty and Steven C. H. Hoi}, year={2021}, eprint={2109.00859}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"license": "apache-2.0", "tags": ["codet5"], "datasets": ["code_search_net"], "inference": false}
SaulLu/cotet5_small_fix
null
[ "transformers", "pytorch", "t5", "text2text-generation", "codet5", "dataset:code_search_net", "arxiv:2109.00859", "arxiv:1909.09436", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SaulLu/dummy-tokenizer-wordlevel
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SaulLu/gpt2-wikitext2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SaulLu/gpt2_tokenizer_fixed
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
# MarkupLM **Multimodal (text + markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)** ## Introduction MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM achieves state-of-the-art results on multiple datasets. For more details, please refer to our paper: [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
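The card stops at the introduction. As a rough sketch only: later `transformers` releases ship MarkupLM classes that a compatible checkpoint can be loaded with. Whether this particular port loads cleanly that way has not been verified here, and the processor files are taken from the later official `microsoft/markuplm-base` release as an assumption.

```python
from transformers import MarkupLMProcessor, MarkupLMModel

# Assumption: processor files come from the later official release; these model
# weights are only assumed to be compatible with MarkupLMModel.
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
model = MarkupLMModel.from_pretrained("SaulLu/markuplm-base")

html = "<html><body><h1>Welcome</h1><p>MarkupLM reads text and markup.</p></body></html>"
inputs = processor(html, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```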
{}
SaulLu/markuplm-base
null
[ "transformers", "pytorch", "markuplm", "arxiv:2110.08518", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SaulLu/my-new-shiny-tokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00