| Column | Type | Values / lengths |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | listlengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
null
null
{}
dclee/Helsinki-NLP-opus-mt-en-es
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
https://teespring.com/dashboard/listings/113925135/edit
{}
ddddd/EDCLasVegas
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
ddemszky/Feb25_09-02-16_combined_education_dataset_02252021.json_6.25e-05_hist1-truncated-acd81d
null
[ "transformers", "pytorch", "tensorboard", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
ddemszky/supervised_finetuning_hist0_is_question_switchboard_question_detection.json_bs32_lr0.000063
null
[ "transformers", "pytorch", "tensorboard", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ddj/minisiri.bert
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
sentence-similarity
sentence-transformers
# ddobokki/electra-small-nli-sts This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ddobokki/electra-small-nli-sts') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ddobokki/electra-small-nli-sts') model = AutoModel.from_pretrained('ddobokki/electra-small-nli-sts') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ddobokki/electra-small-nli-sts) ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 9039 with parameters: ``` {'batch_size': 64} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 903, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 904, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: ElectraModel (1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "ko"], "pipeline_tag": "sentence-similarity"}
ddobokki/electra-small-nli-sts
null
[ "sentence-transformers", "pytorch", "electra", "feature-extraction", "sentence-similarity", "transformers", "ko", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM # device setting device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # load model and tokenizer model_name_or_path = "ddobokki/gpt2_poem" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained(model_name_or_path) model.to(device) keyword_start_token = "<k>" keyword_end_token = "</k>" text = "산 꼭대기가 보이는 경치" input_text = keyword_start_token + text + keyword_end_token input_ids = tokenizer.encode(input_text, return_tensors="pt").to(device) gen_ids = model.generate( input_ids, max_length=64, num_beams=100, no_repeat_ngram_size=2 ) generated = tokenizer.decode(gen_ids[0, :].tolist(), skip_special_tokens=True) >> 오르락내리락 산 꼭대기를 올려다보니 아득히 멀고 아득한 나뭇가지에 매달린 작은 산새 한 마리 이름 모를 풀 한포기 안고 어디론가 훌쩍 떠나가 버렸다 ```
{}
ddobokki/gpt2_poem
null
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
sentence-similarity
sentence-transformers
# ddobokki/klue-roberta-small-nli-sts This is a Korean Sentence Transformer model. <!--- Describe your model here --> ## Usage (Sentence-Transformers) The model can be used with the [sentence-transformers](https://www.SBERT.net) library: ``` pip install -U sentence-transformers ``` Usage: ```python from sentence_transformers import SentenceTransformer sentences = ["흐르는 강물을 거꾸로 거슬러 오르는", "세월이 가면 가슴이 터질 듯한"] model = SentenceTransformer('ddobokki/klue-roberta-small-nli-sts') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) If you want to use only the transformers library: ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["흐르는 강물을 거꾸로 거슬러 오르는", "세월이 가면 가슴이 터질 듯한"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ddobokki/klue-roberta-small-nli-sts') model = AutoModel.from_pretrained('ddobokki/klue-roberta-small-nli-sts') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Performance - Semantic Textual Similarity test set results <br> | Model | Cosine Pearson | Cosine Spearman | Euclidean Pearson | Euclidean Spearman | Manhattan Pearson | Manhattan Spearman | Dot Pearson | Dot Spearman | |------------------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:| | KoSRoBERTa<sup>small</sup> | 84.27 | 84.17 | 83.33 | 83.65 | 83.34 | 83.65 | 82.10 | 81.38 | ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "ko"], "pipeline_tag": "sentence-similarity"}
ddobokki/klue-roberta-small-nli-sts
null
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "ko", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
## EXAMPLE ```python import requests import torch from PIL import Image from transformers import ( VisionEncoderDecoderModel, ViTFeatureExtractor, PreTrainedTokenizerFast, ) # device setting device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # load feature extractor and tokenizer encoder_model_name_or_path = "ddobokki/vision-encoder-decoder-vit-gpt2-coco-ko" feature_extractor = ViTFeatureExtractor.from_pretrained(encoder_model_name_or_path) tokenizer = PreTrainedTokenizerFast.from_pretrained(encoder_model_name_or_path) # load model model = VisionEncoderDecoderModel.from_pretrained(encoder_model_name_or_path) model.to(device) # inference url = 'http://images.cocodataset.org/val2017/000000039769.jpg' with Image.open(requests.get(url, stream=True).raw) as img: pixel_values = feature_extractor(images=img, return_tensors="pt").pixel_values generated_ids = model.generate(pixel_values.to(device),num_beams=5) generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) >> ['고양이 두마리가 담요 위에 누워 있다.'] ```
{}
ddobokki/vision-encoder-decoder-vit-gpt2-coco-ko
null
[ "transformers", "pytorch", "vision-encoder-decoder", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
ddobokki/vit-kogpt_trinity-coco-ko
null
[ "transformers", "pytorch", "vision-encoder-decoder", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
speechbrain
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Conformer for KsponSpeech (with Transformer LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on KsponSpeech (Kr) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | eval clean CER | eval other CER | GPUs | | :------: | :------------: | :------------: | :---------: | | 01-23-23 | 7.33% | 7.99% | 6xA100 80GB | ## Pipeline description This ASR system is composed of 3 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions of KsponSpeech. - Neural language model (Transformer LM) trained on the train transcriptions of KsponSpeech - Acoustic model made of a conformer encoder and a joint decoder with CTC + transformer. Hence, the decoding also incorporates the CTC probabilities. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` !pip install git+https://github.com/speechbrain/speechbrain.git ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in Korean) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="ddwkim/asr-conformer-transformerlm-ksponspeech", savedir="pretrained_models/asr-conformer-transformerlm-ksponspeech", run_opts={"device":"cuda"}) asr_model.transcribe_file("ddwkim/asr-conformer-transformerlm-ksponspeech/record_0_16k.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ## Parallel Inference on a Batch Please, [see this Colab notebook](https://colab.research.google.com/drive/1finp9pfmGRzWHCAPNkqAH2yGH6k_BbPA?usp=sharing) on using the pretrained model ### Training The model was trained with SpeechBrain (Commit hash: '4b3bf60'). To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install . ``` 3. Run Training: ```bash cd recipes/KsponSpeech/ASR/transformer python train.py hparams/conformer_medium.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) at the subdirectories. ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` # Citing the model ```bibtex @misc{returnzero, title = {ReturnZero Conformer Korean ASR model}, author = {Dongwon Kim and Dongwoo Kim and Jeongkyu Roh}, year = {2021}, howpublished = {\url{https://huggingface.co/ddwkim/asr-conformer-transformerlm-ksponspeech}}, } ``` # Citing KsponSpeech dataset ```bibtex @Article{app10196936, AUTHOR = {Bang, Jeong-Uk and Yun, Seung and Kim, Seung-Hi and Choi, Mu-Yeol and Lee, Min-Kyu and Kim, Yeo-Jeong and Kim, Dong-Hyun and Park, Jun and Lee, Young-Jik and Kim, Sang-Hun}, TITLE = {KsponSpeech: Korean Spontaneous Speech Corpus for Automatic Speech Recognition}, JOURNAL = {Applied Sciences}, VOLUME = {10}, YEAR = {2020}, NUMBER = {19}, ARTICLE-NUMBER = {6936}, URL = {https://www.mdpi.com/2076-3417/10/19/6936}, ISSN = {2076-3417}, DOI = {10.3390/app10196936} } ```
{"language": "kr", "license": "apache-2.0", "tags": ["ASR", "CTC", "Attention", "Conformer", "pytorch", "speechbrain"], "datasets": ["ksponspeech"], "metrics": ["wer", "cer"]}
ddwkim/asr-conformer-transformerlm-ksponspeech
null
[ "speechbrain", "ASR", "CTC", "Attention", "Conformer", "pytorch", "kr", "dataset:ksponspeech", "arxiv:2106.04624", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dead69/GPT-medium-hagrid
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# DialoGPT Trained on the Speech of a Game Character Chat with the model: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("dead69/GTP-small-yoda") model = AutoModelWithLMHead.from_pretrained("dead69/GTP-small-yoda") # Let's chat for 10 lines for step in range(10): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("Master YODA: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"}
dead69/GPT-small-yoda
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dead69/GTP-medium-hagrid
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
Pretraining Dataset: [AAAC01](https://huggingface.co/datasets/debatelab/aaac) Demo: [DeepA2 Demo](https://huggingface.co/spaces/debatelab/deepa2-demo) Paper: [DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models](https://arxiv.org/abs/2110.01509) Authors: *Gregor Betz, Kyle Richardson* ## Abstract In this paper, we present and implement a multi-dimensional, modular framework for performing deep argument analysis (DeepA2) using current pre-trained language models (PTLMs). ArgumentAnalyst -- a T5 model (Raffel et al. 2020) set up and trained within DeepA2 -- reconstructs argumentative texts, which advance an informal argumentation, as valid arguments: It inserts, e.g., missing premises and conclusions, formalizes inferences, and coherently links the logical reconstruction to the source text. We create a synthetic corpus for deep argument analysis, and evaluate ArgumentAnalyst on this new dataset as well as on existing data, specifically EntailmentBank (Dalvi et al. 2021). Our empirical findings vindicate the overall framework and highlight the advantages of a modular design, in particular its ability to emulate established heuristics (such as hermeneutic cycles), to explore the model's uncertainty, to cope with the plurality of correct solutions (underdetermination), and to exploit higher-order evidence.
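Below is a minimal usage sketch, not taken from the paper or repository: it assumes the standard transformers seq2seq classes and reuses the `argdown_reconstruction: argument_source: ...` prompt format shown in this card's widget examples; generation settings are illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load ArgumentAnalyst (repository id taken from this card)
model_id = "DebateLabKIT/argument-analyst"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Mode prefix plus source text, following the widget examples of this card
prompt = (
    "argdown_reconstruction: argument_source: "
    "If Peter likes fish, Peter has been to New York. So, Peter has been to New York."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```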
{"language": ["en"], "license": "cc-by-sa-4.0", "datasets": ["debatelab/aaac"], "widget": [{"text": "reason_statements: argument_source: If Peter likes fish, Peter has been to New York. So, Peter has been to New York.", "example_title": "Premise identification"}, {"text": "argdown_reconstruction: argument_source: If Peter likes fish, Peter has been to New York. So, Peter has been to New York.", "example_title": "Argdown reconstruction"}, {"text": "premises_formalized: reason_statements: If Peter likes fish, Peter has been to New York. (ref: (1))", "example_title": "Formalization"}], "inference": {"parameters": {"max_length": 80}}}
DebateLabKIT/argument-analyst
null
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:debatelab/aaac", "arxiv:2110.01509", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# CRiPT Model Large (Critical Thinking Intermediarily Pretrained Transformer) Large version of the trained model (`SYL01-2020-10-24-72K/gpt2-large-train03-72K`) presented in the paper "Critical Thinking for Language Models" (Betz, Voigt and Richardson 2020). See also: * [blog entry](https://debatelab.github.io/journal/critical-thinking-language-models.html) * [GitHub repo](https://github.com/debatelab/aacorpus) * [paper](https://arxiv.org/pdf/2009.07185)
{"language": "en", "tags": ["gpt2"]}
DebateLabKIT/cript-large
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "en", "arxiv:2009.07185", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# CRiPT Model Medium (Critical Thinking Intermediarily Pretrained Transformer) Medium version of the trained model (`SYL01-2020-10-24-72K/gpt2-medium-train03-72K`) presented in the paper "Critical Thinking for Language Models" (Betz, Voigt and Richardson 2020). See also: * [blog entry](https://debatelab.github.io/journal/critical-thinking-language-models.html) * [GitHub repo](https://github.com/debatelab/aacorpus) * [paper](https://arxiv.org/pdf/2009.07185)
{"language": "en", "tags": ["gpt2"]}
DebateLabKIT/cript-medium
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "en", "arxiv:2009.07185", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# CRiPT Model (Critical Thinking Intermediarily Pretrained Transformer) Small version of the trained model (`SYL01-2020-10-24-72K/gpt2-small-train03-72K`) presented in the paper "Critical Thinking for Language Models" (Betz, Voigt and Richardson 2020). See also: * [blog entry](https://debatelab.github.io/journal/critical-thinking-language-models.html) * [GitHub repo](https://github.com/debatelab/aacorpus) * [paper](https://arxiv.org/pdf/2009.07185)
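As a quick illustration (not from the paper or repo), the checkpoint can be loaded like any GPT-2 model with the transformers text-generation pipeline; the repository id is taken from this card and the prompt is made up.

```python
from transformers import pipeline

# Load the small CRiPT checkpoint (repository id taken from this card)
generator = pipeline("text-generation", model="DebateLabKIT/cript")

# Made-up prompt; see the linked paper and GitHub repo for the argument
# schemes the model was actually trained on.
prompt = "If it rains, the street gets wet. It rains. Therefore,"
print(generator(prompt, max_length=40, do_sample=False)[0]["generated_text"])
```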
{"language": "en", "tags": ["gpt2"]}
DebateLabKIT/cript
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "en", "arxiv:2009.07185", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
This model has been trained to classify text from different domains. It is currently trained on a relatively small amount of data and identifies text from 3 domains: "sports", "healthcare" and "financial". Label_0 represents "financial", Label_1 represents "healthcare" and Label_2 represents "sports". I plan to train it on more domains and more data soon, so its accuracy will improve further.
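A minimal usage sketch (not part of the original card), assuming the standard transformers text-classification pipeline; the repository id is taken from this card and the example sentence is made up.

```python
from transformers import pipeline

# Load the domain classifier (repository id taken from this card)
classifier = pipeline("text-classification", model="debjyoti007/new_doc_classifier")

# LABEL_0 = financial, LABEL_1 = healthcare, LABEL_2 = sports (see the mapping above)
print(classifier("The central bank raised interest rates by 50 basis points."))
```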
{}
debjyoti007/new_doc_classifier
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dedok/F
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 38639804 - CO2 Emissions (in grams): 11.98841452241473 ## Validation Metrics - Loss: 0.421400249004364 - Accuracy: 0.86783988957902 - Macro F1: 0.8669477050676501 - Micro F1: 0.86783988957902 - Weighted F1: 0.86694770506765 - Macro Precision: 0.867606300132228 - Micro Precision: 0.86783988957902 - Weighted Precision: 0.8676063001322278 - Macro Recall: 0.86783988957902 - Micro Recall: 0.86783988957902 - Weighted Recall: 0.86783988957902 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/dee4hf/autonlp-shajBERT-38639804 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("dee4hf/autonlp-shajBERT-38639804", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("dee4hf/autonlp-shajBERT-38639804", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "unk", "tags": "autonlp", "datasets": ["dee4hf/autonlp-data-shajBERT"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 11.98841452241473}
dee4hf/autonlp-shajBERT-38639804
null
[ "transformers", "pytorch", "albert", "text-classification", "autonlp", "unk", "dataset:dee4hf/autonlp-data-shajBERT", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dee4hf/autonlp
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
Trying to create my first BERT model.
{}
dee4hf/deeBERT
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dee4hf/overHate
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
## Model description T5 model trained for Grammar Correction. This model corrects grammatical mistakes in input sentences. ### Dataset Description The T5-base model has been trained on the C4_200M dataset. ### Model in Action 🚀 ``` import torch from transformers import T5Tokenizer, T5ForConditionalGeneration model_name = 'deep-learning-analytics/GrammarCorrector' torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = T5Tokenizer.from_pretrained(model_name) model = T5ForConditionalGeneration.from_pretrained(model_name).to(torch_device) def correct_grammar(input_text, num_return_sequences, num_beams=2): batch = tokenizer([input_text], truncation=True, padding='max_length', max_length=64, return_tensors="pt").to(torch_device) translated = model.generate(**batch, max_length=64, num_beams=num_beams, num_return_sequences=num_return_sequences, temperature=1.5) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) return tgt_text ``` ### Example Usage ``` text = 'He are moving here.' print(correct_grammar(text, num_return_sequences=2)) ['He is moving here.', 'He is moving here now.'] ``` Another example ``` text = 'Cat drinked milk' print(correct_grammar(text, num_return_sequences=2)) ['Cat drank milk.', 'Cat drink milk.'] ``` Model developed by [Priya-Dwivedi](https://www.linkedin.com/in/priyanka-dwivedi-6864362)
{}
deep-learning-analytics/GrammarCorrector
null
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
deep-learning-analytics/automatic-title-generation
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
deep-learning-analytics/segformer_semantic_segmentation
null
[ "transformers", "pytorch", "segformer", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# Model name Closed Book Trivia-QA T5 base ## Model description This is a T5-base model trained on the No Context TriviaQA data set. The input to the model is a trivia-type question. The model is tuned to search for the answer in its memory and return it. The pretrained model used here was trained on the Common Crawl (C4) data set. The model was trained for 135 epochs using a batch size of 32 and a learning rate of 1e-3. max_input_length is set to 25 and max_output_length to 10. The model attained an EM score of 17 and a Subset Match score of 24.5. We have written a blog post that covers the training procedure. Please find it [here](https://medium.com/@priya.dwivedi/build-a-trivia-bot-using-t5-transformer-345ff83205b6). Test the model on trivia questions from the websites below: https://www.triviaquestionss.com/easy-trivia-questions/ https://laffgaff.com/easy-trivia-questions-and-answers/ ## Usage ``` import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("deep-learning-analytics/triviaqa-t5-base") model = AutoModelWithLMHead.from_pretrained("deep-learning-analytics/triviaqa-t5-base") device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = model.to(device) text = "Who directed the movie Jaws?" preprocess_text = text.strip().replace("\n","") tokenized_text = tokenizer.encode(preprocess_text, return_tensors="pt").to(device) outs = model.generate( tokenized_text, max_length=10, num_beams=2, early_stopping=True ) dec = [tokenizer.decode(ids) for ids in outs] print("Predicted Answer: ", dec) ```
{"language": "eng", "tags": ["triviaqa", "t5-base", "pytorch", "lm-head", "question-answering", "closed-book", "t5", "pipeline:question-answering"], "datasets": ["triviaqa"], "metrics": [{"EM": 17}, {"Subset match": 24.5}], "widget": [{"text": ["Mount Everest is found in which mountain range?", "None"]}]}
deep-learning-analytics/triviaqa-t5-base
null
[ "transformers", "pytorch", "t5", "text2text-generation", "triviaqa", "t5-base", "lm-head", "question-answering", "closed-book", "pipeline:question-answering", "eng", "dataset:triviaqa", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
summarization
transformers
# Model name Wikihow T5-small ## Model description This is a T5-small model trained on the Wikihow All data set. The model was trained for 3 epochs using a batch size of 16 and a learning rate of 3e-4. max_input_length is set to 512 and max_output_length to 150. The model attained a Rouge1 score of 31.2 and a RougeL score of 24.5. We have written a blog post that covers the training procedure. Please find it [here](https://medium.com/@priya.dwivedi/fine-tuning-a-t5-transformer-for-any-summarization-task-82334c64c81). ## Usage ``` import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("deep-learning-analytics/wikihow-t5-small") model = AutoModelWithLMHead.from_pretrained("deep-learning-analytics/wikihow-t5-small") device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = model.to(device) text = """ Lack of fluids can lead to dry mouth, which is a leading cause of bad breath. Water can also dilute any chemicals in your mouth or gut that are causing bad breath., Studies show that eating 6 ounces of yogurt a day reduces the level of odor-causing compounds in the mouth. In particular, look for yogurt containing the active bacteria Streptococcus thermophilus or Lactobacillus bulgaricus., The abrasive nature of fibrous fruits and vegetables helps to clean teeth, while the vitamins, antioxidants, and acids they contain improve dental health.Foods that can be particularly helpful include:Apples — Apples contain vitamin C, which is necessary for health gums, as well as malic acid, which helps to whiten teeth.Carrots — Carrots are rich in vitamin A, which strengthens tooth enamel.Celery — Chewing celery produces a lot of saliva, which helps to neutralize bacteria that cause bad breath.Pineapples — Pineapples contain bromelain, an enzyme that cleans the mouth., These teas have been shown to kill the bacteria that cause bad breath and plaque., An upset stomach can lead to burping, which contributes to bad breath. Don’t eat foods that upset your stomach, or if you do, use antacids. If you are lactose intolerant, try lactase tablets., They can all cause bad breath. If you do eat them, bring sugar-free gum or a toothbrush and toothpaste to freshen your mouth afterwards., Diets low in carbohydrates lead to ketosis — a state in which the body burns primarily fat instead of carbohydrates for energy. This may be good for your waistline, but it also produces chemicals called ketones, which contribute to bad breath.To stop the problem, you must change your diet. Or, you can combat the smell in one of these ways:Drink lots of water to dilute the ketones.Chew sugarless gum or suck on sugarless mints.Chew mint leaves. """ preprocess_text = text.strip().replace("\n","") tokenized_text = tokenizer.encode(preprocess_text, return_tensors="pt").to(device) summary_ids = model.generate( tokenized_text, max_length=150, num_beams=2, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True ) output = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print ("\n\nSummarized text: \n",output) ```
{"language": "eng", "tags": ["wikihow", "t5-small", "pytorch", "lm-head", "seq2seq", "t5", "pipeline:summarization", "summarization"], "datasets": ["Wikihow"], "metrics": [{"Rouge1": 31.2}, {"RougeL": 24.5}], "widget": [{"text": "Lack of fluids can lead to dry mouth, which is a leading cause of bad breath. Water can also dilute any chemicals in your mouth or gut that are causing bad breath., Studies show that eating 6 ounces of yogurt a day reduces the level of odor-causing compounds in the mouth. In particular, look for yogurt containing the active bacteria Streptococcus thermophilus or Lactobacillus bulgaricus., The abrasive nature of fibrous fruits and vegetables helps to clean teeth, while the vitamins, antioxidants, and acids they contain improve dental health.Foods that can be particularly helpful include:Apples \u2014 Apples contain vitamin C, which is necessary for health gums, as well as malic acid, which helps to whiten teeth.Carrots \u2014 Carrots are rich in vitamin A, which strengthens tooth enamel.Celery \u2014 Chewing celery produces a lot of saliva, which helps to neutralize bacteria that cause bad breath.Pineapples \u2014 Pineapples contain bromelain, an enzyme that cleans the mouth., These teas have been shown to kill the bacteria that cause bad breath and plaque., An upset stomach can lead to burping, which contributes to bad breath. Don\u2019t eat foods that upset your stomach, or if you do, use antacids. If you are lactose intolerant, try lactase tablets., They can all cause bad breath. If you do eat them, bring sugar-free gum or a toothbrush and toothpaste to freshen your mouth afterwards., Diets low in carbohydrates lead to ketosis \u2014 a state in which the body burns primarily fat instead of carbohydrates for energy. This may be good for your waistline, but it also produces chemicals called ketones, which contribute to bad breath.To stop the problem, you must change your diet. Or, you can combat the smell in one of these ways:Drink lots of water to dilute the ketones.Chew sugarless gum or suck on sugarless mints.Chew mint leaves."}, {"text": " Bring 1/2 cup water to the boil.Add the fresh or dried rosemary to the water.Remove from the heat. Set aside for 1/2 an hour to infuse. Added flavour can be released by pressing down on the rosemary leaves with a spoon. Add the pieces to the blender or food processor with the elderflower cordial. Blend or process to a pur\u00e9e.,, Add the lemon or lime juice and stir to combine., Add a cover and place in the freezer.After 2 hours, remove from the freezer and break up with a fork. This helps the ice crystals to form properly.Continue doing this every hour until the granita freezes properly. Scoop the granita into dessert bowls and serve. Garnish with a cucumber curl or a small sprig of rosemary."}]}
deep-learning-analytics/wikihow-t5-small
null
[ "transformers", "pytorch", "t5", "text2text-generation", "wikihow", "t5-small", "lm-head", "seq2seq", "pipeline:summarization", "summarization", "eng", "dataset:Wikihow", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
deepakgupta/bert-stsb
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-squad-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on the squad_v2 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.1 ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
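A minimal usage sketch (not part of the original card), assuming the standard transformers question-answering pipeline; the repository id is taken from this card and the question/context pair is made up.

```python
from transformers import pipeline

# Load the fine-tuned extractive QA model (repository id taken from this card)
qa = pipeline(
    "question-answering",
    model="deepakvk/distilbert-base-uncased-distilled-squad-finetuned-squad",
)

# Made-up SQuAD-style example
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased-distilled-squad on the squad_v2 dataset.",
)
print(result["answer"], result["score"])
```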
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2"], "model-index": [{"name": "distilbert-base-uncased-distilled-squad-finetuned-squad", "results": []}]}
deepakvk/distilbert-base-uncased-distilled-squad-finetuned-squad
null
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
deepakvk/roberta-base-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
deepakvk/tinyroberta-squad2-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# Welcome to Roberta-Marathi-MLM ## Model Description > This is a small language model for the [Marathi](https://en.wikipedia.org/wiki/Marathi) language, trained on 1M data samples taken from the [OSCAR page](https://oscar-public.huma-num.fr/shuffled/mr_dedup.txt.gz). ## Training params - **Dataset** - 1M data samples from the OSCAR page (https://oscar-corpus.com/) are used to train this model. Even though the data set is 2.7 GB, I picked only 1M samples from it because of resource constraints during training. If you are interested in collaborating and have computational resources to train on, you are most welcome to do so. - **Preprocessing** - ByteLevelBPETokenizer is used to tokenize the sentences at character level, and the vocabulary size is set to 52k as per the standard values given by 🤗. <!-- - **Hyperparameters** - __ByteLevelBPETokenizer__ : vocabulary size = 52_000 and min_frequency = 2 __Trainer__ : num_train_epochs=12 - trained for 12 epochs per_gpu_train_batch_size=64 - batch size for the datasamples is 64 save_steps=10_000 - save model for every 10k steps save_total_limit=2 - save limit is set for 2 --> **Intended uses & limitations** This is for anyone who wants to make use of Marathi language models for various tasks like language generation, translation and many more use cases. **Whatever else is helpful!** If you are interested in collaboration, feel free to reach me: [Deepam](mailto:[email protected])
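A minimal fill-mask sketch (not part of the original card), assuming the standard transformers pipeline; the repository id is taken from this card and the Marathi example sentence is illustrative only.

```python
from transformers import pipeline

# Load the Marathi masked-language model (repository id taken from this card)
fill_mask = pipeline("fill-mask", model="deepampatel/roberta-mlm-marathi")

# Illustrative Marathi sentence with the tokenizer's mask token inserted
masked = f"मी शाळेत {fill_mask.tokenizer.mask_token} आहे."
for prediction in fill_mask(masked):
    print(prediction["token_str"], round(prediction["score"], 3))
```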
{"language": "mr"}
deepampatel/roberta-mlm-marathi
null
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "mr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
deepanshudey/dbms
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 156.8789 - Wer: 1.3456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["ab"], "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "output", "results": []}]}
deepdml/output
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4798 - Wer: 0.3474 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.5229 | 4.0 | 500 | 1.6557 | 1.0422 | | 0.6618 | 8.0 | 1000 | 0.4420 | 0.4469 | | 0.2211 | 12.0 | 1500 | 0.4705 | 0.4002 | | 0.1281 | 16.0 | 2000 | 0.4347 | 0.3688 | | 0.0868 | 20.0 | 2500 | 0.4653 | 0.3590 | | 0.062 | 24.0 | 3000 | 0.4747 | 0.3519 | | 0.0472 | 28.0 | 3500 | 0.4798 | 0.3474 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.0+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
deepdml/wav2vec2-base-timit-demo-colab
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-basque This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4276 - Wer: 0.5962 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.9902 | 1.29 | 400 | 2.1257 | 1.0 | | 0.9625 | 2.59 | 800 | 0.5695 | 0.7452 | | 0.4605 | 3.88 | 1200 | 0.4276 | 0.5962 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
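A minimal transcription sketch (not part of the original card), assuming the standard transformers automatic-speech-recognition pipeline; the repository id is taken from this card and the audio path is a placeholder for a local 16 kHz recording.

```python
from transformers import pipeline

# Load the fine-tuned Basque wav2vec2 CTC model (repository id taken from this card)
asr = pipeline(
    "automatic-speech-recognition",
    model="deepdml/wav2vec2-large-xls-r-300m-basque",
)

# Transcribe a local audio file (placeholder path); ffmpeg is needed for decoding
print(asr("path/to/basque_sample.wav")["text"])
```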
{"language": "eu", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "basque", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-basque", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "eu"}, "metrics": [{"type": "wer", "value": 51.89, "name": "Test WER"}, {"type": "cer", "value": 10.01, "name": "Test CER"}]}]}]}
deepdml/wav2vec2-large-xls-r-300m-basque
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "basque", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "eu", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis The model has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script. The Tensorflow and Pytorch models differ slightly (padding ...), however validating both models gives a difference of less than 0.03 mAP. A second model has been added where the Tensorpack model has been used as the initial checkpoint and training has been resumed for 20K iterations. Performance of this model is now superior to the Tensorpack model. Please check: [Xu Zhong et al. - PubLayNet: largest dataset ever for document layout analysis](https://arxiv.org/abs/1908.07836). This model is different from the model used in the paper. The code has been adapted so that it can be used in a **deep**doctection pipeline. ## How this model can be used This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions in this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial. ## This is an inference model only To reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine-tune this model, please use the Tensorflow model and its training script. More information can be found in [this model card](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_publaynet).
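A rough inference sketch, not taken from this card: it assumes a recent deepdoctection release in which `get_dd_analyzer` wires a Publaynet layout detector like this one into the default pipeline together with table recognition and OCR. Module and function names differ between the early `deep_doctection` package referenced elsewhere in these cards and later releases, so treat this as an illustration rather than the project's documented API; the PDF path is a placeholder.

```python
import deepdoctection as dd  # assumption: a recent deepdoctection release

# Build the default analyzer (layout detection + table recognition + OCR)
analyzer = dd.get_dd_analyzer()

# Analyze a document (placeholder path); the result is a dataflow of page objects
df = analyzer.analyze(path="path/to/document.pdf")
df.reset_state()

for page in df:
    # Text of the detected layout segments in reading order
    print(page.text)
```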
{"license": "apache-2.0", "tags": ["Pytorch"], "datasets": ["Publaynet"]}
deepdoctection/d2_casc_rcnn_X_32xd4_50_FPN_GN_2FC_publaynet_inference_only
null
[ "Pytorch", "dataset:Publaynet", "arxiv:1908.07836", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. The model has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script. The Tensorflow and Pytorch models differ slightly (padding ...), however validating both models gives a difference of less than 0.03 mAP. A second model has been added where the Tensorpack model has been used as the initial checkpoint and training has been resumed for 50K iterations. Performance of this model is now superior to the Tensorpack model. Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683). The model has been trained to detect cells in tables. Note that the dataset contains tables only, so a table detection task has to be performed before detecting cells. The code has been adapted so that it can be used in a **deep**doctection pipeline. ## How this model can be used This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions in this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial. ## This is an inference model only To reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine-tune this model, please use the Tensorflow model and its training script. More information can be found in [this model card](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c).
{"license": "apache-2.0", "tags": ["Pytorch"], "datasets": ["Pubtabnet"]}
deepdoctection/d2_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c_inference_only
null
[ "Pytorch", "dataset:Pubtabnet", "arxiv:1911.10683", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. The model has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script. The Tensorflow and Pytorch models differ slightly (padding ...), however validating both models gives a difference of less than 0.03 mAP. A second model has been added where the Tensorpack model has been used as the initial checkpoint and training has been resumed for 20K iterations. Performance of this model is now superior to the Tensorpack model. Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683). The model has been trained to detect rows and columns in tables. As row and column bounding boxes are not a priori part of the annotations, they are calculated using the bounding boxes of the cells and the intrinsic structure of the enclosed HTML. The code has been adapted so that it can be used in a **deep**doctection pipeline. ## How this model can be used This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions in this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial. ## This is an inference model only To reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine-tune this model, please use the Tensorflow model and its training script. More information can be found in [this model card](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc).
{"license": "apache-2.0", "tags": ["Pytorch"], "datasets": ["Pubtabnet"]}
deepdoctection/d2_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc_inference_only
null
[ "Pytorch", "dataset:Pubtabnet", "arxiv:1911.10683", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis The model and its training code has been mainly taken from: [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) . Please check: [Xu Zhong et. all. - PubLayNet: largest dataset ever for document layout analysis](https://arxiv.org/abs/1908.07836). This model is different from the model used the paper. The code has been adapted so that it can be used in a **deep**doctection pipeline. ## How this model can be used This model can be used with the **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial. ## How this model was trained. To recreate the model run on the **deep**doctection framework, run: ```python >>> import os >>> from deep_doctection.datasets import DatasetRegistry >>> from deep_doctection.eval import MetricRegistry >>> from deep_doctection.utils import get_configs_dir_path >>> from deep_doctection.train import train_faster_rcnn publaynet = DatasetRegistry.get_dataset("publaynet") path_config_yaml=os.path.join(get_configs_dir_path(),"tp/layout/conf_frcnn_layout.yaml") path_weights = "" dataset_train = publaynet config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.EVAL_PERIOD=200","TRAIN.STARTING_EPOCH=1", "PREPROC.TRAIN_SHORT_EDGE_SIZE=[800,1200]","TRAIN.CHECKPOINT_PERIOD=50", "BACKBONE.FREEZE_AT=0"] build_train_config=["max_datapoints=335703"] dataset_val = publaynet build_val_config = ["max_datapoints=2000"] coco_metric = MetricRegistry.get_metric("coco") train_faster_rcnn(path_config_yaml=path_config_yaml, dataset_train=dataset_train, path_weights=path_weights, config_overwrite=config_overwrite, log_dir="/path/to/dir", build_train_config=build_train_config, dataset_val=dataset_val, build_val_config=build_val_config, metric=coco_metric, pipeline_component_name="ImageLayoutService" ) ``` ## How to fine-tune this model To fine tune this model, please check this [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
{"license": "apache-2.0", "tags": ["Tensorflow"], "datasets": ["Publaynet"]}
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_publaynet
null
[ "Tensorflow", "dataset:Publaynet", "arxiv:1908.07836", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis The model and its training code has been mainly taken from: [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) . Please check: [Xu Zhong et. all. - PubLayNet: largest dataset ever for document layout analysis](https://arxiv.org/abs/1908.07836). This model is different from the model used the paper. The code has been adapted so that it can be used in a **deep**doctection pipeline. ## How this model can be used This model can be used with the **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial. ## This is an inference model only To reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please check [this model](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_publaynet). ## How this model was trained. To recreate the model run on the **deep**doctection framework, run: ```python >>> import os >>> from deep_doctection.datasets import DatasetRegistry >>> from deep_doctection.eval import MetricRegistry >>> from deep_doctection.utils import get_configs_dir_path >>> from deep_doctection.train import train_faster_rcnn publaynet = DatasetRegistry.get_dataset("publaynet") path_config_yaml=os.path.join(get_configs_dir_path(),"tp/layout/conf_frcnn_layout.yaml") path_weights = "" dataset_train = publaynet config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.EVAL_PERIOD=200","TRAIN.STARTING_EPOCH=1", "PREPROC.TRAIN_SHORT_EDGE_SIZE=[800,1200]","TRAIN.CHECKPOINT_PERIOD=50", "BACKBONE.FREEZE_AT=0"] build_train_config=["max_datapoints=335703"] dataset_val = publaynet build_val_config = ["max_datapoints=2000"] coco_metric = MetricRegistry.get_metric("coco") train_faster_rcnn(path_config_yaml=path_config_yaml, dataset_train=dataset_train, path_weights=path_weights, config_overwrite=config_overwrite, log_dir="/path/to/dir", build_train_config=build_train_config, dataset_val=dataset_val, build_val_config=build_val_config, metric=coco_metric, pipeline_component_name="ImageLayoutService" ) ```
{"license": "apache-2.0", "tags": ["Tensorflow"], "datasets": ["Publaynet"]}
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_publaynet_inference_only
null
[ "Tensorflow", "dataset:Publaynet", "arxiv:1908.07836", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. The model and its training code has been mainly taken from: [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) . Regarding the dataset, please check: [Xu Zhong et. all. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683). The model has been trained on detecting cells from tables. Note, that the datasets contains tables only. Therefore, it is required to perform a table detection task before detecting cells. The code has been adapted so that it can be used in a **deep**doctection pipeline. ## How this model can be used This model can be used with the **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial. ## How this model was trained. To recreate the model run on the **deep**doctection framework, run: ```python >>> import os >>> from deep_doctection.datasets import DatasetRegistry >>> from deep_doctection.eval import MetricRegistry >>> from deep_doctection.utils import get_configs_dir_path >>> from deep_doctection.train import train_faster_rcnn pubtabnet = DatasetRegistry.get_dataset("pubtabnet") pubtabnet.dataflow.categories.filter_categories(categories="CELL") path_config_yaml=os.path.join(get_configs_dir_path(),"tp/cell/conf_frcnn_cell.yaml") path_weights = "" dataset_train = pubtabnet config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.STARTING_EPOCH=1", "TRAIN.CHECKPOINT_PERIOD=50","BACKBONE.FREEZE_AT=0", "PREPROC.TRAIN_SHORT_EDGE_SIZE=[200,600]"] build_train_config=["max_datapoints=500000"] dataset_val = pubtabnet build_val_config = ["max_datapoints=4000"] coco_metric = MetricRegistry.get_metric("coco") coco_metric.set_params(max_detections=[50,200,600], area_range=[[0,1000000],[0,200],[200,800],[800,1000000]]) train_faster_rcnn(path_config_yaml=path_config_yaml, dataset_train=dataset_train, path_weights=path_weights, config_overwrite=config_overwrite, log_dir="/path/to/dir", build_train_config=build_train_config, dataset_val=dataset_val, build_val_config=build_val_config, metric=coco_metric, pipeline_component_name="ImageLayoutService" ) ``` ## How to fine-tune this model To fine tune this model, please check this [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
{"license": "apache-2.0", "tags": ["Tensorflow"], "datasets": ["Pubtabnet"]}
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c
null
[ "Tensorflow", "dataset:Pubtabnet", "arxiv:1911.10683", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. The model and its training code have been mainly taken from: [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN). Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683). The model has been trained on detecting cells from tables. Note that the dataset contains tables only. Therefore, it is required to perform a table detection task before detecting cells. The code has been adapted so that it can be used in a **deep**doctection pipeline. ## How this model can be used This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions in the [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial. ## This is an inference model only To reduce the size of the checkpoint we removed all variables that are not necessary for inference. It therefore cannot be used for fine-tuning. To fine-tune this model, please check this [model](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c). ## How this model was trained To recreate the model within the **deep**doctection framework, run: ```python >>> import os >>> from deep_doctection.datasets import DatasetRegistry >>> from deep_doctection.eval import MetricRegistry >>> from deep_doctection.utils import get_configs_dir_path >>> from deep_doctection.train import train_faster_rcnn pubtabnet = DatasetRegistry.get_dataset("pubtabnet") pubtabnet.dataflow.categories.filter_categories(categories="CELL") path_config_yaml=os.path.join(get_configs_dir_path(),"tp/cell/conf_frcnn_cell.yaml") path_weights = "" dataset_train = pubtabnet config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.STARTING_EPOCH=1", "TRAIN.CHECKPOINT_PERIOD=50","BACKBONE.FREEZE_AT=0", "PREPROC.TRAIN_SHORT_EDGE_SIZE=[200,600]"] build_train_config=["max_datapoints=500000"] dataset_val = pubtabnet build_val_config = ["max_datapoints=4000"] coco_metric = MetricRegistry.get_metric("coco") coco_metric.set_params(max_detections=[50,200,600], area_range=[[0,1000000],[0,200],[200,800],[800,1000000]]) train_faster_rcnn(path_config_yaml=path_config_yaml, dataset_train=dataset_train, path_weights=path_weights, config_overwrite=config_overwrite, log_dir="/path/to/dir", build_train_config=build_train_config, dataset_val=dataset_val, build_val_config=build_val_config, metric=coco_metric, pipeline_component_name="ImageLayoutService" ) ``` ## How to fine-tune this model To fine-tune this model, please check the [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
{"license": "apache-2.0", "tags": ["Tensorflow"], "datasets": ["Pubtabnet"]}
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c_inference_only
null
[ "Tensorflow", "dataset:Pubtabnet", "arxiv:1911.10683", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. The model and its training code have been mainly taken from: [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN). Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683). The model has been trained on detecting rows and columns for tables. As row and column bounding boxes are not a priori part of the annotations, they are calculated using the bounding boxes of the cells and the intrinsic structure of the enclosed HTML. The code has been adapted so that it can be used in a **deep**doctection pipeline. ## How this model can be used This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions in the [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial. ## How this model was trained To recreate the model within the **deep**doctection framework, run: ```python >>> import os >>> from deep_doctection.datasets import DatasetRegistry >>> from deep_doctection.eval import MetricRegistry >>> from deep_doctection.utils import get_configs_dir_path >>> from deep_doctection.train import train_faster_rcnn pubtabnet = DatasetRegistry.get_dataset("pubtabnet") pubtabnet.dataflow.categories.set_cat_to_sub_cat({"ITEM":"row_col"}) pubtabnet.dataflow.categories.filter_categories(categories=["ROW","COLUMN"]) path_config_yaml=os.path.join(get_configs_dir_path(),"tp/rows/conf_frcnn_rows.yaml") path_weights = "" dataset_train = pubtabnet config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.STARTING_EPOCH=1", "TRAIN.CHECKPOINT_PERIOD=50"] build_train_config=["max_datapoints=500000","rows_and_cols=True"] dataset_val = pubtabnet build_val_config = ["max_datapoints=2000","rows_and_cols=True"] coco_metric = MetricRegistry.get_metric("coco") coco_metric.set_params(max_detections=[50,200,600], area_range=[[0,1000000],[0,200],[200,800],[800,1000000]]) train_faster_rcnn(path_config_yaml=path_config_yaml, dataset_train=dataset_train, path_weights=path_weights, config_overwrite=config_overwrite, log_dir="/path/to/dir", build_train_config=build_train_config, dataset_val=dataset_val, build_val_config=build_val_config, metric=coco_metric, pipeline_component_name="ImageLayoutService" ) ``` ## How to fine-tune this model To fine-tune this model, please check the [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
{"license": "apache-2.0", "tags": ["Tensorflow"], "datasets": ["Pubtabnet"]}
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc
null
[ "Tensorflow", "dataset:Pubtabnet", "arxiv:1911.10683", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. The model and its training code have been mainly taken from: [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN). Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683). The model has been trained on detecting rows and columns for tables. As row and column bounding boxes are not a priori part of the annotations, they are calculated using the bounding boxes of the cells and the intrinsic structure of the enclosed HTML. The code has been adapted so that it can be used in a **deep**doctection pipeline. ## How this model can be used This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions in the [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial. ## This is an inference model only To reduce the size of the checkpoint we removed all variables that are not necessary for inference. It therefore cannot be used for fine-tuning. To fine-tune this model, please check this [model](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc). ## How this model was trained To recreate the model within the **deep**doctection framework, run: ```python >>> import os >>> from deep_doctection.datasets import DatasetRegistry >>> from deep_doctection.eval import MetricRegistry >>> from deep_doctection.utils import get_configs_dir_path >>> from deep_doctection.train import train_faster_rcnn pubtabnet = DatasetRegistry.get_dataset("pubtabnet") pubtabnet.dataflow.categories.set_cat_to_sub_cat({"ITEM":"row_col"}) pubtabnet.dataflow.categories.filter_categories(categories=["ROW","COLUMN"]) path_config_yaml=os.path.join(get_configs_dir_path(),"tp/rows/conf_frcnn_rows.yaml") path_weights = "" dataset_train = pubtabnet config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.STARTING_EPOCH=1", "TRAIN.CHECKPOINT_PERIOD=50"] build_train_config=["max_datapoints=500000","rows_and_cols=True"] dataset_val = pubtabnet build_val_config = ["max_datapoints=2000","rows_and_cols=True"] coco_metric = MetricRegistry.get_metric("coco") coco_metric.set_params(max_detections=[50,200,600], area_range=[[0,1000000],[0,200],[200,800],[800,1000000]]) train_faster_rcnn(path_config_yaml=path_config_yaml, dataset_train=dataset_train, path_weights=path_weights, config_overwrite=config_overwrite, log_dir="/path/to/dir", build_train_config=build_train_config, dataset_val=dataset_val, build_val_config=build_val_config, metric=coco_metric, pipeline_component_name="ImageLayoutService" ) ```
{"license": "apache-2.0", "tags": ["Tensorflow"], "datasets": ["Pubtabnet"]}
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc_inference_only
null
[ "Tensorflow", "dataset:Pubtabnet", "arxiv:1911.10683", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# Poster2Plot An image captioning model to generate movie/t.v show plot from poster. It generates decent plots but is no way perfect. We are still working on improving the model. ## Live demo on Hugging Face Spaces: https://huggingface.co/spaces/deepklarity/poster2plot # Model Details The base model uses a Vision Transformer (ViT) model as an image encoder and GPT-2 as a decoder. We used the following models: * Encoder: [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) * Decoder: [gpt2](https://huggingface.co/gpt2) # Datasets Publicly available IMDb datasets were used to train the model. # How to use ## In PyTorch ```python import torch import re import requests from PIL import Image from transformers import AutoTokenizer, AutoFeatureExtractor, VisionEncoderDecoderModel # Pattern to ignore all the text after 2 or more full stops regex_pattern = "[.]{2,}" def post_process(text): try: text = text.strip() text = re.split(regex_pattern, text)[0] except Exception as e: print(e) pass return text def predict(image, max_length=64, num_beams=4): pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) with torch.no_grad(): output_ids = model.generate( pixel_values, max_length=max_length, num_beams=num_beams, return_dict_in_generate=True, ).sequences preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) pred = post_process(preds[0]) return pred model_name_or_path = "deepklarity/poster2plot" device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # Load model. model = VisionEncoderDecoderModel.from_pretrained(model_name_or_path) model.to(device) print("Loaded model") feature_extractor = AutoFeatureExtractor.from_pretrained(model.encoder.name_or_path) print("Loaded feature_extractor") tokenizer = AutoTokenizer.from_pretrained(model.decoder.name_or_path, use_fast=True) if model.decoder.name_or_path == "gpt2": tokenizer.pad_token = tokenizer.eos_token print("Loaded tokenizer") url = "https://upload.wikimedia.org/wikipedia/en/2/26/Moana_Teaser_Poster.jpg" with Image.open(requests.get(url, stream=True).raw) as image: pred = predict(image) print(pred) ```
{"language": "en", "tags": ["image-classification", "image-captioning"]}
deepklarity/poster2plot
null
[ "transformers", "pytorch", "vision-encoder-decoder", "image-classification", "image-captioning", "en", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
Roberta-base training attempt on hindi datasets.
{}
deepklarity/roberta-base-hindi
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# Perceiver IO for language Perceiver IO model pre-trained on the Masked Language Modeling (MLM) task proposed in [BERT](https://arxiv.org/abs/1810.04805) using a large text corpus obtained by combining [English Wikipedia](https://huggingface.co/datasets/wikipedia) and [C4](https://huggingface.co/datasets/c4). It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver). Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For masked language modeling, the output is a tensor containing the prediction scores of the language modeling head, of shape (batch_size, seq_length, vocab_size). <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/> <small> Perceiver IO architecture.</small> As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors train the model directly on raw UTF-8 bytes, rather than on subwords as is done in models like BERT, RoBERTa and GPT-2. This has many benefits: one doesn't need to train a tokenizer before training the model, one doesn't need to maintain a (fixed) vocabulary file, and this also doesn't hurt model performance as shown by [Bostrom et al., 2020](https://arxiv.org/abs/2004.03720). By pre-training the model, it learns an inner representation of language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the Perceiver model as inputs. ## Intended uses & limitations You can use the raw model for masked language modeling, but the model is intended to be fine-tuned on a labeled dataset. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model in PyTorch: ```python from transformers import PerceiverTokenizer, PerceiverForMaskedLM tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver") model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver") text = "This is an incomplete sentence where some words are missing." # prepare input encoding = tokenizer(text, padding="max_length", return_tensors="pt") # mask " missing.". Note that the model performs much better if the masked span starts with a space. 
encoding.input_ids[0, 52:61] = tokenizer.mask_token_id inputs, input_mask = encoding.input_ids, encoding.attention_mask # forward pass outputs = model(inputs=inputs, attention_mask=input_mask) logits = outputs.logits masked_tokens_predictions = logits[0, 51:61].argmax(dim=-1) print(tokenizer.decode(masked_tokens_predictions)) >>> should print " missing." ``` ## Training data This model was pretrained on a combination of [English Wikipedia](https://huggingface.co/datasets/wikipedia) and [C4](https://huggingface.co/datasets/c4). 70% of the training tokens were sampled from the C4 dataset and the remaining 30% from Wikipedia. The authors concatenate 10 documents before splitting into crops to reduce wasteful computation on padding tokens. ## Training procedure ### Preprocessing Text preprocessing is trivial: it only involves encoding text into UTF-8 bytes, and padding them up to the same length (2048). ### Pretraining Hyperparameter details can be found in table 9 of the [paper](https://arxiv.org/abs/2107.14795). ## Evaluation results This model is able to achieve an average score of 81.8 on GLUE. For more details, we refer to table 3 of the original paper. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2107-14795, author = {Andrew Jaegle and Sebastian Borgeaud and Jean{-}Baptiste Alayrac and Carl Doersch and Catalin Ionescu and David Ding and Skanda Koppula and Daniel Zoran and Andrew Brock and Evan Shelhamer and Olivier J. H{\'{e}}naff and Matthew M. Botvinick and Andrew Zisserman and Oriol Vinyals and Jo{\~{a}}o Carreira}, title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&} Outputs}, journal = {CoRR}, volume = {abs/2107.14795}, year = {2021}, url = {https://arxiv.org/abs/2107.14795}, eprinttype = {arXiv}, eprint = {2107.14795}, timestamp = {Tue, 03 Aug 2021 14:53:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"language": ["en"], "license": "apache-2.0", "datasets": ["wikipedia", "c4"], "inference": false}
deepmind/language-perceiver
null
[ "transformers", "pytorch", "perceiver", "fill-mask", "en", "dataset:wikipedia", "dataset:c4", "arxiv:1810.04805", "arxiv:2107.14795", "arxiv:2004.03720", "license:apache-2.0", "autotrain_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# Perceiver IO for multimodal autoencoding Perceiver IO model trained on [Kinetics-700-2020](https://arxiv.org/abs/2010.10864) for auto-encoding videos that consist of images, audio and a class label. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver). The goal of multimodal autoencoding is to learn a model that can accurately reconstruct multimodal inputs in the presence of a bottleneck induced by an architecture. Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For multimodal autoencoding, the output contains the reconstructions of the 3 modalities: images, audio and the class label. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/> <small> Perceiver IO architecture.</small> As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model by padding the inputs (images, audio, class label) with modality-specific embeddings and serialize all of them into a 2D input array (i.e. concatenate along the time dimension). Decoding the final hidden states of the latents is done by using queries containing Fourier-based position embeddings (for video and audio) and modality embeddings. ## Intended uses & limitations You can use the raw model for multimodal autoencoding. Note that by masking the class label during evaluation, the auto-encoding model becomes a video classifier. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other versions on a task that may interest you. ### How to use We refer to the [tutorial notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Perceiver/Perceiver_for_Multimodal_Autoencoding.ipynb) regarding using the Perceiver for multimodal autoencoding. ## Training data This model was trained on [Kinetics-700-2020](https://arxiv.org/abs/2010.10864), a dataset consisting of videos that belong to one of 700 classes. ## Training procedure ### Preprocessing The authors train on 16 frames at 224x224 resolution, preprocessed into 50k 4x4 patches as well as 30k raw audio samples, patched into a total of 1920 16-dimensional vectors and one 700-dimensional one-hot representation of the class label. ### Pretraining Hyperparameter details can be found in Appendix F of the [paper](https://arxiv.org/abs/2107.14795). ## Evaluation results For evaluation results, we refer to table 5 of the [paper](https://arxiv.org/abs/2107.14795).
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2107-14795, author = {Andrew Jaegle and Sebastian Borgeaud and Jean{-}Baptiste Alayrac and Carl Doersch and Catalin Ionescu and David Ding and Skanda Koppula and Daniel Zoran and Andrew Brock and Evan Shelhamer and Olivier J. H{\'{e}}naff and Matthew M. Botvinick and Andrew Zisserman and Oriol Vinyals and Jo{\~{a}}o Carreira}, title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&} Outputs}, journal = {CoRR}, volume = {abs/2107.14795}, year = {2021}, url = {https://arxiv.org/abs/2107.14795}, eprinttype = {arXiv}, eprint = {2107.14795}, timestamp = {Tue, 03 Aug 2021 14:53:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
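Since the card points to an external notebook rather than inline code, here is a minimal loading sketch; `PerceiverForMultimodalAutoencoding` is the class name used by recent transformers releases, and the input construction (padded modality dict, output subsampling) is deliberately left to the tutorial notebook, so treat anything beyond `from_pretrained` as an assumption.

```python
from transformers import PerceiverForMultimodalAutoencoding

# Load the pre-trained multimodal checkpoint.
model = PerceiverForMultimodalAutoencoding.from_pretrained("deepmind/multimodal-perceiver")

# The forward pass expects a dict of padded modality tensors
# ({"image": ..., "audio": ..., "label": ...}); building these, and subsampling
# the outputs so they fit in memory, is shown step by step in the tutorial notebook.
print(model.config.num_latents, model.config.d_latents)
```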
{"license": "apache-2.0", "datasets": ["kinetics-700-2020"]}
deepmind/multimodal-perceiver
null
[ "transformers", "pytorch", "perceiver", "dataset:kinetics-700-2020", "arxiv:2010.10864", "arxiv:2107.14795", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# Perceiver IO for optical flow Perceiver IO model trained on [AutoFlow](https://autoflow-google.github.io/). It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver). Optical flow is a decades-old open problem in computer vision. Given two images of the same scene (e.g. two consecutive frames of a video), the task is to estimate the 2D displacement for each pixel in the first image. This has many broader applications, such as navigation and visual odometry in robots, estimation of 3D geometry, and even to aid transfer of more complex, learned inference such as 3D human pose estimation from synthetic to real images. Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For optical flow, the output is a tensor containing the predicted flow of shape (batch_size, height, width, 2). <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/> <small> Perceiver IO architecture.</small> As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model on raw pixel values, by concatenating a pair of images and extracting a 3x3 patch around each pixel. The model obtains state-of-the-art results on important optical flow benchmarks, including [Sintel](http://sintel.is.tue.mpg.de/) and [KITTI](http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=flow). ## Intended uses & limitations You can use the raw model for predicting optical flow between a pair of images. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other versions on a task that may interest you. ### How to use We refer to the [tutorial notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Perceiver/Perceiver_for_Optical_Flow.ipynb) regarding using the Perceiver for optical flow. ## Training data This model was trained on [AutoFlow](https://autoflow-google.github.io/), a synthetic dataset consisting of 400,000 annotated image pairs. ## Training procedure ### Preprocessing Frames are resized to a resolution of 368x496. The authors concatenate the frames along the channel dimension and extract a 3x3 patch around each pixel (leading to 3x3x3x2 = 54 values for each pixel). ### Pretraining Hyperparameter details can be found in Appendix E of the [paper](https://arxiv.org/abs/2107.14795). ## Evaluation results The model achieves a average end-point error (EPE) of 1.81 on Sintel.clean, 2.42 on Sintel.final and 4.98 on KITTI. 
For evaluation results, we refer to table 4 of the [paper](https://arxiv.org/abs/2107.14795). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2107-14795, author = {Andrew Jaegle and Sebastian Borgeaud and Jean{-}Baptiste Alayrac and Carl Doersch and Catalin Ionescu and David Ding and Skanda Koppula and Daniel Zoran and Andrew Brock and Evan Shelhamer and Olivier J. H{\'{e}}naff and Matthew M. Botvinick and Andrew Zisserman and Oriol Vinyals and Jo{\~{a}}o Carreira}, title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&} Outputs}, journal = {CoRR}, volume = {abs/2107.14795}, year = {2021}, url = {https://arxiv.org/abs/2107.14795}, eprinttype = {arXiv}, eprint = {2107.14795}, timestamp = {Tue, 03 Aug 2021 14:53:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
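As the card defers usage to the tutorial notebook, here is a minimal shape-level sketch using the `PerceiverForOpticalFlow` class from transformers; the random tensor only illustrates the input layout described above (2 frames, 3x3 patch x 3 channels = 27 values per pixel, 368x496 training resolution) and is not a meaningful input.

```python
import torch
from transformers import PerceiverForOpticalFlow

model = PerceiverForOpticalFlow.from_pretrained("deepmind/optical-flow-perceiver")

# Dummy input standing in for a preprocessed frame pair:
# (batch, num_frames, 3x3 patch x 3 channels, height, width).
patches = torch.randn(1, 2, 27, 368, 496)
with torch.no_grad():
    outputs = model(inputs=patches)

# Per-pixel 2D flow predictions; expected shape (1, 368, 496, 2).
print(outputs.logits.shape)
```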
{"license": "apache-2.0", "datasets": ["autoflow"]}
deepmind/optical-flow-perceiver
null
[ "transformers", "pytorch", "perceiver", "dataset:autoflow", "arxiv:2107.14795", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# Perceiver IO for vision (convolutional processing) Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver). Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels). <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/> <small> Perceiver IO architecture.</small> As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model employs a simple 2D conv+maxpool preprocessing network on the pixel values, before using the inputs for cross-attention with the latents. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other fine-tuned versions on a task that may interest you. ### How to use Here is how to use this model in PyTorch: ```python from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationConvProcessing import requests from PIL import Image feature_extractor = PerceiverFeatureExtractor.from_pretrained("deepmind/vision-perceiver-conv") model = PerceiverForImageClassificationConvProcessing.from_pretrained("deepmind/vision-perceiver-conv") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) # prepare input inputs = feature_extractor(image, return_tensors="pt").pixel_values # forward pass outputs = model(inputs) logits = outputs.logits print("Predicted class:", model.config.id2label[logits.argmax(-1).item()]) >>> should print Predicted class: tabby, tabby cat ``` ## Training data This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million images and 1k classes. ## Training procedure ### Preprocessing Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. 
Note that data augmentation was used during pre-training, as explained in Appendix H of the [paper](https://arxiv.org/abs/2107.14795). ### Pretraining Hyperparameter details can be found in Appendix H of the [paper](https://arxiv.org/abs/2107.14795). ## Evaluation results This model is able to achieve a top-1 accuracy of 82.1 on ImageNet-1k. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2107-14795, author = {Andrew Jaegle and Sebastian Borgeaud and Jean{-}Baptiste Alayrac and Carl Doersch and Catalin Ionescu and David Ding and Skanda Koppula and Daniel Zoran and Andrew Brock and Evan Shelhamer and Olivier J. H{\'{e}}naff and Matthew M. Botvinick and Andrew Zisserman and Oriol Vinyals and Jo{\~{a}}o Carreira}, title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&} Outputs}, journal = {CoRR}, volume = {abs/2107.14795}, year = {2021}, url = {https://arxiv.org/abs/2107.14795}, eprinttype = {arXiv}, eprint = {2107.14795}, timestamp = {Tue, 03 Aug 2021 14:53:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "datasets": ["imagenet"]}
deepmind/vision-perceiver-conv
null
[ "transformers", "pytorch", "perceiver", "image-classification", "dataset:imagenet", "arxiv:2107.14795", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# Perceiver IO for vision (fixed Fourier position embeddings) Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver). Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels). <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/> <small> Perceiver IO architecture.</small> As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds fixed Fourier 2D position embeddings to the pixel values. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other fine-tuned versions on a task that may interest you. ### How to use Here is how to use this model in PyTorch: ```python from transformers import PerceiverImageProcessor, PerceiverForImageClassificationFourier import requests from PIL import Image processor = PerceiverImageProcessor.from_pretrained("deepmind/vision-perceiver-fourier") model = PerceiverForImageClassificationFourier.from_pretrained("deepmind/vision-perceiver-fourier") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) # prepare input inputs = processor(image, return_tensors="pt").pixel_values # forward pass outputs = model(inputs) logits = outputs.logits print("Predicted class:", model.config.id2label[logits.argmax(-1).item()]) >>> should print Predicted class: tabby, tabby cat ``` ## Training data This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million images and 1k classes. ## Training procedure ### Preprocessing Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the [paper](https://arxiv.org/abs/2107.14795). 
### Pretraining Hyperparameter details can be found in Appendix H of the [paper](https://arxiv.org/abs/2107.14795). ## Evaluation results This model is able to achieve a top-1 accuracy of 79.0 on ImageNet-1k, and 84.5 when pre-trained on a large-scale dataset (JFT-300M, an internal dataset of Google). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2107-14795, author = {Andrew Jaegle and Sebastian Borgeaud and Jean{-}Baptiste Alayrac and Carl Doersch and Catalin Ionescu and David Ding and Skanda Koppula and Daniel Zoran and Andrew Brock and Evan Shelhamer and Olivier J. H{\'{e}}naff and Matthew M. Botvinick and Andrew Zisserman and Oriol Vinyals and Jo{\~{a}}o Carreira}, title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&} Outputs}, journal = {CoRR}, volume = {abs/2107.14795}, year = {2021}, url = {https://arxiv.org/abs/2107.14795}, eprinttype = {arXiv}, eprint = {2107.14795}, timestamp = {Tue, 03 Aug 2021 14:53:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "datasets": ["imagenet"]}
deepmind/vision-perceiver-fourier
null
[ "transformers", "pytorch", "perceiver", "image-classification", "dataset:imagenet", "arxiv:2107.14795", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# Perceiver IO for vision (learned position embeddings) Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver). Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels). <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/> <small> Perceiver IO architecture.</small> As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds learned 1D position embeddings to the pixel values, hence it is given no privileged information about the 2D structure of images. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other fine-tuned versions on a task that may interest you. ### How to use Here is how to use this model in PyTorch: ```python from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationLearned import requests from PIL import Image feature_extractor = PerceiverFeatureExtractor.from_pretrained("deepmind/vision-perceiver-learned") model = PerceiverForImageClassificationLearned.from_pretrained("deepmind/vision-perceiver-learned") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) # prepare input encoding = feature_extractor(image, return_tensors="pt") inputs = encoding.pixel_values # forward pass outputs = model(inputs) logits = outputs.logits print("Predicted class:", model.config.id2label[logits.argmax(-1).item()]) >>> should print Predicted class: tabby, tabby cat ``` ## Training data This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million images and 1k classes. ## Training procedure ### Preprocessing Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. 
Note that data augmentation was used during pre-training, as explained in Appendix H of the [paper](https://arxiv.org/abs/2107.14795). ### Pretraining Hyperparameter details can be found in Appendix H of the [paper](https://arxiv.org/abs/2107.14795). ## Evaluation results This model is able to achieve a top-1 accuracy of 72.7 on ImageNet-1k, despite having no privileged information about the 2D structure of images. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2107-14795, author = {Andrew Jaegle and Sebastian Borgeaud and Jean{-}Baptiste Alayrac and Carl Doersch and Catalin Ionescu and David Ding and Skanda Koppula and Daniel Zoran and Andrew Brock and Evan Shelhamer and Olivier J. H{\'{e}}naff and Matthew M. Botvinick and Andrew Zisserman and Oriol Vinyals and Jo{\~{a}}o Carreira}, title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&} Outputs}, journal = {CoRR}, volume = {abs/2107.14795}, year = {2021}, url = {https://arxiv.org/abs/2107.14795}, eprinttype = {arXiv}, eprint = {2107.14795}, timestamp = {Tue, 03 Aug 2021 14:53:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "datasets": ["imagenet"]}
deepmind/vision-perceiver-learned
null
[ "transformers", "pytorch", "perceiver", "image-classification", "dataset:imagenet", "arxiv:2107.14795", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Aeona | Chatbot ![Aeona Banner](https://github.com/deepsarda/Aeona/blob/master/dashboard/static/banner.png?raw=true) A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small). Recommended to use along with an [AIML Chatbot](https://github.com/deepsarda/Aeona-Aiml) to reduce load, get better replies, and add a name and personality to your bot. Using an AIML chatbot will also allow you to hardcode some replies. # AEONA Aeona is a chatbot which hopes to be able to talk with humans as if it were a friend! Its main target platform is Discord. You can invite the bot [here](https://aeona.xyz). To learn more about this project and chat with the AI, you can use this [website](https://aeona.xyz/). Aeona works by using the context of the previous messages, guessing the personality of the human who is talking with it, and adapting its own personality to better talk with the user. # Participate and help the AI improve, or just hang out, at [hugging face discussions](https://huggingface.co/deepparag/Aeona/discussions) ## Goals The goal is to create an AI which will work with AIML in order to create the most human-like AI. #### Why not an AI on its own? For an AI it is not realistically possible to learn about the user and store data on them, compared to an AIML chatbot, which can even execute code! The goal of the AI is to generate responses where the AIML fails. Hence the goal becomes to make an AI which has a wide variety of knowledge, yet is as small as possible! So we use 3 datasets: 1. [Movielines](https://www.kaggle.com/Cornell-University/movie-dialog-corpus) The movie lines promote longer and more thought-out responses, but they can be very random. About 200k lines! 2. [Discord Messages](https://www.kaggle.com/jef1056/discord-data) The messages are on a wide variety of topics, filtered and with spam removed, which makes the AI highly random but gives it a response to everyday questions! About 120 million messages! 3. A custom dataset scraped from my messages. These messages are very narrow: teaching this dataset and sending a random reply will make the AI say sorry loads of times! ## Training The Discord Messages dataset simply dwarfs the other datasets, hence the other datasets are repeated. This leads to them covering each other's issues! The AI has a context of 6 messages, which means it will reply until the 4th message from the user. [Example](https://huggingface.co/deepparag/Aeona-Beta/discussions/1) ## Tips for Hugging Face inference I recommend sending the user input and the previous 3 AI and human responses. Using more context than this will lead to useless responses; using less is alright, but the responses may be more random. ## Evaluation Below is a comparison of Aeona vs. other baselines on the mixed dataset given above using automatic evaluation metrics. | Model | Perplexity | |---|---| | Seq2seq Baseline [3] | 29.8 | | Wolf et al.
[5] | 16.3 | | GPT-2 baseline | 99.5 | | DialoGPT baseline | 56.6 | | DialoGPT finetuned | 11.4 | | PersonaGPT | 10.2 | | **Aeona** | **7.9** | ## Usage Example: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("deepparag/Aeona") model = AutoModelWithLMHead.from_pretrained("deepparag/Aeona") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=4, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("Aeona: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
{"license": "mit", "tags": ["conversational"], "datasets": ["blended_skill_talk"], "metrics": ["accuracy", "f1", "perplexity"], "thumbnail": "https://images-ext-2.discordapp.net/external/Wvtx1L98EbA7DR2lpZPbDxDuO4qmKt03nZygATZtXgk/%3Fsize%3D4096/https/cdn.discordapp.com/avatars/931226824753700934/338a9e413bbceaeb9095a29e97d4fac0.png", "pipeline_tag": "conversational"}
deepparag/Aeona
null
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "dataset:blended_skill_talk", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small). Trained on: https://www.kaggle.com/Cornell-University/movie-dialog-corpus https://www.kaggle.com/jef1056/discord-data Important: The AI can be a bit weird at times, as it is still undergoing training! At times it sends stuff using :<random_wierd_words>: as they are Discord emotes. It also sends random @RandomName mentions, as it is trying to ping people. This works well on Discord, but not so much on the web; it is easy enough to remove such stuff using [re.sub](https://docs.python.org/3/library/re.html#re.sub) Issues: The AI, like all conversational AIs, lacks a character; it changes its name way too often. This can be solved using an AIML chatbot to give it a stable character! [Live Demo](https://dumbot-331213.uc.r.appspot.com/) Example: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("deepparag/DumBot") model = AutoModelWithLMHead.from_pretrained("deepparag/DumBot") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=4, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("DumBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://cdn.discordapp.com/app-icons/870239976690970625/c02cae78ae105f07969cfd8f8ea3d0a0.png"}
deepparag/DumBot-Beta
null
[ "transformers", "pytorch", "gpt_neo", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# THIS AI IS OUTDATED. See [Aeona](https://huggingface.co/deepparag/Aeona) A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small). Trained on: https://www.kaggle.com/Cornell-University/movie-dialog-corpus https://www.kaggle.com/jef1056/discord-data [Live Demo](https://dumbot-331213.uc.r.appspot.com/) Example: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("deepparag/DumBot") model = AutoModelWithLMHead.from_pretrained("deepparag/DumBot") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=4, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("DumBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://cdn.discordapp.com/app-icons/870239976690970625/c02cae78ae105f07969cfd8f8ea3d0a0.png"}
deepparag/DumBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
This is a BERT base cased model trained on SQuAD v2
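The card stops at this one-line description; a minimal extractive-QA sketch with the standard transformers pipeline (nothing model-specific beyond the checkpoint name, and the question/context strings are just illustrations) would look like this:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="deepset/bert-base-cased-squad2",
    tokenizer="deepset/bert-base-cased-squad2",
)
result = qa(
    question="What was the model fine-tuned on?",
    context="This BERT base cased model was fine-tuned on the SQuAD 2.0 dataset.",
)
print(result)  # dict with 'answer', 'score', 'start' and 'end'
```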
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/bert-base-cased-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 71.1517, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGZlNmQ1YzIzMWUzNTg4YmI4NWVhYThiMzE2ZGZmNWUzNDM3NWI0ZGJkNzliNGUxNTY2MDA5MWVkYjAwYWZiMCIsInZlcnNpb24iOjF9.iUvVdy5c4hoXkwlThJankQqG9QXzNilvfF1_4P0oL8X-jkY5Q6YSsZx6G6cpgXogqFpn7JlE_lP6_OT0VIamCg"}, {"type": "f1", "value": 74.6714, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWE5OGNjODhmY2Y0NWIyZDIzMmQ2NmRjZGYyYTYzOWMxZDUzYzg4YjBhNTRiNTY4NTc0M2IxNjI5NWI5ZDM0NCIsInZlcnNpb24iOjF9.IqU9rbzUcKmDEoLkwCUZTKSH0ZFhtqgnhOaEDKKnaRMGBJLj98D5V4VirYT6jLh8FlR0FiwvMTMjReBcfTisAQ"}]}]}]}
deepset/bert-base-cased-squad2
null
[ "transformers", "pytorch", "jax", "safetensors", "bert", "question-answering", "en", "dataset:squad_v2", "license:cc-by-4.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
This is a German BERT v1 (https://deepset.ai/german-bert) trained to do hate speech detection on the GermEval18Coarse dataset
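For readers who want to try the classifier, a minimal sketch with the transformers text-classification pipeline is shown below; the returned label strings are assumed to follow the GermEval 2018 coarse scheme (OFFENSE vs. OTHER) and should be verified against the model's id2label config.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="deepset/bert-base-german-cased-hatespeech-GermEval18Coarse",
)

# German input sentence; the label names (assumed OFFENSE / OTHER) come from
# the GermEval 2018 coarse annotation and should be checked in config.json.
print(classifier("Das ist ein ganz normaler, freundlicher Satz."))
```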
{"license": "cc-by-4.0"}
deepset/bert-base-german-cased-hatespeech-GermEval18Coarse
null
[ "transformers", "pytorch", "jax", "safetensors", "bert", "text-classification", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
<a href="https://huggingface.co/exbert/?model=bert-base-german-cased"> \t<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a> # German BERT with old vocabulary For details see the related [FARM issue](https://github.com/deepset-ai/FARM/issues/60). ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
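The card itself only links to the FARM issue; for completeness, a minimal masked-language-modeling sketch using the standard fill-mask pipeline (the example sentence is arbitrary) looks like this:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="deepset/bert-base-german-cased-oldvocab")

# BERT-style models use the [MASK] token; the sentence is just an illustration.
print(fill_mask("Der Zug hält am [MASK] in Berlin."))
```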
{"language": "de", "license": "mit", "tags": ["exbert"], "thumbnail": "https://static.tildacdn.com/tild6438-3730-4164-b266-613634323466/german_bert.png"}
deepset/bert-base-german-cased-oldvocab
null
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "exbert", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
deepset/bert-base-german-cased-sentiment-Germeval17
null
[ "transformers", "pytorch", "jax", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# bert-base-uncased for QA ## Overview **Language model:** bert-base-uncased **Language:** English **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Infrastructure**: 1x Tesla v100 ## Hyperparameters ``` batch_size = 32 n_epochs = 3 base_LM_model = "bert-base-uncased" max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup warmup_proportion = 0.2 doc_stride=128 max_query_length=64 ``` ## Performance ``` "exact": 73.67977764676156 "f1": 77.87647139308865 ``` ## Authors - Timo Möller: `timo.moeller [at] deepset.ai` - Julian Risch: `julian.risch [at] deepset.ai` - Malte Pietsch: `malte.pietsch [at] deepset.ai` - Michel Bartels: `michel.bartels [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
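The card lists hyperparameters and scores but no inference snippet; the usual pattern for deepset's SQuAD 2.0 readers (mirroring the other cards in this family, with a shortened example context) is sketched below.

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/bert-base-uncased-squad2"

# a) Get predictions with the QA pipeline
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)
QA_input = {
    "question": "Why is model conversion important?",
    "context": "The option to convert models between FARM and transformers gives freedom to the user.",
}
print(nlp(QA_input))

# b) Or load model & tokenizer directly
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```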
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/bert-base-uncased-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 75.6529, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTY2YmQ0ZDFjMjRlZWRiZWQ2YWQ4MTM0ODkyYTQ0NmYwMzBlNWViZWQ0ODFhMGJmMmY4ZGYwOTQyMDAyZGNjYyIsInZlcnNpb24iOjF9.UyqonQTsCB0BW86LfPy17kLt3a4r3wMeh04MDam5t_UhElp6N02YpiKOqcb1ethNHjAR0WGyxrcV3TI4d-wFAQ"}, {"type": "f1", "value": 78.6191, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWRkZWVjMDU2YTcxYWVkZTU1YmUzY2FkNWI5NDJkM2YwMjFmMmE0Njc3MjI5N2Q0NDdhZDNkZWNjMWE5YTRmZiIsInZlcnNpb24iOjF9.ol0Zacd9ZryXazXjgVssGFYG4s5FzbhGGaj1ZEDLVN2ziyzx23bo4GH9PSuGTFxRK2BO5_dxvDupLRqJOF59Bg"}]}]}]}
deepset/bert-base-uncased-squad2
null
[ "transformers", "pytorch", "safetensors", "bert", "question-answering", "en", "dataset:squad_v2", "license:cc-by-4.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# bert-large-uncased-whole-word-masking-squad2 This is a bert-large model, fine-tuned using the SQuAD2.0 dataset for the task of question answering. ## Overview **Language model:** bert-large **Language:** English **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system) ## Usage ### In Haystack Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2") # or reader = TransformersReader(model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2",tokenizer="deepset/bert-large-uncased-whole-word-masking-squad2") ``` ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/bert-large-uncased-whole-word-masking-squad2" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc. Some of our other work: - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2) - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p> [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/bert-large-uncased-whole-word-masking-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 80.8846, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2E5ZGNkY2ExZWViZGEwNWE3OGRmMWM2ZmE4ZDU4ZDQ1OGM3ZWE0NTVmZjFmYmZjZmJmNjJmYTc3NTM3OTk3OSIsInZlcnNpb24iOjF9.aSblF4ywh1fnHHrN6UGL392R5KLaH3FCKQlpiXo_EdQ4XXEAENUCjYm9HWDiFsgfSENL35GkbSyz_GAhnefsAQ"}, {"type": "f1", "value": 83.8765, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGFlNmEzMTk2NjRkNTI3ZTk3ZTU1NWNlYzIyN2E0ZDFlNDA2ZjYwZWJlNThkMmRmMmE0YzcwYjIyZDM5NmRiMCIsInZlcnNpb24iOjF9.-rc2_Bsp_B26-o12MFYuAU0Ad2Hg9PDx7Preuk27WlhYJDeKeEr32CW8LLANQABR3Mhw2x8uTYkEUrSDMxxLBw"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 85.904, "name": "Exact Match"}, {"type": "f1", "value": 92.586, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "adversarial_qa", "type": "adversarial_qa", "config": "adversarialQA", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 28.233, "name": "Exact Match"}, {"type": "f1", "value": 41.17, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_adversarial", "type": "squad_adversarial", "config": "AddOneSent", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 78.064, "name": "Exact Match"}, {"type": "f1", "value": 83.591, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts amazon", "type": "squadshifts", "config": "amazon", "split": "test"}, "metrics": [{"type": "exact_match", "value": 65.615, "name": "Exact Match"}, {"type": "f1", "value": 80.733, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts new_wiki", "type": "squadshifts", "config": "new_wiki", "split": "test"}, "metrics": [{"type": "exact_match", "value": 81.57, "name": "Exact Match"}, {"type": "f1", "value": 91.199, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts nyt", "type": "squadshifts", "config": "nyt", "split": "test"}, "metrics": [{"type": "exact_match", "value": 83.279, "name": "Exact Match"}, {"type": "f1", "value": 91.09, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts reddit", "type": "squadshifts", "config": "reddit", "split": "test"}, "metrics": [{"type": "exact_match", "value": 69.305, "name": "Exact Match"}, {"type": "f1", "value": 82.405, "name": "F1"}]}]}]}
deepset/bert-large-uncased-whole-word-masking-squad2
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "question-answering", "en", "dataset:squad_v2", "license:cc-by-4.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
## Overview **Language model:** deepset/bert-medium-squad2-distilled **Language:** English **Training data:** SQuAD 2.0 training set **Eval data:** SQuAD 2.0 dev set **Infrastructure**: 1x V100 GPU **Published**: Apr 21st, 2021 ## Details - haystack's distillation feature was used for training. deepset/bert-large-uncased-whole-word-masking-squad2 was used as the teacher model. ## Hyperparameters ``` batch_size = 6 n_epochs = 2 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 5 distillation_loss_weight = 1 ``` ## Performance ``` "exact": 68.6431398972458 "f1": 72.7637083790805 ``` ## Authors - Timo Möller: `timo.moeller [at] deepset.ai` - Julian Risch: `julian.risch [at] deepset.ai` - Malte Pietsch: `malte.pietsch [at] deepset.ai` - Michel Bartels: `michel.bartels [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
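As with the other QA cards, a usage snippet helps here; the following is a minimal sketch (the question/context pair is ours, not from the card), assuming the distilled checkpoint behaves like any extractive QA model on the Hub:

```python
from transformers import pipeline

# Minimal sketch (illustrative example, not from the card): the distilled
# student model is used exactly like its non-distilled QA counterparts.
model_name = "deepset/bert-medium-squad2-distilled"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

print(qa(
    question="Which model served as the teacher?",
    context="deepset/bert-large-uncased-whole-word-masking-squad2 was used as "
            "the teacher model when distilling this bert-medium student.",
))
```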
{"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["squad_v2"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg", "model-index": [{"name": "deepset/bert-medium-squad2-distilled", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 69.8231, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmE4MGRkZTVjNmViMGNjYjVhY2E1NzcyOGQ1OWE1MWMzMjY5NWU0MmU0Y2I4OWU4YTU5OWQ5YTI2NWE1NmM0ZSIsInZlcnNpb24iOjF9.tnCJvWzMctTwiQu5yig_owO2ZI1t1MZz1AN2lQy4COAGOzuMovD-74acQvMbxJQoRfNNkIetz2hqYivf1lJKDw"}, {"type": "f1", "value": 72.9232, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTMwNzk0ZDRjNGUyMjQyNzc1NzczZmUwMTU2MTM5MGQ3M2NhODlmOTU4ZDI0YjhlNTVjNDA1MGEwM2M1MzIyZSIsInZlcnNpb24iOjF9.eElGmTOXH_qHTNaPwZ-dUJfVz9VMvCutDCof_6UG_625MwctT_j7iVkWcGwed4tUnunuq1BPm-0iRh1RuuB-AQ"}]}]}]}
deepset/bert-medium-squad2-distilled
null
[ "transformers", "pytorch", "safetensors", "bert", "question-answering", "exbert", "en", "dataset:squad_v2", "license:mit", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
deepset/bert-small-mm_retrieval-passage_encoder
null
[ "transformers", "pytorch", "dpr", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
deepset/bert-small-mm_retrieval-question_encoder
null
[ "transformers", "pytorch", "safetensors", "dpr", "feature-extraction", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
deepset/bert-small-mm_retrieval-table_encoder
null
[ "transformers", "pytorch", "dpr", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
deepset/covid_bert_base
null
[ "transformers", "pytorch", "jax", "safetensors", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# electra-base for QA ## Overview **Language model:** electra-base **Language:** English **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) **Infrastructure**: 1x Tesla v100 ## Hyperparameters ``` seed=42 batch_size = 32 n_epochs = 5 base_LM_model = "google/electra-base-discriminator" max_seq_len = 384 learning_rate = 1e-4 lr_schedule = LinearWarmup warmup_proportion = 0.1 doc_stride=128 max_query_length=64 ``` ## Performance Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/). ``` "exact": 77.30144024256717, "f1": 81.35438272008543, "total": 11873, "HasAns_exact": 74.34210526315789, "HasAns_f1": 82.45961302894314, "HasAns_total": 5928, "NoAns_exact": 80.25231286795626, "NoAns_f1": 80.25231286795626, "NoAns_total": 5945 ``` ## Usage ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/electra-base-squad2" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ### In FARM ```python from farm.modeling.adaptive_model import AdaptiveModel from farm.modeling.tokenization import Tokenizer from farm.infer import Inferencer model_name = "deepset/electra-base-squad2" # a) Get predictions nlp = Inferencer.load(model_name, task_type="question_answering") QA_input = [{"questions": ["Why is model conversion important?"], "text": "The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks."}] res = nlp.inference_from_dicts(dicts=QA_input) # b) Load model & tokenizer model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering") tokenizer = Tokenizer.load(model_name) ``` ### In haystack For doing QA at scale (i.e. many docs instead of a single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/electra-base-squad2") # or reader = TransformersReader(model="deepset/electra-base-squad2",tokenizer="deepset/electra-base-squad2") ``` ## Authors Vaishali Pal `vaishali.pal [at] deepset.ai` Branden Chan: `branden.chan [at] deepset.ai` Timo Möller: `timo.moeller [at] deepset.ai` Malte Pietsch: `malte.pietsch [at] deepset.ai` Tanay Soni: `tanay.soni [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. 
Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/electra-base-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 77.6074, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzE5NTRmMmUwYTk1MTI0NjM0ZmQwNDFmM2Y4Mjk4ZWYxOGVmOWI3ZGFiNWM4OTUxZDQ2ZjdmNmU3OTk5ZjRjYyIsInZlcnNpb24iOjF9.0VZRewdiovE4z3K5box5R0oTT7etpmd0BX44FJBLRFfot-uJ915b-bceSv3luJQ7ENPjaYSa7o7jcHlDzn3oAw"}, {"type": "f1", "value": 81.7181, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2VlMzM0Y2UzYjhhNTJhMTFiYWZmMDNjNjRiZDgwYzc5NWE3N2M4ZGFlYWQ0ZjVkZTE2MDU0YmMzMDc1MTY5MCIsInZlcnNpb24iOjF9.jRV58UxOM7CJJSsmxJuZvlt00jMGA1thp4aqtcFi1C8qViQ1kW7NYz8rg1gNTDZNez2UwPS1NgN_HnnwBHPbCQ"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 80.407, "name": "Exact Match"}, {"type": "f1", "value": 88.942, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "adversarial_qa", "type": "adversarial_qa", "config": "adversarialQA", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 23.533, "name": "Exact Match"}, {"type": "f1", "value": 36.521, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_adversarial", "type": "squad_adversarial", "config": "AddOneSent", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 73.867, "name": "Exact Match"}, {"type": "f1", "value": 81.381, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts amazon", "type": "squadshifts", "config": "amazon", "split": "test"}, "metrics": [{"type": "exact_match", "value": 64.512, "name": "Exact Match"}, {"type": "f1", "value": 80.166, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts new_wiki", "type": "squadshifts", "config": "new_wiki", "split": "test"}, "metrics": [{"type": "exact_match", "value": 76.568, "name": "Exact Match"}, {"type": "f1", "value": 87.706, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts nyt", "type": "squadshifts", "config": "nyt", "split": "test"}, "metrics": [{"type": "exact_match", "value": 77.884, "name": "Exact Match"}, {"type": "f1", "value": 87.858, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts reddit", "type": "squadshifts", "config": "reddit", "split": "test"}, "metrics": [{"type": "exact_match", "value": 64.399, "name": "Exact Match"}, {"type": "f1", "value": 78.096, "name": "F1"}]}]}]}
deepset/electra-base-squad2
null
[ "transformers", "pytorch", "safetensors", "electra", "question-answering", "en", "dataset:squad_v2", "license:cc-by-4.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg) ## Overview **Language model:** gbert-base-germandpr **Language:** German **Training data:** GermanDPR train set (~ 56MB) **Eval data:** GermanDPR test set (~ 6MB) **Infrastructure**: 4x V100 GPU **Published**: Apr 26th, 2021 ## Details - We trained a dense passage retrieval model with two gbert-base models as encoders of questions and passages. - The dataset is GermanDPR, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad). - It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set. For each pair, there is one positive context and three hard negative contexts. - As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files). - The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia. See https://deepset.ai/germanquad for more details and dataset download. ## Hyperparameters ``` batch_size = 40 n_epochs = 20 num_training_steps = 4640 num_warmup_steps = 460 max_seq_len = 32 tokens for question encoder and 300 tokens for passage encoder learning_rate = 1e-6 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 num_hard_negatives = 2 ``` ## Performance During training, we monitored the in-batch average rank and the loss and evaluated different batch sizes, numbers of epochs, and number of hard negatives on a dev set split from the train set. The dev split contained 1030 question/answer pairs. Even without thorough hyperparameter tuning, we observed quite stable learning. Multiple restarts with different seeds produced quite similar results. Note that the in-batch average rank is influenced by settings for batch size and number of hard negatives. A smaller number of hard negatives makes the task easier. After fixing the hyperparameters, we trained the model on the full GermanDPR train set. We further evaluated the retrieval performance of the trained model on the full German Wikipedia with the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k. ![performancetable](https://lh3.google.com/u/0/d/1lX6G0cp4NTx1yUWs74LI0Gcs41sYy_Fb=w2880-h1578-iv1) ## Usage ### In haystack You can load the model in [haystack](https://github.com/deepset-ai/haystack/) as a retriever for doing QA at scale: ```python retriever = DensePassageRetriever( document_store=document_store, query_embedding_model="deepset/gbert-base-germandpr-question_encoder", passage_embedding_model="deepset/gbert-base-germandpr-ctx_encoder" ) ``` ## Authors - Timo Möller: `timo.moeller [at] deepset.ai` - Julian Risch: `julian.risch [at] deepset.ai` - Malte Pietsch: `malte.pietsch [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. 
Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
{"language": "de", "license": "mit", "tags": ["exbert"], "datasets": ["deepset/germandpr"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg"}
deepset/gbert-base-germandpr-ctx_encoder
null
[ "transformers", "pytorch", "dpr", "exbert", "de", "dataset:deepset/germandpr", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg) ## Overview **Language model:** gbert-base-germandpr **Language:** German **Training data:** GermanDPR train set (~ 56MB) **Eval data:** GermanDPR test set (~ 6MB) **Infrastructure**: 4x V100 GPU **Published**: Apr 26th, 2021 ## Details - We trained a dense passage retrieval model with two gbert-base models as encoders of questions and passages. - The dataset is GermanDPR, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad). - It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set. For each pair, there is one positive context and three hard negative contexts. - As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files). - The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia. See https://deepset.ai/germanquad for more details and dataset download. ## Hyperparameters ``` batch_size = 40 n_epochs = 20 num_training_steps = 4640 num_warmup_steps = 460 max_seq_len = 32 tokens for question encoder and 300 tokens for passage encoder learning_rate = 1e-6 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 num_hard_negatives = 2 ``` ## Performance During training, we monitored the in-batch average rank and the loss and evaluated different batch sizes, numbers of epochs, and number of hard negatives on a dev set split from the train set. The dev split contained 1030 question/answer pairs. Even without thorough hyperparameter tuning, we observed quite stable learning. Multiple restarts with different seeds produced quite similar results. Note that the in-batch average rank is influenced by settings for batch size and number of hard negatives. A smaller number of hard negatives makes the task easier. After fixing the hyperparameters, we trained the model on the full GermanDPR train set. We further evaluated the retrieval performance of the trained model on the full German Wikipedia with the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k. ![performancetable](https://lh3.google.com/u/0/d/1lX6G0cp4NTx1yUWs74LI0Gcs41sYy_Fb=w2880-h1578-iv1) ## Usage ### In haystack You can load the model in [haystack](https://github.com/deepset-ai/haystack/) as a retriever for doing QA at scale: ```python retriever = DensePassageRetriever( document_store=document_store, query_embedding_model="deepset/gbert-base-germandpr-question_encoder", passage_embedding_model="deepset/gbert-base-germandpr-ctx_encoder" ) ``` ## Authors - Timo Möller: `timo.moeller [at] deepset.ai` - Julian Risch: `julian.risch [at] deepset.ai` - Malte Pietsch: `malte.pietsch [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. 
Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
{"language": "de", "license": "mit", "tags": ["exbert"], "datasets": ["deepset/germandpr"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg"}
deepset/gbert-base-germandpr-question_encoder
null
[ "transformers", "pytorch", "safetensors", "dpr", "feature-extraction", "exbert", "de", "dataset:deepset/germandpr", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
## Overview **Language model:** gbert-base-germandpr-reranking **Language:** German **Training data:** GermanDPR train set (~ 56MB) **Eval data:** GermanDPR test set (~ 6MB) **Infrastructure**: 1x V100 GPU **Published**: June 3rd, 2021 ## Details - We trained a text pair classification model in FARM, which can be used for reranking in document retrieval tasks. To this end, the classifier calculates the similarity of the query and each retrieved top k document (e.g., k=10). The top k documents are then sorted by their similarity scores. The document most similar to the query is the best. ## Hyperparameters ``` batch_size = 16 n_epochs = 2 max_seq_len = 512 tokens for question and passage concatenated learning_rate = 2e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 ``` ## Performance We use the GermanDPR test dataset as ground truth labels and run two experiments to compare how a BM25 retriever performs with or without reranking with our model. The first experiment runs retrieval on the full German Wikipedia (more than 2 million passages) and the second experiment runs retrieval on the GermanDPR dataset only (not more than 5000 passages). Both experiments use 1025 queries. Note that the second experiment is evaluating on a much simpler task because of the smaller dataset size, which explains strong BM25 retrieval performance. ### Full German Wikipedia (more than 2 million passages): BM25 Retriever without Reranking - recall@3: 0.4088 (419 / 1025) - mean_reciprocal_rank@3: 0.3322 BM25 Retriever with Reranking Top 10 Documents - recall@3: 0.5200 (533 / 1025) - mean_reciprocal_rank@3: 0.4800 ### GermanDPR Test Dataset only (not more than 5000 passages): BM25 Retriever without Reranking - recall@3: 0.9102 (933 / 1025) - mean_reciprocal_rank@3: 0.8528 BM25 Retriever with Reranking Top 10 Documents - recall@3: 0.9298 (953 / 1025) - mean_reciprocal_rank@3: 0.8813 ## Usage ### In haystack You can load the model in [haystack](https://github.com/deepset-ai/haystack/) for reranking the documents returned by a Retriever: ```python ... retriever = ElasticsearchRetriever(document_store=document_store) ranker = FARMRanker(model_name_or_path="deepset/gbert-base-germandpr-reranking") ... p = Pipeline() p.add_node(component=retriever, name="ESRetriever", inputs=["Query"]) p.add_node(component=ranker, name="Ranker", inputs=["ESRetriever"]) ``` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
{"language": "de", "license": "mit", "datasets": ["deepset/germandpr"]}
deepset/gbert-base-germandpr-reranking
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "de", "dataset:deepset/germandpr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# German BERT base Released, Oct 2020, this is a German BERT language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model and show that it outperforms its predecessors. ## Overview **Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf) **Architecture:** BERT base **Language:** German ## Performance ``` GermEval18 Coarse: 78.17 GermEval18 Fine: 50.90 GermEval14: 87.98 ``` See also: deepset/gbert-base deepset/gbert-large deepset/gelectra-base deepset/gelectra-large deepset/gelectra-base-generator deepset/gelectra-large-generator ## Authors Branden Chan: `branden.chan [at] deepset.ai` Stefan Schweter: `stefan [at] schweter.eu` Timo Möller: `timo.moeller [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
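Since the card lists benchmark scores but no loading example, here is a minimal sketch (the German example sentence is ours; it assumes the usual `transformers` fill-mask pipeline for this masked language model):

```python
from transformers import pipeline

# Minimal sketch (example sentence is illustrative, not from the card):
# query the German masked language model with the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="deepset/gbert-base")

for prediction in fill_mask("Die Hauptstadt von Deutschland ist [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```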
{"language": "de", "license": "mit", "datasets": ["wikipedia", "OPUS", "OpenLegalData"]}
deepset/gbert-base
null
[ "transformers", "pytorch", "tf", "safetensors", "fill-mask", "de", "dataset:wikipedia", "dataset:OPUS", "dataset:OpenLegalData", "arxiv:2010.10906", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
## Overview **Language model:** gbert-large-sts **Language:** German **Training data:** German STS benchmark train and dev set **Eval data:** German STS benchmark test set **Infrastructure**: 1x V100 GPU **Published**: August 12th, 2021 ## Details - We trained a gbert-large model on the task of estimating semantic similarity of German-language text pairs. The dataset is a machine-translated version of the [STS benchmark](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark), which is available [here](https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark). ## Hyperparameters ``` batch_size = 16 n_epochs = 4 warmup_ratio = 0.1 learning_rate = 2e-5 lr_schedule = LinearWarmup ``` ## Performance Stay tuned... and watch out for new papers on arxiv.org ;) ## Authors - Julian Risch: `julian.risch [at] deepset.ai` - Timo Möller: `timo.moeller [at] deepset.ai` - Julian Gutsch: `julian.gutsch [at] deepset.ai` - Malte Pietsch: `malte.pietsch [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
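The card describes the training setup but not how to score a sentence pair; the sketch below is an assumption-laden illustration (the sentence pair and the single-score output interpretation are ours, not confirmed by the card):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Minimal sketch (sentence pair and output interpretation are assumptions):
# encode a German text pair and inspect the similarity logits.
model_name = "deepset/gbert-large-sts"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer(
    "Der Hund spielt im Garten.",
    "Ein Hund tobt draußen im Garten.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # for an STS-style head we expect a single similarity score per pair
```

Printing the raw logits keeps the sketch agnostic to whether the exported head is a regression output or a single-label classifier.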
{"language": "de", "license": "mit", "tags": ["exbert"]}
deepset/gbert-large-sts
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "exbert", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# German BERT large Released, Oct 2020, this is a German BERT language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model and show that it outperforms its predecessors. ## Overview **Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf) **Architecture:** BERT large **Language:** German ## Performance ``` GermEval18 Coarse: 80.08 GermEval18 Fine: 52.48 GermEval14: 88.16 ``` See also: deepset/gbert-base deepset/gbert-large deepset/gelectra-base deepset/gelectra-large deepset/gelectra-base-generator deepset/gelectra-large-generator ## Authors **Branden Chan:** [email protected] **Stefan Schweter:** [email protected] **Timo Möller:** [email protected] ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc. Some of our other work: - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2) - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p> [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
{"language": "de", "license": "mit", "datasets": ["wikipedia", "OPUS", "OpenLegalData", "oscar"]}
deepset/gbert-large
null
[ "transformers", "pytorch", "tf", "safetensors", "fill-mask", "de", "dataset:wikipedia", "dataset:OPUS", "dataset:OpenLegalData", "dataset:oscar", "arxiv:2010.10906", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# German ELECTRA base generator Released, Oct 2020, this is the generator component of the German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model. The generator is useful for performing masking experiments. If you are looking for a regular language model for embedding extraction, or downstream tasks like NER, classification or QA, please use deepset/gelectra-base. ## Overview **Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf) **Architecture:** ELECTRA base (generator) **Language:** German See also: deepset/gbert-base deepset/gbert-large deepset/gelectra-base deepset/gelectra-large deepset/gelectra-base-generator deepset/gelectra-large-generator ## Authors Branden Chan: `branden.chan [at] deepset.ai` Stefan Schweter: `stefan [at] schweter.eu` Timo Möller: `timo.moeller [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
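The card notes that the generator is useful for masking experiments; a minimal sketch of such an experiment (the example sentence is ours, not from the card) could look like this:

```python
from transformers import pipeline

# Minimal sketch of a masking experiment (illustrative sentence, not from the card):
# the ELECTRA generator carries an MLM head, so fill-mask works directly.
fill_mask = pipeline("fill-mask", model="deepset/gelectra-base-generator")

for prediction in fill_mask("Der Zug fährt heute nach [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```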
{"language": "de", "license": "mit", "datasets": ["wikipedia", "OPUS", "OpenLegalData"]}
deepset/gelectra-base-generator
null
[ "transformers", "pytorch", "tf", "safetensors", "electra", "fill-mask", "de", "dataset:wikipedia", "dataset:OPUS", "dataset:OpenLegalData", "arxiv:2010.10906", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg) ## Overview **Language model:** gelectra-base-germanquad-distilled **Language:** German **Training data:** GermanQuAD train set (~ 12MB) **Eval data:** GermanQuAD test set (~ 5MB) **Infrastructure**: 1x V100 GPU **Published**: Apr 21st, 2021 ## Details - We trained a German question answering model with a gelectra-base model as its basis. - The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad). - The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and 2204·3−76 = 6536 answers, because we removed 76 wrong answers. - In addition to the annotations in GermanQuAD, haystack's distillation feature was used for training. deepset/gelectra-large-germanquad was used as the teacher model. See https://deepset.ai/germanquad for more details and dataset download in SQuAD format. ## Hyperparameters ``` batch_size = 24 n_epochs = 6 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 2 distillation_loss_weight = 0.75 ``` ## Performance We evaluated the extractive question answering performance on our GermanQuAD test set. Model types and training data are included in the model name. For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset. The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on [GermanQuAD](https://deepset.ai/germanquad). The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth. ``` "exact": 62.4773139745916 "f1": 80.9488017070188 ``` ![performancetable](https://lh3.google.com/u/0/d/1IFqkq8OZ7TFnGzxmW6eoxXSYa12f2M7O=w1970-h1546-iv1) ## Authors - Timo Möller: `timo.moeller [at] deepset.ai` - Julian Risch: `julian.risch [at] deepset.ai` - Malte Pietsch: `malte.pietsch [at] deepset.ai` - Michel Bartels: `michel.bartels [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
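A usage snippet is not part of the card; the following minimal sketch (the German question/context pair is ours) assumes the standard extractive QA pipeline:

```python
from transformers import pipeline

# Minimal sketch (illustrative German example, not from the card).
model_name = "deepset/gelectra-base-germanquad-distilled"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

print(qa(
    question="Welches Modell diente als Lehrermodell?",
    context="Bei der Destillation wurde deepset/gelectra-large-germanquad "
            "als Lehrermodell verwendet.",
))
```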
{"language": "de", "license": "mit", "tags": ["exbert"], "datasets": ["deepset/germanquad"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg"}
deepset/gelectra-base-germanquad-distilled
null
[ "transformers", "pytorch", "safetensors", "electra", "question-answering", "exbert", "de", "dataset:deepset/germanquad", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg) ## Overview **Language model:** gelectra-base-germanquad **Language:** German **Training data:** GermanQuAD train set (~ 12MB) **Eval data:** GermanQuAD test set (~ 5MB) **Infrastructure**: 1x V100 GPU **Published**: Apr 21st, 2021 ## Details - We trained a German question answering model with a gelectra-base model as its basis. - The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad). - The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and 2204·3−76 = 6536 answers, because we removed 76 wrong answers. See https://deepset.ai/germanquad for more details and dataset download in SQuAD format. ## Hyperparameters ``` batch_size = 24 n_epochs = 2 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 ``` ## Performance We evaluated the extractive question answering performance on our GermanQuAD test set. Model types and training data are included in the model name. For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset. The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on [GermanQuAD](https://deepset.ai/germanquad). The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth. ![performancetable](https://images.prismic.io/deepset/1c63afd8-40e6-4fd9-85c4-0dbb81996183_german-qa-vs-xlm-r.png) ## Authors **Timo Möller:** [email protected] **Julian Risch:** [email protected] **Malte Pietsch:** [email protected] ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc. Some of our other work: - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2) - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>. 
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p> [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
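The card above omits a loading example; a minimal sketch (the German question/context pair is illustrative, not taken from the card) using the explicit model and tokenizer classes:

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

# Minimal sketch (illustrative example, not from the card): load model and
# tokenizer explicitly, then wrap them in the extractive QA pipeline.
model_name = "deepset/gelectra-base-germanquad"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

print(qa(
    question="Wie viele Fragen enthält der GermanQuAD-Testdatensatz?",
    context="Der GermanQuAD-Testdatensatz ist dreifach annotiert und enthält 2204 Fragen.",
))
```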
{"language": "de", "license": "mit", "tags": ["exbert"], "datasets": ["deepset/germanquad"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg"}
deepset/gelectra-base-germanquad
null
[ "transformers", "pytorch", "tf", "safetensors", "electra", "question-answering", "exbert", "de", "dataset:deepset/germanquad", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# German ELECTRA base Released, Oct 2020, this is a German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model. Our evaluation suggests that this model is somewhat undertrained. For best performance from a base sized model, we recommend deepset/gbert-base ## Overview **Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf) **Architecture:** ELECTRA base (discriminator) **Language:** German ## Performance ``` GermEval18 Coarse: 76.02 GermEval18 Fine: 42.22 GermEval14: 86.02 ``` See also: deepset/gbert-base deepset/gbert-large deepset/gelectra-base deepset/gelectra-large deepset/gelectra-base-generator deepset/gelectra-large-generator ## Authors Branden Chan: `branden.chan [at] deepset.ai` Stefan Schweter: `stefan [at] schweter.eu` Timo Möller: `timo.moeller [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
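The GermEval numbers above come from downstream fine-tuning; as a hedged sketch of that starting point (the two-label setup is an arbitrary illustration, not the card's recipe), the discriminator can be loaded with a fresh classification head:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Minimal sketch (num_labels=2 is an arbitrary illustration, not from the card):
# the discriminator serves as a backbone; the classification head is freshly
# initialized and only becomes useful after fine-tuning, e.g. on GermEval data.
model_name = "deepset/gelectra-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("Das ist ein Beispielsatz.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (1, 2) until the head is fine-tuned
```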
{"language": "de", "license": "mit", "datasets": ["wikipedia", "OPUS", "OpenLegalData"]}
deepset/gelectra-base
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "de", "dataset:wikipedia", "dataset:OPUS", "dataset:OpenLegalData", "arxiv:2010.10906", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# German ELECTRA large generator Released, Oct 2020, this is the generator component of the German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model. The generator is useful for performing masking experiments. If you are looking for a regular language model for embedding extraction, or downstream tasks like NER, classification or QA, please use deepset/gelectra-large. ## Overview **Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf) **Architecture:** ELECTRA large (generator) **Language:** German ## Performance ``` GermEval18 Coarse: 80.70 GermEval18 Fine: 55.16 GermEval14: 88.95 ``` See also: deepset/gbert-base deepset/gbert-large deepset/gelectra-base deepset/gelectra-large deepset/gelectra-base-generator deepset/gelectra-large-generator ## Authors Branden Chan: `branden.chan [at] deepset.ai` Stefan Schweter: `stefan [at] schweter.eu` Timo Möller: `timo.moeller [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
{"language": "de", "license": "mit", "datasets": ["wikipedia", "OPUS", "OpenLegalData", "oscar"]}
deepset/gelectra-large-generator
null
[ "transformers", "pytorch", "tf", "safetensors", "electra", "fill-mask", "de", "dataset:wikipedia", "dataset:OPUS", "dataset:OpenLegalData", "dataset:oscar", "arxiv:2010.10906", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg) ## Overview **Language model:** gelectra-large-germanquad **Language:** German **Training data:** GermanQuAD train set (~ 12MB) **Eval data:** GermanQuAD test set (~ 5MB) **Infrastructure**: 1x V100 GPU **Published**: Apr 21st, 2021 ## Details - We trained a German question answering model with a gelectra-large model as its basis. - The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad). - The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and 2204·3−76 = 6536 answers, because we removed 76 wrong answers. See https://deepset.ai/germanquad for more details and dataset download in SQuAD format. ## Hyperparameters ``` batch_size = 24 n_epochs = 2 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 ``` ## Performance We evaluated the extractive question answering performance on our GermanQuAD test set. Model types and training data are included in the model name. For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset. The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on [GermanQuAD](https://deepset.ai/germanquad). The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth. ![performancetable](https://images.prismic.io/deepset/1c63afd8-40e6-4fd9-85c4-0dbb81996183_german-qa-vs-xlm-r.png) ## Authors **Timo Möller:** [email protected] **Julian Risch:** [email protected] **Malte Pietsch:** [email protected] ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/haystack-logo-colored.svg" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/deepset-logo-colored.svg" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc. Some of our other work: - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2) - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>. 
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p> [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
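No usage snippet is included above; here is a minimal sketch (question and context are ours, not from the card) that also requests several candidate spans via `top_k`:

```python
from transformers import pipeline

# Minimal sketch (illustrative example, not from the card); top_k returns
# several candidate answer spans together with their scores.
model_name = "deepset/gelectra-large-germanquad"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

answers = qa(
    question="Wann wurde das Modell veröffentlicht?",
    context="Das Modell gelectra-large-germanquad wurde am 21. April 2021 veröffentlicht.",
    top_k=3,
)
for answer in answers:
    print(answer["answer"], round(answer["score"], 3))
```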
{"language": "de", "license": "mit", "tags": ["exbert"], "datasets": ["deepset/germanquad"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg"}
deepset/gelectra-large-germanquad
null
[ "transformers", "pytorch", "tf", "safetensors", "electra", "question-answering", "exbert", "de", "dataset:deepset/germanquad", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# German ELECTRA large Released, Oct 2020, this is a German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model and show that this is the state of the art German language model. ## Overview **Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf) **Architecture:** ELECTRA large (discriminator) **Language:** German ## Performance ``` GermEval18 Coarse: 80.70 GermEval18 Fine: 55.16 GermEval14: 88.95 ``` See also: deepset/gbert-base deepset/gbert-large deepset/gelectra-base deepset/gelectra-large deepset/gelectra-base-generator deepset/gelectra-large-generator ## Authors Branden Chan: `branden.chan [at] deepset.ai` Stefan Schweter: `stefan [at] schweter.eu` Timo Möller: `timo.moeller [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
{"language": "de", "license": "mit", "datasets": ["wikipedia", "OPUS", "OpenLegalData", "oscar"]}
deepset/gelectra-large
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "de", "dataset:wikipedia", "dataset:OPUS", "dataset:OpenLegalData", "dataset:oscar", "arxiv:2010.10906", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# MiniLM-L12-H384-uncased for QA ## Overview **Language model:** microsoft/MiniLM-L12-H384-uncased **Language:** English **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Code:** See an [example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/01_basic_qa_pipeline) **Infrastructure**: 1x Tesla v100 ## Hyperparameters ``` seed=42 batch_size = 12 n_epochs = 4 base_LM_model = "microsoft/MiniLM-L12-H384-uncased" max_seq_len = 384 learning_rate = 4e-5 lr_schedule = LinearWarmup warmup_proportion = 0.2 doc_stride=128 max_query_length=64 grad_acc_steps=4 ``` ## Performance Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/). ``` "exact": 76.13071675229513, "f1": 79.49786500219953, "total": 11873, "HasAns_exact": 78.35695006747639, "HasAns_f1": 85.10090269418276, "HasAns_total": 5928, "NoAns_exact": 73.91084945332211, "NoAns_f1": 73.91084945332211, "NoAns_total": 5945 ``` ## Usage ### In Haystack For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in [Haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/minilm-uncased-squad2") # or reader = TransformersReader(model="deepset/minilm-uncased-squad2",tokenizer="deepset/minilm-uncased-squad2") ``` ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/minilm-uncased-squad2" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Authors **Vaishali Pal:** [email protected] **Branden Chan:** [email protected] **Timo Möller:** [email protected] **Malte Pietsch:** [email protected] **Tanay Soni:** [email protected] ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc. 
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
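Because the model was trained on SQuAD 2.0, it can also decline to answer. A small sketch, assuming the standard `handle_impossible_answer` flag of the `transformers` question-answering pipeline, with a made-up unanswerable question:

```python
from transformers import pipeline

model_name = "deepset/minilm-uncased-squad2"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

# With handle_impossible_answer=True the pipeline may return an empty string
# when the context does not contain an answer, mirroring SQuAD 2.0's "no answer" cases.
result = qa(
    question="Who invented the telephone?",
    context="The option to convert models between FARM and transformers gives freedom to the user.",
    handle_impossible_answer=True,
)
print(result)
```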
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/minilm-uncased-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 76.1921, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmViZTQ3YTBjYTc3ZDQzYmI1Mzk3MTAxM2MzNjdmMTc0MWY4Yzg2MWU3NGQ1MDJhZWI2NzY0YWYxZTY2OTgzMiIsInZlcnNpb24iOjF9.s4XCRs_pvW__LJ57dpXAEHD6NRsQ3XaFrM1xaguS6oUs5fCN77wNNc97scnfoPXT18A8RAn0cLTNivfxZm0oBA"}, {"type": "f1", "value": 79.5483, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmJlYTIyOTg2NjMyMzg4NzNlNGIzMTY2NDVkMjg0ODdiOWRmYjVkZDYyZjBjNWNiNTBhNjcwOWUzMDM4ZWJiZiIsInZlcnNpb24iOjF9.gxpwIBBA3_5xPi-TaZcqWNnGgCiHzxaUNgrS2jucxoVWGxhBtnPdwKVCxLleQoDDZenAXB3Yh71zMP3xTSeHCw"}]}]}]}
deepset/minilm-uncased-squad2
null
[ "transformers", "pytorch", "jax", "safetensors", "bert", "question-answering", "en", "dataset:squad_v2", "license:cc-by-4.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This language model is trained using sentence_transformers (https://github.com/UKPLab/sentence-transformers) Started with bert-base-nli-stsb-mean-tokens Continue training on quora questions deduplication dataset (https://www.kaggle.com/c/quora-question-pairs) See train_script.py for script used Below is the performance over the course of training epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman 0,1000,0.5944576426835938,0.6010801382777033,0.5942803776859142,0.5934485776801595,0.5939676679774666,0.593162725602328,0.5905591590826669,0.5921674789994058 0,2000,0.6404080440207146,0.6416811632113405,0.6384419354012121,0.6352050423100778,0.6379917744471867,0.6347884067391001,0.6410544760582826,0.6379252046791412 0,3000,0.6710168301884945,0.6676529324662036,0.6660195209784969,0.6618423144808695,0.6656461098096684,0.6615366331956389,0.6724401903484759,0.666073727723655 0,4000,0.6886373265097949,0.6808948140300153,0.67907655686838,0.6714218133850957,0.6786809551564443,0.6711577956884357,0.6926435869763303,0.68190855298609 0,5000,0.6991409753700026,0.6919630610321864,0.6991041519437052,0.6868961486499775,0.6987076032270729,0.6865385550504007,0.7035518148330993,0.6916275246101342 0,6000,0.7120367327025509,0.6975005265298305,0.7065567493967201,0.6922375503495235,0.7060005509843024,0.6916475765570651,0.7147094303373102,0.6981390706722722 0,7000,0.7254672394728687,0.7130118465900485,0.7261844956277705,0.7086213543110718,0.7257479964972307,0.7079315661881832,0.728729909455115,0.7122743793160531 0,8000,0.7402421930101399,0.7216774208330149,0.7367901914441078,0.7166256588352043,0.7362607046874481,0.7158881916281887,0.7433902441373252,0.7220998491980078 0,9000,0.7381005358120434,0.7197216844469877,0.7343228719349923,0.7139462687943793,0.7345247569255238,0.7145106206467152,0.7421843672419275,0.720686853053079 0,10000,0.7465436564646095,0.7260327107480364,0.7467524239596304,0.7230195666847953,0.7467721566237211,0.7231367593302213,0.749792199122442,0.7263143296580317 0,11000,0.7521805421706547,0.7323771570146701,0.7530672061250105,0.729223203496722,0.7530616532823367,0.7293818369675622,0.7552399002305836,0.7320808333541338 0,12000,0.7579359969644401,0.7340677616737238,0.7570017235719905,0.7305965412825544,0.7570601853520393,0.730718189957289,0.7611254136080384,0.7351501229591327 0,-1,0.7573407371218097,0.7329952035782198,0.755595312163209,0.7291445551777086,0.7557737117990928,0.7295404703700227,0.7607276219361719,0.7342415455980179 1,1000,0.7619907683805341,0.7374667949734767,0.7629820517114324,0.7330364216044966,0.7628369522755882,0.7331912674450544,0.7658583898073758,0.7381503446695727 1,2000,0.7618972640071228,0.7362151058969478,0.764582212425539,0.7335856230046062,0.7643125513700815,0.7334501607097152,0.7652852805583232,0.7369104639809163 1,3000,0.7687362955240467,0.7404674623181671,0.7708304819979073,0.7380959815601529,0.7707835692712482,0.7379796800453193,0.772074854759756,0.7414513460702766 1,4000,0.7685047787908202,0.7403088288815168,0.7703522257474043,0.7379787888808298,0.7701221475099808,0.7377898546753812,0.7713755359045312,0.7409415801952219 1,5000,0.7696438109797803,0.7410393893292365,0.773270389327895,0.7392953127251652,0.7729880866533291,0.7389853982789335,0.7726236305835863,0.7416278035580925 1,6000,0.7749538363837081,0.7436499342062207,0.774879168058157,0.7401827241766746,0.7745754601165837,0.739763415043146,0.7788801166152383,0.7446249060022169 
1,7000,0.7794560817870597,0.7480970176267153,0.7803506944510302,0.7453305130502859,0.7799867949176531,0.7447100155494814,0.7828208193123926,0.7486740690324809 1,8000,0.7855844359073243,0.7496742172376921,0.7828816645965887,0.747176409009761,0.7827584875358967,0.7471037762845532,0.7879159073496309,0.7507349669102151 1,9000,0.7844110753729492,0.7507746252693759,0.7847208586489722,0.7485172180290892,0.7846408087474059,0.748491818820158,0.7872061334510225,0.7514470349769437 1,10000,0.7881311227435004,0.7530048509727403,0.7886917756879734,0.7508018068765787,0.7883332502188707,0.7505037008187275,0.7910707228932787,0.7537200382362567 1,11000,0.7883300109606874,0.7513494487126553,0.7879329130497712,0.749818368689255,0.7876525616593218,0.7494872882301785,0.7911454269743292,0.7522843165147303 1,12000,0.7853334933336618,0.7516809747712728,0.7893895316714998,0.749780492728257,0.7890075986655403,0.7494079715118533,0.7885959664070629,0.7523827940133203 1,-1,0.7887529238148887,0.7534076729932393,0.7896864404801204,0.7513080079201105,0.7894077512343298,0.7510009899066772,0.7919617393746149,0.7542173273241598 2,1000,0.7919209063905188,0.7550167329363414,0.7917464066515253,0.7523043685293455,0.7914371703225378,0.7520285423781206,0.7950297421784158,0.7562599556207076 2,2000,0.7924507768792486,0.7542908512484463,0.7934519001953887,0.7517491515010692,0.7931885648751081,0.751521004535999,0.7951637852162545,0.7551495215642072 2,3000,0.7937606244038364,0.755599577136169,0.7933633347508111,0.7527922999916203,0.7931581019714242,0.7527132061436363,0.797275652800117,0.7569827180764233 2,4000,0.7938389298721445,0.7578716892320315,0.7963783770097079,0.7555928931784702,0.796150381773947,0.7555438771581088,0.7972911620482322,0.759178632650707 2,5000,0.7935330563129844,0.7551129824372304,0.7970775059297484,0.7527285792572385,0.7967359830546507,0.7524478515463257,0.7966395126138969,0.756319220359678 2,6000,0.7929852776759999,0.7525490026774382,0.7952484474454824,0.7503695753216607,0.7950784132079611,0.7503677929234961,0.7956152082976395,0.7535275392698093 2,7000,0.794956504054517,0.756119591765251,0.7982025041673655,0.7532521587180684,0.7980261618830962,0.7532107179960499,0.7983222918908033,0.7571226363678287 2,8000,0.7934568432535339,0.7538336661192452,0.797015698241178,0.7514773358161916,0.7968076980315735,0.7513458838811067,0.7960694134685949,0.754143803399873 2,9000,0.7970040626682157,0.7576497805894974,0.7987855332059015,0.7550996144509958,0.7984693921009676,0.7548260162973456,0.7999509314900626,0.758347143906916 2,10000,0.7979442987735523,0.7585338500791028,0.8018677081664496,0.7557412777548302,0.8015397301245205,0.7552916678886369,0.8007921348414564,0.7589772216225288 2,11000,0.7985519561040211,0.7579986850302035,0.8021236875460913,0.7555826443181872,0.8019861620475348,0.7553763317660516,0.8009230128897853,0.7586541619907702 2,12000,0.7986842143860736,0.7599570950134775,0.8029131054823838,0.7577678644678973,0.8027922603736795,0.7575152095990927,0.8020896747930555,0.7608540869254408 2,-1,0.7994135319568432,0.7596286881516635,0.8022087183675333,0.7570593611974978,0.8020218401019292,0.7567291719729909,0.8026346812258125,0.7603928913647044 3,1000,0.7985505039929134,0.7592588405681144,0.8023296699449267,0.7569345933969436,0.8023622066009718,0.7570237132696928,0.8013054275981851,0.759643838536062 3,2000,0.7995482191699455,0.759205368623176,0.8026859405513612,0.7565709841358819,0.8024845263367439,0.7562920388231202,0.8021318586127523,0.7596496313300967 
3,3000,0.7991070423195897,0.7582027696555826,0.8016352550470427,0.7555585819429662,0.8014268261947898,0.7551838327642736,0.8013136081494014,0.7584429477727118 3,4000,0.7999188836884763,0.7586764419322649,0.802987646214278,0.7561111254802977,0.8026549791861386,0.7556463650525692,0.8024068858366156,0.7591238238715613 3,5000,0.7988075932525881,0.7583533823004922,0.8019498750207454,0.755792967372457,0.8016459824731964,0.7553834613587099,0.8015528810821693,0.7589527136833425 3,6000,0.8003341798460688,0.7585432077405799,0.8032464035902267,0.7563722467405277,0.8028695045742804,0.7557626665682309,0.8027937010871594,0.7590404967573696 3,7000,0.799187592384933,0.7579358555659604,0.8028413548398412,0.7555875459131398,0.8025187078191003,0.7551196665011402,0.8018680475193432,0.7585565756912578 3,8000,0.797725037202641,0.757439012042047,0.802048241301358,0.7548888458326453,0.8017608103042271,0.7544606246736175,0.8005479449399782,0.758037452190282 3,9000,0.7990232649360067,0.7573703896772077,0.8021375332910405,0.754873027155089,0.8018733796679427,0.7545680141630304,0.8016400687760605,0.7579461042843499 3,10000,0.7994934439260372,0.758368978248884,0.8035693504115055,0.75619400688862,0.8032990505007025,0.7559016935896375,0.8022819185772518,0.7589558328445544 3,11000,0.8002954591825011,0.758710753096932,0.8043310859792212,0.7566387152306694,0.8040865016706966,0.7564221538891368,0.8030873114870971,0.7592722085543488 3,12000,0.8003726616196549,0.7588056657991931,0.8044000317617518,0.7566146528909147,0.8041705213966136,0.7563419459362758,0.8031760015719815,0.7593194421057111 3,-1,0.8004926728141455,0.7587192194882135,0.8043340929890026,0.756546030526114,0.8041028559910275,0.7563103085106637,0.8032542493776693,0.7592325501951863
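To make the intended use concrete, here is a hedged sketch of scoring a candidate duplicate question pair. It assumes the checkpoint can be loaded directly by `sentence-transformers` (the library it was trained with); if no sentence-transformers config is shipped with the repository, mean pooling over the transformer outputs would have to be added manually. The two questions are made up for illustration:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("deepset/quora_dedup_bert_base")

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"

# Embed both questions and compare them with cosine similarity;
# a high score suggests the questions are duplicates.
embeddings = model.encode([q1, q2], convert_to_tensor=True)
score = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"duplicate score: {score:.3f}")
```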
{"license": "apache-2.0"}
deepset/quora_dedup_bert_base
null
[ "transformers", "pytorch", "jax", "safetensors", "bert", "feature-extraction", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# roberta-base-squad2 for QA on COVID-19

## Overview
**Language model:** deepset/roberta-base-squad2
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** [SQuAD-style CORD-19 annotations from 23rd April](https://github.com/deepset-ai/COVID-QA/blob/master/data/question-answering/200423_covidQA.json)
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/01_basic_qa_pipeline)
**Infrastructure**: Tesla v100

## Hyperparameters
```
batch_size = 24
n_epochs = 3
base_LM_model = "deepset/roberta-base-squad2"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride = 128
xval_folds = 5
dev_split = 0
no_ans_boost = -100
```

## Performance
5-fold cross-validation on the data set led to the following results:

**Single EM-Scores:** [0.222, 0.123, 0.234, 0.159, 0.158]
**Single F1-Scores:** [0.476, 0.493, 0.599, 0.461, 0.465]
**Single top\\_3\\_recall Scores:** [0.827, 0.776, 0.860, 0.771, 0.777]
**XVAL EM:** 0.17890995260663506
**XVAL f1:** 0.49925444207319924
**XVAL top\\_3\\_recall:** 0.8021327014218009

This model is the model obtained from the **third** fold of the cross-validation.

## Usage

### In Haystack
For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2-covid")
# or
reader = TransformersReader(model="deepset/roberta-base-squad2-covid", tokenizer="deepset/roberta-base-squad2-covid")
```

### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/roberta-base-squad2-covid"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

## Authors
**Branden Chan:** [email protected]
**Timo Möller:** [email protected]
**Malte Pietsch:** [email protected]
**Tanay Soni:** [email protected]
**Bogdan Kostić:** [email protected]

## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>

[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
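Since the evaluation above reports top-3 recall, inspecting several answer candidates is often useful. A minimal sketch using the pipeline's `top_k` parameter; the COVID-related question and context are made up for illustration:

```python
from transformers import pipeline

model_name = "deepset/roberta-base-squad2-covid"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

# Ask for the three best spans instead of only the single top answer.
candidates = qa(
    question="How long is the incubation period?",
    context="Current estimates suggest that the incubation period ranges from 2 to 14 days.",
    top_k=3,
)
for candidate in candidates:
    print(candidate["answer"], round(candidate["score"], 3))
```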
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"]}
deepset/roberta-base-squad2-covid
null
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "question-answering", "en", "dataset:squad_v2", "license:cc-by-4.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
## Overview
**Language model:** deepset/roberta-base-squad2-distilled
**Language:** English
**Training data:** SQuAD 2.0 training set
**Eval data:** SQuAD 2.0 dev set
**Infrastructure**: 4x V100 GPU
**Published**: Dec 8th, 2021

## Details
- haystack's distillation feature was used for training. deepset/roberta-large-squad2 was used as the teacher model.

## Hyperparameters
```
batch_size = 80
n_epochs = 4
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 1.5
distillation_loss_weight = 0.75
```

## Performance
```
"exact": 79.8366040596311
"f1": 83.916407079888
```

## Authors
**Timo Möller:** [email protected]
**Julian Risch:** [email protected]
**Malte Pietsch:** [email protected]
**Michel Bartels:** [email protected]

## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>

[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.

Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
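To illustrate how the `temperature` and `distillation_loss_weight` hyperparameters listed above interact, here is a rough PyTorch sketch of a prediction-layer distillation loss for QA start/end logits. It only illustrates the general idea and is not the exact loss implemented by Haystack's distillation feature:

```python
import torch
import torch.nn.functional as F

def qa_distillation_loss(student_logits, teacher_logits, gold_positions, T=1.5, alpha=0.75):
    # Soft loss: match the teacher's temperature-softened distribution over token positions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard loss: standard cross-entropy against the gold span positions.
    hard = F.cross_entropy(student_logits, gold_positions)
    return alpha * soft + (1 - alpha) * hard

# Toy example: batch of 2, sequence length 384 (as in the hyperparameters above).
student = torch.randn(2, 384)
teacher = torch.randn(2, 384)
gold_start = torch.tensor([17, 42])
print(qa_distillation_loss(student, teacher, gold_start))
```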
{"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["squad_v2"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg", "model-index": [{"name": "deepset/roberta-base-squad2-distilled", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 80.8593, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzVjNzkxNmNiNDkzNzdiYjJjZGM3ZTViMGJhOGM2ZjFmYjg1MjYxMDM2YzM5NWMwNDIyYzNlN2QwNGYyNDMzZSIsInZlcnNpb24iOjF9.Rgww8tf8D7nF2dh2U_DMrFzmp87k8s7RFibrDXSvQyA66PGWXwjlsd1552lzjHnNV5hvHUM1-h3PTuY_5p64BA"}, {"type": "f1", "value": 84.0104, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTAyZDViNWYzNjA4OWQ5MzgyYmQ2ZDlhNWRhMTIzYTYxYzViMmI4NWE4ZGU5MzVhZTAwNTRlZmRlNWUwMjI0ZSIsInZlcnNpb24iOjF9.Er21BNgJ3jJXLuZtpubTYq9wCwO1i_VLQFwS5ET0e4eAYVVj0aOA40I5FvP5pZac3LjkCnVacxzsFWGCYVmnDA"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 86.225, "name": "Exact Match"}, {"type": "f1", "value": 92.483, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "adversarial_qa", "type": "adversarial_qa", "config": "adversarialQA", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 29.9, "name": "Exact Match"}, {"type": "f1", "value": 41.183, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_adversarial", "type": "squad_adversarial", "config": "AddOneSent", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 79.071, "name": "Exact Match"}, {"type": "f1", "value": 84.472, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts amazon", "type": "squadshifts", "config": "amazon", "split": "test"}, "metrics": [{"type": "exact_match", "value": 70.733, "name": "Exact Match"}, {"type": "f1", "value": 83.958, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts new_wiki", "type": "squadshifts", "config": "new_wiki", "split": "test"}, "metrics": [{"type": "exact_match", "value": 82.011, "name": "Exact Match"}, {"type": "f1", "value": 91.092, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts nyt", "type": "squadshifts", "config": "nyt", "split": "test"}, "metrics": [{"type": "exact_match", "value": 84.203, "name": "Exact Match"}, {"type": "f1", "value": 91.521, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts reddit", "type": "squadshifts", "config": "reddit", "split": "test"}, "metrics": [{"type": "exact_match", "value": 72.029, "name": "Exact Match"}, {"type": "f1", "value": 83.454, "name": "F1"}]}]}]}
deepset/roberta-base-squad2-distilled
null
[ "transformers", "pytorch", "safetensors", "roberta", "question-answering", "exbert", "en", "dataset:squad_v2", "license:mit", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# roberta-base for QA This is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering. ## Overview **Language model:** roberta-base **Language:** English **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system) **Infrastructure**: 4x Tesla v100 ## Hyperparameters ``` batch_size = 96 n_epochs = 2 base_LM_model = "roberta-base" max_seq_len = 386 learning_rate = 3e-5 lr_schedule = LinearWarmup warmup_proportion = 0.2 doc_stride=128 max_query_length=64 ``` ## Using a distilled model instead Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has a comparable prediction quality and runs at twice the speed of the base model. ## Usage ### In Haystack Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2") # or reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2") ``` For a complete example of ``roberta-base-squad2`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system) ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/roberta-base-squad2" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Performance Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/). 
```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```

## Authors
**Branden Chan:** [email protected]
**Timo Möller:** [email protected]
**Malte Pietsch:** [email protected]
**Tanay Soni:** [email protected]

## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>

[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.

Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
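For contexts longer than the model's window, the pipeline can split the text into overlapping chunks. A small sketch, assuming the standard `max_seq_len` and `doc_stride` parameters of the `transformers` question-answering pipeline and mirroring the training settings listed above; the repeated sentence only simulates a long document:

```python
from transformers import pipeline

model_name = "deepset/roberta-base-squad2"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

# Repeat a sentence to simulate a long document; the pipeline chunks it with overlap.
long_context = "Haystack is an open-source NLP framework developed by deepset. " * 200
result = qa(
    question="Who develops Haystack?",
    context=long_context,
    max_seq_len=386,
    doc_stride=128,
)
print(result["answer"])
```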
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/roberta-base-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 79.9309, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhhNjg5YzNiZGQ1YTIyYTAwZGUwOWEzZTRiYzdjM2QzYjA3ZTUxNDM1NjE1MTUyMjE1MGY1YzEzMjRjYzVjYiIsInZlcnNpb24iOjF9.EH5JJo8EEFwU7osPz3s7qanw_tigeCFhCXjSfyN0Y1nWVnSfulSxIk_DbAEI5iE80V4EKLyp5-mYFodWvL2KDA"}, {"type": "f1", "value": 82.9501, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjk5ZDYwOGQyNjNkMWI0OTE4YzRmOTlkY2JjNjQ0YTZkNTMzMzNkYTA0MDFmNmI3NjA3NjNlMjhiMDQ2ZjJjNSIsInZlcnNpb24iOjF9.DDm0LNTkdLbGsue58bg1aH_s67KfbcmkvL-6ZiI2s8IoxhHJMSf29H_uV2YLyevwx900t-MwTVOW3qfFnMMEAQ"}, {"type": "total", "value": 11869, "name": "total", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGFkMmI2ODM0NmY5NGNkNmUxYWViOWYxZDNkY2EzYWFmOWI4N2VhYzY5MGEzMTVhOTU4Zjc4YWViOGNjOWJjMCIsInZlcnNpb24iOjF9.fexrU1icJK5_MiifBtZWkeUvpmFISqBLDXSQJ8E6UnrRof-7cU0s4tX_dIsauHWtUpIHMPZCf5dlMWQKXZuAAA"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 85.289, "name": "Exact Match"}, {"type": "f1", "value": 91.841, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "adversarial_qa", "type": "adversarial_qa", "config": "adversarialQA", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 29.5, "name": "Exact Match"}, {"type": "f1", "value": 40.367, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_adversarial", "type": "squad_adversarial", "config": "AddOneSent", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 78.567, "name": "Exact Match"}, {"type": "f1", "value": 84.469, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts amazon", "type": "squadshifts", "config": "amazon", "split": "test"}, "metrics": [{"type": "exact_match", "value": 69.924, "name": "Exact Match"}, {"type": "f1", "value": 83.284, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts new_wiki", "type": "squadshifts", "config": "new_wiki", "split": "test"}, "metrics": [{"type": "exact_match", "value": 81.204, "name": "Exact Match"}, {"type": "f1", "value": 90.595, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts nyt", "type": "squadshifts", "config": "nyt", "split": "test"}, "metrics": [{"type": "exact_match", "value": 82.931, "name": "Exact Match"}, {"type": "f1", "value": 90.756, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts reddit", "type": "squadshifts", "config": "reddit", "split": "test"}, "metrics": [{"type": "exact_match", "value": 71.55, "name": "Exact Match"}, {"type": "f1", "value": 82.939, "name": "F1"}]}]}]}
deepset/roberta-base-squad2
null
[ "transformers", "pytorch", "tf", "jax", "rust", "safetensors", "roberta", "question-answering", "en", "dataset:squad_v2", "license:cc-by-4.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
deepset/roberta-large-squad2-hp
null
[ "transformers", "pytorch", "jax", "roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# roberta-large for QA This is the [roberta-large](https://huggingface.co/roberta-large) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering. ## Overview **Language model:** roberta-large **Language:** English **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system) **Infrastructure**: 4x Tesla v100 ## Hyperparameters ``` base_LM_model = "roberta-large" ``` ## Using a distilled model instead Please note that we have also released a distilled version of this model called [deepset/roberta-base-squad2-distilled](https://huggingface.co/deepset/roberta-base-squad2-distilled). The distilled model has a comparable prediction quality and runs at twice the speed of the large model. ## Usage ### In Haystack Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/roberta-large-squad2") # or reader = TransformersReader(model_name_or_path="deepset/roberta-large-squad2",tokenizer="deepset/roberta-large-squad2") ``` For a complete example of ``roberta-large-squad2`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system) ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/roberta-large-squad2" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Authors **Branden Chan:** [email protected] **Timo Möller:** [email protected] **Malte Pietsch:** [email protected] **Tanay Soni:** [email protected] ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc. 
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
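Because the large model is noticeably slower per query than the base model, batching several question/context pairs through the pipeline can help. A minimal sketch, assuming the pipeline's list-input and `batch_size` options; the second example pair is made up for illustration:

```python
from transformers import pipeline

model_name = "deepset/roberta-large-squad2"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

inputs = [
    {"question": "Why is model conversion important?",
     "context": "The option to convert models between FARM and transformers gives freedom to the user."},
    {"question": "What does deepset build?",
     "context": "deepset is the company behind the open-source NLP framework Haystack."},
]

# Passing a list returns a list of answers; batch_size controls how many run at once.
for result in qa(inputs, batch_size=2):
    print(result["answer"], round(result["score"], 3))
```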
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "base_model": "roberta-large", "model-index": [{"name": "deepset/roberta-large-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 85.168, "name": "Exact Match"}, {"type": "f1", "value": 88.349, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 87.162, "name": "Exact Match"}, {"type": "f1", "value": 93.603, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "adversarial_qa", "type": "adversarial_qa", "config": "adversarialQA", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 35.9, "name": "Exact Match"}, {"type": "f1", "value": 48.923, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_adversarial", "type": "squad_adversarial", "config": "AddOneSent", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 81.142, "name": "Exact Match"}, {"type": "f1", "value": 87.099, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts amazon", "type": "squadshifts", "config": "amazon", "split": "test"}, "metrics": [{"type": "exact_match", "value": 72.453, "name": "Exact Match"}, {"type": "f1", "value": 86.325, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts new_wiki", "type": "squadshifts", "config": "new_wiki", "split": "test"}, "metrics": [{"type": "exact_match", "value": 82.338, "name": "Exact Match"}, {"type": "f1", "value": 91.974, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts nyt", "type": "squadshifts", "config": "nyt", "split": "test"}, "metrics": [{"type": "exact_match", "value": 84.352, "name": "Exact Match"}, {"type": "f1", "value": 92.645, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts reddit", "type": "squadshifts", "config": "reddit", "split": "test"}, "metrics": [{"type": "exact_match", "value": 74.722, "name": "Exact Match"}, {"type": "f1", "value": 86.86, "name": "F1"}]}]}]}
deepset/roberta-large-squad2
null
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "question-answering", "en", "dataset:squad_v2", "base_model:roberta-large", "license:cc-by-4.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
This is an upload of the bert-base-nli-stsb-mean-tokens pretrained model from the Sentence Transformers Repo (https://github.com/UKPLab/sentence-transformers)
{"license": "apache-2.0"}
deepset/sentence_bert
null
[ "transformers", "pytorch", "jax", "bert", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
This model contains the converted PyTorch checkpoint of the original Tensorflow model available in the [TaPas repository](https://github.com/google-research/tapas/blob/master/DENSE_TABLE_RETRIEVER.md#reader-models). It is described in Herzig et al.'s (2021) [paper](https://aclanthology.org/2021.naacl-main.43/) _Open Domain Question Answering over Tables via Dense Retrieval_. This model has 2 versions that can be used differing only in the table scoring head. The default one has an adapted table scoring head in order to be able to generate probabilities out of the logits. The other (non-default) version corresponds to the original checkpoint from the TaPas repository and can be accessed by setting `revision="original"`. # Usage ## In Haystack If you want to use this model for question-answering over tables, you can load it in [Haystack](https://github.com/deepset-ai/haystack/): ```python from haystack.nodes import TableReader table_reader = TableReader(model_name_or_path="deepset/tapas-large-nq-hn-reader") ```
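To make the `revision` mechanism mentioned above concrete, here is a hedged sketch of selecting the original (non-default) checkpoint directly with `transformers`. `AutoModel` only loads the base TaPas encoder and will ignore the adapted table-scoring head, which is instead handled by Haystack's `TableReader`; the snippet only shows how the alternative revision is chosen:

```python
from transformers import AutoConfig, AutoModel

model_name = "deepset/tapas-large-nq-hn-reader"

# The non-default version is selected purely via the revision argument.
config = AutoConfig.from_pretrained(model_name, revision="original")
model = AutoModel.from_pretrained(model_name, revision="original", config=config)
print(model.config.model_type)  # expected: "tapas"
```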
{"language": "en", "license": "apache-2.0", "tags": ["tapas"]}
deepset/tapas-large-nq-hn-reader
null
[ "transformers", "pytorch", "tapas", "en", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
This model contains the converted PyTorch checkpoint of the original Tensorflow model available in the [TaPas repository](https://github.com/google-research/tapas/blob/master/DENSE_TABLE_RETRIEVER.md#reader-models). It is described in Herzig et al.'s (2021) [paper](https://aclanthology.org/2021.naacl-main.43/) _Open Domain Question Answering over Tables via Dense Retrieval_.

This model has two versions that differ only in the table scoring head. The default one has an adapted table scoring head in order to be able to generate probabilities out of the logits. The other (non-default) version corresponds to the original checkpoint from the TaPas repository and can be accessed by setting `revision="original"`.

# Usage

## In Haystack
If you want to use this model for question-answering over tables, you can load it in [Haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import TableReader

table_reader = TableReader(model_name_or_path="deepset/tapas-large-nq-reader")
```
{"language": "en", "license": "apache-2.0", "tags": ["tapas"]}
deepset/tapas-large-nq-reader
null
[ "transformers", "pytorch", "tapas", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
## Overview **Language model:** deepset/tinybert-6L-768D-squad2 **Language:** English **Training data:** SQuAD 2.0 training set x 20 augmented + SQuAD 2.0 training set without augmentation **Eval data:** SQuAD 2.0 dev set **Infrastructure**: 1x V100 GPU **Published**: Dec 8th, 2021 ## Details - haystack's intermediate layer and prediction layer distillation features were used for training (based on [TinyBERT](https://arxiv.org/pdf/1909.10351.pdf)). deepset/bert-base-uncased-squad2 was used as the teacher model and huawei-noah/TinyBERT_General_6L_768D was used as the student model. ## Hyperparameters ### Intermediate layer distillation ``` batch_size = 26 n_epochs = 5 max_seq_len = 384 learning_rate = 5e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 1 ``` ### Prediction layer distillation ``` batch_size = 26 n_epochs = 5 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 1 distillation_loss_weight = 1.0 ``` ## Performance ``` "exact": 71.87736882001179 "f1": 76.36111895973675 ``` ## Authors - Timo Möller: `timo.moeller [at] deepset.ai` - Julian Risch: `julian.risch [at] deepset.ai` - Malte Pietsch: `malte.pietsch [at] deepset.ai` - Michel Bartels: `michel.bartels [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
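The card does not include a usage snippet; here is a minimal sketch of querying the distilled model with the standard `transformers` question-answering pipeline, using a context taken from the details above:

```python
from transformers import pipeline

model_name = "deepset/tinybert-6l-768d-squad2"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

result = qa(
    question="Which model was used as the student?",
    context="deepset/bert-base-uncased-squad2 was used as the teacher model and "
            "huawei-noah/TinyBERT_General_6L_768D was used as the student model.",
)
print(result)
```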
{"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["squad_v2"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg", "model-index": [{"name": "deepset/tinybert-6l-768d-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 73.8248, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGFmZmFiN2E5ODZkOTkyMjQ1NTUzMmQwMjc0M2RlYzVlNmM4YTFlNzA4YzIwY2JkY2EyNDg2ZTY3OTdjZTVlZiIsInZlcnNpb24iOjF9.ZZ6c2OI3lzeNhuSWTh28j00zk-sPrqkTvdVBZv2wJc1D4YnR-xOj72haybT6MV_xeYqTg3-x9L8PsWSS20NaDw"}, {"type": "f1", "value": 77.1684, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzAxMDk1YzI5ZjA2N2ZmMzAxNjgxYzJiNzAzYmI1ZWU5ZDRmYWY3OWJmMjlmNDcyMGE0YWY5NjNhZTk4YWY5ZSIsInZlcnNpb24iOjF9.rF3raNGUSYv5D2xzWLZztD99vwDKvWb22LG32RomrDGP6XKTbCVqZzAw5UFw93jKb0VoLApbQQ-AOGxLj3U_Cg"}]}]}]}
deepset/tinybert-6l-768d-squad2
null
[ "transformers", "pytorch", "safetensors", "bert", "question-answering", "exbert", "en", "dataset:squad_v2", "arxiv:1909.10351", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# tinyroberta-squad2 ## Overview **Language model:** tinyroberta-squad2 **Language:** English **Training data:** The PILE **Code:** **Infrastructure**: 4x Tesla v100 ## Hyperparameters ``` batch_size = 96 n_epochs = 4 base_LM_model = "deepset/tinyroberta-squad2-step1" max_seq_len = 384 learning_rate = 1e-4 lr_schedule = LinearWarmup warmup_proportion = 0.2 teacher = "deepset/roberta-base" ``` ## Distillation This model was distilled using the TinyBERT approach described in [this paper](https://arxiv.org/pdf/1909.10351.pdf) and implemented in [haystack](https://github.com/deepset-ai/haystack). We have performed intermediate layer distillation with roberta-base as the teacher which resulted in [deepset/tinyroberta-6l-768d](https://huggingface.co/deepset/tinyroberta-6l-768d). This model has not been distilled for any specific task. If you are interested in using distillation to improve its performance on a downstream task, you can take advantage of haystack's new [distillation functionality](https://haystack.deepset.ai/guides/model-distillation). You can also check out [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2) for a model that is already distilled on an extractive QA downstream task. ## Usage ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/tinyroberta-squad2" model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ### In FARM ```python from farm.modeling.adaptive_model import AdaptiveModel from farm.modeling.tokenization import Tokenizer from farm.infer import Inferencer model_name = "deepset/tinyroberta-squad2" model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering") tokenizer = Tokenizer.load(model_name) ``` ### In haystack For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2") # or reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2") ``` ## Authors Branden Chan: `branden.chan [at] deepset.ai` Timo Möller: `timo.moeller [at] deepset.ai` Malte Pietsch: `malte.pietsch [at] deepset.ai` Tanay Soni: `tanay.soni [at] deepset.ai` Michel Bartels: `michel.bartels [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"]}
deepset/tinyroberta-6l-768d
null
[ "transformers", "pytorch", "safetensors", "roberta", "question-answering", "en", "dataset:squad_v2", "arxiv:1909.10351", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
deepset/tinyroberta-squad2-step1
null
[ "transformers", "pytorch", "safetensors", "roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# tinyroberta-squad2

This is the *distilled* version of the [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) model. This model has a comparable prediction quality and runs at twice the speed of the base model.

## Overview
**Language model:** tinyroberta-squad2
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 4x Tesla v100

## Hyperparameters
```
batch_size = 96
n_epochs = 4
base_LM_model = "deepset/tinyroberta-squad2-step1"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
distillation_loss_weight = 0.75
temperature = 1.5
teacher = "deepset/roberta-large-squad2"
```

## Distillation
This model was distilled using the TinyBERT approach described in [this paper](https://arxiv.org/pdf/1909.10351.pdf) and implemented in [haystack](https://github.com/deepset-ai/haystack). Firstly, we have performed intermediate layer distillation with roberta-base as the teacher which resulted in [deepset/tinyroberta-6l-768d](https://huggingface.co/deepset/tinyroberta-6l-768d). Secondly, we have performed task-specific distillation with [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) as the teacher for further intermediate layer distillation on an augmented version of SQuADv2 and then with [deepset/roberta-large-squad2](https://huggingface.co/deepset/roberta-large-squad2) as the teacher for prediction layer distillation.

## Usage

### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/tinyroberta-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/tinyroberta-squad2")
```

### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/tinyroberta-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 78.69114798281817,
"f1": 81.9198998536977,
"total": 11873,
"HasAns_exact": 76.19770580296895,
"HasAns_f1": 82.66446878592329,
"HasAns_total": 5928,
"NoAns_exact": 81.17746005046257,
"NoAns_f1": 81.17746005046257,
"NoAns_total": 5945
```

## Authors
**Branden Chan:** [email protected]
**Timo Möller:** [email protected]
**Malte Pietsch:** [email protected]
**Tanay Soni:** [email protected]
**Michel Bartels:** [email protected]

## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>

[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.

Some of our other work:
- [roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
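To sanity-check the "twice the speed" claim on your own hardware, here is a rough timing sketch comparing the distilled model with its base counterpart; absolute numbers depend on hardware, batch size, and sequence length:

```python
import time
from transformers import pipeline

question = "Why is model conversion important?"
context = "The option to convert models between FARM and transformers gives freedom to the user."

# Time 20 single-query calls for each model and report the average latency.
for name in ["deepset/tinyroberta-squad2", "deepset/roberta-base-squad2"]:
    qa = pipeline("question-answering", model=name, tokenizer=name)
    start = time.perf_counter()
    for _ in range(20):
        qa(question=question, context=context)
    print(name, f"{(time.perf_counter() - start) / 20:.3f}s per query")
```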
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/tinyroberta-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 78.8627, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDNlZDU4ODAxMzY5NGFiMTMyZmQ1M2ZhZjMyODA1NmFlOGMxNzYxNTA4OGE5YTBkZWViZjBkNGQ2ZmMxZjVlMCIsInZlcnNpb24iOjF9.Wgu599r6TvgMLTrHlLMVAbUtKD_3b70iJ5QSeDQ-bRfUsVk6Sz9OsJCp47riHJVlmSYzcDj_z_3jTcUjCFFXBg"}, {"type": "f1", "value": 82.0355, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTFkMzEzMWNiZDRhMGZlODhkYzcwZTZiMDFjZDg2YjllZmUzYWM5NTgwNGQ2NGYyMDk2ZGQwN2JmMTE5NTc3YiIsInZlcnNpb24iOjF9.ChgaYpuRHd5WeDFjtiAHUyczxtoOD_M5WR8834jtbf7wXhdGOnZKdZ1KclmhoI5NuAGc1NptX-G0zQ5FTHEcBA"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 83.86, "name": "Exact Match"}, {"type": "f1", "value": 90.752, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "adversarial_qa", "type": "adversarial_qa", "config": "adversarialQA", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 25.967, "name": "Exact Match"}, {"type": "f1", "value": 37.006, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_adversarial", "type": "squad_adversarial", "config": "AddOneSent", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 76.329, "name": "Exact Match"}, {"type": "f1", "value": 83.292, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts amazon", "type": "squadshifts", "config": "amazon", "split": "test"}, "metrics": [{"type": "exact_match", "value": 63.915, "name": "Exact Match"}, {"type": "f1", "value": 78.395, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts new_wiki", "type": "squadshifts", "config": "new_wiki", "split": "test"}, "metrics": [{"type": "exact_match", "value": 80.297, "name": "Exact Match"}, {"type": "f1", "value": 89.808, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts nyt", "type": "squadshifts", "config": "nyt", "split": "test"}, "metrics": [{"type": "exact_match", "value": 80.149, "name": "Exact Match"}, {"type": "f1", "value": 88.321, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts reddit", "type": "squadshifts", "config": "reddit", "split": "test"}, "metrics": [{"type": "exact_match", "value": 66.959, "name": "Exact Match"}, {"type": "f1", "value": 79.3, "name": "F1"}]}]}]}
deepset/tinyroberta-squad2
null
[ "transformers", "pytorch", "safetensors", "roberta", "question-answering", "en", "dataset:squad_v2", "arxiv:1909.10351", "license:cc-by-4.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# deepset/xlm-roberta-base-squad2-distilled

This model was trained with Haystack's distillation feature, using deepset/xlm-roberta-large-squad2 as the teacher model.

## Overview
**Language model:** deepset/xlm-roberta-base-squad2-distilled
**Language:** Multilingual
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 1x Tesla v100

## Hyperparameters
```
batch_size = 56
n_epochs = 4
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 3
distillation_loss_weight = 0.75
```
A sketch of how `temperature` and `distillation_loss_weight` typically combine into the training loss is included at the end of this card.

## Usage

### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled")
# or
reader = TransformersReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled", tokenizer="deepset/xlm-roberta-base-squad2-distilled")
```
For a complete example of `deepset/xlm-roberta-base-squad2-distilled` being used for question answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system).

### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/xlm-roberta-base-squad2-distilled"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

## Performance
Evaluated on the SQuAD 2.0 dev set:
```
"exact": 74.06721131980123
"f1": 76.39919553344667
```

## Authors
**Timo Möller:** [email protected]
**Julian Risch:** [email protected]
**Malte Pietsch:** [email protected]
**Michel Bartels:** [email protected]

## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>

[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, and more.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
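The `temperature` and `distillation_loss_weight` hyperparameters listed earlier control how the teacher's soft predictions are blended with the usual span loss during distillation. The snippet below is a generic sketch of that combination, not Haystack's exact implementation; `student_logits`, `teacher_logits`, and `hard_loss` are placeholder names for the student's span logits, the teacher's span logits, and the regular cross-entropy loss against the gold spans.

```python
# Generic prediction-layer distillation sketch (not Haystack's exact code).
# Assumes student_logits and teacher_logits are span logits of shape
# (batch, seq_len) and hard_loss is the cross-entropy against gold spans.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_loss,
                      temperature=3.0, distillation_loss_weight=0.75):
    # Soften both distributions with the temperature, then match them with KL.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_loss = F.kl_div(soft_student, soft_teacher, reduction="batchmean")
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    soft_loss = soft_loss * temperature ** 2
    # Blend the soft (teacher-matching) loss with the hard (gold-label) loss.
    return (distillation_loss_weight * soft_loss
            + (1.0 - distillation_loss_weight) * hard_loss)
```

With `distillation_loss_weight = 0.75`, roughly three quarters of the training signal comes from matching the teacher and one quarter from the gold labels.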
{"language": "multilingual", "license": "mit", "tags": ["exbert"], "datasets": ["squad_v2"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg"}
deepset/xlm-roberta-base-squad2-distilled
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "question-answering", "exbert", "multilingual", "dataset:squad_v2", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# Multilingual XLM-RoBERTa base for QA on various languages

## Overview
**Language model:** xlm-roberta-base
**Language:** Multilingual
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0 dev set, German MLQA, German XQuAD
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM)
**Infrastructure**: 4x Tesla v100

## Hyperparameters
```
batch_size = 22*4
n_epochs = 2
max_seq_len = 256
doc_stride = 128
learning_rate = 2e-5
```
Corresponding experiment logs in mlflow: [link](https://public-mlflow.deepset.ai/#/experiments/2/runs/b25ec75e07614accb3f1ce03d43dbe08)

## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 73.91560683904657
"f1": 77.14103746689592
```

Evaluated on German MLQA: test-context-de-question-de.json
```
"exact": 33.67279167589108
"f1": 44.34437105434842
"total": 4517
```

Evaluated on German XQuAD: xquad.de.json
```
"exact": 48.739495798319325
"f1": 62.552615701071495
"total": 1190
```
A sketch for running this German XQuAD evaluation with the `datasets` library is included at the end of this card.

## Usage

### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/xlm-roberta-base-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer

model_name = "deepset/xlm-roberta-base-squad2"

# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
             "text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)

# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```

### In Haystack
For doing QA at scale (i.e. many documents instead of a single paragraph), you can also load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2")
# or
reader = TransformersReader(model="deepset/xlm-roberta-base-squad2", tokenizer="deepset/xlm-roberta-base-squad2")
```

## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`

## About us
![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo)

We bring NLP to the industry via open source!
Our focus: industry-specific language models & large-scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)

Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
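To get a feel for the German XQuAD numbers reported earlier, a rough evaluation loop might look like the sketch below. It assumes the Hugging Face `xquad` dataset with the `xquad.de` config and uses the plain `squad` metric, since XQuAD contains no unanswerable questions; it is not the official eval script, so scores can differ slightly.

```python
# Rough German XQuAD evaluation sketch (not the official eval script).
from datasets import load_dataset
from transformers import pipeline
import evaluate

qa = pipeline("question-answering", model="deepset/xlm-roberta-base-squad2")
xquad_de = load_dataset("xquad", "xquad.de", split="validation")
metric = evaluate.load("squad")  # XQuAD has no unanswerable questions

predictions, references = [], []
for example in xquad_de:  # 1,190 examples; this can take a while on CPU
    result = qa(question=example["question"], context=example["context"])
    predictions.append({"id": example["id"], "prediction_text": result["answer"]})
    references.append({"id": example["id"], "answers": example["answers"]})

print(metric.compute(predictions=predictions, references=references))
```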
{"license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/xlm-roberta-base-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 74.0354, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWMxNWQ2ODJkNWIzZGQwOWI4OTZjYjU3ZDVjZGQzMjI5MzljNjliZTY4Mzk4YTk4OTMzZWYxZjUxYmZhYTBhZSIsInZlcnNpb24iOjF9.eEeFYYJ30BfJDd-JYfI1kjlxJrRF6OFtj2GnkTCOO4kqX31inFy8ptDWusVlLFsUphm4dNWfTKXC5e-gytLBDA"}, {"type": "f1", "value": 77.1833, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjg4MjNkOTA4Y2I5OGFlYTk1NWZjMWFlNjI5M2Y0NGZhMThhN2M4YmY2Y2RhZjcwYzU0MGNjN2RkZDljZmJmNiIsInZlcnNpb24iOjF9.TX42YMXpH4e0qu7cC4ARDlZWSkd55dwwyeyFXmOlXERNnEicDuFBCsy8WHLaqQCLUkzODJ22Hw4zhv81rwnlAQ"}]}]}]}
deepset/xlm-roberta-base-squad2
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "question-answering", "dataset:squad_v2", "license:cc-by-4.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# Multilingual XLM-RoBERTa large for QA on various languages

## Overview
**Language model:** xlm-roberta-large
**Language:** Multilingual
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0 dev set, German MLQA, German XQuAD
**Training run:** [MLFlow link](https://public-mlflow.deepset.ai/#/experiments/124/runs/3a540e3f3ecf4dd98eae8fc6d457ff20)
**Infrastructure**: 4x Tesla v100

## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "xlm-roberta-large"
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
```
A sketch of how `max_seq_len` and `doc_stride` split long contexts into overlapping chunks is included at the end of this card.

## Performance
Evaluated on the SQuAD 2.0 English dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.45759285774446,
"f1": 83.79259828925511,
"total": 11873,
"HasAns_exact": 71.96356275303644,
"HasAns_f1": 80.6460053117963,
"HasAns_total": 5928,
"NoAns_exact": 86.93019343986543,
"NoAns_f1": 86.93019343986543,
"NoAns_total": 5945
```

Evaluated on German [MLQA: test-context-de-question-de.json](https://github.com/facebookresearch/MLQA)
```
"exact": 49.34691166703564,
"f1": 66.15582561674236,
"total": 4517,
```

Evaluated on German [XQuAD: xquad.de.json](https://github.com/deepmind/xquad)
```
"exact": 61.51260504201681,
"f1": 78.80206098332569,
"total": 1190,
```

## Usage

### In Haystack
For doing QA at scale (i.e. many documents instead of a single paragraph), you can also load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/xlm-roberta-large-squad2")
# or
reader = TransformersReader(model="deepset/xlm-roberta-large-squad2", tokenizer="deepset/xlm-roberta-large-squad2")
```

### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/xlm-roberta-large-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

## Authors
**Branden Chan:** [email protected]
**Timo Möller:** [email protected]
**Malte Pietsch:** [email protected]
**Tanay Soni:** [email protected]

## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>

[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, and more.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
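The `max_seq_len` and `doc_stride` hyperparameters listed earlier determine how contexts longer than the model's input window are split into overlapping chunks. The snippet below is an illustrative sketch of that chunking using the tokenizer's overflow feature; `long_context` is a placeholder, and this is not necessarily the exact preprocessing used during training.

```python
# Illustrative sketch of max_seq_len / doc_stride chunking for long contexts.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepset/xlm-roberta-large-squad2")

question = "Why is model conversion important?"
long_context = "... " * 1000  # placeholder for a context longer than 256 tokens

encoded = tokenizer(
    question,
    long_context,
    max_length=256,            # max_seq_len from the hyperparameters above
    stride=128,                # doc_stride: overlap between consecutive chunks
    truncation="only_second",  # only truncate the context, never the question
    return_overflowing_tokens=True,
    padding="max_length",
)

# Each entry in input_ids is one question + context chunk; at inference time
# the span scores from all chunks are aggregated to pick the best answer.
print(f"Context was split into {len(encoded['input_ids'])} overlapping chunks")
```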
{"language": "multilingual", "license": "cc-by-4.0", "tags": ["question-answering"], "datasets": ["squad_v2"], "model-index": [{"name": "deepset/xlm-roberta-large-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 81.8281, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzVhZDE2NTg5NmUwOWRkMmI2MGUxYjFlZjIzNmMyNDQ2MDY2MDNhYzE0ZjY5YTkyY2U4ODc3ODFiZjQxZWQ2YSIsInZlcnNpb24iOjF9.f_rN3WPMAdv-OBPz0T7N7lOxYz9f1nEr_P-vwKhi3jNdRKp_JTy18MYR9eyJM2riKHC6_ge-8XwfyrUf51DSDA"}, {"type": "f1", "value": 84.8886, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGE5MWJmZGUxMGMwNWFhYzVhZjQwZGEwOWQ4N2Q2Yjg5NzdjNDFiNDhiYTQ1Y2E5ZWJkOTFhYmI1Y2Q2ZGYwOCIsInZlcnNpb24iOjF9.TIdH-tOx3kEMDs5wK1r6iwZqqSjNGlBrpawrsE917j1F3UFJVnQ7wJwaj0OIgmC4iw8OQeLZL56ucBcLApa-AQ"}]}]}]}
deepset/xlm-roberta-large-squad2
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "question-answering", "multilingual", "dataset:squad_v2", "license:cc-by-4.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00