pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0–18.3M) | metadata (stringlengths 2–1.07B) | id (stringlengths 5–122) | last_modified (null) | tags (listlengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25)
---|---|---|---|---|---|---|---|---
null | transformers | {} | NtDNlp/cmcbert | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | # EmbeddingSimilarityEvaluator: Evaluating the model on the STS.en-en.txt dataset in epoch 2 after 26000 steps:
| Type | Pearson | Spearman |
| ----------- | ----------- | ----------- |
| Cosine | 0.7650 | 0.8095 |
| Euclidean | 0.8089 | 0.8010 |
| Cosine | 0.8075 | 0.7999 |
| Euclidean | 0.7531 | 0.7680 |
| {} | NtDNlp/sentence-embedding-vietnamese | null | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {"license": "afl-3.0"} | Nui/Sconte | null | [
"license:afl-3.0",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Nukki/Eu | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | NullisTerminis/en-to-pt | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | NullisTerminis/marian-finetuned-kde4-en-to-fr | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | Numenta/BertSparse80 | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | Numenta/BertSparse85 | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | Numenta/BertSparse90 | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | # Quran Speech Recognizer
This application will listen to the user's Quran recitation and take the
user to the position in the Quran from which he or she recited.
You can also take a look at our [presentation slides](https://docs.google.com/presentation/d/1dbbVYHi3LQRiggH14nN36YV2A-ddUAKg67aX5MWi0ys/edit?usp=sharing).
# Methodology
We used transfer learning to make our application. We fine-tuned the pretrained
model available at https://huggingface.co/elgeish/wav2vec2-large-xlsr-53-arabic
using the data available at https://www.kaggle.com/c/quran-asr-challenge/data.
Our model can be found at https://huggingface.co/Nuwaisir/Quran_speech_recognizer.
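For reference, a minimal transcription sketch with the fine-tuned checkpoint (the audio path, the 16 kHz resampling, and the use of `librosa` are assumptions; the notebook below handles recording for you):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("Nuwaisir/Quran_speech_recognizer")
model = Wav2Vec2ForCTC.from_pretrained("Nuwaisir/Quran_speech_recognizer")

# Load a recitation clip resampled to 16 kHz (the path is hypothetical)
speech, _ = librosa.load("recitation.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding to Arabic text
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```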
# Usage
Run all the cells of run_ui.ipynb. The last cell will record your
recitation for 5 seconds (changeable) from the time you run that cell, then convert your
speech to Arabic text and show the most probable corresponding parts of the 30th juz'
(Surah 78 - 114) of the Quran as output, ranked by edit distance.
Currently, we search only from Surah 78 to Surah 114, as the search
algorithm needs some time to cover the whole Quran. This range can be changed
in the 6th cell of the notebook. | {} | Nuwaisir/Quran_speech_recognizer | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | OLAOLA/AA | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | OStars/bert-base-chinese-finetuned-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | null | {"tags": ["conversational"]} | Obesitycart/ChatBot | null | [
"conversational",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# 707 DialoGPT Model | {"tags": ["conversational"]} | Obscurity/DialoGPT-Medium-707 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Ocelma/Test | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# GPT2-Mongolia
## Model description
GPT-2 is a transformers model pretrained on a very small corpus of Mongolian news data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
## How to use
```python
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('Ochiroo/tiny_mn_gpt')
model = TFGPT2LMHeadModel.from_pretrained('Ochiroo/tiny_mn_gpt')

# Mongolian prompt: "My name is Erdene-Ochir. I ..."
text = "Намайг Эрдэнэ-Очир гэдэг. Би"
input_ids = tokenizer.encode(text, return_tensors='tf')

# Beam search over 5 beams, returning 5 candidate continuations
beam_outputs = model.generate(
    input_ids,
    max_length=25,
    num_beams=5,
    temperature=0.7,
    no_repeat_ngram_size=2,
    num_return_sequences=5
)

print(tokenizer.decode(beam_outputs[0]))
```
## Training data and biases
Trained on 500MB of Mongolian news dataset (IKON) on RTX 2060. | {"language": "mn"} | Ochiroo/tiny_mn_gpt | null | [
"transformers",
"tf",
"gpt2",
"text-generation",
"mn",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
# HEL-ACH-EN
## Model description
A machine translation model translating Acholi to English, initialized with weights from [opus-mt-luo-en](https://huggingface.co/Helsinki-NLP/opus-mt-luo-en) on Hugging Face.
## Intended uses & limitations
Machine Translation experiments. Do not use for sensitive tasks.
#### How to use
```python
# Load the tokenizer and model from the Hub
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Ogayo/Hel-ach-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Ogayo/Hel-ach-en")
```
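A minimal translation sketch, reusing `tokenizer` and `model` from the snippet above (the Acholi input sentence is a hypothetical example):
```python
# Translate one Acholi sentence to English
batch = tokenizer(["Apwoyo matek"], return_tensors="pt")  # hypothetical Acholi input
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```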
#### Limitations and bias
Trained on Jehovah's Witnesses data, so it reflects their views and Christian views more broadly.
## Training data
Trained on OPUS JW300 data.
Initialized with weights from [opus-mt-luo-en](https://huggingface.co/Helsinki-NLP/opus-mt-luo-en?text=Bed+gi+nyasi+mar+chieng%27+nyuol+mopong%27+gi+mor%21#model_card)
## Training procedure
Removed duplicates and rows with no alphabetic characters. Trained on a GPU.
## Eval results
testset | BLEU
--- | ---
JW300.luo.en | 46.1
| {"language": ["ach", "en"], "license": "cc-by-4.0", "tags": ["translation"], "datasets": ["JW300"], "metrics": ["bleu"]} | Ogayo/Hel-ach-en | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"ach",
"en",
"dataset:JW300",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Ogayo/ach-en-translator | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Ogayo/mt-ach-en | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Ogayo/mt-adh-en | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Ogayo/mt-en-ach | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Ogayo/mt-en-adh | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Oji/DialoGPT-small-BOT | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Rick and Morty DialoGPT Model | {"tags": ["conversational"]} | Oji/DialoGPT-small-Rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Oji/DialoGPT-smaller-Rick | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | OksanaBash/distilroberta-base-finetuned-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Oksi/Sisisi | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Olaide/Psgnation | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Olaijs/DialoGPT-small-harrypotter | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | OliverZC/HuggingFace | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Olmik43/bR4mlOsmlOY | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Omar2027/Author_identification | null | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | AutoTokenizer | {} | Omar2027/AutoTokenizer | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1259
- Accuracy: 0.9332
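A minimal usage sketch with the 🤗 pipeline API (the example utterance and printed label are illustrative; the label set comes from the CLINC150 intents):
```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="Omar95farag/distilbert-base-uncased-distilled-clinc")
print(clf("How do I transfer money to my savings account?"))
# e.g. [{'label': 'transfer', 'score': 0.97}]
```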
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 0.5952 | 0.7355 |
| 0.7663 | 2.0 | 636 | 0.3130 | 0.8742 |
| 0.7663 | 3.0 | 954 | 0.2024 | 0.9206 |
| 0.3043 | 4.0 | 1272 | 0.1590 | 0.9235 |
| 0.181 | 5.0 | 1590 | 0.1378 | 0.9303 |
| 0.181 | 6.0 | 1908 | 0.1287 | 0.9329 |
| 0.1468 | 7.0 | 2226 | 0.1259 | 0.9332 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["clinc_oos"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9332258064516129, "name": "Accuracy"}]}, {"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "small", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.8587272727272727, "name": "Accuracy", "verified": true}, {"type": "precision", "value": 0.8619245385984416, "name": "Precision Macro", "verified": true}, {"type": "precision", "value": 0.8587272727272727, "name": "Precision Micro", "verified": true}, {"type": "precision", "value": 0.8797945801452213, "name": "Precision Weighted", "verified": true}, {"type": "recall", "value": 0.9359690949227375, "name": "Recall Macro", "verified": true}, {"type": "recall", "value": 0.8587272727272727, "name": "Recall Micro", "verified": true}, {"type": "recall", "value": 0.8587272727272727, "name": "Recall Weighted", "verified": true}, {"type": "f1", "value": 0.8922503214655346, "name": "F1 Macro", "verified": true}, {"type": "f1", "value": 0.8587272727272727, "name": "F1 Micro", "verified": true}, {"type": "f1", "value": 0.8506829426037475, "name": "F1 Weighted", "verified": true}, {"type": "loss", "value": 0.9798759818077087, "name": "loss", "verified": true}]}]}]} | Omar95farag/distilbert-base-uncased-distilled-clinc | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | Omar95farag/distilbert-base-uncased-finetuned-clinc | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {"license": "mit"} | Omerdor/OCTv0.1 | null | [
"license:mit",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Omfhxll/Ghj | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Onlyblacktea/my_transformers | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers |
# keytotext
[](https://pypi.org/project/keytotext/)
[](https://pepy.tech/project/keytotext)
[](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/notebooks/K2T.ipynb)
[](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
[](https://github.com/gagan3012/keytotext#api)
[](https://hub.docker.com/r/gagan30/keytotext)
[](https://huggingface.co/models?filter=keytotext)
[](https://keytotext.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/psf/black)

The idea is to build a model that takes keywords as input and generates sentences as output.
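A minimal generation sketch (the pipe-separated keyword prompt follows the keytotext library's convention and is an assumption for this checkpoint):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("OnsElleuch/logisgenerator")
model = AutoModelForSeq2SeqLM.from_pretrained("OnsElleuch/logisgenerator")

# Keywords joined with " | " as in keytotext's k2t models (assumed input format)
keywords = "India | capital | New Delhi"
input_ids = tokenizer.encode(keywords, return_tensors="pt")
outputs = model.generate(input_ids, max_length=32, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```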
Potential use cases include:
- Marketing
- Search Engine Optimization
- Topic generation etc.
- Fine tuning of topic modeling models | {"language": "en", "license": "MIT", "tags": ["keytotext", "k2t", "Keywords to Sentences"], "datasets": ["WebNLG", "Dart"], "metrics": ["NLG"], "thumbnail": "Keywords to Sentences"} | OnsElleuch/logisgenerator | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"keytotext",
"k2t",
"Keywords to Sentences",
"en",
"dataset:WebNLG",
"dataset:Dart",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | OpeyemiOsakuade/naijaNER | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Optimal/DialoGPT-small-harrypotter | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Harry Potter DialoGPT model | {"tags": ["conversational"]} | Optimal/Harry | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Optimal/chatbot | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | OraeeMaryam1376/my_model | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | OreoSlayer/Choch_GPT-medium | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | OreoSlayer/Chochito_GPT2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Orhantuna/Tunalar | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Orodrethxvx/DialoGPT-small-harrybotter | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | OsKyl/model_name | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | OsanOrg/test | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Osasuna52/bert-base-multilingual-cased-finetuned-polish-squad2-finetuned-squad-h-truncated-0f3010 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Osasuna52/bert-base-polish-cased-v1-finetuned-squad-Geotrend-distilbert-cased-dev | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Osasuna52/bert-base-polish-cased-v1-finetuned-squad-kleczek-bert-cased-dev | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Osasuna52/bert-base-polish-cased-v1-finetuned-squad-kleczek-bert-cased | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Osasuna52/distilbert-base-pl-cased-finetuned-squad-Geotrend-distilbert-cased-dev | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Osasuna52/herbert-base-cased-finetuned-squad-allegro-herbert-base-cased-dev | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Osasuna52/herbert-large-cased-finetuned-squad-allegro-herbert-large-cased-dev | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | # Finetuned DialoGPT model for Eng-Spa translation
DialoGPT-small model was used and finetuned on English to Spanish translations, extracted from http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip
Some examples of translations:
| Role | Response |
| :---: |------------------------|
| User | please, sing me a song |
| Bot | Por favor, canta una canción. |
| User | I really want to go to China |
| Bot | Realmente quiero ir a China. |
| User | Can you do me a favor? |
| Bot | ¿Me puedes hacer un favor? |
| User | I don't know what you are talking about |
| Bot | No sé de qué estás hablando. |
| User | I don't want to go to China |
| Bot | No quiero ir a China. |
# Using the model
example code for trying out the model
```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-small')
model = AutoModelWithLMHead.from_pretrained('OscarNav/dialoGPT_translate')

# Let's translate 5 sentences
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # generate a response, limiting the total length to 1000 tokens;
    # do_sample=True is required for top_p/top_k sampling to take effect
    chat_history_ids = model.generate(
        new_user_input_ids, max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True, top_p=0.92, top_k=50
    )

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, new_user_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` | {} | OscarNav/dialoGPT_translate | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | OsefZ/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | ### Introduction:
This model performs text classification: it determines the emotion behind a sentence.
### Label Explanation:
LABEL_0: Positive (expresses positive emotion)
LABEL_1: Negative (expresses negative emotion)
### Usage:
```python
>>> from transformers import pipeline
>>> ec = pipeline('text-classification', model='Osiris/emotion_classifier')
>>> ec("Hello, I'm a good model.")
```
### Accuracy:
We reach 83.82% on the validation dataset and 84.42% on the test dataset. | {} | Osiris/emotion_classifier | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | ### Introduction:
This model performs text classification: it checks whether a sentence contains any emotion.
### Label Explanation:
LABEL_1: Non-neutral (has some emotion)
LABEL_0: Neutral (has no emotion)
### Usage:
```python
>>> from transformers import pipeline
>>> nnc = pipeline('text-classification', model='Osiris/neutral_non_neutral_classifier')
>>> nnc("Hello, I'm a good model.")
```
### Accuracy:
We reach 93.98% on the validation dataset and 91.92% on the test dataset. | {} | Osiris/neutral_non_neutral_classifier | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | ```
git lfs install
git clone https://huggingface.co/r3dhummingbird/DialoGPT-medium-joshua
``` | {} | OsmyReal/Ayuda | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
# Distil-wav2vec2
This model is a distilled version of the wav2vec2 model (https://arxiv.org/pdf/2006.11477.pdf). It is 45% smaller and twice as fast as the original wav2vec2 base model.
# Evaluation results
This model achieves the following results (speed is measured for a batch size of 64):
|Model|Size|WER LibriSpeech test-clean|WER LibriSpeech test-other|Speed on CPU|Speed on GPU|
|----------| ------------- |-------------|-----------| ------|----|
|Distil-wav2vec2| 197.9 Mb | 0.0983 | 0.2266|0.4006s| 0.0046s|
|wav2vec2-base| 360 Mb | 0.0389 | 0.1047|0.4919s| 0.0082s|
# Usage
A notebook (executes seamlessly on Google Colab) is available at https://github.com/OthmaneJ/distil-wav2vec2
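Alternatively, a minimal transcription sketch with the ASR pipeline (the audio file name is hypothetical; 16 kHz mono audio is assumed):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="OthmaneJ/distil-wav2vec2")
print(asr("sample.wav")["text"])  # "sample.wav" is a hypothetical recording
```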
| {"language": "en", "license": "apache-2.0", "tags": ["speech", "audio", "automatic-speech-recognition"], "datasets": ["librispeech_asr"]} | OthmaneJ/distil-wav2vec2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"audio",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Otiham/Bah | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Overburn/Large-Jim500 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Tony Stark DialoGPT Model | {"tags": ["conversational"]} | P4RZ1V4L/DialoGPT-Medium-Tony | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | PJ/Phylicz | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | PJH10/tutorial | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | PSC/wav2vec2-large-xls-r-300m-tr-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Rick and Morty DialoGPT medium model | {"tags": ["conversational"]} | PVAbhiram2003/DialoGPT-medium-RickandMorty | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | PaalHannus/INF368_data | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | PaalHannus/Model_out | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | PaalHannus/NER_nor-finetuned-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pablogps/xls-r-ab-test | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | PaegiPang/bert_base_hate_speech | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2_squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the **squadV1** dataset.
- "eval_exact_match": 82.69631031220435
- "eval_f1": 90.10806626207174
- "eval_samples": 10808
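A minimal extractive-QA sketch with this checkpoint (question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Palak/albert-base-v2_squad")
result = qa(question="Where is the Eiffel Tower?",
            context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.")
print(result["answer"], result["score"])
```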
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "albert-base-v2_squad", "results": []}]} | Palak/albert-base-v2_squad | null | [
"transformers",
"pytorch",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v2_squad
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the **squadV1** dataset.
- "eval_exact_match": 84.80605487228004
- "eval_f1": 91.80638438705844
- "eval_samples": 10808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "albert-large-v2_squad", "results": []}]} | Palak/albert-large-v2_squad | null | [
"transformers",
"pytorch",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base_squad
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the **squadV1** dataset.
- "eval_exact_match": 80.97445600756859
- "eval_f1": 88.0153886332912
- "eval_samples": 10790
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilroberta-base_squad", "results": []}]} | Palak/distilroberta-base_squad | null | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google_electra-base-discriminator_squad
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the **squadV1** dataset.
- "eval_exact_match": 85.33585619678335
- "eval_f1": 91.97363450387108
- "eval_samples": 10784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "google_electra-base-discriminator_squad", "results": []}]} | Palak/google_electra-base-discriminator_squad | null | [
"transformers",
"pytorch",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google_electra-small-discriminator_squad
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the **squadV1** dataset.
- "eval_exact_match": 76.95364238410596
- "eval_f1": 84.98869246841396
- "eval_samples": 10784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "google_electra-small-discriminator_squad", "results": []}]} | Palak/google_electra-small-discriminator_squad | null | [
"transformers",
"pytorch",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# microsoft_deberta-base_squad
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the **squadV1** dataset.
- "eval_exact_match": 86.30085146641439
- "eval_f1": 92.68502275661561
- "eval_samples": 10788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "microsoft_deberta-base_squad", "results": []}]} | Palak/microsoft_deberta-base_squad | null | [
"transformers",
"pytorch",
"deberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# microsoft-deberta-large
This model is a fine-tuned version of [microsoft_deberta-large](https://huggingface.co/microsoft/deberta-large) on the **squadV1** dataset.
- "eval_exact_match": 87.89025543992432
- "eval_f1": 93.8755152147345
- "eval_samples": 10788
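A manual span-extraction sketch with this checkpoint (question and context are illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("Palak/microsoft_deberta-large_squad")
model = AutoModelForQuestionAnswering.from_pretrained("Palak/microsoft_deberta-large_squad")

question = "What does the model predict?"
context = "The model predicts the start and end positions of the answer span."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode the answer span
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax()) + 1
print(tokenizer.decode(inputs.input_ids[0][start:end]))
```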
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "microsoft-deberta-large", "results": []}]} | Palak/microsoft_deberta-large_squad | null | [
"transformers",
"pytorch",
"deberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_squad
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
- "eval_exact_match": 82.69631031220435
- "eval_f1": 89.4562841806503
- "eval_samples": 10918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "xlm-roberta-base_squad", "results": []}]} | Palak/xlm-roberta-base_squad | null | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large_squad
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the squad dataset.
- "eval_exact_match": 85.96026490066225
- "eval_f1": 92.25000664341768
- "eval_samples": 10918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.67
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "xlm-roberta-base_squad", "results": []}]} | Palak/xlm-roberta-large_squad | null | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Pallisgaard/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pamela/Pamela | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pantea/Pan | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pantea/Pantea | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Harry Potter AI bot | {"tags": ["conversational"]} | Paradocx/Dialogpt-mid-hpai | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Paramveer/t5-small-finetuned-xsum | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
sentence-similarity | sentence-transformers |
# ParkMyungkyu/KLUE-STS-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ParkMyungkyu/KLUE-STS-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ParkMyungkyu/KLUE-STS-roberta-base')
model = AutoModel.from_pretrained('ParkMyungkyu/KLUE-STS-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ParkMyungkyu/KLUE-STS-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 365 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 146,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | ParkMyungkyu/KLUE-STS-roberta-base | null | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | A fine-tuned model based on `gumgo91/IUPAC_BERT` for blood-brain barrier permeability prediction from IUPAC strings. BiLSTM models, as well as these two models, are available at https://github.com/mephisto121/BBBNLP if you want to check them all and the code.
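A minimal classification sketch (the pipeline usage and the IUPAC input, caffeine here, are illustrative):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Parsa/BBB_prediction_classification_IUPAC")
# IUPAC name for caffeine (illustrative input)
print(clf("1,3,7-trimethyl-3,7-dihydro-1H-purine-2,6-dione"))
```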
[](https://colab.research.google.com/drive/1jGYf3sq93yO4EbgVaEl3nlClrVatVaXS#scrollTo=AMEdQItmilAw) | {} | Parsa/BBB_prediction_classification_IUPAC | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | A fine-tuned model based on `DeepChem/ChemBERTa-77M-MLM` for blood-brain barrier permeability prediction from SMILES strings. BiLSTM models, as well as these two models, are available at https://github.com/mephisto121/BBBNLP if you want to check them all and the code.
[](https://colab.research.google.com/drive/1jGYf3sq93yO4EbgVaEl3nlClrVatVaXS#scrollTo=AMEdQItmilAw) | {} | Parsa/BBB_prediction_classification_SMILES | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers | {} | Parth/boolean | null | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | ```python
from transformers import MT5ForConditionalGeneration, AutoTokenizer

model = MT5ForConditionalGeneration.from_pretrained("Parth/mT5-question-generator")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
```
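A hypothetical generation sketch follows; the exact prompt format this checkpoint expects is not documented here, so the "context: ..." input style is an assumption:
```python
# Reusing `model` and `tokenizer` loaded above.
# "context: ..." prompting is an assumption; adjust to the actual training format if known.
text = "context: The Eiffel Tower was completed in 1889 in Paris."
input_ids = tokenizer.encode(text, return_tensors="pt")
outputs = model.generate(input_ids, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```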
| {} | Parth/mT5-question-generator | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers | {} | Parth/result | null | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pascal/model_name | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | 'hello'
| {} | Patrickdg/distilbert-consumer-complaints | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Paul012/bart-model | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | ##An MT5ForConditionalGeneration trained on 3 tasks from PAN Profiling Hate Speech Spreaders on Twitter dataset (ES):
* topic attribution - topics were assigned with the BERTopic library using embeddings from the `Hate-speech-CNERG/dehatebert-mono-spanish` BERT model (train and test sets from the PAN task)
* hate speech identification (train set from the PAN task)
In order to classify the tone of a comment, use the prefix **hater classification:** | {} | PaulAdversarial/PAN_twitter_hate_speech_2021_ES_MT5 | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |