pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at |
---|---|---|---|---|---|---|---|---|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
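For illustration only (this sketch is not part of the original card), the hyperparameters listed above map onto the standard `transformers` `TrainingArguments` roughly as follows; the output directory is a placeholder and any argument not listed above keeps its library default:
```python
from transformers import TrainingArguments

# Sketch of how the hyperparameters listed above translate to TrainingArguments.
# "finetune-output" and any argument not listed in the card are assumptions.
training_args = TrainingArguments(
    output_dir="finetune-output",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```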
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model_index": [{"name": "roberta-base-bne-finetuned-amazon_reviews_multi", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "args": "es"}}]}]} | IsabellaKarabasz/roberta-base-bne-finetuned-amazon_reviews_multi | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Ishdeep/DialoGPT-small-JoeyBot | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {"license": "mit"} | IshiKura/ELMo | null | [
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Iskaj/300m_cv8.0_nl_base | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8789
- Wer: 1.3456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
| {"language": ["ab"], "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | Iskaj/hf-challenge-test | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Iskaj/hf-test-nl | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# newnew
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset.
It achieves the following results on the evaluation set:
- Loss: 11.4375
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
| {"language": ["nl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "newnew", "results": []}]} | Iskaj/newnew | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"nl",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Iskaj/w2v-sub | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | Copy of "facebook/wav2vec2-large-xlsr-53-dutch"
| {} | Iskaj/w2v-xlsr-dutch-lm-added | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers | Model cloned from https://huggingface.co/facebook/wav2vec2-large-xlsr-53-dutch
Currently bugged: Logits size 48, vocab size 50 | {} | Iskaj/w2v-xlsr-dutch-lm | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers | # xlsr300m_cv_7.0_nl_lm | {"language": ["nl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "nl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Dutch", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8 NL", "type": "mozilla-foundation/common_voice_8_0", "args": "nl"}, "metrics": [{"type": "wer", "value": 32, "name": "Test WER"}, {"type": "cer", "value": 17, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 37.44, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 38.74, "name": "Test WER"}]}]}]} | Iskaj/xlsr300m_cv_7.0_nl_lm | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"nl",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
# xlsr300m_cv_8.0_nl
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Iskaj/xlsr300m_cv_8.0_nl --dataset mozilla-foundation/common_voice_8_0 --config nl --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id Iskaj/xlsr300m_cv_8.0_nl --dataset speech-recognition-community-v2/dev_data --config nl --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "Iskaj/xlsr300m_cv_8.0_nl"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "nl", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
inputs = processor(resampled_audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
transcription[0].lower()
#'het kontine schip lag aangemeert in de aven'
```
| {"language": ["nl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "mozilla-foundation/common_voice_7_0", "nl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Dutch", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8 NL", "type": "mozilla-foundation/common_voice_8_0", "args": "nl"}, "metrics": [{"type": "wer", "value": 46.94, "name": "Test WER"}, {"type": "cer", "value": 21.65, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "nl"}, "metrics": [{"type": "wer", "value": "???", "name": "Test WER"}, {"type": "cer", "value": "???", "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 42.56, "name": "Test WER"}]}]}]} | Iskaj/xlsr300m_cv_8.0_nl | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"mozilla-foundation/common_voice_7_0",
"nl",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
# xlsr_300m_CV_8.0_50_EP_new_params_nl | {"language": ["nl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "nl", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Dutch", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8 NL", "type": "mozilla-foundation/common_voice_8_0", "args": "nl"}, "metrics": [{"type": "wer", "value": 35.44, "name": "Test WER"}, {"type": "cer", "value": 19.57, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 37.17, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 38.73, "name": "Test WER"}]}]}]} | Iskaj/xlsr_300m_CV_8.0_50_EP_new_params_nl | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"nl",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | null | #sherlock | {"tags": ["conversational"]} | Istiaque190515/Sherlock | null | [
"conversational",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | #harry_bot | {"tags": ["conversational"]} | Istiaque190515/harry_bot_discord | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | #harry_potter | {"tags": ["conversational"]} | Istiaque190515/harry_potter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | {} | Itcast/bert-base-cnc | null | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | Itcast/cnc_output | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | ItcastAI/bert_cn_finetuning | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | ItcastAI/bert_cn_finetunning | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | ItcastAI/bert_finetuning_test | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | ItcastAI/bert_finetunning_test | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | ItelAi/Chatbot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Tohru DialoGPT model | {"tags": ["conversational"]} | ItoYagura/DialoGPT-medium-tohru | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | ItuThesis2022MlviNikw/bert-base-uncased | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | ItuThesis2022MlviNikw/deberta-v3-base | null | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Pickle Rick DialoGPT Model | {"tags": ["conversational"]} | ItzJorinoPlays/DialoGPT-small-PickleRick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Ivanclay/J | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | Ivo/emscad-skill-extraction-conference-token-classification | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Ivo/emscad-skill-extraction-conference | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | Ivo/emscad-skill-extraction-token-classification | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Ivo/emscad-skill-extraction | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Izadora12/Arcane | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Thor DialoGPT Model | {"tags": ["conversational"]} | J-Chiang/DialoGPT-small-thor | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on the CORD-19 dataset.
## How to use
```python
from transformers.pipelines import pipeline
model_name = "JAlexis/PruebaBert"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
    'question': 'How can I protect myself against covid-19?',
    'context': 'Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19).',
}
nlp(inputs)
```
## Overview
```
Language model: deepset/bert-base-cased-squad2
Language: English
Downstream-task: Q&A
Datasets: CORD-19 from 31st January 2022
Code: Haystack and FARM
Infrastructure: Tesla T4
```
## Hyperparameters
```
batch_size = 8
n_epochs = 7
max_seq_len = max_length
learning_rate = AdamW: 2e-5
```
| {"language": "en", "tags": ["pytorch", "question-answering"], "datasets": ["squad2", "cord19"], "metrics": ["f1"], "widget": [{"text": "How can I protect myself against covid-19?", "context": "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19)."}, {"text": "How can I protect myself against covid-19?", "context": " "}]} | JAlexis/Bertv1_fine | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad2",
"dataset:cord19",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on the CORD-19 dataset.
## How to use
```python
from transformers.pipelines import pipeline
model_name = "JAlexis/PruebaBert"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
    'question': 'How can I protect myself against covid-19?',
    'context': 'Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19).',
}
nlp(inputs)
```
## Overview
```
Language model: deepset/bert-base-cased-squad2
Language: English
Downstream-task: Q&A
Datasets: CORD-19 from 31st January 2022
Code: Haystack and FARM
Infrastructure: Tesla T4
```
## Hyperparameters
```
batch_size = 8
n_epochs = 9
max_seq_len = max_length
learning_rate = AdamW: 1e-5
```
| {"language": "en", "tags": ["pytorch", "question-answering"], "datasets": ["squad2", "cord19"], "metrics": ["EM (exact match)"], "widget": [{"text": "How can I protect myself against covid-19?", "context": "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19)."}, {"text": "How can I protect myself against covid-19?", "context": " "}]} | JAlexis/PruebaBert | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad2",
"dataset:cord19",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8366
- Matthews Correlation: 0.5472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5224 | 1.0 | 535 | 0.5432 | 0.4243 |
| 0.3447 | 2.0 | 1070 | 0.4968 | 0.5187 |
| 0.2347 | 3.0 | 1605 | 0.6540 | 0.5280 |
| 0.1747 | 4.0 | 2140 | 0.7547 | 0.5367 |
| 0.1255 | 5.0 | 2675 | 0.8366 | 0.5472 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
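As a minimal usage sketch (not part of the original card), the fine-tuned checkpoint can be queried with the standard text-classification pipeline; note that without a configured label map the pipeline returns generic labels such as `LABEL_0` / `LABEL_1` for CoLA acceptability:
```python
from transformers import pipeline

# Minimal inference sketch, assuming the checkpoint id shown in this card's metadata.
# CoLA is binary acceptability classification; label names here are the generic defaults.
classifier = pipeline(
    "text-classification",
    model="JBNLRY/distilbert-base-uncased-finetuned-cola",
)
print(classifier("The book was written by the student."))
```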
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5471613867597194, "name": "Matthews Correlation"}]}]}]} | JBNLRY/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
# T5 Question Generation and Question Answering
## Model description
This model is a T5 Transformers model (airklizz/t5-base-multi-fr-wiki-news) that was fine-tuned in French on 3 different tasks
* question generation
* question answering
* answer extraction
It obtains quite good results on the FQuAD validation dataset.
## Intended uses & limitations
This model works for the 3 tasks mentioned earlier and was not tested on other tasks.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained("JDBN/t5-base-fr-qg-fquad")
tokenizer = T5Tokenizer.from_pretrained("JDBN/t5-base-fr-qg-fquad")
```
## Training data
The initial model used was https://huggingface.co/airKlizz/t5-base-multi-fr-wiki-news. This model was finetuned on a dataset composed of FQuAD and PIAF on the 3 tasks mentioned previously.
The data were preprocessed like this
* question generation: "generate question: Barack Hussein Obama, né le 4 aout 1961, est un homme politique américain et avocat. Il a été élu <hl> en 2009 <hl> pour devenir le 44ème président des Etats-Unis d'Amérique."
* question answering: "question: Quand Barack Hussein Obama a-t-il été élu président des Etats-Unis d’Amérique? context: Barack Hussein Obama, né le 4 aout 1961, est un homme politique américain et avocat. Il a été élu en 2009 pour devenir le 44ème président des Etats-Unis d’Amérique."
* answer extraction: "extract_answers: Barack Hussein Obama, né le 4 aout 1961, est un homme politique américain et avocat. <hl> Il a été élu en 2009 pour devenir le 44ème président des Etats-Unis d’Amérique <hl>."
The preprocessing we used was implemented in https://github.com/patil-suraj/question_generation
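For illustration (not part of the original card), a question-generation call using the input format above might look like the following sketch; the generation settings (`max_length`, `num_beams`) are assumptions rather than documented values:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Minimal question-generation sketch using the "generate question:" format described above.
model = T5ForConditionalGeneration.from_pretrained("JDBN/t5-base-fr-qg-fquad")
tokenizer = T5Tokenizer.from_pretrained("JDBN/t5-base-fr-qg-fquad")

# The answer span is wrapped in <hl> tokens, as in the preprocessing examples.
text = ("generate question: Barack Hussein Obama, né le 4 aout 1961, est un homme politique "
        "américain et avocat. Il a été élu <hl> en 2009 <hl> pour devenir le 44ème président "
        "des Etats-Unis d'Amérique.")

inputs = tokenizer(text, return_tensors="pt")
# max_length and num_beams are illustrative assumptions, not values from the card.
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```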
## Eval results
#### On FQuAD validation set
| BLEU_1 | BLEU_2 | BLEU_3 | BLEU_4 | METEOR | ROUGE_L | CIDEr |
|--------|--------|--------|--------|--------|---------|-------|
| 0.290 | 0.203 | 0.149 | 0.111 | 0.197 | 0.284 | 1.038 |
#### Question Answering metrics
For these metrics, the performance of this question answering model (https://huggingface.co/illuin/camembert-base-fquad) on the original FQuAD questions and on T5-generated questions is compared.
| Questions | Exact Match | F1 Score |
|------------------|--------|--------|
|Original FQuAD | 54.015 | 77.466 |
|Generated | 45.765 | 67.306 |
### BibTeX entry and citation info
```bibtex
@misc{githubPatil,
author = {Patil Suraj},
title = {question generation GitHub repository},
year = {2020},
howpublished={\url{https://github.com/patil-suraj/question_generation}}
}
@article{T5,
title={Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
author={Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
year={2019},
eprint={1910.10683},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@misc{dhoffschmidt2020fquad,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Wacim Belblidia and Tom Brendlé and Quentin Heinrich and Maxime Vidal},
year={2020},
eprint={2002.06071},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "fr", "tags": ["pytorch", "t5", "question-generation", "seq2seq"], "datasets": ["fquad", "piaf"], "widget": [{"text": "generate question: Barack Hussein Obama, n\u00e9 le 4 aout 1961, est un homme politique am\u00e9ricain et avocat. Il a \u00e9t\u00e9 \u00e9lu <hl> en 2009 <hl> pour devenir le 44\u00e8me pr\u00e9sident des Etats-Unis d'Am\u00e9rique. </s>"}, {"text": "question: Quand Barack Obama a t'il \u00e9t\u00e9 \u00e9lu pr\u00e9sident? context: Barack Hussein Obama, n\u00e9 le 4 aout 1961, est un homme politique am\u00e9ricain et avocat. Il a \u00e9t\u00e9 \u00e9lu en 2009 pour devenir le 44\u00e8me pr\u00e9sident des Etats-Unis d'Am\u00e9rique. </s>"}]} | JDBN/t5-base-fr-qg-fquad | null | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"question-generation",
"seq2seq",
"fr",
"dataset:fquad",
"dataset:piaf",
"arxiv:1910.10683",
"arxiv:2002.06071",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | JDS22/DialoGPT-medium-HarryPotterBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | JDT/my-bert | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JEEEEEEK/model_name | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JIWON/NLI_model | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-nli
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6210
- Accuracy: 0.085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 196 | 0.6210 | 0.085 |
| No log | 2.0 | 392 | 0.5421 | 0.0643 |
| 0.5048 | 3.0 | 588 | 0.5523 | 0.062 |
| 0.5048 | 4.0 | 784 | 0.5769 | 0.0533 |
| 0.5048 | 5.0 | 980 | 0.5959 | 0.052 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"tags": ["generated_from_trainer"], "datasets": ["klue"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-finetuned-nli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "klue", "type": "klue", "args": "nli"}, "metrics": [{"type": "accuracy", "value": 0.085, "name": "Accuracy"}]}]}]} | JIWON/bert-base-finetuned-nli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | JP040/bert-german-sentiment-twitter | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JPK/DialoGPT-small-HarryPotter | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JRRY/DialoGPT-small-harrypotter | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | JSv4/layoutlmv2-finetuned-funsd-test | null | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Jackkkkk/tm-bert | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Jackson99/DialoGPT-small-jakeperalta | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Jackyswl/bert-base-chinese-finetuned-wikitext2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers |
# aristoBERTo
aristoBERTo is a transformer model for ancient Greek, a low-resource language. We initialized the pre-training with weights from [GreekBERT](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1), a Greek version of BERT which was trained on a large corpus of modern Greek (~ 30 GB of texts). We continued the pre-training with an ancient Greek corpus of about 900 MB, which was scraped from the web and post-processed. Duplicate texts and editorial punctuation were removed.
Applied to the processing of ancient Greek, aristoBERTo outperforms xlm-roberta-base and mdeberta in most downstream tasks like the labeling of POS, MORPH, DEP and LEMMA.
aristoBERTo is provided by the [Diogenet project](https://diogenet.ucsd.edu) of the University of California, San Diego.
## Intended uses
This model was created for fine-tuning with spaCy and the ancient Greek Universal Dependency datasets as well as a NER corpus produced by the [Diogenet project](https://diogenet.ucsd.edu). As a fill-mask model, AristoBERTo can also be used in the restoration of damaged Greek papyri, inscriptions, and manuscripts.
It achieves the following results on the evaluation set:
- Loss: 1.6323
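As a quick illustration of the fill-mask use described above (a sketch, not part of the original card; the masked sentence is taken from the card's widget examples):
```python
from transformers import pipeline

# Minimal fill-mask sketch; the example sentence comes from the model card's widget,
# everything else is standard pipeline usage.
fill = pipeline("fill-mask", model="Jacobo/aristoBERTo")
predictions = fill("Πλάτων ὁ Περικτιόνης [MASK] γένος ἀνέφερεν εἰς Σόλωνα.")
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```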
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 1.377 | 20.0 | 3414220 | 1.6314 |
### Framework versions
- Transformers 4.14.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"language": ["grc"], "widget": [{"text": "\u03a0\u03bb\u03ac\u03c4\u03c9\u03bd \u1f41 \u03a0\u03b5\u03c1\u03b9\u03ba\u03c4\u03b9\u03cc\u03bd\u03b7\u03c2 [MASK] \u03b3\u03ad\u03bd\u03bf\u03c2 \u1f00\u03bd\u03ad\u03c6\u03b5\u03c1\u03b5\u03bd \u03b5\u1f30\u03c2 \u03a3\u03cc\u03bb\u03c9\u03bd\u03b1."}, {"text": "\u1f41 \u039a\u03c1\u03b9\u03c4\u03af\u03b1\u03c2 \u1f00\u03c0\u03ad\u03b2\u03bb\u03b5\u03c8\u03b5 [MASK] \u03c4\u1f74\u03bd \u03b8\u03cd\u03c1\u03b1\u03bd."}, {"text": "\u03c0\u03c1\u1ff6\u03c4\u03bf\u03b9 \u03b4\u1f72 \u03ba\u03b1\u1f76 \u03bf\u1f50\u03bd\u03cc\u03bc\u03b1\u03c4\u03b1 \u1f31\u03c1\u1f70 \u1f14\u03b3\u03bd\u03c9\u03c3\u03b1\u03bd \u03ba\u03b1\u1f76 [MASK] \u1f31\u03c1\u03bf\u1f7a\u03c2 \u1f14\u03bb\u03b5\u03be\u03b1\u03bd."}], "model-index": [{"name": "aristoBERTo", "results": []}]} | Jacobo/aristoBERTo | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"grc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# axiothea
This is an experimental roberta model trained with an ancient Greek corpus of about 900 MB, which was scraped from the web and post-processed. Duplicate texts and editorial punctuation were removed. The training dataset will soon be available in the Hugging Face datasets hub. Training a model for ancient Greek is challenging given that it is a low-resource language in which 50% of the register has survived only in fragmentary texts. The model is provided by the Diogenet project at the University of California, San Diego.
It achieves the following results on the evaluation set:
- Loss: 3.3351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7013 | 1.0 | 341422 | 4.8813 |
| 4.2866 | 2.0 | 682844 | 4.4422 |
| 4.0496 | 3.0 | 1024266 | 4.2132 |
| 3.8503 | 4.0 | 1365688 | 4.0246 |
| 3.6917 | 5.0 | 1707110 | 3.8756 |
| 3.4917 | 6.0 | 2048532 | 3.7381 |
| 3.3907 | 7.0 | 2389954 | 3.6107 |
| 3.2876 | 8.0 | 2731376 | 3.5044 |
| 3.1994 | 9.0 | 3072798 | 3.3980 |
| 3.0806 | 10.0 | 3414220 | 3.3095 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
| {"language": ["grc"], "tags": ["generated_from_trainer"], "widget": [{"text": "\u03a0\u03bb\u03ac\u03c4\u03c9\u03bd \u1f41 \u03a0\u03b5\u03c1\u03b9\u03ba\u03c4\u03b9\u03cc\u03bd\u03b7\u03c2 <mask> \u03b3\u03ad\u03bd\u03bf\u03c2 \u1f00\u03bd\u03ad\u03c6\u03b5\u03c1\u03b5\u03bd \u03b5\u1f30\u03c2 \u03a3\u03cc\u03bb\u03c9\u03bd\u03b1."}, {"text": "\u1f41 \u039a\u03c1\u03b9\u03c4\u03af\u03b1\u03c2 \u1f00\u03c0\u03ad\u03b2\u03bb\u03b5\u03c8\u03b5 <mask> \u03c4\u1f74\u03bd \u03b8\u03cd\u03c1\u03b1\u03bd."}, {"text": "\u1f6e \u03c6\u03af\u03bb\u03b5 \u039a\u03bb\u03b5\u03b9\u03bd\u03af\u03b1, \u03ba\u03b1\u03bb\u1ff6\u03c2 \u03bc\u1f72\u03bd <mask>."}], "model-index": [{"name": "dioBERTo", "results": []}]} | Jacobo/axiothea | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"grc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Jacopo/ToonClip | null | [
"onnx",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JadAssaf/STPI | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {"language": "zh_CN", "license": "MIT", "tags": ["NLP", "LAW"], "datasets": ["WIP"], "metrics": ["WIP"], "thumbnail": "url to a thumbnail used in social sharing"} | Jade/bert_base_law | null | [
"NLP",
"LAW",
"dataset:WIP",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Jaeger/DialogGPT-small-stewie | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Jaewon/test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Jagp/Jagp | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JaidevShriram/gpt2-wikitext2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-csa-10-rev3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5869
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 18.7934 | 25.0 | 200 | 3.5869 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-csa-10-rev3", "results": []}]} | Jainil30/wav2vec2-base-csa-10-rev3 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Jainil30/wav2vec2-base-timit-demo-colab | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Jakakshwve/Hazel | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JakeKo/KO | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JakeKo/NLP | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Jalal/pidgin-english-asr-model | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JalalKol/distilroberta-base-finetuned-wikitext2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JamesU/learning | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Jamesr227/Mod-bot-ai-small | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Janez/mt5-small-finetuned-amazon-en-es | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Janez/xlm-roberta-base-finetuned-panx-de | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Jaroslav/test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JarvisZHAO/chatbot | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JasonCheung/gpt2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JasonYe/Covid_19_NLP_twitter | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JavaWhiz/DialoGPT-Tony-Stark | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Javel/linkedin_post_t5 | null | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Javel/t5_linkedin_post | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2469
- Accuracy: 0.9165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9351 | 1.0 | 500 | 0.2469 | 0.9165 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy"], "model-index": [{"name": "sagemaker-distilbert-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9165, "name": "Accuracy"}]}]}]} | JaviBJ/sagemaker-distilbert-emotion | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | transformers | {} | Jawharah/Test | null | [
"transformers",
"lean_albert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Jaymakwanacodes/HarryPotter | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4a-append-e2-b32-l5e5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5466
- Accuracy: 0.8890
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 344 | 0.3057 | 0.8630 |
| 0.4091 | 2.0 | 688 | 0.2964 | 0.8880 |
| 0.1322 | 3.0 | 1032 | 0.4465 | 0.8820 |
| 0.1322 | 4.0 | 1376 | 0.5466 | 0.8890 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"]} | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4a-append-e2-b32-l5e5 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4a
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the ComVE dataset which was part of SemEval 2020 Task 4.
It achieves the following results on the test set:
- Loss: 0.2782
- Accuracy: 0.9040
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 344 | 0.2700 | 0.8940 |
| 0.349 | 2.0 | 688 | 0.2782 | 0.9040 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"]} | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4a | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4b-append-e3-b32-l4e5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5121
- Accuracy: 0.8700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 344 | 0.3603 | 0.8550 |
| 0.3894 | 2.0 | 688 | 0.4011 | 0.8630 |
| 0.1088 | 3.0 | 1032 | 0.5121 | 0.8700 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"]} | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4b-append-e3-b32-l4e5 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4b-base-e2-b32-l3e5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4114
- Accuracy: 0.8700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 344 | 0.3773 | 0.8490 |
| 0.3812 | 2.0 | 688 | 0.4114 | 0.8700 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"]} | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4b-base-e2-b32-l3e5 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4b
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the ComVE dataset which was part of SemEval 2020 Task 4.
It achieves the following results on the test set:
- Loss: 0.6760
- Accuracy: 0.8760
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5016 | 1.0 | 688 | 0.3502 | 0.8600 |
| 0.2528 | 2.0 | 1376 | 0.5769 | 0.8620 |
| 0.0598 | 3.0 | 2064 | 0.6720 | 0.8700 |
| 0.0197 | 4.0 | 2752 | 0.6760 | 0.8760 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"]} | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4b | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag-e1-b16-l5e5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5202
- Accuracy: 0.7997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.701 | 1.0 | 4597 | 0.5202 | 0.7997 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
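As an illustrative sketch only (not from the original card), a SWAG-style multiple-choice query against this checkpoint could look like this; the context and candidate endings are made up for the example:
```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_id = "JazibEijaz/bert-base-uncased-finetuned-swag-e1-b16-l5e5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

# Hypothetical SWAG-style example: one context paired with each candidate ending.
context = "She opens the fridge and takes out a carton of eggs."
endings = [
    "She cracks two eggs into a hot pan.",
    "She throws the eggs at the ceiling.",
    "She plants the eggs in the garden.",
    "She reads the eggs a bedtime story.",
]

encoding = tokenizer([context] * len(endings), endings,
                     return_tensors="pt", padding=True, truncation=True)
# Multiple-choice models expect inputs of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}
with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted ending:", endings[logits.argmax(dim=-1).item()])
```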
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["swag"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-swag-e1-b16-l5e5", "results": []}]} | JazibEijaz/bert-base-uncased-finetuned-swag-e1-b16-l5e5 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Jcarneiro/meuModelo | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers |
# camembert-ner: model fine-tuned from camemBERT for NER task (including DATE tag).
## Introduction
[camembert-ner-with-dates] is an extension of the French camembert-ner model with an additional tag for dates.
The model was trained on an enriched version of the wikiner-fr dataset (~170 634 sentences).
On my test data (a mix of chat and email), this model got an F1 score of ~83% (for comparison, dateparser was at ~70%).
The dateparser library can still be used on the output of this model to convert text to a Python datetime object
(https://dateparser.readthedocs.io/en/latest/).
## How to use camembert-ner-with-dates with HuggingFace
##### Load camembert-ner-with-dates and its sub-word tokenizer :
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/camembert-ner-with-dates")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/camembert-ner-with-dates")
##### Process text sample (from wikipedia)
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("Apple est créée le 1er avril 1976 dans le garage de la maison d'enfance de Steve Jobs à Los Altos en Californie par Steve Jobs, Steve Wozniak et Ronald Wayne14, puis constituée sous forme de société le 3 janvier 1977 à l'origine sous le nom d'Apple Computer, mais pour ses 30 ans et pour refléter la diversification de ses produits, le mot « computer » est retiré le 9 janvier 2015.")
[{'entity_group': 'ORG',
'score': 0.9776379466056824,
'word': 'Apple',
'start': 0,
'end': 5},
{'entity_group': 'DATE',
'score': 0.9793774570737567,
'word': 'le 1er avril 1976 dans le',
'start': 15,
'end': 41},
{'entity_group': 'PER',
'score': 0.9958226680755615,
'word': 'Steve Jobs',
'start': 74,
'end': 85},
{'entity_group': 'LOC',
'score': 0.995087186495463,
'word': 'Los Altos',
'start': 87,
'end': 97},
{'entity_group': 'LOC',
'score': 0.9953305125236511,
'word': 'Californie',
'start': 100,
'end': 111},
{'entity_group': 'PER',
'score': 0.9961076378822327,
'word': 'Steve Jobs',
'start': 115,
'end': 126},
{'entity_group': 'PER',
'score': 0.9960325956344604,
'word': 'Steve Wozniak',
'start': 127,
'end': 141},
{'entity_group': 'PER',
'score': 0.9957776467005411,
'word': 'Ronald Wayne',
'start': 144,
'end': 157},
{'entity_group': 'DATE',
'score': 0.994030773639679,
'word': 'le 3 janvier 1977 à',
'start': 198,
'end': 218},
{'entity_group': 'ORG',
'score': 0.9720810294151306,
'word': "d'Apple Computer",
'start': 240,
'end': 257},
{'entity_group': 'DATE',
'score': 0.9924157659212748,
'word': '30 ans et',
'start': 272,
'end': 282},
{'entity_group': 'DATE',
'score': 0.9934852868318558,
'word': 'le 9 janvier 2015.',
'start': 363,
'end': 382}]
```
## Model performances (metric: seqeval)
Global
```
'precision': 0.928
'recall': 0.928
'f1': 0.928
```
By entity
```
Label LOC: (precision:0.929, recall:0.932, f1:0.931, support:9510)
Label PER: (precision:0.952, recall:0.965, f1:0.959, support:9399)
Label MISC: (precision:0.878, recall:0.844, f1:0.860, support:5364)
Label ORG: (precision:0.848, recall:0.883, f1:0.865, support:2299)
Label DATE: Not relevant because of method used to add date tag on wikiner dataset (estimated f1 ~90%)
```
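For reference, a minimal sketch of how metrics of this kind are computed with the `seqeval` library (the tag sequences below are made-up placeholders, not the actual evaluation data):

```python
# Illustrative only: seqeval expects IOB-style tag sequences per sentence.
from seqeval.metrics import f1_score, precision_score, recall_score

y_true = [["B-PER", "I-PER", "O", "B-DATE", "I-DATE"]]
y_pred = [["B-PER", "I-PER", "O", "B-DATE", "O"]]

print(precision_score(y_true, y_pred))
print(recall_score(y_true, y_pred))
print(f1_score(y_true, y_pred))
```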
| {"language": "fr", "license": "mit", "datasets": ["Jean-Baptiste/wikiner_fr"], "widget": [{"text": "Je m'appelle jean-baptiste et j'habite \u00e0 montr\u00e9al depuis fevr 2012"}]} | Jean-Baptiste/camembert-ner-with-dates | null | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"camembert",
"token-classification",
"fr",
"dataset:Jean-Baptiste/wikiner_fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# camembert-ner: model fine-tuned from camemBERT for NER task.
## Introduction
[camembert-ner] is a NER model that was fine-tuned from camemBERT on the wikiner-fr dataset.
Model was trained on the wikiner-fr dataset (~170,634 sentences).
Model was validated on emails/chat data and outperformed other models on this type of data specifically.
In particular, the model seems to work better on entities that don't start with an upper case.
## Training data
Training data was classified as follows:
Abbreviation|Description
-|-
O |Outside of a named entity
MISC |Miscellaneous entity
PER |Person’s name
ORG |Organization
LOC |Location
## How to use camembert-ner with HuggingFace
##### Load camembert-ner and its sub-word tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/camembert-ner")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/camembert-ner")
##### Process text sample (from wikipedia)
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("Apple est créée le 1er avril 1976 dans le garage de la maison d'enfance de Steve Jobs à Los Altos en Californie par Steve Jobs, Steve Wozniak et Ronald Wayne14, puis constituée sous forme de société le 3 janvier 1977 à l'origine sous le nom d'Apple Computer, mais pour ses 30 ans et pour refléter la diversification de ses produits, le mot « computer » est retiré le 9 janvier 2015.")
[{'entity_group': 'ORG',
'score': 0.9472818374633789,
'word': 'Apple',
'start': 0,
'end': 5},
{'entity_group': 'PER',
'score': 0.9838564991950989,
'word': 'Steve Jobs',
'start': 74,
'end': 85},
{'entity_group': 'LOC',
'score': 0.9831605950991312,
'word': 'Los Altos',
'start': 87,
'end': 97},
{'entity_group': 'LOC',
'score': 0.9834540486335754,
'word': 'Californie',
'start': 100,
'end': 111},
{'entity_group': 'PER',
'score': 0.9841555754343668,
'word': 'Steve Jobs',
'start': 115,
'end': 126},
{'entity_group': 'PER',
'score': 0.9843501806259155,
'word': 'Steve Wozniak',
'start': 127,
'end': 141},
{'entity_group': 'PER',
'score': 0.9841533899307251,
'word': 'Ronald Wayne',
'start': 144,
'end': 157},
{'entity_group': 'ORG',
'score': 0.9468960364659628,
'word': 'Apple Computer',
'start': 243,
'end': 257}]
```
## Model performances (metric: seqeval)
Overall
precision|recall|f1
-|-|-
0.8859|0.8971|0.8914
By entity
entity|precision|recall|f1
-|-|-|-
PER|0.9372|0.9598|0.9483
ORG|0.8099|0.8265|0.8181
LOC|0.8905|0.9005|0.8955
MISC|0.8175|0.8117|0.8146
For those who could be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails:
https://medium.com/@jean-baptiste.polle/lstm-model-for-email-signature-detection-8e990384fefa
| {"language": "fr", "license": "mit", "datasets": ["Jean-Baptiste/wikiner_fr"], "widget": [{"text": "Je m'appelle jean-baptiste et je vis \u00e0 montr\u00e9al"}, {"text": "george washington est all\u00e9 \u00e0 washington"}]} | Jean-Baptiste/camembert-ner | null | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"camembert",
"token-classification",
"fr",
"dataset:Jean-Baptiste/wikiner_fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-large-ner-english: model fine-tuned from roberta-large for NER task
## Introduction
[roberta-large-ner-english] is an English NER model that was fine-tuned from roberta-large on the conll2003 dataset.
Model was validated on emails/chat data and outperformed other models on this type of data specifically.
In particular, the model seems to work better on entities that don't start with an upper case.
## Training data
Training data was classified as follows:
Abbreviation|Description
-|-
O |Outside of a named entity
MISC |Miscellaneous entity
PER |Person’s name
ORG |Organization
LOC |Location
To simplify, the B- and I- prefixes from the original conll2003 tags were removed.
I used the train and test datasets from the original conll2003 for training and the "validation" dataset for validation. This resulted in datasets of the following sizes:
Train | Validation
-|-
17494 | 3250
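A minimal sketch of the prefix-removal step described above, using the `datasets` library (illustrative only — the exact preprocessing script is not published):

```python
# Collapse B-XXX / I-XXX conll2003 tags into plain XXX tags (illustrative sketch).
from datasets import load_dataset

dataset = load_dataset("conll2003")
names = dataset["train"].features["ner_tags"].feature.names  # ['O', 'B-PER', 'I-PER', ...]
simplified = [name.split("-")[-1] for name in names]         # ['O', 'PER', 'PER', ...]

def simplify(example):
    example["simple_tags"] = [simplified[tag] for tag in example["ner_tags"]]
    return example

dataset = dataset.map(simplify)
```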
## How to use roberta-large-ner-english with HuggingFace
##### Load roberta-large-ner-english and its sub-word tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
##### Process text sample (from wikipedia)
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne to develop and sell Wozniak's Apple I personal computer")
[{'entity_group': 'ORG',
'score': 0.99381506,
'word': ' Apple',
'start': 0,
'end': 5},
{'entity_group': 'PER',
'score': 0.99970853,
'word': ' Steve Jobs',
'start': 29,
'end': 39},
{'entity_group': 'PER',
'score': 0.99981767,
'word': ' Steve Wozniak',
'start': 41,
'end': 54},
{'entity_group': 'PER',
'score': 0.99956465,
'word': ' Ronald Wayne',
'start': 59,
'end': 71},
{'entity_group': 'PER',
'score': 0.9997918,
'word': ' Wozniak',
'start': 92,
'end': 99},
{'entity_group': 'MISC',
'score': 0.99956393,
'word': ' Apple I',
'start': 102,
'end': 109}]
```
## Model performances
Model performance computed on the conll2003 validation dataset (computed on token predictions):
entity|precision|recall|f1
-|-|-|-
PER|0.9914|0.9927|0.9920
ORG|0.9627|0.9661|0.9644
LOC|0.9795|0.9862|0.9828
MISC|0.9292|0.9262|0.9277
Overall|0.9740|0.9766|0.9753
On a private dataset (email, chat, informal discussion), computed on word predictions:
entity|precision|recall|f1
-|-|-|-
PER|0.8823|0.9116|0.8967
ORG|0.7694|0.7292|0.7487
LOC|0.8619|0.7768|0.8171
By comparison, on the same private dataset, spaCy (en_core_web_trf-3.2.0) gave:
entity|precision|recall|f1
-|-|-|-
PER|0.9146|0.8287|0.8695
ORG|0.7655|0.6437|0.6993
LOC|0.8727|0.6180|0.7236
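For reference, the spaCy side of such a comparison can be reproduced roughly as follows (a sketch, not the exact evaluation script):

```python
# Illustrative sketch: extract entities with spaCy's en_core_web_trf for comparison.
import spacy

nlp_spacy = spacy.load("en_core_web_trf")
doc = nlp_spacy("Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne")
entities = [(ent.text, ent.label_, ent.start_char, ent.end_char) for ent in doc.ents]
print(entities)
```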
For those who could be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails:
https://medium.com/@jean-baptiste.polle/lstm-model-for-email-signature-detection-8e990384fefa
| {"language": "en", "license": "mit", "datasets": ["conll2003"], "widget": [{"text": "My name is jean-baptiste and I live in montreal"}, {"text": "My name is clara and I live in berkeley, california."}, {"text": "My name is wolfgang and I live in berlin"}], "train-eval-index": [{"config": "conll2003", "task": "token-classification", "task_id": "entity_extraction", "splits": {"eval_split": "validation"}, "col_mapping": {"tokens": "tokens", "ner_tags": "tags"}}]} | Jean-Baptiste/roberta-large-ner-english | null | [
"transformers",
"pytorch",
"tf",
"onnx",
"safetensors",
"roberta",
"token-classification",
"en",
"dataset:conll2003",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-ticker: model fine-tuned from RoBERTa to detect financial tickers
## Introduction
This is a model specifically designed to identify tickers in text.
Model was trained on a transformed version of the following Kaggle dataset:
https://www.kaggle.com/omermetinn/tweets-about-the-top-companies-from-2015-to-2020
## How to use roberta-ticker with HuggingFace
##### Load roberta-ticker and its sub-word tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-ticker")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/roberta-ticker")
##### Process text sample
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("I am going to buy 100 shares of cake tomorrow")
[{'entity_group': 'TICKER',
'score': 0.9612462520599365,
'word': ' cake',
'start': 32,
'end': 36}]
nlp("I am going to eat a cake tomorrow")
[]
```
## Model performances
```
precision: 0.914157
recall: 0.788824
f1: 0.846878
```
| {"language": "en", "widget": [{"text": "I am going to buy 100 shares of cake tomorrow"}]} | Jean-Baptiste/roberta-ticker | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"token-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Jeangyu/bert-base-finetuned-ynat | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | # Tony Stark | {"tags": ["conversational"]} | Jedi33/tonystarkAI | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | {} | Jeevesh8/DA-LF | null | [
"transformers",
"pytorch",
"longformer",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Jeevesh8/DA-bert | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | First 50 [Feather BERT-s](https://arxiv.org/abs/1911.02969) compressed in groups of 10.
Clone this repository, decompress the compressed folders, and provide the paths to the Feather BERT you want to use in ``.from_pretrained()``.
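A minimal sketch of what this could look like once a folder has been decompressed (the local path is hypothetical, and it is assumed each folder contains a standard Hugging Face checkpoint usable with the usual BERT tokenizer):

```python
# Hedged sketch: load one decompressed Feather BERT from a local path.
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("./feather_berts/feather_bert_0")  # hypothetical path
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")       # assumption: standard BERT tokenizer
```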
For downloading the next 50 Feather BERT-s, see [here](https://huggingface.co/Jeevesh8/feather_berts1/). | {} | Jeevesh8/feather_berts | null | [
"arxiv:1911.02969",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | Second 50 [Feather BERT-s](https://arxiv.org/abs/1911.02969) compressed in groups of 10.
Clone this repository, decompress the compressed folders, and provide the paths to the Feather BERT you want to use in ``.from_pretrained()``.
For downloading the first 50 Feather BERT-s, see [here](https://huggingface.co/Jeevesh8/feather_berts/). | {} | Jeevesh8/feather_berts1 | null | [
"arxiv:1911.02969",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | Jeevesh8/multiberts_seed_0_ft_0 | null | [
"transformers",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Jeevesh8/multiberts_seed_0_ft_1 | null | [
"transformers",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Jeevesh8/multiberts_seed_0_ft_2 | null | [
"transformers",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |