modelId | tags | pipeline_tag | config | downloads | first_commit | card
---|---|---|---|---|---|---|
CAMeL-Lab/bert-base-arabic-camelbert-ca | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 580 | null | ---
language: "en"
tags:
- English
- Bible
dataset:
- English Bible Translation Dataset
- Link: https://www.kaggle.com/oswinrh/bible
inference: false
---
## Dataset
English Bible Translation Dataset (https://www.kaggle.com/oswinrh/bible)
*NOTE:* This is `roberta-base` fine-tuned with the MLM objective for 1 epoch on the 7 `.csv` files of the dataset mentioned above, which together contain around 5.5M words.
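A minimal local-usage sketch for the fill-mask objective (the repository ID is not stated in this card, so a placeholder is used; note that `inference: false` is set above, so the hosted widget is disabled):
```python
from transformers import pipeline

# Hypothetical repository ID: substitute this model's actual ID.
fill_mask = pipeline("fill-mask", model="<this-repo-id>")
fill_mask("In the beginning God created the heaven and the <mask>.")
```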
## Citation
If you use this model in your work, please add the following citation:
```
@inproceedings{nandy-etal-2021-cs60075,
title = "cs60075{\_}team2 at {S}em{E}val-2021 Task 1 : Lexical Complexity Prediction using Transformer-based Language Models pre-trained on various text corpora",
author = "Nandy, Abhilash and
Adak, Sayantan and
Halder, Tanurima and
Pokala, Sai Mahesh",
booktitle = "Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.semeval-1.87",
doi = "10.18653/v1/2021.semeval-1.87",
pages = "678--682",
abstract = "The main contribution of this paper is to fine-tune transformer-based language models pre-trained on several text corpora, some being general (E.g., Wikipedia, BooksCorpus), some being the corpora from which the CompLex Dataset was extracted, and others being from other specific domains such as Finance, Law, etc. We perform ablation studies on selecting the transformer models and how their individual complexity scores are aggregated to get the resulting complexity scores. Our method achieves a best Pearson Correlation of 0.784 in sub-task 1 (single word) and 0.836 in sub-task 2 (multiple word expressions).",
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | null | ---
language:
- en
tags:
- EManuals
- customer support
- QA
- bert
---
Refer to https://aclanthology.org/2021.findings-emnlp.392/ for the paper and https://sites.google.com/view/emanualqa/home for the project website.
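A minimal loading sketch (the repository ID is not stated in this card, so a placeholder is used; the full EMQAP question-answering pipeline is described in the paper):
```python
from transformers import AutoTokenizer, AutoModel

# Hypothetical repository ID: substitute this model's actual ID.
tokenizer = AutoTokenizer.from_pretrained("<this-repo-id>")
model = AutoModel.from_pretrained("<this-repo-id>")
```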
## Citation
Please cite the work if you would like to use it.
```
@inproceedings{nandy-etal-2021-question-answering,
title = "Question Answering over Electronic Devices: A New Benchmark Dataset and a Multi-Task Learning based {QA} Framework",
author = "Nandy, Abhilash and
Sharma, Soumya and
Maddhashiya, Shubham and
Sachdeva, Kapil and
Goyal, Pawan and
Ganguly, Niloy",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.392",
doi = "10.18653/v1/2021.findings-emnlp.392",
pages = "4600--4609",
abstract = "Answering questions asked from instructional corpora such as E-manuals, recipe books, etc., has been far less studied than open-domain factoid context-based question answering. This can be primarily attributed to the absence of standard benchmark datasets. In this paper, we meticulously create a large amount of data connected with E-manuals and develop a suitable algorithm to exploit it. We collect E-Manual Corpus, a huge corpus of 307,957 E-manuals, and pretrain RoBERTa on this large corpus. We create various benchmark QA datasets which include question answer pairs curated by experts based upon two E-manuals, real user questions from Community Question Answering Forum pertaining to E-manuals etc. We introduce EMQAP (E-Manual Question Answering Pipeline) that answers questions pertaining to electronics devices. Built upon the pretrained RoBERTa, it harbors a supervised multi-task learning framework which efficiently performs the dual tasks of identifying the section in the E-manual where the answer can be found and the exact answer span within that section. For E-Manual annotated question-answer pairs, we show an improvement of about 40{\%} in ROUGE-L F1 scores over most competitive baseline. We perform a detailed ablation study and establish the versatility of EMQAP across different circumstances. The code and datasets are shared at https://github.com/abhi1nandy2/EMNLP-2021-Findings, and the corresponding project website is https://sites.google.com/view/emanualqa/home.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 54 | null | ---
language:
- en
tags:
- Europarl
- roberta
datasets:
- Europarl
---
Refer to https://aclanthology.org/2021.semeval-1.87/ for the paper.
## Citation
If you use this model in your work, please add the following citation:
```
@inproceedings{nandy-etal-2021-cs60075,
title = "cs60075{\_}team2 at {S}em{E}val-2021 Task 1 : Lexical Complexity Prediction using Transformer-based Language Models pre-trained on various text corpora",
author = "Nandy, Abhilash and
Adak, Sayantan and
Halder, Tanurima and
Pokala, Sai Mahesh",
booktitle = "Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.semeval-1.87",
doi = "10.18653/v1/2021.semeval-1.87",
pages = "678--682",
abstract = "The main contribution of this paper is to fine-tune transformer-based language models pre-trained on several text corpora, some being general (E.g., Wikipedia, BooksCorpus), some being the corpora from which the CompLex Dataset was extracted, and others being from other specific domains such as Finance, Law, etc. We perform ablation studies on selecting the transformer models and how their individual complexity scores are aggregated to get the resulting complexity scores. Our method achieves a best Pearson Correlation of 0.784 in sub-task 1 (single word) and 0.836 in sub-task 2 (multiple word expressions).",
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"has_space"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 19,850 | null | ---
language: en
tags:
- spanbert
datasets:
- ade_corpus_v2
widget:
- text: "Having fever after taking paracetamol."
example_title: "NER"
- text: "Birth defects associated with thalidomide."
example_title: "NER"
- text: "Deafness and kidney failure associated with gentamicin (an antibiotic)."
example_title: "NER"
- text: "Bleeding of the intestine associated with aspirin therapy."
example_title: "NER"
---
spanbert-large-cased fine-tuned for <b>"Adverse drug reaction"</b> and <b>"Drug"</b> span extraction.
<b>Details of spanbert-large-cased:</b>
https://huggingface.co/SpanBERT/spanbert-large-cased
<b>Details of the downstream task (Adverse drug reaction and Drug Extraction) - Dataset</b>
https://huggingface.co/datasets/ade_corpus_v2
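A hedged usage sketch with the `token-classification` pipeline (the repository ID placeholder and the aggregation setting are assumptions, not from this card):
```python
from transformers import pipeline

# Hypothetical repository ID: substitute this model's actual ID.
extractor = pipeline("token-classification",
                     model="<this-repo-id>",
                     aggregation_strategy="simple")  # merge sub-word pieces into labeled spans
extractor("Bleeding of the intestine associated with aspirin therapy.")
``` |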
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26 | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 45 | null | ---
datasets:
- covid_qa_deepset
---
# Dataset
COVID-19 question answering data obtained from [covid_qa_deepset](https://huggingface.co/datasets/covid_qa_deepset).
# Original Repository
The repository for the fine-tuning, inference and evaluation scripts can be found [here](https://github.com/abhijithneilabraham/Covid-QA).
# Model in action
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("abhijithneilabraham/longformer_covid_qa")
model = AutoModelForQuestionAnswering.from_pretrained("abhijithneilabraham/longformer_covid_qa")
question = "In this way, what do the mRNA-destabilising RBPs constitute ?"
text = """
In this way, mRNA-destabilising RBPs constitute a 'brake' on the immune system, which may ultimately be toggled therapeutically. I anticipate continued efforts in this area will lead to new methods of regaining control over inflammation in autoimmunity, selectively enhancing immunity in immunotherapy, and modulating RNA synthesis and virus replication during infection.
Another mRNA under post-transcriptional regulation by Regnase-1 and Roquin is Furin, which encodes a conserved proprotein convertase crucial in human health and disease. Furin, along with other PCSK family members, is widely implicated in immune regulation, cancer and the entry, maturation or release of a broad array of evolutionarily diverse viruses including human papillomavirus (HPV), influenza (IAV), Ebola (EboV), dengue (DenV) and human immunodeficiency virus (HIV). Here, Braun and Sauter review the roles of furin in these processes, as well as the history and future of furin-targeting therapeutics. 7 They also discuss their recent work revealing how two IFN-cinducible factors exhibit broad-spectrum inhibition of IAV, measles (MV), zika (ZikV) and HIV by suppressing furin activity. 8 Over the coming decade, I expect to see an ever-finer spatiotemporal resolution of host-oriented therapies to achieve safe, effective and broad-spectrum yet costeffective therapies for clinical use.
The increasing abundance of affordable, sensitive, high-throughput genome sequencing technologies has led to a recent boom in metagenomics and the cataloguing of the microbiome of our world. The MinION nanopore sequencer is one of the latest innovations in this space, enabling direct sequencing in a miniature form factor with only minimal sample preparation and a consumer-grade laptop computer. Nakagawa and colleagues here report on their latest experiments using this system, further improving its performance for use in resource-poor contexts for meningitis diagnoses. 9 While direct sequencing of viral genomic RNA is challenging, this system was recently used to directly sequence an RNA virus genome (IAV) for the first time. 10 I anticipate further improvements in the performance of such devices over the coming decade will transform virus surveillance efforts, the importance of which was underscored by the recent EboV and novel coronavirus (nCoV / COVID-19) outbreaks, enabling rapid deployment of antiviral treatments that take resistance-conferring mutations into account.
Decades of basic immunology research have provided a near-complete picture of the main armaments in the human antiviral arsenal. Nevertheless, this focus on mammalian defences and pathologies has sidelined examination of the types and roles of viruses and antiviral defences that exist throughout our biosphere. One case in point is the CRISPR/Cas antiviral immune system of prokaryotes, which is now repurposed as a revolutionary gene-editing biotechnology in plants and animals. 11 Another is the ancient lineage of nucleocytosolic large DNA viruses (NCLDVs), which are emerging human pathogens that possess enormous genomes of up to several megabases in size encoding hundreds of proteins with unique and unknown functions. 12 Moreover, hundreds of human-and avian-infective viruses such as IAV strain H5N1 are known, but recent efforts indicate the true number may be in the millions and many harbour zoonotic potential. 13 It is increasingly clear that host-virus interactions have generated truly vast yet poorly understood and untapped biodiversity. Closing this Special Feature, Watanabe and Kawaoka elaborate on neo-virology, an emerging field engaged in cataloguing and characterising this biodiversity through a global consortium. 14 I predict these efforts will unlock a vast wealth of currently unexplored biodiversity, leading to biotechnologies and treatments that leverage the host-virus interactions developed throughout evolution.
When biomedical innovations fall into the 'Valley of Death', patients who are therefore not reached all too often fall with them. Being entrusted with the resources and expectation to conceive, deliver and communicate dividends to society is both cherished and eagerly pursued at every stage of our careers. Nevertheless, the road to research translation is winding and is built on a foundation of basic research. Supporting industry-academia collaboration and nurturing talent and skills in the Indo-Pacific region are two of the four pillars of the National Innovation and Science Agenda. 2 These frame Australia's Medical Research and Innovation Priorities, which include antimicrobial resistance, global health and health security, drug repurposing and translational research infrastructure, 15 capturing many of the key elements of this CTI Special Feature. Establishing durable international relationships that integrate diverse expertise is essential to delivering these outcomes. To this end, NHMRC has recently taken steps under the International Engagement Strategy 16 to increase cooperation with its counterparts overseas. These include the Japan Agency for Medical Research and Development (AMED), tasked with translating the biomedical research output of that country. Given the reciprocal efforts at accelerating bilateral engagement currently underway, 17 the prospects for new areas of international cooperation and mobility have never been more exciting nor urgent. With the above in mind, all contributions to this CTI Special Feature I have selected from research presented by fellow invitees to the 2018 Awaji International Forum on Infection and Immunity (AIFII) and 2017 Consortium of Biological Sciences (ConBio) conferences in Japan. Both Australia and Japan have strong traditions in immunology and related disciplines, and I predict that the quantity, quality and importance of our bilateral cooperation will accelerate rapidly over the short to medium term. By expanding and cooperatively leveraging our respective research strengths, our efforts may yet solve the many pressing disease, cost and other sustainability issues of our time.
"""
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]
# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]
outputs = model(input_ids, attention_mask=attention_mask)
start_scores, end_scores = outputs.start_logits, outputs.end_logits
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores) :torch.argmax(end_scores)+1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# output => a 'brake' on the immune system
``` |
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 63 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased')
model = AutoModel.from_pretrained('abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 25,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 900,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 132 | null | ---
language:
- en
license: apache-2.0
datasets:
- squad_v2
model-index:
- name: abhilash1910/albert-squad-v2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 23.6563
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTE5ZTM2YzIwZjBhYjM0ZDUyNzBiMjg1YjZhMGJiMGViMjYzYjQ5ZmI4MGFkYmU4YjY1OTNjYzAwZWRlZjIwNSIsInZlcnNpb24iOjF9.jlvV8WRPSPKJm6UdApoh-dXcAOmLPtF5smsHt39RoO4sFzzbH6elUz5yPF5Lt9Yc2YDIl6c8JDsODqMxmsD0Bg
- type: f1
value: 29.3808
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2ZjYWRlYTI1NDkwYzNhMzM5YTg2NjZmODg0NjNkOGM3YjM2NTlkYjVhZWI0MzlmNjNkMTMxODlkNTY3ODBkMiIsInZlcnNpb24iOjF9.CR1MYeU3uqld9bbI8CLupMtote4WEG9fIq9enwhFJfVpChIT9BGKm86zaPmXHg0yBaNHgkMaEt_a-DaIdiEwAg
---
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 855 | null | ---
tags:
- finance
---
# Roberta Masked Language Model Trained On Financial Phrasebank Corpus
This is a Masked Language Model trained with [Roberta](https://huggingface.co/transformers/model_doc/roberta.html) on a Financial Phrasebank Corpus.
The model is built using Huggingface transformers.
The model can be found at [Financial_Roberta](https://huggingface.co/abhilash1910/financial_roberta).
## Specifications
The corpus for training is taken from the [Financial Phrasebank (Malo et al.)](https://www.researchgate.net/publication/251231107_Good_Debt_or_Bad_Debt_Detecting_Semantic_Orientations_in_Economic_Texts).
## Model Specification
The model chosen for training is [Roberta](https://arxiv.org/abs/1907.11692) with the following specifications:
1. vocab_size=56000
2. max_position_embeddings=514
3. num_attention_heads=12
4. num_hidden_layers=6
5. type_vocab_size=1
This is trained using `RobertaConfig` from the transformers package.
The model is trained for 10 epochs with a GPU batch size of 64.
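As an illustrative sketch, the configuration above might be constructed as follows; only the five listed values come from this card, everything else is left at `RobertaConfig` defaults:
```python
from transformers import RobertaConfig, RobertaForMaskedLM

# Values mirror the model specification listed above; other fields use defaults.
config = RobertaConfig(
    vocab_size=56000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)
model = RobertaForMaskedLM(config)  # randomly initialized, then trained for 10 epochs per the card
```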
## Usage Specifications
To use this model, first import the `AutoTokenizer` and `AutoModelWithLMHead` modules from transformers.
Then specify the pre-trained model, which in this case is 'abhilash1910/financial_roberta', for both the tokenizer and the model.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("abhilash1910/financial_roberta")
model = AutoModelWithLMHead.from_pretrained("abhilash1910/financial_roberta")
```
After this, the model will be downloaded; it may take some time to fetch all the model files.
To test the model, import the `pipeline` module from transformers and create a fill-mask pipeline for inference as follows:
```python
from transformers import pipeline
model_mask = pipeline('fill-mask', model='abhilash1910/financial_roberta')
model_mask("The company had a <mask> of 20% in 2020.")
```
Some of the examples are also provided with generic financial statements:
Example 1:
```python
model_mask("The company had a <mask> of 20% in 2020.")
```
Output:
```bash
[{'sequence': '<s>The company had a profit of 20% in 2020.</s>',
'score': 0.023112965747714043,
'token': 421,
'token_str': 'Ġprofit'},
{'sequence': '<s>The company had a loss of 20% in 2020.</s>',
'score': 0.021379893645644188,
'token': 616,
'token_str': 'Ġloss'},
{'sequence': '<s>The company had a year of 20% in 2020.</s>',
'score': 0.0185744296759367,
'token': 443,
'token_str': 'Ġyear'},
{'sequence': '<s>The company had a sales of 20% in 2020.</s>',
'score': 0.018143286928534508,
'token': 428,
'token_str': 'Ġsales'},
{'sequence': '<s>The company had a value of 20% in 2020.</s>',
'score': 0.015319528989493847,
'token': 776,
'token_str': 'Ġvalue'}]
```
Example 2:
```python
model_mask("The <mask> is listed under NYSE")
```
Output:
```bash
[{'sequence': '<s>The company is listed under NYSE</s>',
'score': 0.1566661298274994,
'token': 359,
'token_str': 'Ġcompany'},
{'sequence': '<s>The total is listed under NYSE</s>',
'score': 0.05542507395148277,
'token': 522,
'token_str': 'Ġtotal'},
{'sequence': '<s>The value is listed under NYSE</s>',
'score': 0.04729423299431801,
'token': 776,
'token_str': 'Ġvalue'},
{'sequence': '<s>The order is listed under NYSE</s>',
'score': 0.02533523552119732,
'token': 798,
'token_str': 'Ġorder'},
{'sequence': '<s>The contract is listed under NYSE</s>',
'score': 0.02087237872183323,
'token': 635,
'token_str': 'Ġcontract'}]
```
## Resources
For all resources, please look at the [HuggingFace](https://huggingface.co/) site and the [repositories](https://github.com/huggingface).
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-ferd1
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 2652021
## Validation Metrics
- Loss: 0.3934604227542877
- Accuracy: 0.8411030860144452
- Precision: 0.8201550387596899
- Recall: 0.8076335877862595
- AUC: 0.8946767157983608
- F1: 0.8138461538461538
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-ferd1-2652021
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-ferd1-2652021", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-ferd1-2652021", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
CLTL/icf-levels-att | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on German using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz.
As capitalization is an important part of the German language (e.g., Sie vs. sie), I trained the model using a vocabulary that includes both lower-case and upper-case letters, in the hope that it would learn the correct casing.
This removes the need for any post-processing such as truecasing.
| Reference | Prediction |
| ------------- | ------------- |
| **Die** zoologische **Einordnung** der **Spezies** ist seit **Jahrzehnten** umstritten | **Die** psoologische **Einordnung** der **Spezies** ist seit **Jahrzehnten** umstritten |
| **Hauptgeschäftsfeld** war ursprünglich der öffentliche **Sektor** in **Irland** | **Hauptgeschäftsfeld** war ursprünglich der öffentliche **Sektor** in **Irland** |
| **Er** vertrat den **Wahlkreis Donauwörth** im **Parlament** | **Er** vertrat den **Wahlkreis DonauWört** im **Parlament** |
| **Ich** bin gespannt welche **Lieder** sie wählt | **Ich** bin gespannt welche **Lieder** see wählt |
| **Eine** allgemein verbindliche **Definition** gibt es nicht | **Eine** allgemeinverbindliche **Definition** gibt es nicht |
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import soundfile as sf
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("abnerh/wav2vec2-xlsr-300m-german-truecase")
model = Wav2Vec2ForCTC.from_pretrained("abnerh/wav2vec2-xlsr-300m-german-truecase")
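# note: the input audio must be sampled at 16 kHz, as stated above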
speech, sr = sf.read('audio.wav')
# tokenize
input_values = processor(speech, return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
# print transcription results
print(transcription)
``` |
CLTL/icf-levels-fac | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | # Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya
## Proposed Method:
<img src="data/proposed.png" height = "330" width ="760" >
The proposed method transfers a monolingual Transformer model to a new target language at the lexical level by learning new token embeddings. All implementations in this repo use XLNet as the source Transformer model; however, other Transformer models can be used similarly.
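A minimal sketch of this lexical-level transfer (assumptions: `new_vocab_size` and the random tensor below stand in for a target-language tokenizer and its trained word2vec matrix):
```python
import torch
from transformers import XLNetModel

model = XLNetModel.from_pretrained("xlnet-base-cased")

# Stand-ins for a learned target-language embedding matrix (e.g., from word2vec).
new_vocab_size = 32000
target_embeddings = torch.randn(new_vocab_size, model.config.d_model)

# Swap the source-language lexical layer for the new token embeddings;
# the Transformer layers above the embeddings are kept unchanged.
model.word_embedding = torch.nn.Embedding.from_pretrained(target_embeddings, freeze=False)
model.config.vocab_size = new_vocab_size
```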
## Main files:
All files are IPython Notebook files which can be executed simply in Google Colab.
- train.ipynb : Fine-tunes XLNet (mono-lingual transformer) on a new target language (Tigrinya) sentiment analysis dataset. [Open in Colab](https://colab.research.google.com/drive/1bSSrKE-TSphUyrNB2UWhFI-Bkoz0a5l0?usp=sharing)
- test.ipynb : Evaluates the fine-tuned model on test data. [Open in Colab](https://colab.research.google.com/drive/17R1lvRjxILVNk971vzZT79o2OodwaNIX?usp=sharing)
- token_embeddings.ipynb : Trains word2vec token embeddings for the Tigrinya language. [Open in Colab](https://colab.research.google.com/drive/1hCtetAllAjBw28EVQkJFpiKdFtXmuxV7?usp=sharing)
- process_Tigrinya_comments.ipynb : Extracts Tigrinya comments from mixed-language contents. [Open in Colab](https://colab.research.google.com/drive/1-ndLlBV-iLZNBW3Z8OfKAqUUCjvGbdZU?usp=sharing)
- extract_YouTube_comments.ipynb : Downloads available comments from a YouTube channel ID. [Open in Colab](https://colab.research.google.com/drive/1b7G85wHKe18y45JIDtvDJdO5dOkRmDdp?usp=sharing)
- auto_labelling.ipynb : Automatically labels Tigrinya comments into positive or negative sentiments based on [Emoji sentiment](http://kt.ijs.si/data/Emoji_sentiment_ranking/). [Open in Colab](https://colab.research.google.com/drive/1wnZf7CBBCIr966vRUITlxKCrANsMPpV7?usp=sharing)
## Tigrinya Tokenizer:
A [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer for Tigrinya has been released to the public and can be used as follows:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("abryee/TigXLNet")
tokenizer.tokenize("ዋዋዋው እዛ ፍሊም ካብተን ዘድንቀን ሓንቲ ኢያ ሞ ብጣዕሚ ኢና ነመስግን ሓንቲ ክብላ ደልየ ዘሎኹ ሓደራኣኹም ኣብ ጊዜኹም ተረክቡ")
```
## TigXLNet:
A new general-purpose transformer model for the low-resource language Tigrinya is also released to the public and can be used as follows:
```python
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("abryee/TigXLNet")
config.d_head = 64
model = AutoModel.from_pretrained("abryee/TigXLNet", config=config)
```
## Evaluation:
The proposed method is evaluated using two datasets:
- A newly created sentiment analysis dataset for the low-resource language Tigrinya.
<table>
<tr>
<td> <table>
<thead>
<tr>
<th><sub>Models</sub></th>
<th><sub>Configuration</sub></th>
<th><sub>F1-Score</sub></th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan=3><sub>BERT</sub></td>
<td rowspan=1><sub>+Frozen BERT weights</sub></td>
<td><sub>54.91</sub></td>
</tr>
<tr>
<td rowspan=1><sub>+Random embeddings</sub></td>
<td><sub>74.26</sub></td>
</tr>
<tr>
<td rowspan=1><sub>+Frozen token embeddings</sub></td>
<td><sub>76.35</sub></td>
</tr>
<tr>
<td rowspan=3><sub>mBERT</sub></td>
<td rowspan=1><sub>+Frozen mBERT weights</sub></td>
<td><sub>57.32</sub></td>
</tr>
<tr>
<td rowspan=1><sub>+Random embeddings</sub></td>
<td><sub>76.01</sub></td>
</tr>
<tr>
<td rowspan=1><sub>+Frozen token embeddings</sub></td>
<td><sub>77.51</sub></td>
</tr>
<tr>
<td rowspan=3><sub>XLNet</sub></td>
<td rowspan=1><sub>+Frozen XLNet weights</sub></td>
<td><strong><sub>68.14</sub></strong></td>
</tr>
<tr>
<td rowspan=1><sub>+Random embeddings</sub></td>
<td><strong><sub>77.83</sub></strong></td>
</tr>
<tr>
<td rowspan=1><sub>+Frozen token embeddings</sub></td>
<td><strong><sub>81.62</sub></strong></td>
</tr>
</tbody>
</table> </td>
<td><img src="data/effect_of_dataset_size.png" alt="3" width = 480px height = 280px></td>
</tr>
</table>
- Cross-lingual Sentiment dataset ([CLS](https://zenodo.org/record/3251672#.Xs65VzozbIU)).
<table>
<thead>
<tr>
<th rowspan=2><sub>Models</sub></th>
<th rowspan=1 colspan=3><sub>English</sub></th>
<th rowspan=1 colspan=3><sub>German</sub></th>
<th rowspan=1 colspan=3><sub>French</sub></th>
<th rowspan=1 colspan=3><sub>Japanese</sub></th>
<th rowspan=2><sub>Average</sub></th>
</tr>
<tr>
<th colspan=1><sub>Books</sub></th>
<th colspan=1><sub>DVD</sub></th>
<th colspan=1><sub>Music</sub></th>
<th colspan=1><sub>Books</sub></th>
<th colspan=1><sub>DVD</sub></th>
<th colspan=1><sub>Music</sub></th>
<th colspan=1><sub>Books</sub></th>
<th colspan=1><sub>DVD</sub></th>
<th colspan=1><sub>Music</sub></th>
<th colspan=1><sub>Books</sub></th>
<th colspan=1><sub>DVD</sub></th>
<th colspan=1><sub>Music</sub></th>
</tr>
</thead>
<tbody>
<tr>
<td colspan=1><sub>XLNet</sub></td>
<td colspan=1><sub><strong>92.90</strong></sub></td>
<td colspan=1><sub><strong>93.31</strong></sub></td>
<td colspan=1><sub><strong>92.02</strong></sub></td>
<td colspan=1><sub>85.23</sub></td>
<td colspan=1><sub>83.30</sub></td>
<td colspan=1><sub>83.89</sub></td>
<td colspan=1><sub>73.05</sub></td>
<td colspan=1><sub>69.80</sub></td>
<td colspan=1><sub>70.12</sub></td>
<td colspan=1><sub>83.20</sub></td>
<td colspan=1><sub><strong>86.07</strong></sub></td>
<td colspan=1><sub>85.24</sub></td>
<td colspan=1><sub>83.08</sub></td>
</tr>
<tr>
<td colspan=1><sub>mBERT</sub></td>
<td colspan=1><sub>92.78</sub></td>
<td colspan=1><sub>90.30</sub></td>
<td colspan=1><sub>91.88</sub></td>
<td colspan=1><sub><strong>88.65</strong></sub></td>
<td colspan=1><sub><strong>85.85</strong></sub></td>
<td colspan=1><sub><strong>90.38</strong></sub></td>
<td colspan=1><sub><strong>91.09</strong></sub></td>
<td colspan=1><sub><strong>88.57</strong></sub></td>
<td colspan=1><sub><strong>93.67</strong></sub></td>
<td colspan=1><sub><strong>84.35</strong></sub></td>
<td colspan=1><sub>81.77</sub></td>
<td colspan=1><sub><strong>87.53</strong></sub></td>
<td colspan=1><sub><strong>88.90</strong></sub></td>
</tr>
</tbody>
</table>
## Dataset used for this paper:
We have constructed a new sentiment analysis dataset for the Tigrinya language; it can be found in the zip file (Tigrinya Sentiment Analysis Dataset).
## Citing our paper:
Our paper can be accessed on arXiv ([link](https://arxiv.org/pdf/2006.07698.pdf)); please consider citing our work.
```
@misc{tela2020transferring,
    title={Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya},
    author={Abrhalei Tela and Abraham Woubie and Ville Hautamaki},
    year={2020},
    eprint={2006.07698},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
## Any questions, comments or feedback are appreciated and can be forwarded to the following email: [email protected]
|
CallumRai/HansardGPT2 | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ruGPT-3 fine-tuned on Russian fanfiction about Bangtan Boys (BTS). |
Cameron/BERT-eec-emotion | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 36 | null |
|
Cameron/BERT-jigsaw-identityhate | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | null | # ReviewBERT
BERT (post-)trained from review corpus to understand sentiment, opinions and various e-commerce aspects.
`BERT-DK_laptop` is trained from 100MB laptop corpus under `Electronics/Computers & Accessories/Laptops`.
## Model Description
The original model is from `BERT-base-uncased` trained from Wikipedia+BookCorpus.
Models are post-trained from [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as, e.g.,
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-DK_laptop")
model = AutoModel.from_pretrained("activebus/BERT-DK_laptop")
```
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
Cameron/BERT-jigsaw-severetoxic | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | # ReviewBERT
BERT (post-)trained from review corpus to understand sentiment, opinions and various e-commerce aspects.
`BERT-DK_rest` is trained from a 1GB corpus covering 19 types of restaurants from Yelp.
## Model Description
The original model is from `BERT-base-uncased` trained from Wikipedia+BookCorpus.
Models are post-trained from [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as, e.g.,
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-DK_rest")
model = AutoModel.from_pretrained("activebus/BERT-DK_rest")
```
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
Cameron/BERT-mdgender-convai-binary | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | # ReviewBERT
BERT (post-)trained from review corpus to understand sentiment, opinions and various e-commerce aspects.
`BERT-DK_laptop` is trained from 100MB laptop corpus under `Electronics/Computers & Accessories/Laptops`.
`BERT-PT_*` additionally uses SQuAD 1.1.
## Model Description
The original model is from `BERT-base-uncased` trained from Wikipedia+BookCorpus.
Models are post-trained from [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as, e.g.,
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-PT_laptop")
model = AutoModel.from_pretrained("activebus/BERT-PT_laptop")
```
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
Cameron/BERT-mdgender-convai-ternary | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 38 | null | # ReviewBERT
BERT (post-)trained from review corpus to understand sentiment, opinions and various e-commerce aspects.
`BERT-DK_rest` is trained from a 1GB corpus covering 19 types of restaurants from Yelp.
`BERT-PT_*` additionally uses SQuAD 1.1.
## Model Description
The original model is from `BERT-base-uncased` trained from Wikipedia+BookCorpus.
Models are post-trained from [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as, e.g.,
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-PT_rest")
model = AutoModel.from_pretrained("activebus/BERT-PT_rest")
```
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
Cameron/BERT-mdgender-wizard | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | # ReviewBERT
BERT (post-)trained from review corpus to understand sentiment, opinions and various e-commerce aspects.
Please visit https://github.com/howardhsu/BERT-for-RRC-ABSA for details.
`BERT-XD_Review` is a cross-domain (beyond just `laptop` and `restaurant`) language model, where each training example comes from a single product / restaurant with the same rating. It is post-trained (fine-tuned) for 4 epochs on `bert-base-uncased`, using a combination of 5-core Amazon reviews and all Yelp data, roughly 22 GB in total.
The preprocessing code is [here](https://github.com/howardhsu/BERT-for-RRC-ABSA/transformers).
## Model Description
The original model is from `BERT-base-uncased`.
Models are post-trained from [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as, e.g.,
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-XD_Review")
model = AutoModel.from_pretrained("activebus/BERT-XD_Review")
```
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
`BERT_Review` is expected to have similar performance on domain-specific tasks (such as aspect extraction) as `BERT-DK`, but much better on general tasks such as aspect sentiment classification (different domains mostly share similar sentiment words).
## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
Cameron/BERT-rtgender-opgender-annotations | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | # ReviewBERT
BERT post-trained on review corpora to understand sentiment, opinions, and various e-commerce aspects.
`BERT_Review` is a cross-domain language model (going beyond just the `laptop` and `restaurant` domains), where each training example is drawn from randomly mixed domains. It is post-trained (fine-tuned) for 4 epochs from `bert-base-uncased` on a combination of 5-core Amazon reviews and all Yelp data, expected to total roughly 22 GB.
The preprocessing code is available [here](https://github.com/howardhsu/BERT-for-RRC-ABSA/transformers).
## Model Description
The original model is `bert-base-uncased`, trained on Wikipedia + BookCorpus.
Models are post-trained from [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as:
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT_Review")
model = AutoModel.from_pretrained("activebus/BERT_Review")
```
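As a quick sanity check (our example, not part of the original card), the checkpoint can also be queried through the `fill-mask` pipeline, assuming the post-trained weights include the masked-LM head:
```python
from transformers import pipeline

# loads the checkpoint with a masked-LM head (an assumption; the hub id is from the snippet above)
fill_mask = pipeline("fill-mask", model="activebus/BERT_Review")
# a review-style cloze; predictions should lean toward sentiment and aspect words
print(fill_mask("The service was [MASK] and the food arrived quickly."))
```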
## Evaluation Results
See our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf) for evaluation details.
`BERT_Review` is expected to perform similarly to `BERT-DK` on domain-specific tasks (such as aspect extraction), but much better on general tasks such as aspect sentiment classification (different domains mostly share similar sentiment words).
## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
Canadiancaleb/DialoGPT-small-walter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
language:
- pt
---
This model was distilled from [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased)
## Usage
```python
from transformers import AutoTokenizer # Or BertTokenizer
from transformers import AutoModelForPreTraining # Or BertForPreTraining for loading pretraining heads
from transformers import AutoModel # or BertModel, for BERT without pretraining heads
model = AutoModelForPreTraining.from_pretrained('adalbertojunior/distilbert-portuguese-cased')
tokenizer = AutoTokenizer.from_pretrained('adalbertojunior/distilbert-portuguese-cased', do_lower_case=False)
```
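As an illustrative check (our example, not from the original card), the checkpoint can be queried through the `fill-mask` pipeline, assuming the pretraining (MLM) head is present:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="adalbertojunior/distilbert-portuguese-cased")
# Portuguese cloze example (illustrative)
print(fill_mask("Tinha uma [MASK] no meio do caminho."))
```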
You should fine-tune it on your own data.
On some tasks it can achieve up to 99% of the original BERTimbau's accuracy. |
Canadiancaleb/jessebot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- pt
---
Image Captioning in Portuguese trained with ViT and GPT2
[DEMO](https://huggingface.co/spaces/adalbertojunior/image_captioning_portuguese)
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) |
Captain272/lstm | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | This model has been trained by fine-tuning a BERTweet sentiment classification model named "finiteautomata/bertweet-base-sentiment-analysis", on a labeled positive/negative dataset of tweets.
email : [email protected] |
dccuchile/albert-base-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 34 | null | ---
title: Twitter Sentiments
emoji: 😍
colorFrom: yellow
colorTo: blue
sdk: streamlit
app_file: app.py
pinned: false
---
# Configuration
`title`: _string_
Display title for the Space
`emoji`: _string_
Space emoji (emoji-only character allowed)
`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`sdk`: _string_
Can be either `gradio` or `streamlit`
`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code).
Path is relative to the root of the repository.
`pinned`: _boolean_
Whether the Space stays on top of your list.
|
dccuchile/albert-base-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | "2021-06-24T03:46:16Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: 100perc
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 100perc
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 140 | 1.8292 |
| No log | 2.0 | 280 | 1.7373 |
| No log | 3.0 | 420 | 1.6889 |
| 2.26 | 4.0 | 560 | 1.6515 |
| 2.26 | 5.0 | 700 | 1.6258 |
| 2.26 | 6.0 | 840 | 1.6063 |
| 2.26 | 7.0 | 980 | 1.5873 |
| 1.6847 | 8.0 | 1120 | 1.5749 |
| 1.6847 | 9.0 | 1260 | 1.5634 |
| 1.6847 | 10.0 | 1400 | 1.5513 |
| 1.6073 | 11.0 | 1540 | 1.5421 |
| 1.6073 | 12.0 | 1680 | 1.5352 |
| 1.6073 | 13.0 | 1820 | 1.5270 |
| 1.6073 | 14.0 | 1960 | 1.5203 |
| 1.5545 | 15.0 | 2100 | 1.5142 |
| 1.5545 | 16.0 | 2240 | 1.5089 |
| 1.5545 | 17.0 | 2380 | 1.5048 |
| 1.5156 | 18.0 | 2520 | 1.5009 |
| 1.5156 | 19.0 | 2660 | 1.4970 |
| 1.5156 | 20.0 | 2800 | 1.4935 |
| 1.5156 | 21.0 | 2940 | 1.4897 |
| 1.4835 | 22.0 | 3080 | 1.4865 |
| 1.4835 | 23.0 | 3220 | 1.4851 |
| 1.4835 | 24.0 | 3360 | 1.4820 |
| 1.4565 | 25.0 | 3500 | 1.4787 |
| 1.4565 | 26.0 | 3640 | 1.4774 |
| 1.4565 | 27.0 | 3780 | 1.4749 |
| 1.4565 | 28.0 | 3920 | 1.4748 |
| 1.4326 | 29.0 | 4060 | 1.4728 |
| 1.4326 | 30.0 | 4200 | 1.4692 |
| 1.4326 | 31.0 | 4340 | 1.4692 |
| 1.4326 | 32.0 | 4480 | 1.4668 |
| 1.4126 | 33.0 | 4620 | 1.4664 |
| 1.4126 | 34.0 | 4760 | 1.4659 |
| 1.4126 | 35.0 | 4900 | 1.4643 |
| 1.394 | 36.0 | 5040 | 1.4622 |
| 1.394 | 37.0 | 5180 | 1.4629 |
| 1.394 | 38.0 | 5320 | 1.4610 |
| 1.394 | 39.0 | 5460 | 1.4623 |
| 1.3775 | 40.0 | 5600 | 1.4599 |
| 1.3775 | 41.0 | 5740 | 1.4600 |
| 1.3775 | 42.0 | 5880 | 1.4580 |
| 1.363 | 43.0 | 6020 | 1.4584 |
| 1.363 | 44.0 | 6160 | 1.4577 |
| 1.363 | 45.0 | 6300 | 1.4559 |
| 1.363 | 46.0 | 6440 | 1.4545 |
| 1.3484 | 47.0 | 6580 | 1.4568 |
| 1.3484 | 48.0 | 6720 | 1.4579 |
| 1.3484 | 49.0 | 6860 | 1.4562 |
| 1.3379 | 50.0 | 7000 | 1.4558 |
| 1.3379 | 51.0 | 7140 | 1.4556 |
| 1.3379 | 52.0 | 7280 | 1.4581 |
| 1.3379 | 53.0 | 7420 | 1.4554 |
| 1.3258 | 54.0 | 7560 | 1.4561 |
| 1.3258 | 55.0 | 7700 | 1.4553 |
| 1.3258 | 56.0 | 7840 | 1.4555 |
| 1.3258 | 57.0 | 7980 | 1.4572 |
| 1.3158 | 58.0 | 8120 | 1.4551 |
| 1.3158 | 59.0 | 8260 | 1.4573 |
| 1.3158 | 60.0 | 8400 | 1.4561 |
| 1.3072 | 61.0 | 8540 | 1.4557 |
| 1.3072 | 62.0 | 8680 | 1.4548 |
| 1.3072 | 63.0 | 8820 | 1.4547 |
| 1.3072 | 64.0 | 8960 | 1.4556 |
| 1.2986 | 65.0 | 9100 | 1.4555 |
| 1.2986 | 66.0 | 9240 | 1.4566 |
| 1.2986 | 67.0 | 9380 | 1.4558 |
| 1.2916 | 68.0 | 9520 | 1.4565 |
| 1.2916 | 69.0 | 9660 | 1.4552 |
| 1.2916 | 70.0 | 9800 | 1.4558 |
| 1.2916 | 71.0 | 9940 | 1.4553 |
| 1.2846 | 72.0 | 10080 | 1.4579 |
| 1.2846 | 73.0 | 10220 | 1.4572 |
| 1.2846 | 74.0 | 10360 | 1.4572 |
| 1.2792 | 75.0 | 10500 | 1.4564 |
| 1.2792 | 76.0 | 10640 | 1.4576 |
| 1.2792 | 77.0 | 10780 | 1.4571 |
| 1.2792 | 78.0 | 10920 | 1.4580 |
| 1.2736 | 79.0 | 11060 | 1.4578 |
| 1.2736 | 80.0 | 11200 | 1.4583 |
| 1.2736 | 81.0 | 11340 | 1.4576 |
| 1.2736 | 82.0 | 11480 | 1.4580 |
| 1.2699 | 83.0 | 11620 | 1.4575 |
| 1.2699 | 84.0 | 11760 | 1.4583 |
| 1.2699 | 85.0 | 11900 | 1.4588 |
| 1.2664 | 86.0 | 12040 | 1.4590 |
| 1.2664 | 87.0 | 12180 | 1.4593 |
| 1.2664 | 88.0 | 12320 | 1.4582 |
| 1.2664 | 89.0 | 12460 | 1.4591 |
| 1.2627 | 90.0 | 12600 | 1.4595 |
| 1.2627 | 91.0 | 12740 | 1.4585 |
| 1.2627 | 92.0 | 12880 | 1.4590 |
| 1.2613 | 93.0 | 13020 | 1.4590 |
| 1.2613 | 94.0 | 13160 | 1.4598 |
| 1.2613 | 95.0 | 13300 | 1.4592 |
| 1.2613 | 96.0 | 13440 | 1.4597 |
| 1.2591 | 97.0 | 13580 | 1.4593 |
| 1.2591 | 98.0 | 13720 | 1.4593 |
| 1.2591 | 99.0 | 13860 | 1.4597 |
| 1.258 | 100.0 | 14000 | 1.4594 |
### Framework versions
- Transformers 4.8.0
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
dccuchile/albert-xlarge-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- zh_CN
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model_index:
- name: filter-mlsum-pretrained
results:
- task:
name: Translation
type: translation
metric:
name: Rouge1
type: rouge
value: 42.1802
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# filter-mlsum-pretrained
This model is a fine-tuned version of [lincoln/mbart-mlsum-automatic-summarization](https://huggingface.co/lincoln/mbart-mlsum-automatic-summarization) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1258
- Rouge1: 42.1802
- Rouge2: 28.8282
- Rougel: 38.353
- Rougelsum: 38.4497
- Gen Len: 15.7048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Bleu | Gen Len | Validation Loss | Rouge-1 | Rouge-2 | Rouge-3 | Rouge-4 |
|:-------------:|:-----:|:-----:|:------:|:-------:|:---------------:|:-------:|:-------:|:-------:|:-------:|
| 3.2488 | 0.02 | 600 | 1.0077 | 16.5021 | 2.9137 | 0.3472 | 0.2187 | 0.129 | 0.0831 |
| 2.8602 | 0.04 | 1200 | 1.0448 | 15.5959 | 2.7929 | 0.3555 | 0.231 | 0.1425 | 0.0948 |
| 2.7612 | 0.06 | 1800 | 0.9912 | 15.9283 | 2.7275 | 0.3634 | 0.2327 | 0.139 | 0.0892 |
| 2.71 | 0.08 | 2400 | 1.1238 | 16.029 | 2.6673 | 0.3705 | 0.2393 | 0.1448 | 0.0937 |
| 2.6029 | 0.11 | 3000 | 1.091 | 15.8317 | 2.6153 | 0.3705 | 0.2382 | 0.1443 | 0.0943 |
| 2.5834 | 0.13 | 3600 | 1.0894 | 15.9131 | 2.5937 | 0.3793 | 0.246 | 0.1517 | 0.1013 |
| 2.5339 | 0.15 | 4200 | 1.1034 | 15.8331 | 2.5716 | 0.3758 | 0.2441 | 0.146 | 0.0948 |
| 2.5176 | 0.17 | 4800 | 1.1365 | 16.2552 | 2.5338 | 0.3695 | 0.2385 | 0.1454 | 0.0942 |
| 2.4962 | 0.19 | 5400 | 1.1237 | 16.0041 | 2.5145 | 0.3773 | 0.2462 | 0.1533 | 0.1017 |
| 2.4573 | 0.21 | 6000 | 0.9416 | 16.1241 | 2.5056 | 0.3753 | 0.2457 | 0.1541 | 0.1012 |
| 2.4324 | 0.23 | 6600 | 1.122 | 15.3448 | 2.4891 | 0.3824 | 0.2531 | 0.157 | 0.1033 |
| 2.4343 | 0.25 | 7200 | 1.8299 | 15.5959 | 2.4728 | 0.384 | 0.2512 | 0.1556 | 0.1026 |
| 2.4089 | 0.28 | 7800 | 1.7741 | 16.3421 | 2.4608 | 0.3818 | 0.2501 | 0.1556 | 0.102 |
| 2.376 | 0.3 | 8400 | 1.1575 | 15.3834 | 2.4402 | 0.3887 | 0.2582 | 0.1611 | 0.1058 |
| 2.3739 | 0.32 | 9000 | 1.7924 | 15.6455 | 2.4331 | 0.3902 | 0.2561 | 0.1587 | 0.1042 |
| 2.3485 | 0.34 | 9600 | 2.2605 | 15.5407 | 2.4215 | 0.3712 | 0.2423 | 0.1493 | 0.0984 |
| 2.3535 | 0.36 | 10200 | 1.2569 | 16.2538 | 2.4047 | 0.3837 | 0.2524 | 0.1572 | 0.1045 |
| 2.3359 | 0.38 | 10800 | 1.2334 | 15.4607 | 2.4025 | 0.3808 | 0.2488 | 0.1531 | 0.0994 |
| 2.3265 | 0.4 | 11400 | 1.116 | 16.2703 | 2.3926 | 0.3909 | 0.2574 | 0.159 | 0.1049 |
| 2.3024 | 0.42 | 12000 | 1.0944 | 15.3807 | 2.3964 | 0.3883 | 0.2554 | 0.158 | 0.1043 |
| 2.2988 | 0.45 | 12600 | 1.6318 | 15.5062 | 2.3616 | 0.3889 | 0.259 | 0.1617 | 0.107 |
| 2.2966 | 0.47 | 13200 | 1.1887 | 15.8041 | 2.3728 | 0.3835 | 0.2556 | 0.1633 | 0.1111 |
| 2.2823 | 0.49 | 13800 | 1.1252 | 15.9972 | 2.3591 | 0.3805 | 0.249 | 0.1571 | 0.1052 |
| 2.2748 | 0.51 | 14400 | 1.0418 | 15.3021 | 2.3619 | 0.3862 | 0.2569 | 0.161 | 0.1072 |
| 2.2624 | 0.53 | 15000 | 1.0299 | 15.8634 | 2.3415 | 0.3909 | 0.2575 | 0.1608 | 0.1072 |
| 2.2585 | 0.55 | 15600 | 1.0671 | 15.5503 | 2.3557 | 0.3899 | 0.2586 | 0.1622 | 0.1077 |
| 2.2586 | 0.57 | 16200 | 1.6521 | 15.4345 | 2.3431 | 0.389 | 0.2593 | 0.1642 | 0.1105 |
| 2.2464 | 0.59 | 16800 | 1.2836 | 15.6124 | 2.3609 | 0.3934 | 0.2591 | 0.1593 | 0.1041 |
| 2.2523 | 0.62 | 17400 | 1.7653 | 15.8648 | 2.3339 | 0.3958 | 0.2653 | 0.1683 | 0.1133 |
| 2.2287 | 0.64 | 18000 | 1.3186 | 16.4455 | 2.3188 | 0.3911 | 0.2617 | 0.1678 | 0.1143 |
| 2.2068 | 0.66 | 18600 | 1.6488 | 15.9062 | 2.3109 | 0.3919 | 0.262 | 0.1657 | 0.1115 |
| 2.2195 | 0.68 | 19200 | 1.8291 | 15.5269 | 2.3271 | 0.3859 | 0.2575 | 0.1631 | 0.1081 |
| 2.2128 | 0.7 | 19800 | 2.2759 | 15.8703 | 2.3113 | 0.3962 | 0.2655 | 0.1691 | 0.1123 |
| 2.2071 | 0.72 | 20400 | 2.4205 | 15.9738 | 2.3036 | 0.3907 | 0.2608 | 0.1637 | 0.1081 |
| 2.1975 | 0.74 | 21000 | 1.9886 | 15.8234 | 2.2919 | 0.3906 | 0.2632 | 0.169 | 0.1157 |
| 2.1965 | 0.76 | 21600 | 1.8754 | 15.3434 | 2.2957 | 0.39 | 0.2608 | 0.1665 | 0.1114 |
| 2.1886 | 0.78 | 22200 | 1.5683 | 15.3407 | 2.2835 | 0.3968 | 0.2658 | 0.168 | 0.1117 |
| 2.185 | 0.81 | 22800 | 2.127 | 16.0566 | 2.2685 | 0.3913 | 0.2624 | 0.1691 | 0.114 |
| 2.1697 | 0.83 | 23400 | 1.2554 | 15.7021 | 2.2888 | 0.3983 | 0.2676 | 0.1704 | 0.1148 |
| 2.1637 | 0.85 | 24000 | 2.0099 | 16.2607 | 2.2767 | 0.3979 | 0.2681 | 0.1719 | 0.1181 |
| 2.1559 | 0.87 | 24600 | 2.2632 | 15.2179 | 2.2840 | 0.3996 | 0.269 | 0.1714 | 0.1152 |
| 2.1666 | 0.89 | 25200 | 1.2354 | 15.6828 | 2.2744 | 0.397 | 0.2655 | 0.1677 | 0.1108 |
| 2.1388 | 0.91 | 25800 | 1.2576 | 15.7959 | 2.2661 | 0.3982 | 0.2655 | 0.1687 | 0.1128 |
| 2.1458 | 0.93 | 26400 | 1.334 | 15.6428 | 2.2582 | 0.3976 | 0.2682 | 0.1711 | 0.1142 |
| 2.1337 | 0.95 | 27000 | 1.287 | 16.1379 | 2.2474 | 0.4001 | 0.2654 | 0.1682 | 0.1119 |
| 2.1324 | 0.98 | 27600 | 1.1739 | 16.0552 | 2.2487 | 0.4003 | 0.2664 | 0.168 | 0.1113 |
| 2.1318 | 1.0 | 28200 | 2.1267 | 15.931 | 2.2553 | 0.4037 | 0.27 | 0.1714 | 0.1163 |
| 2.0379 | 1.02 | 28800 | 1.1489 | 15.3421 | 2.2787 | 0.3962 | 0.263 | 0.1674 | 0.114 |
| 1.9044 | 1.04 | 29400 | 1.6737 | 15.6 | 2.2538 | 0.4003 | 0.2693 | 0.1729 | 0.1161 |
| 1.9149 | 1.06 | 30000 | 1.1077 | 15.771 | 2.2487 | 0.4062 | 0.274 | 0.1774 | 0.1209 |
| 1.9211 | 1.08 | 30600 | 1.2744 | 15.0566 | 2.2708 | 0.4075 | 0.2742 | 0.1744 | 0.1172 |
| 1.9285 | 1.1 | 31200 | 1.1875 | 16.1021 | 2.2443 | 0.3983 | 0.2652 | 0.1671 | 0.1124 |
| 1.9106 | 1.12 | 31800 | 1.2422 | 15.36 | 2.2562 | 0.4079 | 0.2751 | 0.1762 | 0.119 |
| 1.9313 | 1.15 | 32400 | 1.3036 | 15.8317 | 2.2515 | 0.4027 | 0.2717 | 0.1748 | 0.1196 |
| 1.931 | 1.17 | 33000 | 1.138 | 16.1917 | 2.2415 | 0.4016 | 0.2701 | 0.1724 | 0.1179 |
| 1.9232 | 1.19 | 33600 | 1.2741 | 15.6814 | 2.2511 | 0.4074 | 0.2757 | 0.1782 | 0.1222 |
| 1.9233 | 1.21 | 34200 | 1.4101 | 15.8345 | 2.2388 | 0.4027 | 0.2712 | 0.1727 | 0.1174 |
| 1.9172 | 1.23 | 34800 | 1.252 | 15.6124 | 2.2434 | 0.4046 | 0.2747 | 0.1783 | 0.1215 |
| 1.9258 | 1.25 | 35400 | 1.2459 | 15.5062 | 2.2342 | 0.4107 | 0.2801 | 0.1814 | 0.1236 |
| 1.9184 | 1.27 | 36000 | 1.2943 | 15.6083 | 2.2393 | 0.4119 | 0.2817 | 0.1839 | 0.1244 |
| 1.9195 | 1.29 | 36600 | 1.1197 | 15.8359 | 2.2237 | 0.4014 | 0.2695 | 0.1699 | 0.1132 |
| 1.932 | 1.31 | 37200 | 1.2212 | 15.9752 | 2.2202 | 0.4027 | 0.2708 | 0.1723 | 0.1168 |
| 1.9161 | 1.34 | 37800 | 1.2541 | 15.5779 | 2.2236 | 0.4091 | 0.2783 | 0.1804 | 0.1244 |
| 1.9115 | 1.36 | 38400 | 1.4237 | 15.8276 | 2.1993 | 0.4122 | 0.2813 | 0.1832 | 0.1258 |
| 1.9108 | 1.38 | 39000 | 1.8321 | 16.0386 | 2.2079 | 0.412 | 0.2794 | 0.1806 | 0.1226 |
| 1.921 | 1.4 | 39600 | 1.8388 | 15.5076 | 2.2158 | 0.411 | 0.2799 | 0.1804 | 0.1226 |
| 1.9124 | 1.42 | 40200 | 1.915 | 16.0 | 2.2071 | 0.4032 | 0.2726 | 0.1742 | 0.1185 |
| 1.9134 | 1.44 | 40800 | 2.1237 | 16.0372 | 2.1980 | 0.4036 | 0.2702 | 0.1689 | 0.1122 |
| 1.9124 | 1.46 | 41400 | 2.4274 | 15.3421 | 2.2111 | 0.4037 | 0.274 | 0.1754 | 0.1203 |
| 1.9149 | 1.48 | 42000 | 1.8393 | 15.5683 | 2.2105 | 0.4057 | 0.2748 | 0.1762 | 0.119 |
| 1.9147 | 1.51 | 42600 | 1.2703 | 16.3048 | 2.1874 | 0.4084 | 0.2767 | 0.179 | 0.1233 |
| 1.9075 | 1.53 | 43200 | 1.7775 | 15.9545 | 2.1946 | 0.4109 | 0.2807 | 0.1857 | 0.1286 |
| 1.8996 | 1.55 | 43800 | 1.2485 | 15.6648 | 2.1924 | 0.4082 | 0.2749 | 0.1764 | 0.1196 |
| 1.9003 | 1.57 | 44400 | 1.1624 | 15.8041 | 2.1895 | 0.4093 | 0.2758 | 0.1766 | 0.1194 |
| 1.9048 | 1.59 | 45000 | 1.8167 | 16.2938 | 2.1843 | 0.407 | 0.2741 | 0.1779 | 0.1203 |
| 1.9017 | 1.61 | 45600 | 2.0689 | 15.3931 | 2.2073 | 0.4111 | 0.2795 | 0.1811 | 0.1246 |
| 1.8946 | 1.63 | 46200 | 1.7099 | 15.9917 | 2.1839 | 0.4095 | 0.2797 | 0.1826 | 0.1258 |
| 1.886 | 1.65 | 46800 | 1.8287 | 15.8276 | 2.1945 | 0.4051 | 0.2761 | 0.1799 | 0.1237 |
| 1.9068 | 1.68 | 47400 | 1.9476 | 15.3503 | 2.1926 | 0.4132 | 0.2819 | 0.1836 | 0.1262 |
| 1.9008 | 1.7 | 48000 | 1.3086 | 15.5931 | 2.1857 | 0.4167 | 0.2868 | 0.1893 | 0.1303 |
| 1.8965 | 1.72 | 48600 | 2.1687 | 15.8317 | 2.1781 | 0.402 | 0.2715 | 0.175 | 0.1197 |
| 1.8907 | 1.74 | 49200 | 2.3316 | 15.8952 | 2.1661 | 0.4035 | 0.2717 | 0.1746 | 0.1193 |
| 1.8938 | 1.76 | 49800 | 1.6839 | 15.6028 | 2.1736 | 0.4008 | 0.2693 | 0.1741 | 0.1184 |
| 1.8769 | 1.78 | 50400 | 1.1867 | 15.9393 | 2.1723 | 0.403 | 0.272 | 0.1761 | 0.1201 |
| 1.8813 | 1.8 | 51000 | 1.8509 | 16.2538 | 2.1454 | 0.4085 | 0.2773 | 0.1801 | 0.1227 |
| 1.8913 | 1.82 | 51600 | 1.9677 | 15.7503 | 2.1691 | 0.4052 | 0.2786 | 0.1836 | 0.1274 |
| 1.8785 | 1.85 | 52200 | 1.7 | 15.7559 | 2.1683 | 0.4132 | 0.2793 | 0.1796 | 0.1216 |
| 1.881 | 1.87 | 52800 | 1.2867 | 16.0345 | 2.1372 | 0.416 | 0.2824 | 0.1837 | 0.1264 |
| 1.8833 | 1.89 | 53400 | 1.761 | 16.0966 | 2.1501 | 0.4126 | 0.2808 | 0.1825 | 0.1253 |
| 1.8727 | 1.91 | 54000 | 1.9868 | 15.8221 | 2.1504 | 0.4165 | 0.2828 | 0.1826 | 0.1233 |
| 1.8901 | 1.93 | 54600 | 1.801 | 14.9393 | 2.2104 | 0.4151 | 0.2846 | 0.1848 | 0.1258 |
| 1.8802 | 1.95 | 55200 | 2.0887 | 15.8069 | 2.1555 | 0.407 | 0.2766 | 0.1794 | 0.1214 |
| 1.8827 | 1.97 | 55800 | 1.8323 | 15.8524 | 2.1510 | 0.4221 | 0.291 | 0.193 | 0.135 |
| 1.8673 | 1.99 | 56400 | 1.2667 | 15.4262 | 2.1620 | 0.4092 | 0.2795 | 0.1836 | 0.1275 |
| 1.6735 | 2.01 | 57000 | 1.821 | 15.8538 | 2.1836 | 0.4193 | 0.2875 | 0.189 | 0.1317 |
| 1.6367 | 2.04 | 57600 | 2.5547 | 15.8055 | 2.1941 | 0.415 | 0.2831 | 0.1849 | 0.1284 |
| 1.6326 | 2.06 | 58200 | 2.0999 | 15.9352 | 2.1743 | 0.4157 | 0.2829 | 0.1842 | 0.1267 |
| 1.6354 | 2.08 | 58800 | 2.3907 | 15.68 | 2.1879 | 0.4233 | 0.2921 | 0.1936 | 0.1361 |
| 1.6352 | 2.1 | 59400 | 1.979 | 16.1807 | 2.1735 | 0.4236 | 0.2907 | 0.193 | 0.1346 |
| 1.6428 | 2.12 | 60000 | 2.2266 | 15.8759 | 2.1858 | 0.4204 | 0.2881 | 0.1896 | 0.1308 |
| 1.6483 | 2.14 | 60600 | 1.9294 | 15.8469 | 2.1878 | 0.4237 | 0.2892 | 0.1901 | 0.1317 |
| 1.6502 | 2.16 | 61200 | 1.7967 | 15.7131 | 2.1814 | 0.4164 | 0.2835 | 0.1852 | 0.1275 |
| 1.6585 | 2.18 | 61800 | 1.1843 | 16.0579 | 2.1620 | 0.413 | 0.2828 | 0.1852 | 0.128 |
| 1.6457 | 2.21 | 62400 | 1.7951 | 15.9862 | 2.1873 | 0.4194 | 0.2885 | 0.1908 | 0.1341 |
| 1.6433 | 2.23 | 63000 | 1.6297 | 16.1324 | 2.1770 | 0.4039 | 0.2741 | 0.1773 | 0.1209 |
| 1.6493 | 2.25 | 63600 | 1.8762 | 15.5131 | 2.1702 | 0.414 | 0.2851 | 0.1883 | 0.1292 |
| 1.672 | 2.27 | 64200 | 2.1811 | 16.1945 | 2.1433 | 0.4198 | 0.2852 | 0.1854 | 0.1272 |
| 1.6411 | 2.29 | 64800 | 2.0637 | 16.1434 | 2.1661 | 0.4103 | 0.2809 | 0.1848 | 0.1275 |
| 1.6561 | 2.31 | 65400 | 2.452 | 15.5724 | 2.1761 | 0.4204 | 0.292 | 0.1935 | 0.135 |
| 1.6516 | 2.33 | 66000 | 2.216 | 15.7048 | 2.1836 | 0.4186 | 0.2887 | 0.1909 | 0.1326 |
| 1.6738 | 2.35 | 66600 | 1.7496 | 15.731 | 2.1452 | 0.4186 | 0.2904 | 0.1944 | 0.1364 |
| 1.672 | 2.38 | 67200 | 1.3179 | 15.7697 | 2.1412 | 0.4206 | 0.2898 | 0.1936 | 0.1358 |
| 1.6625 | 2.4 | 67800 | 2.3606 | 15.76 | 2.1412 | 0.4134 | 0.285 | 0.189 | 0.1315 |
| 1.6725 | 2.42 | 68400 | 2.3687 | 15.4745 | 2.1825 | 0.4165 | 0.2874 | 0.1883 | 0.1303 |
| 1.6588 | 2.44 | 69000 | 2.2056 | 15.8841 | 2.1307 | 0.4259 | 0.2952 | 0.1974 | 0.139 |
| 1.6629 | 2.46 | 69600 | 1.7605 | 15.469 | 2.1523 | 0.4149 | 0.2861 | 0.1901 | 0.1327 |
| 1.6716 | 2.48 | 70200 | 1.3733 | 15.3683 | 2.1546 | 0.4202 | 0.2889 | 0.1897 | 0.1314 |
| 1.6708 | 2.5 | 70800 | 2.6313 | 15.7214 | 2.1408 | 0.4236 | 0.2937 | 0.1972 | 0.1395 |
| 1.6637 | 2.52 | 71400 | 2.5112 | 15.909 | 2.1252 | 0.4203 | 0.2903 | 0.1935 | 0.1361 |
| 1.6743 | 2.55 | 72000 | 2.2902 | 15.749 | 2.1326 | 0.426 | 0.297 | 0.1989 | 0.1404 |
| 1.6681 | 2.57 | 72600 | 2.1003 | 16.3338 | 2.1120 | 0.4185 | 0.2876 | 0.1904 | 0.1342 |
| 1.6791 | 2.59 | 73200 | 1.7082 | 15.7283 | 2.1269 | 0.4268 | 0.2968 | 0.1988 | 0.1392 |
| 1.6643 | 2.61 | 73800 | 1.9914 | 16.0552 | 2.1166 | 0.4177 | 0.2886 | 0.1939 | 0.1369 |
| 1.6666 | 2.63 | 74400 | 1.8012 | 16.0276 | 2.1242 | 0.4174 | 0.2875 | 0.19 | 0.1328 |
| 1.67 | 2.65 | 75000 | 1.696 | 15.5559 | 2.1619 | 0.4196 | 0.2919 | 0.1939 | 0.136 |
| 1.6794 | 2.67 | 75600 | 2.0322 | 15.6221 | 2.1425 | 0.4166 | 0.2871 | 0.1891 | 0.1328 |
| 1.6753 | 2.69 | 76200 | 2.5736 | 15.7407 | 2.1432 | 0.4215 | 0.2928 | 0.1958 | 0.1388 |
| 1.6807 | 2.71 | 76800 | 2.3404 | 15.7186 | 2.1240 | 0.4181 | 0.2885 | 0.1917 | 0.1346 |
| 1.6707 | 2.74 | 77400 | 2.4439 | 15.5724 | 2.1246 | 0.4191 | 0.2906 | 0.1936 | 0.1359 |
| 1.6736 | 2.76 | 78000 | 2.0595 | 16.2731 | 2.1053 | 0.4158 | 0.2869 | 0.1902 | 0.1324 |
| 1.6651 | 2.78 | 78600 | 1.6489 | 15.6772 | 2.1365 | 0.4242 | 0.2924 | 0.1938 | 0.1346 |
| 1.6746 | 2.8 | 79200 | 1.1565 | 15.9062 | 2.1232 | 0.4161 | 0.2848 | 0.1872 | 0.1308 |
| 1.6666 | 2.82 | 79800 | 1.7445 | 15.9407 | 2.1417 | 0.414 | 0.2807 | 0.1817 | 0.1249 |
| 1.6687 | 2.84 | 80400 | 1.9425 | 15.8676 | 2.1240 | 0.4088 | 0.2786 | 0.1821 | 0.1269 |
| 1.6678 | 2.86 | 81000 | 1.6419 | 15.9214 | 2.1125 | 0.417 | 0.2873 | 0.188 | 0.1303 |
| 1.6609 | 2.88 | 81600 | 1.8123 | 15.8579 | 2.1227 | 0.4199 | 0.2904 | 0.1916 | 0.1323 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.9.0
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-mldoc | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 39 | "2021-07-07T18:25:16Z" | ---
language:
- zh_CN
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model_index:
- name: tmp
results:
- task:
name: Translation
type: translation
metric:
name: Bleu
type: bleu
value: 0.0099
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Bleu: 0.0099
- Gen Len: 3.3917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1024
- eval_batch_size: 1024
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 1 | nan | 0.0114 | 3.3338 |
| No log | 2.0 | 2 | nan | 0.0114 | 3.3338 |
| No log | 3.0 | 3 | nan | 0.0114 | 3.3338 |
| No log | 4.0 | 4 | nan | 0.0114 | 3.3338 |
| No log | 5.0 | 5 | nan | 0.0114 | 3.3338 |
| No log | 6.0 | 6 | nan | 0.0114 | 3.3338 |
| No log | 7.0 | 7 | nan | 0.0114 | 3.3338 |
| No log | 8.0 | 8 | nan | 0.0114 | 3.3338 |
| No log | 9.0 | 9 | nan | 0.0114 | 3.3338 |
| No log | 10.0 | 10 | nan | 0.0114 | 3.3338 |
| No log | 11.0 | 11 | nan | 0.0114 | 3.3338 |
| No log | 12.0 | 12 | nan | 0.0114 | 3.3338 |
| No log | 13.0 | 13 | nan | 0.0114 | 3.3338 |
| No log | 14.0 | 14 | nan | 0.0114 | 3.3338 |
| No log | 15.0 | 15 | nan | 0.0114 | 3.3338 |
| No log | 16.0 | 16 | nan | 0.0114 | 3.3338 |
| No log | 17.0 | 17 | nan | 0.0114 | 3.3338 |
| No log | 18.0 | 18 | nan | 0.0114 | 3.3338 |
| No log | 19.0 | 19 | nan | 0.0114 | 3.3338 |
| No log | 20.0 | 20 | nan | 0.0114 | 3.3338 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.9.0
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-ner | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | "2021-07-02T15:01:28Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: topicalchat-multiturn
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topicalchat-multiturn
This model is a fine-tuned version of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 73 | 4.2992 |
| No log | 2.0 | 146 | 3.4433 |
| No log | 3.0 | 219 | 3.1606 |
| No log | 4.0 | 292 | 3.0366 |
| No log | 5.0 | 365 | 2.9679 |
| No log | 6.0 | 438 | 2.9131 |
| 4.1401 | 7.0 | 511 | 2.8752 |
| 4.1401 | 8.0 | 584 | 2.8391 |
| 4.1401 | 9.0 | 657 | 2.8118 |
| 4.1401 | 10.0 | 730 | 2.7871 |
| 4.1401 | 11.0 | 803 | 2.7659 |
| 4.1401 | 12.0 | 876 | 2.7489 |
| 4.1401 | 13.0 | 949 | 2.7331 |
| 2.9768 | 14.0 | 1022 | 2.7196 |
| 2.9768 | 15.0 | 1095 | 2.7071 |
| 2.9768 | 16.0 | 1168 | 2.6940 |
| 2.9768 | 17.0 | 1241 | 2.6854 |
| 2.9768 | 18.0 | 1314 | 2.6728 |
| 2.9768 | 19.0 | 1387 | 2.6647 |
| 2.9768 | 20.0 | 1460 | 2.6562 |
| 2.7864 | 21.0 | 1533 | 2.6482 |
| 2.7864 | 22.0 | 1606 | 2.6439 |
| 2.7864 | 23.0 | 1679 | 2.6326 |
| 2.7864 | 24.0 | 1752 | 2.6107 |
| 2.7864 | 25.0 | 1825 | 2.6043 |
| 2.7864 | 26.0 | 1898 | 2.5970 |
| 2.7864 | 27.0 | 1971 | 2.5908 |
| 2.6568 | 28.0 | 2044 | 2.5862 |
| 2.6568 | 29.0 | 2117 | 2.5828 |
| 2.6568 | 30.0 | 2190 | 2.5765 |
| 2.6568 | 31.0 | 2263 | 2.5742 |
| 2.6568 | 32.0 | 2336 | 2.5682 |
| 2.6568 | 33.0 | 2409 | 2.5656 |
| 2.6568 | 34.0 | 2482 | 2.5614 |
| 2.5489 | 35.0 | 2555 | 2.5605 |
| 2.5489 | 36.0 | 2628 | 2.5552 |
| 2.5489 | 37.0 | 2701 | 2.5541 |
| 2.5489 | 38.0 | 2774 | 2.5494 |
| 2.5489 | 39.0 | 2847 | 2.5491 |
| 2.5489 | 40.0 | 2920 | 2.5455 |
| 2.5489 | 41.0 | 2993 | 2.5452 |
| 2.475 | 42.0 | 3066 | 2.5433 |
| 2.475 | 43.0 | 3139 | 2.5397 |
| 2.475 | 44.0 | 3212 | 2.5386 |
| 2.475 | 45.0 | 3285 | 2.5400 |
| 2.475 | 46.0 | 3358 | 2.5339 |
| 2.475 | 47.0 | 3431 | 2.5327 |
| 2.4144 | 48.0 | 3504 | 2.5327 |
| 2.4144 | 49.0 | 3577 | 2.5312 |
| 2.4144 | 50.0 | 3650 | 2.5338 |
| 2.4144 | 51.0 | 3723 | 2.5314 |
| 2.4144 | 52.0 | 3796 | 2.5309 |
| 2.4144 | 53.0 | 3869 | 2.5289 |
| 2.4144 | 54.0 | 3942 | 2.5290 |
| 2.3642 | 55.0 | 4015 | 2.5270 |
| 2.3642 | 56.0 | 4088 | 2.5270 |
| 2.3642 | 57.0 | 4161 | 2.5263 |
| 2.3642 | 58.0 | 4234 | 2.5267 |
| 2.3642 | 59.0 | 4307 | 2.5273 |
| 2.3642 | 60.0 | 4380 | 2.5258 |
| 2.3642 | 61.0 | 4453 | 2.5253 |
| 2.3216 | 62.0 | 4526 | 2.5244 |
| 2.3216 | 63.0 | 4599 | 2.5256 |
| 2.3216 | 64.0 | 4672 | 2.5227 |
| 2.3216 | 65.0 | 4745 | 2.5241 |
| 2.3216 | 66.0 | 4818 | 2.5244 |
| 2.3216 | 67.0 | 4891 | 2.5236 |
| 2.3216 | 68.0 | 4964 | 2.5251 |
| 2.2879 | 69.0 | 5037 | 2.5231 |
| 2.2879 | 70.0 | 5110 | 2.5254 |
| 2.2879 | 71.0 | 5183 | 2.5242 |
| 2.2879 | 72.0 | 5256 | 2.5254 |
| 2.2879 | 73.0 | 5329 | 2.5253 |
| 2.2879 | 74.0 | 5402 | 2.5228 |
| 2.2879 | 75.0 | 5475 | 2.5247 |
| 2.261 | 76.0 | 5548 | 2.5243 |
| 2.261 | 77.0 | 5621 | 2.5247 |
| 2.261 | 78.0 | 5694 | 2.5250 |
| 2.261 | 79.0 | 5767 | 2.5248 |
| 2.261 | 80.0 | 5840 | 2.5236 |
| 2.261 | 81.0 | 5913 | 2.5264 |
| 2.261 | 82.0 | 5986 | 2.5249 |
| 2.2396 | 83.0 | 6059 | 2.5256 |
| 2.2396 | 84.0 | 6132 | 2.5267 |
| 2.2396 | 85.0 | 6205 | 2.5258 |
| 2.2396 | 86.0 | 6278 | 2.5242 |
| 2.2396 | 87.0 | 6351 | 2.5233 |
| 2.2396 | 88.0 | 6424 | 2.5249 |
| 2.2396 | 89.0 | 6497 | 2.5253 |
| 2.2238 | 90.0 | 6570 | 2.5252 |
| 2.2238 | 91.0 | 6643 | 2.5255 |
| 2.2238 | 92.0 | 6716 | 2.5263 |
| 2.2238 | 93.0 | 6789 | 2.5261 |
| 2.2238 | 94.0 | 6862 | 2.5257 |
| 2.2238 | 95.0 | 6935 | 2.5253 |
| 2.213 | 96.0 | 7008 | 2.5267 |
| 2.213 | 97.0 | 7081 | 2.5258 |
| 2.213 | 98.0 | 7154 | 2.5258 |
| 2.213 | 99.0 | 7227 | 2.5259 |
| 2.213 | 100.0 | 7300 | 2.5260 |
### Framework versions
- Transformers 4.8.1
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load the pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("addy88/hindi-wav2vec2-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/hindi-wav2vec2-stt")
    # load audio (wav2vec2 checkpoints expect 16 kHz mono audio)
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return a PyTorch tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # inference: retrieve logits and take the argmax over the vocabulary
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # decode the predicted ids to text
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)

# example: parse_transcription("speech.wav")
``` |
Chaewon/mnmt_decoder_en_gpt2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- itihasa
model-index:
- name: t5-base-finetuned-sn-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-sn-to-en
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the itihasa dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
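Since the card omits usage instructions, here is a minimal translation sketch (ours; the checkpoint path is a placeholder — substitute the actual hub id):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_path = "path/to/t5-base-finetuned-sn-to-en"  # placeholder: substitute the actual hub id
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)

# translate a Sanskrit sentence to English
inputs = tokenizer("<sanskrit text here>", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```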
|
Chakita/Friends | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | "2021-12-19T16:52:53Z" | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load the pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-assamese-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-assamese-stt")
    # load audio (wav2vec2 checkpoints expect 16 kHz mono audio)
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return a PyTorch tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # inference: retrieve logits and take the argmax over the vocabulary
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # decode the predicted ids to text
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)

# example: parse_transcription("speech.wav")
``` |
ChauhanVipul/BERT | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load the pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-punjabi-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-punjabi-stt")
    # load audio (wav2vec2 checkpoints expect 16 kHz mono audio)
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return a PyTorch tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # inference: retrieve logits and take the argmax over the vocabulary
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # decode the predicted ids to text
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)

# example: parse_transcription("speech.wav")
``` |
Venkatakrishnan-Ramesh/Text_gen | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-roberta-base')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ComCom/gpt2-large | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Contrastive-Tension/BERT-Distil-CT | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- conversational
---
# Rick DialoGPT medium model
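A minimal chat sketch with 🤗 Transformers (ours, not from the original card; the checkpoint path is a placeholder — substitute this model's actual hub id):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# placeholder path: replace with this model's hub id
tokenizer = AutoTokenizer.from_pretrained("path/to/rick-dialogpt-medium")
model = AutoModelForCausalLM.from_pretrained("path/to/rick-dialogpt-medium")

# encode the user message with the end-of-sequence token and generate a reply
input_ids = tokenizer.encode("Hi Rick, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```
|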
Contrastive-Tension/BERT-Distil-NLI-CT | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | distilbert-base-uncased finetuned on the conll2003 dataset for NER. |
Crasher222/kaggle-comp-test | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Crasher222/autonlp-data-kaggle-test",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Crumped/imdb-simpleRNN | [
"keras"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
tags:
- financial-sentiment-analysis
- sentiment-analysis
datasets:
- financial_phrasebank
widget:
- text: Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales.
- text: Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000.
- text: Raute reported a loss per share of EUR 0.86 for the first half of 2009 , against EPS of EUR 0.74 in the corresponding period of 2008.
---
### FinancialBERT for Sentiment Analysis
[*FinancialBERT*](https://huggingface.co/ahmedrachid/FinancialBERT) is a BERT model pre-trained on a large corpus of financial texts. The goal is to advance financial NLP research and practice, so that practitioners and researchers can benefit from the model without needing the significant computational resources required to train it.
The model was fine-tuned for the sentiment analysis task on the _Financial PhraseBank_ dataset. Experiments show that it outperforms general BERT and other financial domain-specific models.
More details on `FinancialBERT`'s pre-training process can be found at: https://www.researchgate.net/publication/358284785_FinancialBERT_-_A_Pretrained_Language_Model_for_Financial_Text_Mining
### Training data
FinancialBERT was fine-tuned on [Financial PhraseBank](https://www.researchgate.net/publication/251231364_FinancialPhraseBank-v10), a dataset of 4,840 financial news sentences categorised by sentiment (negative, neutral, positive).
### Fine-tuning hyper-parameters
- learning_rate = 2e-5
- batch_size = 32
- max_seq_length = 512
- num_train_epochs = 5
### Evaluation metrics
The evaluation metrics used are: Precision, Recall and F1-score. The following is the classification report on the test set.
| sentiment | precision | recall | f1-score | support |
| ------------- |:-------------:|:-------------:|:-------------:| -----:|
| negative | 0.96 | 0.97 | 0.97 | 58 |
| neutral | 0.98 | 0.99 | 0.98 | 279 |
| positive | 0.98 | 0.97 | 0.97 | 148 |
| macro avg | 0.97 | 0.98 | 0.98 | 485 |
| weighted avg | 0.98 | 0.98 | 0.98 | 485 |
### How to use
The model can be used with the Transformers `pipeline` for sentiment analysis.
```python
from transformers import BertTokenizer, BertForSequenceClassification
from transformers import pipeline
model = BertForSequenceClassification.from_pretrained("ahmedrachid/FinancialBERT-Sentiment-Analysis",num_labels=3)
tokenizer = BertTokenizer.from_pretrained("ahmedrachid/FinancialBERT-Sentiment-Analysis")
nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
sentences = ["Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales.",
"Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000.",
"Raute reported a loss per share of EUR 0.86 for the first half of 2009 , against EPS of EUR 0.74 in the corresponding period of 2008.",
]
results = nlp(sentences)
print(results)
[{'label': 'positive', 'score': 0.9998133778572083},
{'label': 'neutral', 'score': 0.9997822642326355},
{'label': 'negative', 'score': 0.9877365231513977}]
```
> Created by [Ahmed Rachid Hazourli](https://www.linkedin.com/in/ahmed-rachid/)
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_nr2-finetuned-de-to-is_nr2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9260322366968425
- name: Recall
type: recall
value: 0.9383599955252265
- name: F1
type: f1
value: 0.9321553592265377
- name: Accuracy
type: accuracy
value: 0.9834146186474335
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0607
- Precision: 0.9260
- Recall: 0.9384
- F1: 0.9322
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
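For reference, a minimal inference sketch is shown below. The repository path is a placeholder (this card does not state where the checkpoint is hosted), so substitute the actual model id before running.
```python
from transformers import pipeline

# Placeholder repository path -- replace with the actual model id.
ner = pipeline(
    "token-classification",
    model="your-username/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```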
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2545 | 1.0 | 878 | 0.0711 | 0.9096 | 0.9214 | 0.9154 | 0.9800 |
| 0.0555 | 2.0 | 1756 | 0.0593 | 0.9185 | 0.9356 | 0.9270 | 0.9827 |
| 0.0297 | 3.0 | 2634 | 0.0607 | 0.9260 | 0.9384 | 0.9322 | 0.9834 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DaWang/demo | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-wikinewssum-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-wikinewssum-test
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9354
- Rouge1: 6.8433
- Rouge2: 2.5498
- Rougel: 5.6114
- Rougelsum: 6.353
## Model description
More information needed
## Intended uses & limitations
More information needed
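As a reference, a minimal summarization sketch is shown below; the repository path is a placeholder, so substitute the actual model id.
```python
from transformers import pipeline

# Placeholder repository path -- replace with the actual model id.
summarizer = pipeline("summarization", model="your-username/mt5-small-wikinewssum-test")
article = "Your news article text goes here."
print(summarizer(article, max_length=128, min_length=32)[0]["summary_text"])
```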
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 661 | 3.2810 | 6.4161 | 2.403 | 5.3674 | 6.0329 |
| No log | 2.0 | 1322 | 3.1515 | 6.9291 | 2.6826 | 5.6839 | 6.4359 |
| No log | 3.0 | 1983 | 3.0565 | 6.7939 | 2.6113 | 5.6133 | 6.3126 |
| No log | 4.0 | 2644 | 2.9815 | 6.0279 | 2.1637 | 4.9892 | 5.5962 |
| No log | 5.0 | 3305 | 2.9645 | 6.3926 | 2.339 | 5.2716 | 5.9443 |
| 3.9937 | 6.0 | 3966 | 2.9476 | 6.4739 | 2.3615 | 5.3473 | 6.0089 |
| 3.9937 | 7.0 | 4627 | 2.9405 | 6.615 | 2.4309 | 5.4493 | 6.1445 |
| 3.9937 | 8.0 | 5288 | 2.9354 | 6.8433 | 2.5498 | 5.6114 | 6.353 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Darkrider/covidbert_mednli | [
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: th
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
- speech
- xlsr-fine-tuning
license: cc-by-sa-4.0
model-index:
- name: XLS-R-53 - Thai
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: th
metrics:
- name: Test WER
type: wer
value: 0.9524
- name: Test SER
type: ser
value: 1.2346
- name: Test CER
type: cer
value: 0.1623
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: null
- name: Test SER
type: ser
value: null
- name: Test CER
type: cer
value: null
---
# `wav2vec2-large-xlsr-53-th`
Finetuning `wav2vec2-large-xlsr-53` on Thai [Common Voice 7.0](https://commonvoice.mozilla.org/en/datasets)
[Read more on our blog](https://medium.com/airesearch-in-th/airesearch-in-th-3c1019a99cd)
We finetune [wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) based on [Fine-tuning Wav2Vec2 for English ASR](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb) using Thai examples of [Common Voice Corpus 7.0](https://commonvoice.mozilla.org/en/datasets). The notebooks and scripts can be found in [vistec-ai/wav2vec2-large-xlsr-53-th](https://github.com/vistec-ai/wav2vec2-large-xlsr-53-th). The pretrained model and processor can be found at [airesearch/wav2vec2-large-xlsr-53-th](https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th).
## `robust-speech-event`
We add `syllable_tokenize` and `word_tokenize` ([PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)) and [deepcut](https://github.com/rkcosmos/deepcut) tokenizers to `eval.py` from [robust-speech-event](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#evaluation):
```
> python eval.py --model_id ./ --dataset mozilla-foundation/common_voice_7_0 --config th --split test --log_outputs --thai_tokenizer newmm/syllable/deepcut/cer
```
### Eval results on Common Voice 7 "test":
| | WER PyThaiNLP 2.3.1 | WER deepcut | SER | CER |
|---------------------------------|---------------------|-------------|---------|---------|
| Only Tokenization | 0.9524% | 2.5316% | 1.2346% | 0.1623% |
| Cleaning rules and Tokenization | TBD | TBD | TBD | TBD |
## Usage
```
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# load pretrained processor and model
processor = Wav2Vec2Processor.from_pretrained("airesearch/wav2vec2-large-xlsr-53-th")
model = Wav2Vec2ForCTC.from_pretrained("airesearch/wav2vec2-large-xlsr-53-th")

# function to resample to 16,000 Hz
def speech_file_to_array_fn(batch,
                            text_col="sentence",
                            fname_col="path",
                            resampling_to=16000):
    speech_array, sampling_rate = torchaudio.load(batch[fname_col])
    resampler = torchaudio.transforms.Resample(sampling_rate, resampling_to)
    batch["speech"] = resampler(speech_array)[0].numpy()
    batch["sampling_rate"] = resampling_to
    batch["target_text"] = batch[text_col]
    return batch

# `test_dataset` is assumed to be loaded beforehand (see the Datasets section below);
# get 2 examples as sample input
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

# infer
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])

>> Prediction: ['และ เขา ก็ สัมผัส ดีบุก', 'คุณ สามารถ รับทราบ เมื่อ ข้อความ นี้ ถูก อ่าน แล้ว']
>> Reference: ['และเขาก็สัมผัสดีบุก', 'คุณสามารถรับทราบเมื่อข้อความนี้ถูกอ่านแล้ว']
```
## Datasets
[Common Voice Corpus 7.0](https://commonvoice.mozilla.org/en/datasets) contains 133 validated hours of Thai (255 total hours) at 5GB. We pre-tokenize with `pythainlp.tokenize.word_tokenize`. We preprocess the dataset using cleaning rules described in `notebooks/cv-preprocess.ipynb` by [@tann9949](https://github.com/tann9949). We then deduplicate and split as described in [ekapolc/Thai_commonvoice_split](https://github.com/ekapolc/Thai_commonvoice_split) in order to 1) avoid data leakage due to random splits after cleaning in [Common Voice Corpus 7.0](https://commonvoice.mozilla.org/en/datasets) and 2) preserve the majority of the data for the training set. The dataset loading script is `scripts/th_common_voice_70.py`. You can use this script together with `train_cleand.tsv`, `validation_cleaned.tsv` and `test_cleaned.tsv` to obtain the same splits as we do. The resulting dataset is as follows:
```
DatasetDict({
train: Dataset({
features: ['path', 'sentence'],
num_rows: 86586
})
test: Dataset({
features: ['path', 'sentence'],
num_rows: 2502
})
validation: Dataset({
features: ['path', 'sentence'],
num_rows: 3027
})
})
```
## Training
We finetuned using the following configuration on a single V100 GPU and chose the checkpoint with the lowest validation loss. The finetuning script is `scripts/wav2vec2_finetune.py`
```
# create model
model = Wav2Vec2ForCTC.from_pretrained(
"facebook/wav2vec2-large-xlsr-53",
attention_dropout=0.1,
hidden_dropout=0.1,
feat_proj_dropout=0.0,
mask_time_prob=0.05,
layerdrop=0.1,
gradient_checkpointing=True,
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
vocab_size=len(processor.tokenizer)
)
model.freeze_feature_extractor()
training_args = TrainingArguments(
output_dir="../data/wav2vec2-large-xlsr-53-thai",
group_by_length=True,
per_device_train_batch_size=32,
gradient_accumulation_steps=1,
per_device_eval_batch_size=16,
metric_for_best_model='wer',
evaluation_strategy="steps",
eval_steps=1000,
logging_strategy="steps",
logging_steps=1000,
save_strategy="steps",
save_steps=1000,
num_train_epochs=100,
fp16=True,
learning_rate=1e-4,
warmup_steps=1000,
save_total_limit=3,
report_to="tensorboard"
)
```
## Evaluation
We benchmark on the test set using WER with words tokenized by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp) 2.3.1 and [deepcut](https://github.com/rkcosmos/deepcut), and CER. We also measure performance when spell correction using [TNC](http://www.arts.chula.ac.th/ling/tnc/) ngrams is applied. Evaluation code can be found in `notebooks/wav2vec2_finetuning_tutorial.ipynb`. The benchmark is performed on the `test-unique` split.
| | WER PyThaiNLP 2.3.1 | WER deepcut | CER |
|--------------------------------|---------------------|----------------|----------------|
| [Kaldi from scratch](https://github.com/vistec-AI/commonvoice-th) | 23.04 | | 7.57 |
| Ours without spell correction | 13.634024 | **8.152052** | **2.813019** |
| Ours with spell correction | 17.996397 | 14.167975 | 5.225761 |
| Google Web Speech API※ | 13.711234 | 10.860058 | 7.357340 |
| Microsoft Bing Speech API※ | **12.578819** | 9.620991 | 5.016620 |
| Amazon Transcribe※ | 21.86334 | 14.487553 | 7.077562 |
| NECTEC AI for Thai Partii API※ | 20.105887 | 15.515631 | 9.551027 |
※ APIs are not finetuned with Common Voice 7.0 data
## LICENSE
[cc-by-sa 4.0](https://github.com/vistec-AI/wav2vec2-large-xlsr-53-th/blob/main/LICENSE)
## Acknowledgements
* model training and validation notebooks/scripts [@cstorm125](https://github.com/cstorm125/)
* dataset cleaning scripts [@tann9949](https://github.com/tann9949)
* dataset splits [@ekapolc](https://github.com/ekapolc/) and [@14mss](https://github.com/14mss)
* running the training [@mrpeerat](https://github.com/mrpeerat)
* spell correction [@wannaphong](https://github.com/wannaphong)
|
Darren/darren | [
"pytorch"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
widget:
- text: "สวนกุหลาบเป็นโรงเรียนอะไร"
context: "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ สังกัดสำนักงานเขตพื้นที่การศึกษามัธยมศึกษาเขต 1 สำนักงานคณะกรรมการการศึกษาขั้นพื้นฐาน (ชื่อเดิม: กรมสามัญศึกษา) กระทรวงศึกษาธิการ ก่อตั้งโดย พระบาทสมเด็จพระจุลจอมเกล้าเจ้าอยู่หัว ได้รับการสถาปนาขึ้นในวันที่ 8 มีนาคม พ.ศ. 2424 (ขณะนั้นนับวันที่ 1 เมษายน เป็นวันขึ้นปีใหม่ เมื่อนับอย่างสากลถือเป็น พ.ศ. 2425) โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย"
---
# xlm-roberta-base-finetune-qa
Finetuning `xlm-roberta-base` with the training sets of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (we removed examples whose cosine similarity with validation and test examples exceeds 0.8; contexts of the latter two datasets are trimmed to around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`.
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Train with:
```
export WANDB_PROJECT=wangchanberta-qa
export MODEL_NAME=xlm-roberta-base
python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--pad_on_right \
--fp16
```
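For inference, a minimal question-answering sketch is shown below, reusing the widget example from this card; the repository path is a placeholder, so substitute the actual model id.
```python
from transformers import pipeline

# Placeholder repository path -- replace with the actual model id.
qa = pipeline("question-answering", model="your-username/xlm-roberta-base-finetune-qa")
result = qa(
    question="สวนกุหลาบเป็นโรงเรียนอะไร",
    context="โรงเรียนสวนกุหลาบวิทยาลัย เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ",  # truncated from the widget example above
)
print(result["answer"], result["score"])
```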
|
DarshanDeshpande/marathi-distilbert | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"mr",
"dataset:Oscar Corpus, News, Stories",
"arxiv:1910.01108",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | # Finetuned `xlm-roberta-base` model on Thai sequence and token classification datasets
<br>
Finetuned XLM Roberta BASE model on Thai sequence and token classification datasets
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
We use the pretrained cross-lingual RoBERTa model as proposed by [[Conneau et al., 2020]](https://arxiv.org/abs/1911.02116). We download the pretrained PyTorch model via HuggingFace's Model Hub (https://huggingface.co/xlm-roberta-base)
<br>
## Intended uses & limitations
<br>
You can use the finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
An example notebook demonstrating how to use the finetuned models for inference can be found in this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko), and a minimal sketch is shown below.
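The sketch covers the multiclass case; the repository path is a placeholder for one of the finetuned checkpoints, so substitute the actual model id.
```python
from transformers import pipeline

# Placeholder repository path -- replace with the actual finetuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="your-username/xlm-roberta-base-finetuned-wisesight_sentiment",
)
# Returns one of the four wisesight_sentiment labels: positive, neutral, negative, question.
print(classifier("ตัวอย่างข้อความ"))
```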
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
DataikuNLP/average_word_embeddings_glove.6B.300d | [
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"license:apache-2.0"
] | sentence-similarity | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- conversational
---
# Michael Scott DialoGPT Model |
DataikuNLP/camembert-base | [
"pytorch",
"tf",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"arxiv:1911.03894",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"CamembertForMaskedLM"
],
"model_type": "camembert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Davlan/byt5-base-eng-yor-mt | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53-AL-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-AL-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5358
- Wer: 0.5443
## Model description
More information needed
## Intended uses & limitations
More information needed
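As a reference, a minimal transcription sketch is shown below. The repository path is a placeholder, and the snippet assumes 16 kHz mono audio after resampling.
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Placeholder repository path -- replace with the actual model id.
model_id = "your-username/wav2vec2-large-xlsr-53-AL-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load an audio file and resample to the 16 kHz rate the model expects.
speech, sampling_rate = torchaudio.load("sample.wav")
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech)[0]

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```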
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9391 | 0.4 | 400 | 2.0722 | 0.9249 |
| 0.8775 | 0.8 | 800 | 1.7171 | 0.6778 |
| 0.665 | 1.2 | 1200 | 1.7250 | 0.6235 |
| 0.6135 | 1.6 | 1600 | 1.4021 | 0.5847 |
| 0.5795 | 2.0 | 2000 | 1.6191 | 0.5696 |
| 0.5031 | 2.4 | 2400 | 1.6767 | 0.5586 |
| 0.4933 | 2.8 | 2800 | 1.5358 | 0.5443 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Davlan/byt5-base-yor-eng-mt | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53-AL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-AL
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2712
- Wer: 0.6940
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.073 | 8.0 | 200 | 1.0990 | 0.7002 |
| 0.0561 | 16.0 | 400 | 1.1455 | 0.6805 |
| 0.0378 | 24.0 | 600 | 1.2712 | 0.6940 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Davlan/distilbert-base-multilingual-cased-masakhaner | [
"pytorch",
"tf",
"distilbert",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53-Total
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-Total
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2814
- Wer: 0.2260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.9157 | 0.2 | 400 | 2.8204 | 0.9707 |
| 0.9554 | 0.4 | 800 | 0.5295 | 0.5046 |
| 0.7585 | 0.6 | 1200 | 0.4007 | 0.3850 |
| 0.7288 | 0.8 | 1600 | 0.3632 | 0.3447 |
| 0.6792 | 1.0 | 2000 | 0.3433 | 0.3216 |
| 0.6085 | 1.2 | 2400 | 0.3254 | 0.2928 |
| 0.6225 | 1.4 | 2800 | 0.3161 | 0.2832 |
| 0.6183 | 1.6 | 3200 | 0.3111 | 0.2721 |
| 0.5947 | 1.8 | 3600 | 0.2969 | 0.2615 |
| 0.5953 | 2.0 | 4000 | 0.2912 | 0.2515 |
| 0.5358 | 2.2 | 4400 | 0.2920 | 0.2501 |
| 0.5535 | 2.4 | 4800 | 0.2939 | 0.2538 |
| 0.5408 | 2.6 | 5200 | 0.2854 | 0.2452 |
| 0.5272 | 2.8 | 5600 | 0.2816 | 0.2434 |
| 0.5248 | 3.0 | 6000 | 0.2755 | 0.2354 |
| 0.4923 | 3.2 | 6400 | 0.2795 | 0.2353 |
| 0.489 | 3.4 | 6800 | 0.2767 | 0.2330 |
| 0.4932 | 3.6 | 7200 | 0.2821 | 0.2335 |
| 0.4841 | 3.8 | 7600 | 0.2756 | 0.2349 |
| 0.4794 | 4.0 | 8000 | 0.2751 | 0.2265 |
| 0.444 | 4.2 | 8400 | 0.2809 | 0.2283 |
| 0.4533 | 4.4 | 8800 | 0.2804 | 0.2312 |
| 0.4563 | 4.6 | 9200 | 0.2830 | 0.2256 |
| 0.4498 | 4.8 | 9600 | 0.2819 | 0.2251 |
| 0.4532 | 5.0 | 10000 | 0.2814 | 0.2260 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Davlan/mt5_base_eng_yor_mt | [
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
language: "id"
widget:
- text: "dia orang yang baik ya bunds."
---
## How to use
```python
from transformers import pipeline, set_seed
path = "akahana/indonesia-emotion-roberta"
emotion = pipeline('text-classification',
model=path,device=0)
set_seed(42)
kalimat = "dia orang yang baik ya bunds."
preds = emotion(kalimat)
preds
[{'label': 'BAHAGIA', 'score': 0.8790940046310425}]
``` |
Davlan/xlm-roberta-base-finetuned-naija | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-tamil-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tamil-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8072
- Wer: 0.6531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
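A sketch of how the listed values map onto `TrainingArguments` is shown below; `output_dir` and any argument not listed above are assumptions, not values taken from the training run.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-tamil-colab",  # assumed output directory
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size: 32
    warmup_steps=500,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    fp16=True,  # "Native AMP" mixed precision
)
```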
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 11.0967 | 1.0 | 118 | 4.6437 | 1.0 |
| 3.4973 | 2.0 | 236 | 3.2588 | 1.0 |
| 3.1305 | 3.0 | 354 | 2.6566 | 1.0 |
| 1.2931 | 4.0 | 472 | 0.9156 | 0.9944 |
| 0.6851 | 5.0 | 590 | 0.7474 | 0.8598 |
| 0.525 | 6.0 | 708 | 0.6649 | 0.7995 |
| 0.4325 | 7.0 | 826 | 0.6740 | 0.7752 |
| 0.3766 | 8.0 | 944 | 0.6220 | 0.7628 |
| 0.3256 | 9.0 | 1062 | 0.6316 | 0.7322 |
| 0.2802 | 10.0 | 1180 | 0.6442 | 0.7305 |
| 0.2575 | 11.0 | 1298 | 0.6885 | 0.7280 |
| 0.2248 | 12.0 | 1416 | 0.6702 | 0.7197 |
| 0.2089 | 13.0 | 1534 | 0.6781 | 0.7173 |
| 0.1893 | 14.0 | 1652 | 0.6981 | 0.7049 |
| 0.1652 | 15.0 | 1770 | 0.7154 | 0.7436 |
| 0.1643 | 16.0 | 1888 | 0.6798 | 0.7023 |
| 0.1472 | 17.0 | 2006 | 0.7381 | 0.6947 |
| 0.1372 | 18.0 | 2124 | 0.7240 | 0.7065 |
| 0.1318 | 19.0 | 2242 | 0.7305 | 0.6714 |
| 0.1211 | 20.0 | 2360 | 0.7288 | 0.6597 |
| 0.1178 | 21.0 | 2478 | 0.7417 | 0.6699 |
| 0.1118 | 22.0 | 2596 | 0.7476 | 0.6753 |
| 0.1016 | 23.0 | 2714 | 0.7973 | 0.6647 |
| 0.0998 | 24.0 | 2832 | 0.8027 | 0.6633 |
| 0.0917 | 25.0 | 2950 | 0.8045 | 0.6680 |
| 0.0907 | 26.0 | 3068 | 0.7884 | 0.6565 |
| 0.0835 | 27.0 | 3186 | 0.8009 | 0.6622 |
| 0.0749 | 28.0 | 3304 | 0.8123 | 0.6536 |
| 0.0755 | 29.0 | 3422 | 0.8006 | 0.6555 |
| 0.074 | 30.0 | 3540 | 0.8072 | 0.6531 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Davlan/xlm-roberta-large-masakhaner | [
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,449 | null |
---
language: en
datasets:
- cuad
---
# Model Card for RoBERTa Large Model fine-tuned with CUAD dataset
This model is the fine-tuned version of "RoBERTa Large" using CUAD dataset
# Model Details
## Model Description
The [Contract Understanding Atticus Dataset (CUAD)](https://www.atticusprojectai.org/cuad), pronounced "kwad", is a dataset for legal contract review curated by the Atticus Project.
Contract review is a task about "finding needles in a haystack."
We find that Transformer models have nascent performance on CUAD, but that this performance is strongly influenced by model design and training dataset size. Despite some promising results, there is still substantial room for improvement. As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
- **Developed by:** TheAtticusProject
- **Shared by [Optional]:** HuggingFace
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** More information needed
- **Related Models:** RoBERTA
- **Parent Model:**RoBERTA Large
- **Resources for more information:**
- [GitHub Repo](https://github.com/TheAtticusProject/cuad)
- [Associated Paper](https://arxiv.org/abs/2103.06268)
# Uses
## Direct Use
Legal contract review
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
See [cuad dataset card](https://huggingface.co/datasets/cuad) for further details
## Training Procedure
More information needed
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
#### Extra Data
Researchers may be interested in several gigabytes of unlabeled contract pretraining data, which is available [here](https://drive.google.com/file/d/1of37X0hAhECQ3BN_004D8gm6V88tgZaB/view?usp=sharing).
### Factors
More information needed
### Metrics
More information needed
## Results
We [provide checkpoints](https://zenodo.org/record/4599830) for three of the best models fine-tuned on CUAD: RoBERTa-base (~100M parameters), RoBERTa-large (~300M parameters), and DeBERTa-xlarge (~900M parameters).
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
The HuggingFace [Transformers](https://huggingface.co/transformers) library. It was tested with Python 3.8, PyTorch 1.7, and Transformers 4.3/4.4.
# Citation
**BibTeX:**
```bibtex
@article{hendrycks2021cuad,
  title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
  author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
  journal={NeurIPS},
  year={2021}
}
```
# Glossary [optional]
More information needed
# More Information [optional]
For more details about CUAD and legal contract review, see the [Atticus Project website](https://www.atticusprojectai.org/cuad).
# Model Card Authors [optional]
TheAtticusProject
# Model Card Contact
[TheAtticusProject](https://www.atticusprojectai.org/), in collaboration with the Ezi Ozoani and the HuggingFace Team
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/roberta-large-cuad")
model = AutoModelForQuestionAnswering.from_pretrained("akdeniz27/roberta-large-cuad")
```
</details>
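A hedged usage sketch follows: CUAD frames contract review as extractive question answering, so the model can be queried with a clause-category question over contract text. The question wording and contract snippet below are illustrative only.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="akdeniz27/roberta-large-cuad",
    tokenizer="akdeniz27/roberta-large-cuad",
)
result = qa(
    question='Highlight the parts (if any) of this contract related to "Governing Law".',
    context="This Agreement shall be governed by the laws of the State of New York.",
)
print(result["answer"], result["score"])
```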
|
Davlan/xlm-roberta-large-ner-hrl | [
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,322 | null | ---
language: tr
widget:
- text: "Mustafa Kemal Atatürk 19 Mayıs 1919'da Samsun'a çıktı."
---
# Turkish Named Entity Recognition (NER) Model
This model is the fine-tuned version of "xlm-roberta-base"
(a multilingual version of RoBERTa)
using a reviewed version of a well-known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "xlm-roberta-base"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 2
weight_decay = 0.01
```
# How to use:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model = AutoModelForTokenClassification.from_pretrained("akdeniz27/xlm-roberta-base-turkish-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/xlm-roberta-base-turkish-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
ner("<your text here>")
```
Please refer to "https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html" for entity grouping with the `aggregation_strategy` parameter.
# Reference test results:
* accuracy: 0.9919343118732742
* f1: 0.9492100796448622
* precision: 0.9407349896480332
* recall: 0.9578392621870883
|
Declan/CNN_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: "ar"
tags:
- text-generation
datasets:
- Arabic poetry from several eras
---
# GPT2-Small-Arabic-Poetry
## Model description
Model fine-tuned on an Arabic poetry dataset, based on gpt2-small-arabic.
## Intended uses & limitations
#### How to use
An example is provided in this [colab notebook](https://colab.research.google.com/drive/1mRl7c-5v-Klx27EEAEOAbrfkustL4g7a?usp=sharing).
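For a quick start without the notebook, a minimal generation sketch is below. The repository path is an assumption (the base model lives under the same namespace); verify it before use.
```python
from transformers import pipeline, set_seed

# Assumed repository path -- verify before use.
generator = pipeline("text-generation", model="akhooli/gpt2-small-arabic-poetry")
set_seed(42)
print(generator("يا ليل", max_length=64, num_return_sequences=1))
```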
#### Limitations and bias
Both the GPT2-small-arabic (trained on Arabic Wikipedia) and this model have several limitations in terms of coverage and training performance.
Use them as demonstrations or proof of concepts but not as production code.
## Training data
This pretrained model used the [Arabic Poetry dataset](https://www.kaggle.com/ahmedabelal/arabic-poetry) from 9 different eras with a total of around 40k poems.
The model was fine-tuned on this dataset starting from the [gpt2-small-arabic](https://huggingface.co/akhooli/gpt2-small-arabic) transformer model.
## Training procedure
Training was done using [Simple Transformers](https://github.com/ThilinaRajapakse/simpletransformers) library on Kaggle, using free GPU.
## Eval results
Final perplexity reached was 76.3 (loss: 4.33).
### BibTeX entry and citation info
```bibtex
@inproceedings{Abed Khooli,
year={2020}
}
```
|
Declan/CNN_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | "2020-09-06T15:15:36Z" | ---
tags:
- translation
language:
- ar
- en
license: mit
---
### mbart-large-ar-en
This is mbart-large-cc25, finetuned on a subset of the OPUS corpus for ar_en.
Usage: see [example notebook](https://colab.research.google.com/drive/1I6RFOWMaTpPBX7saJYjnSTddW0TD6H1t?usp=sharing)
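For reference, a minimal translation sketch is shown below. The repository path is a placeholder, and the mBART-CC25 language codes (`ar_AR`, `en_XX`) are assumed from the base model.
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

# Placeholder repository path -- replace with the actual finetuned checkpoint.
model_id = "your-username/mbart-large-ar-en"
tokenizer = MBartTokenizer.from_pretrained(model_id, src_lang="ar_AR")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("مرحبا بالعالم", return_tensors="pt")
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"],  # force English output
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```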
Note: the model was trained on a limited dataset and is not fully trained (do not use in production).
Other models by me: [Abed Khooli](https://huggingface.co/akhooli)
|
Declan/FoxNews_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: conserv_fulltext_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# conserv_fulltext_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Declan/FoxNews_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-gpt2
Changes: use old format for `pytorch_model.bin`.
|
Declan/HuffPost_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-mbart
Changes: use old format for `pytorch_model.bin`.
|
Declan/HuffPost_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-mpnet
Changes: use old format for `pytorch_model.bin`.
|
Declan/HuffPost_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-xlnet
Changes: use old format for `pytorch_model.bin`.
|
Declan/HuffPost_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model_index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
metric:
name: Matthews Correlation
type: matthews_correlation
value: 0.6290322580645161
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0475
- Matthews Correlation: 0.6290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 16 | 1.3863 | 0.0 |
| No log | 2.0 | 32 | 1.2695 | 0.4503 |
| No log | 3.0 | 48 | 1.1563 | 0.6110 |
| No log | 4.0 | 64 | 1.0757 | 0.6290 |
| No log | 5.0 | 80 | 1.0475 | 0.6290 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
Declan/NPR_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-cloud-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0812
- Precision: 0.8975
- Recall: 0.9080
- F1: 0.9027
- Accuracy: 0.9703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 166 | 0.1326 | 0.7990 | 0.8043 | 0.8017 | 0.9338 |
| No log | 2.0 | 332 | 0.0925 | 0.8770 | 0.8946 | 0.8858 | 0.9618 |
| No log | 3.0 | 498 | 0.0812 | 0.8975 | 0.9080 | 0.9027 | 0.9703 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Declan/NPR_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-cloud1-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud1-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0074
- Precision: 0.9714
- Recall: 0.9855
- F1: 0.9784
- Accuracy: 0.9972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 166 | 0.0160 | 0.9653 | 0.9420 | 0.9535 | 0.9945 |
| No log | 2.0 | 332 | 0.0089 | 0.9623 | 0.9855 | 0.9737 | 0.9965 |
| No log | 3.0 | 498 | 0.0074 | 0.9714 | 0.9855 | 0.9784 | 0.9972 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Declan/NPR_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-cloud2-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud2-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8866
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 162 | 0.7804 | 0.0 | 0.0 | 0.0 | 0.8447 |
| No log | 2.0 | 324 | 0.8303 | 0.0 | 0.0 | 0.0 | 0.8465 |
| No log | 3.0 | 486 | 0.8866 | 0.0 | 0.0 | 0.0 | 0.8453 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Declan/NPR_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9988
- Precision: 0.3
- Recall: 0.6
- F1: 0.4
- Accuracy: 0.7870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 84 | 0.8399 | 0.2105 | 0.4 | 0.2759 | 0.75 |
| No log | 2.0 | 168 | 0.9664 | 0.3 | 0.6 | 0.4 | 0.7870 |
| No log | 3.0 | 252 | 0.9988 | 0.3 | 0.6 | 0.4 | 0.7870 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
Declan/WallStreetJournal_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- alecmullen/autonlp-data-group-classification
co2_eq_emissions: 0.4362732160754736
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 441411446
- CO2 Emissions (in grams): 0.4362732160754736
## Validation Metrics
- Loss: 0.7598486542701721
- Accuracy: 0.8222222222222222
- Macro F1: 0.2912091747693842
- Micro F1: 0.8222222222222222
- Weighted F1: 0.7707160863181806
- Macro Precision: 0.29631463146314635
- Micro Precision: 0.8222222222222222
- Weighted Precision: 0.7341339689524508
- Macro Recall: 0.30174603174603176
- Micro Recall: 0.8222222222222222
- Weighted Recall: 0.8222222222222222
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alecmullen/autonlp-group-classification-441411446
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classifier and its tokenizer
# (use_auth_token=True passes your Hugging Face access token)
model = AutoModelForSequenceClassification.from_pretrained("alecmullen/autonlp-group-classification-441411446", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alecmullen/autonlp-group-classification-441411446", use_auth_token=True)

# Tokenize a sample input and run a forward pass to get classification logits
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
DeltaHub/adapter_t5-3b_mrpc | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: t5-small-finetuned300-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned300-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
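A minimal inference sketch for this checkpoint (the repo id `<user>/t5-small-finetuned300-en-to-de` is a placeholder, and the task prefix is assumed from the base t5-small conventions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id: substitute the actual namespace of this checkpoint
tokenizer = AutoTokenizer.from_pretrained("<user>/t5-small-finetuned300-en-to-de")
model = AutoModelForSeq2SeqLM.from_pretrained("<user>/t5-small-finetuned300-en-to-de")

# T5 checkpoints expect a task prefix on the input text
inputs = tokenizer("translate English to German: The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```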
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 136 | 1.1454 | 14.2319 | 17.8329 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
DeltaHub/lora_t5-base_mrpc | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: t5-small-finetuned8-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned8-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 136 | 3.6717 | 3.9127 | 4.0207 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Deniskin/essays_small_2000 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- ru
multilinguality:
- monolingual
widget:
- text: "Жалобы на боль внизу <mask> после приёма пищи."
example_title: "pain_example"
- text: "Пациентка наблюдалась у <mask> по поводу грибкового поражения кожи."
example_title: "spec_example"
- text: "Появился зуд тела, <mask> веса, потливость, проводил контроль сахара крови."
example_title: "weight_example"
---
Paper: https://arxiv.org/abs/2204.03951
Code: https://github.com/alexyalunin/RuBioRoBERTa
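A minimal fill-mask sketch (the repo id `alexyalunin/RuBioRoBERTa` is inferred from the GitHub link above and may differ from the actual checkpoint name):
```python
from transformers import pipeline

# Repo id assumed from the GitHub link; the prompt is the first widget example
fill_mask = pipeline("fill-mask", model="alexyalunin/RuBioRoBERTa")
print(fill_mask("Жалобы на боль внизу <mask> после приёма пищи."))
```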
### Citation
```bibtex
@misc{alex2022rubioroberta,
title={RuBioRoBERTa: a pre-trained biomedical language model for Russian language biomedical text mining},
author={Alexander Yalunin and Alexander Nesterov and Dmitriy Umerenkov},
year={2022},
eprint={2204.03951},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Deniskin/gpt3_medium | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 52 | null | ---
language:
- ar
- dz
tags:
- pytorch
- bert
- multilingual
- ar
- dz
license: apache-2.0
widget:
- text: " أنا من الجزائر من ولاية [MASK] "
- text: "rabi [MASK] khouya sami"
- text: " ربي [MASK] خويا لعزيز"
- text: "tahya el [MASK]."
- text: "rouhi ya dzayer [MASK]"
inference: true
---
<img src="https://raw.githubusercontent.com/alger-ia/dziribert/main/dziribert_drawing.png" alt="drawing" width="25%" height="25%" align="right"/>
# DziriBERT
DziriBERT is the first Transformer-based language model pre-trained specifically for the Algerian dialect. It handles Algerian text written in both Arabic and Latin characters, and it sets new state-of-the-art results on Algerian text classification datasets even though it was pre-trained on much less data (~1 million tweets).
For more information, please see our paper: https://arxiv.org/pdf/2109.12346.pdf.
## How to use
```python
from transformers import BertTokenizer, BertForMaskedLM

# Load the DziriBERT tokenizer and masked-LM head from the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained("alger-ia/dziribert")
model = BertForMaskedLM.from_pretrained("alger-ia/dziribert")
```
You can find a fine-tuning script in our Github repo: https://github.com/alger-ia/dziribert
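As a quick sanity check, the checkpoint can also be queried through the `fill-mask` pipeline (an illustrative sketch using one of the widget examples above):
```python
from transformers import pipeline

# Query DziriBERT with one of the widget examples from this card
fill_mask = pipeline("fill-mask", model="alger-ia/dziribert")
print(fill_mask("rouhi ya dzayer [MASK]"))
```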
## Limitations
The pre-training data used in this project comes from social media (Twitter), so the masked language modeling objective may predict offensive words in some situations. Modeling such words can be either an advantage (e.g., when training a hate-speech detection model) or a disadvantage (e.g., when generating answers that are sent directly to the end user). Depending on your downstream task, you may need to filter out such words, especially when returning automatically generated text to the end user.
### How to cite
```bibtex
@article{dziribert,
title={DziriBERT: a Pre-trained Language Model for the Algerian Dialect},
author={Abdaoui, Amine and Berrimi, Mohamed and Oussalah, Mourad and Moussaoui, Abdelouahab},
journal={arXiv preprint arXiv:2109.12346},
year={2021}
}
```
## Contact
Please contact [email protected] with any questions, feedback, or requests.
|
DeskDown/MarianMixFT_en-vi | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2899
- Precision: 0.3170
- Recall: 0.5261
- F1: 0.3956
- Accuracy: 0.8799
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.2912 | 0.2752 | 0.4444 | 0.3400 | 0.8730 |
| No log | 2.0 | 60 | 0.2772 | 0.4005 | 0.4589 | 0.4277 | 0.8911 |
| No log | 3.0 | 90 | 0.2267 | 0.3642 | 0.5281 | 0.4311 | 0.9043 |
| No log | 4.0 | 120 | 0.2129 | 0.3617 | 0.5455 | 0.4350 | 0.9140 |
| No log | 5.0 | 150 | 0.2399 | 0.3797 | 0.5556 | 0.4511 | 0.9114 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Devid/DialoGPT-small-Miku | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1059
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.1103 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 2.0 | 30 | 0.0842 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 3.0 | 45 | 0.0767 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 4.0 | 60 | 0.0754 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 5.0 | 75 | 0.0735 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Devmapall/paraphrase-quora | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1801
- Precision: 0.6153
- Recall: 0.7301
- F1: 0.6678
- Accuracy: 0.9346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.2746 | 0.4586 | 0.5922 | 0.5169 | 0.9031 |
| No log | 2.0 | 22 | 0.2223 | 0.5233 | 0.6181 | 0.5668 | 0.9148 |
| No log | 3.0 | 33 | 0.2162 | 0.5335 | 0.6699 | 0.5940 | 0.9274 |
| No log | 4.0 | 44 | 0.2053 | 0.5989 | 0.7055 | 0.6478 | 0.9237 |
| No log | 5.0 | 55 | 0.2123 | 0.5671 | 0.7249 | 0.6364 | 0.9267 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Devrim/prism-default | [
"license:mit"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6542
- Precision: 0.0092
- Recall: 0.0403
- F1: 0.0150
- Accuracy: 0.7291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.5856 | 0.0012 | 0.0125 | 0.0022 | 0.6950 |
| No log | 2.0 | 20 | 0.5933 | 0.0 | 0.0 | 0.0 | 0.7282 |
| No log | 3.0 | 30 | 0.5729 | 0.0051 | 0.025 | 0.0085 | 0.7155 |
| No log | 4.0 | 40 | 0.6178 | 0.0029 | 0.0125 | 0.0047 | 0.7143 |
| No log | 5.0 | 50 | 0.6707 | 0.0110 | 0.0375 | 0.0170 | 0.7178 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
DevsIA/Devs_IA | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3343
- Precision: 0.1651
- Recall: 0.3039
- F1: 0.2140
- Accuracy: 0.8493
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4801 | 0.0352 | 0.0591 | 0.0441 | 0.7521 |
| No log | 2.0 | 60 | 0.3795 | 0.0355 | 0.0795 | 0.0491 | 0.8020 |
| No log | 3.0 | 90 | 0.3359 | 0.0591 | 0.1294 | 0.0812 | 0.8334 |
| No log | 4.0 | 120 | 0.3205 | 0.0785 | 0.1534 | 0.1039 | 0.8486 |
| No log | 5.0 | 150 | 0.3144 | 0.0853 | 0.1571 | 0.1105 | 0.8516 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
DevsIA/imagenes | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1206
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.1222 | 0.12 | 0.0139 | 0.0249 | 0.9736 |
| No log | 2.0 | 30 | 0.1159 | 0.12 | 0.0139 | 0.0249 | 0.9736 |
| No log | 3.0 | 45 | 0.1082 | 0.12 | 0.0139 | 0.0249 | 0.9736 |
| No log | 4.0 | 60 | 0.1042 | 0.12 | 0.0139 | 0.0249 | 0.9736 |
| No log | 5.0 | 75 | 0.1029 | 0.12 | 0.0139 | 0.0249 | 0.9736 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
DewiBrynJones/wav2vec2-large-xlsr-welsh | [
"cy",
"dataset:common_voice",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3097
- Precision: 0.2769
- Recall: 0.4391
- F1: 0.3396
- Accuracy: 0.8878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.4573 | 0.0094 | 0.0027 | 0.0042 | 0.7702 |
| No log | 2.0 | 22 | 0.3660 | 0.1706 | 0.3253 | 0.2239 | 0.8516 |
| No log | 3.0 | 33 | 0.3096 | 0.2339 | 0.408 | 0.2974 | 0.8827 |
| No log | 4.0 | 44 | 0.2868 | 0.2963 | 0.4693 | 0.3633 | 0.8928 |
| No log | 5.0 | 55 | 0.2798 | 0.3141 | 0.48 | 0.3797 | 0.8960 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
DheerajPranav/Dialo-GPT-Rick-bot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_40_24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_40_24
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5794
- Precision: 0.0094
- Recall: 0.0147
- F1: 0.0115
- Accuracy: 0.7156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.6319 | 0.08 | 0.0312 | 0.0449 | 0.6753 |
| No log | 2.0 | 20 | 0.6265 | 0.0364 | 0.0312 | 0.0336 | 0.6764 |
| No log | 3.0 | 30 | 0.6216 | 0.0351 | 0.0312 | 0.0331 | 0.6762 |
| No log | 4.0 | 40 | 0.6193 | 0.0274 | 0.0312 | 0.0292 | 0.6759 |
| No log | 5.0 | 50 | 0.6183 | 0.0222 | 0.0312 | 0.0260 | 0.6773 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Dhito/am | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2022-03-01T14:36:14Z" | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2876
- Precision: 0.2345
- Recall: 0.4281
- F1: 0.3030
- Accuracy: 0.8728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3907 | 0.0433 | 0.0824 | 0.0568 | 0.7626 |
| No log | 2.0 | 60 | 0.3046 | 0.2302 | 0.4095 | 0.2947 | 0.8598 |
| No log | 3.0 | 90 | 0.2945 | 0.2084 | 0.4095 | 0.2762 | 0.8668 |
| No log | 4.0 | 120 | 0.2687 | 0.2847 | 0.4607 | 0.3519 | 0.8761 |
| No log | 5.0 | 150 | 0.2643 | 0.2779 | 0.4444 | 0.3420 | 0.8788 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Dhruva/Interstellar | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2663
- Precision: 0.3644
- Recall: 0.4985
- F1: 0.4210
- Accuracy: 0.8997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.5174 | 0.0120 | 0.0061 | 0.0081 | 0.6997 |
| No log | 2.0 | 22 | 0.4029 | 0.1145 | 0.3098 | 0.1672 | 0.8265 |
| No log | 3.0 | 33 | 0.3604 | 0.2539 | 0.4448 | 0.3233 | 0.8632 |
| No log | 4.0 | 44 | 0.3449 | 0.2992 | 0.4755 | 0.3673 | 0.8704 |
| No log | 5.0 | 55 | 0.3403 | 0.3340 | 0.4816 | 0.3945 | 0.8760 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Dibyaranjan/nl_image_search | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-15_30_39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-15_30_39
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6169
- Precision: 0.0031
- Recall: 0.0357
- F1: 0.0057
- Accuracy: 0.6464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.6339 | 0.0116 | 0.0120 | 0.0118 | 0.6662 |
| No log | 2.0 | 20 | 0.6182 | 0.0064 | 0.0120 | 0.0084 | 0.6688 |
| No log | 3.0 | 30 | 0.6139 | 0.0029 | 0.0241 | 0.0052 | 0.6659 |
| No log | 4.0 | 40 | 0.6172 | 0.0020 | 0.0241 | 0.0037 | 0.6622 |
| No log | 5.0 | 50 | 0.6165 | 0.0019 | 0.0241 | 0.0036 | 0.6599 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
DicoTiar/wisdomfiy | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2811
- Precision: 0.3231
- Recall: 0.5151
- F1: 0.3971
- Accuracy: 0.8913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.2881 | 0.2089 | 0.3621 | 0.2650 | 0.8715 |
| No log | 2.0 | 60 | 0.2500 | 0.2619 | 0.3842 | 0.3115 | 0.8845 |
| No log | 3.0 | 90 | 0.2571 | 0.2327 | 0.4338 | 0.3030 | 0.8809 |
| No log | 4.0 | 120 | 0.2479 | 0.3051 | 0.4761 | 0.3719 | 0.8949 |
| No log | 5.0 | 150 | 0.2783 | 0.3287 | 0.4761 | 0.3889 | 0.8936 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
DiegoBalam12/institute_classification | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1832
- Precision: 0.6138
- Recall: 0.7169
- F1: 0.6613
- Accuracy: 0.9332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.2740 | 0.4554 | 0.5460 | 0.4966 | 0.8943 |
| No log | 2.0 | 22 | 0.2189 | 0.5470 | 0.6558 | 0.5965 | 0.9193 |
| No log | 3.0 | 33 | 0.2039 | 0.5256 | 0.6706 | 0.5893 | 0.9198 |
| No log | 4.0 | 44 | 0.2097 | 0.5401 | 0.6795 | 0.6018 | 0.9237 |
| No log | 5.0 | 55 | 0.2255 | 0.6117 | 0.6825 | 0.6452 | 0.9223 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Digakive/Hsgshs | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_16_57
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_16_57
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5923
- Precision: 0.0039
- Recall: 0.0212
- F1: 0.0066
- Accuracy: 0.7084
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.6673 | 0.0476 | 0.0128 | 0.0202 | 0.6652 |
| No log | 2.0 | 20 | 0.6211 | 0.0 | 0.0 | 0.0 | 0.6707 |
| No log | 3.0 | 30 | 0.6880 | 0.0038 | 0.0128 | 0.0058 | 0.6703 |
| No log | 4.0 | 40 | 0.6566 | 0.0030 | 0.0128 | 0.0049 | 0.6690 |
| No log | 5.0 | 50 | 0.6036 | 0.0 | 0.0 | 0.0 | 0.6868 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Dilmk2/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_1e-05_all_01_03_2022-15_14_04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_1e-05_all_01_03_2022-15_14_04
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3121
- Precision: 0.1204
- Recall: 0.2430
- F1: 0.1611
- Accuracy: 0.8538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4480 | 0.0209 | 0.0223 | 0.0216 | 0.7794 |
| No log | 2.0 | 60 | 0.3521 | 0.0559 | 0.1218 | 0.0767 | 0.8267 |
| No log | 3.0 | 90 | 0.3177 | 0.1208 | 0.2504 | 0.1629 | 0.8487 |
| No log | 4.0 | 120 | 0.3009 | 0.1296 | 0.2607 | 0.1731 | 0.8602 |
| No log | 5.0 | 150 | 0.2988 | 0.1393 | 0.2693 | 0.1836 | 0.8599 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
DimaOrekhov/transformer-method-name | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_1e-05_essays_01_03_2022-15_11_44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_1e-05_essays_01_03_2022-15_11_44
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3082
- Precision: 0.2796
- Recall: 0.4373
- F1: 0.3411
- Accuracy: 0.8887
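Starting token classification from a sequence-classification checkpoint such as `distilbert-base-uncased-finetuned-sst-2-english` means the classification head is replaced; a minimal sketch, where the label count of 3 is an assumption for illustration:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# ignore_mismatched_sizes lets a fresh token-classification head be initialized
# even though the checkpoint ships a 2-label sentence-classification head.
model = AutoModelForTokenClassification.from_pretrained(
    checkpoint, num_labels=3, ignore_mismatched_sizes=True
)
```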
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.5018 | 0.0192 | 0.0060 | 0.0091 | 0.7370 |
| No log | 2.0 | 22 | 0.4066 | 0.1541 | 0.2814 | 0.1992 | 0.8340 |
| No log | 3.0 | 33 | 0.3525 | 0.1768 | 0.3234 | 0.2286 | 0.8612 |
| No log | 4.0 | 44 | 0.3250 | 0.2171 | 0.3503 | 0.2680 | 0.8766 |
| No log | 5.0 | 55 | 0.3160 | 0.2353 | 0.3713 | 0.2880 | 0.8801 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Dimedrolza/DialoGPT-small-cyberpunk | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_10_39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_10_39
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5867
- Precision: 0.0119
- Recall: 0.0116
- F1: 0.0118
- Accuracy: 0.6976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.5730 | 0.0952 | 0.0270 | 0.0421 | 0.7381 |
| No log | 2.0 | 20 | 0.5755 | 0.0213 | 0.0135 | 0.0165 | 0.7388 |
| No log | 3.0 | 30 | 0.5635 | 0.0196 | 0.0135 | 0.0160 | 0.7416 |
| No log | 4.0 | 40 | 0.5549 | 0.0392 | 0.0270 | 0.0320 | 0.7429 |
| No log | 5.0 | 50 | 0.5530 | 0.0357 | 0.0270 | 0.0308 | 0.7438 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Dmitry12/sber | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert_token_itr0_1e-05_all_01_03_2022-14_33_33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_token_itr0_1e-05_all_01_03_2022-14_33_33
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3255
- Precision: 0.1412
- Recall: 0.2500
- F1: 0.1805
- Accuracy: 0.8491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4549 | 0.0228 | 0.0351 | 0.0276 | 0.7734 |
| No log | 2.0 | 60 | 0.3577 | 0.0814 | 0.1260 | 0.0989 | 0.8355 |
| No log | 3.0 | 90 | 0.3116 | 0.1534 | 0.2648 | 0.1943 | 0.8611 |
| No log | 4.0 | 120 | 0.2975 | 0.1792 | 0.2967 | 0.2234 | 0.8690 |
| No log | 5.0 | 150 | 0.2935 | 0.1873 | 0.2998 | 0.2305 | 0.8715 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
DongHyoungLee/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | "2022-02-27T18:11:24Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4064
- Accuracy: 0.8289
- F1: 0.8901
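Accuracy and a positive-class F1 of this kind are commonly produced by a `compute_metrics` callback passed to the Trainer; a minimal sketch, assuming scikit-learn and binary labels:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1_score(labels, predictions),
    }
```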
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4163 | 0.8085 | 0.8780 |
| No log | 2.0 | 390 | 0.4098 | 0.8268 | 0.8878 |
| 0.312 | 3.0 | 585 | 0.5892 | 0.8244 | 0.8861 |
| 0.312 | 4.0 | 780 | 0.7580 | 0.8232 | 0.8845 |
| 0.312 | 5.0 | 975 | 0.9028 | 0.8183 | 0.8824 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Donghyun/L2_BERT | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0926
- Accuracy: 0.9772
- F1: 0.9883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 104 | 0.0539 | 0.9885 | 0.9942 |
| No log | 2.0 | 208 | 0.0282 | 0.9885 | 0.9942 |
| No log | 3.0 | 312 | 0.0317 | 0.9914 | 0.9956 |
| No log | 4.0 | 416 | 0.0462 | 0.9885 | 0.9942 |
| 0.0409 | 5.0 | 520 | 0.0517 | 0.9885 | 0.9942 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Dongjae/mrc2reader | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3358
- Accuracy: 0.8688
- F1: 0.9225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
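`lr_scheduler_type: linear` means the learning rate decays linearly from 0.0002 toward zero over the whole run. A standalone sketch with `get_linear_schedule_with_warmup`; the 81 steps per epoch come from the results table below, and the zero warmup is an assumption:

```python
import torch
from transformers import get_linear_schedule_with_warmup

param = torch.nn.Parameter(torch.zeros(1))      # throwaway parameter to schedule
optimizer = torch.optim.AdamW([param], lr=2e-4, betas=(0.9, 0.999), eps=1e-8)

steps_per_epoch, epochs = 81, 5
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=steps_per_epoch * epochs
)

for _ in range(3):                              # the rate drops a little each step
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())                  # slightly below the initial 2e-4
```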
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 0.4116 | 0.8382 | 0.9027 |
| No log | 2.0 | 162 | 0.4360 | 0.8382 | 0.8952 |
| No log | 3.0 | 243 | 0.5719 | 0.8382 | 0.8995 |
| No log | 4.0 | 324 | 0.7251 | 0.8493 | 0.9021 |
| No log | 5.0 | 405 | 0.8384 | 0.8456 | 0.9019 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Dongmin/testmodel | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 11 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5777
- Accuracy: 0.6794
- F1: 0.5010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.6059 | 0.6300 | 0.4932 |
| No log | 2.0 | 96 | 0.6327 | 0.7050 | 0.5630 |
| No log | 3.0 | 144 | 0.7003 | 0.6950 | 0.5197 |
| No log | 4.0 | 192 | 0.9368 | 0.6900 | 0.4655 |
| No log | 5.0 | 240 | 1.1935 | 0.6850 | 0.4425 |
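Validation loss in this table climbs steadily after epoch 1 (0.6059 → 1.1935) while accuracy stays roughly flat, a typical overfitting pattern on a small split. One common mitigation, sketched here as an assumption rather than something this run used, is the Trainer's early-stopping callback:

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="out",                  # placeholder path
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
# Stop after two consecutive evaluations without improvement in eval loss.
callbacks = [EarlyStoppingCallback(early_stopping_patience=2)]
# trainer = Trainer(model=model, args=training_args, callbacks=callbacks, ...)
```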
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Waynehillsdev/Wayne_NLP_mT5 | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4787
- Accuracy: 0.8138
- F1: 0.8785
- Precision: 0.8489
- Recall: 0.9101
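The reported F1 is the harmonic mean of the precision and recall above, which can be checked directly:

```python
precision, recall = 0.8489, 0.9101
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.8784 — matches the reported 0.8785 up to rounding of the inputs
```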
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.4335 | 0.7732 | 0.8533 | 0.8209 | 0.8883 |
| 0.5141 | 2.0 | 780 | 0.4196 | 0.8037 | 0.8721 | 0.8446 | 0.9015 |
| 0.3368 | 3.0 | 1170 | 0.4519 | 0.8098 | 0.8779 | 0.8386 | 0.9212 |
| 0.2677 | 4.0 | 1560 | 0.4787 | 0.8122 | 0.8785 | 0.8452 | 0.9146 |
| 0.2677 | 5.0 | 1950 | 0.4912 | 0.8146 | 0.8794 | 0.8510 | 0.9097 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Waynehillsdev/wav2vec2-base-timit-demo-colab | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6168
- Accuracy: 0.8286
- F1: 0.8887
- Precision: 0.8628
- Recall: 0.9162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.3890 | 0.8110 | 0.8749 | 0.8631 | 0.8871 |
| 0.4535 | 2.0 | 780 | 0.3921 | 0.8439 | 0.8984 | 0.8721 | 0.9264 |
| 0.266 | 3.0 | 1170 | 0.4454 | 0.8415 | 0.8947 | 0.8860 | 0.9034 |
| 0.16 | 4.0 | 1560 | 0.5610 | 0.8427 | 0.8957 | 0.8850 | 0.9067 |
| 0.16 | 5.0 | 1950 | 0.6180 | 0.8488 | 0.9010 | 0.8799 | 0.9231 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Doohae/p_encoder | [
"pytorch"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4345
- Accuracy: 0.8321
- F1: 0.8904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3922 | 0.8061 | 0.8747 |
| No log | 2.0 | 390 | 0.3764 | 0.8171 | 0.8837 |
| 0.4074 | 3.0 | 585 | 0.3873 | 0.8220 | 0.8843 |
| 0.4074 | 4.0 | 780 | 0.4361 | 0.8232 | 0.8854 |
| 0.4074 | 5.0 | 975 | 0.4555 | 0.8159 | 0.8793 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Doohae/roberta | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4917
- Accuracy: 0.8231
- F1: 0.8833
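A fine-tuned sentiment model like this one is typically queried through the `text-classification` pipeline once it is on the Hub; the sketch below uses the base checkpoint's id only so it runs as written — substitute the fine-tuned repository name:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # stand-in id
)
print(classifier("The acting was superb and the plot kept me hooked."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```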
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3883 | 0.8146 | 0.8833 |
| No log | 2.0 | 390 | 0.3607 | 0.8390 | 0.8964 |
| 0.4085 | 3.0 | 585 | 0.3812 | 0.8488 | 0.9042 |
| 0.4085 | 4.0 | 780 | 0.3977 | 0.8549 | 0.9077 |
| 0.4085 | 5.0 | 975 | 0.4233 | 0.8573 | 0.9092 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
albert-base-v1 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 38,156 | "2022-02-27T17:07:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
albert-base-v2 | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4,785,283 | "2022-02-26T03:09:07Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
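`Adam with betas=(0.9,0.999) and epsilon=1e-08` corresponds to the Trainer's default AdamW configuration; a close plain-PyTorch equivalent (the linear layer is a stand-in model):

```python
import torch

model = torch.nn.Linear(768, 2)  # stand-in for the fine-tuned classifier
optimizer = torch.optim.AdamW(
    model.parameters(), lr=2e-5, betas=(0.9, 0.999), eps=1e-8
)
```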
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
albert-large-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 687 | "2022-02-27T16:39:04Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
albert-large-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26,792 | "2022-02-27T17:56:36Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.7100 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.7150 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.7150 | 0.4000 |
| No log | 4.0 | 192 | 0.6009 | 0.7050 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7000 | 0.4000 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
albert-xlarge-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 341 | "2022-02-27T17:35:08Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
albert-xlarge-v2 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,973 | "2022-02-27T17:12:41Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|