pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0-18.3M) | metadata (stringlengths 2-1.07B) | id (stringlengths 5-122) | last_modified (null) | tags (listlengths 1-1.84k) | sha (null) | created_at (stringlengths 25-25)
---|---|---|---|---|---|---|---|---|
null | null | {} | banBitcoin/first | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | transformers | {} | bana513/opennmt-translator-en-hu | null | [
"transformers",
"pytorch",
"opennmt-translator6",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
TRIGGER WARNING
---------------
This model was created by training GPT2-medium on a custom dataset containing tens of thousands of blog posts about people's experiences living with mental illnesses. As such, the texts that this model generates may be triggering and/or NSFW. Please explore at your own discretion.
The compiled blog posts specifically concern six mental health conditions: depression, PTSD, C-PTSD, borderline personality disorder, bipolar disorder (non-specific), and dissociation. These are very serious illnesses, so please treat this model with respect, and I encourage everyone to learn more about these conditions.
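For quick experimentation, the model can be loaded with the standard `text-generation` pipeline; the snippet below is a minimal sketch (the sampling settings are illustrative, and the prompt mirrors one of the widget examples):
```python
# Minimal sketch: generate text with the text-generation pipeline.
# Sampling settings are illustrative; the prompt mirrors one of the widget examples.
from transformers import pipeline

generator = pipeline("text-generation", model="banalyst/wonder-egg")
outputs = generator("I feel ", max_length=80, do_sample=True, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```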
Thank you, and enjoy! | {"language": "en", "widget": [{"text": "I feel "}, {"text": "I want "}, {"text": "I believe "}]} | banalyst/wonder-egg | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | bananano/distilbert-base-uncased-finetuned-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | transformers | {} | bandainamco-mirai/distilbert-base-japanese | null | [
"transformers",
"pytorch",
"distilbert",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# Rick Sanchez DialoGPT Model | {"tags": ["conversational"]} | banden/DialoGPT-medium-RickBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# Loki DialoGPT Model | {"tags": ["conversational"]} | banden/DialoGPT-small-LokiBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
## Overview
This model was trained with data from https://registry.opendata.aws/helpful-sentences-from-reviews/ to predict how "helpful" a review is.
The model was fine-tuned from the `distilbert-base-uncased` model.
### Labels
LABEL_0 - Not helpful
LABEL_1 - Helpful
### How to use
The following code shows how to make a prediction with this model:
```python
from transformers import (
AutoTokenizer,
AutoModelForSequenceClassification,
TextClassificationPipeline,
)
tokenizer = AutoTokenizer.from_pretrained("banjtheman/distilbert-base-uncased-helpful-amazon")
model = AutoModelForSequenceClassification.from_pretrained(
"banjtheman/distilbert-base-uncased-helpful-amazon"
)
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer)
result = pipe("This was a Christmas gift for my grandson.")
print(result)
#[{'label': 'LABEL_0', 'score': 0.998775064945221}]
# This is NOT A HELPFUL comment
``` | {"license": "apache-2.0"} | banjtheman/distilbert-base-uncased-helpful-amazon | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers | Model based on [ruGPT-3](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) for generating songs.
Tuned on lyrics collected from [genius](https://genius.com/).
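A minimal generation sketch: the temperature/top_k/top_p values mirror the widget's inference parameters, the prompt is the first widget example, and the shortened `max_length` is illustrative.
```python
# Minimal sketch: generate lyrics with the text-generation pipeline.
# temperature/top_k/top_p mirror the widget's inference parameters; max_length is shortened for the example.
from transformers import pipeline

generator = pipeline("text-generation", model="bankholdup/rugpt3_song_writer")
result = generator("Батя возвращается трезвый, в руке буханка",
                   do_sample=True, temperature=0.9, top_k=50, top_p=0.95, max_length=200)
print(result[0]["generated_text"])
```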
Examples of artists used:
* [Oxxxymiron](https://genius.com/artists/Oxxxymiron)
* [Моргенштерн](https://genius.com/artists/Morgenshtern)
* [ЛСП](https://genius.com/artists/Lsp)
* [Гражданская оборона](https://genius.com/artists/Civil-defense)
* [Король и Шут](https://genius.com/artists/The-king-and-the-jester)
* etc | {"language": ["ru"], "tags": ["PyTorch", "Transformers"], "widget": [{"text": "\u0411\u0430\u0442\u044f \u0432\u043e\u0437\u0432\u0440\u0430\u0449\u0430\u0435\u0442\u0441\u044f \u0442\u0440\u0435\u0437\u0432\u044b\u0439, \u0432 \u0440\u0443\u043a\u0435 \u0431\u0443\u0445\u0430\u043d\u043a\u0430", "example_title": "Example 1"}, {"text": "\u041a\u0430\u043a \u0434\u0435\u043b\u0430? \u041a\u0430\u043a \u0434\u0435\u043b\u0430? \u042d\u0442\u043e \u043d\u043e\u0432\u044b\u0439 \u043a\u0430\u0434\u0438\u043b\u043b\u0430\u043a", "example_title": "Example 2"}, {"text": "4:20 \u043d\u0430 \u0447\u0430\u0441\u0430\u0445 \u0438 \u044f \u0434\u0440\u043e\u0447\u0443 \u043d\u0430 \u0442\u0432\u043e\u0451 \u0444\u043e\u0442\u043e", "example_title": "Example 3"}], "inference": {"parameters": {"temperature": 0.9, "k": 50, "p": 0.95, "length": 1500}}} | bankholdup/rugpt3_song_writer | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"PyTorch",
"Transformers",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE CoLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7523
- Matthews Correlation: 0.5259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
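For reference, these values can be expressed as `TrainingArguments`; the sketch below is hedged (the output directory is illustrative, and Adam's defaults already match the betas and epsilon listed above):
```python
# Hedged sketch: the hyperparameters above expressed as TrainingArguments.
# The output directory is illustrative; Adam's defaults match betas=(0.9, 0.999) and eps=1e-8.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```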
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.533 | 1.0 | 535 | 0.5318 | 0.3887 |
| 0.3562 | 2.0 | 1070 | 0.5145 | 0.5100 |
| 0.2429 | 3.0 | 1605 | 0.6558 | 0.4888 |
| 0.1831 | 4.0 | 2140 | 0.7523 | 0.5259 |
| 0.1352 | 5.0 | 2675 | 0.8406 | 0.5182 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5258663312307151, "name": "Matthews Correlation"}]}]}]} | banri/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers | {} | baophuc27/tbwt_grammar | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {"license": "afl-3.0"} | baotripccc/baotripccc | null | [
"license:afl-3.0",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | baraki/Tigrinya-1.0 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | barissayil/bert-sentiment-analysis-sst | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bashar/distilbert-base-uncased-finetuned-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | # Multi-dialect-Arabic-BERT
This is the repository of the Multi-dialect Arabic BERT model.
By [Mawdoo3-AI](https://ai.mawdoo3.com/).
<p align="center">
<br>
<img src="https://raw.githubusercontent.com/mawdoo3/Multi-dialect-Arabic-BERT/master/multidialct_arabic_bert.png" alt="Background reference: http://www.qfi.org/wp-content/uploads/2018/02/Qfi_Infographic_Mother-Language_Final.pdf" width="500"/>
<br>
</p>
### About our Multi-dialect-Arabic-BERT model
Instead of training the Multi-dialect Arabic BERT model from scratch, we initialized the weights of the model using [Arabic-BERT](https://github.com/alisafaya/Arabic-BERT) and trained it on 10M Arabic tweets from the unlabeled data of [The Nuanced Arabic Dialect Identification (NADI) shared task](https://sites.google.com/view/nadi-shared-task).
### To cite this work
```
@misc{talafha2020multidialect,
title={Multi-Dialect Arabic BERT for Country-Level Dialect Identification},
author={Bashar Talafha and Mohammad Ali and Muhy Eddin Za'ter and Haitham Seelawi and Ibraheem Tuffaha and Mostafa Samir and Wael Farhan and Hussein T. Al-Natsheh},
year={2020},
eprint={2007.05612},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Usage
The model weights can be loaded with the `transformers` library by Hugging Face.
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bashar-talafha/multi-dialect-bert-base-arabic")
model = AutoModel.from_pretrained("bashar-talafha/multi-dialect-bert-base-arabic")
```
Example using `pipeline`:
```python
from transformers import pipeline
fill_mask = pipeline(
    "fill-mask",
    model="bashar-talafha/multi-dialect-bert-base-arabic",
    tokenizer="bashar-talafha/multi-dialect-bert-base-arabic"
)
fill_mask(" سافر الرحالة من مطار [MASK] ")
```
```
[{'sequence': '[CLS] سافر الرحالة من مطار الكويت [SEP]', 'score': 0.08296813815832138, 'token': 3226},
{'sequence': '[CLS] سافر الرحالة من مطار دبي [SEP]', 'score': 0.05123933032155037, 'token': 4747},
{'sequence': '[CLS] سافر الرحالة من مطار مسقط [SEP]', 'score': 0.046838656067848206, 'token': 13205},
{'sequence': '[CLS] سافر الرحالة من مطار القاهرة [SEP]', 'score': 0.03234650194644928, 'token': 4003},
{'sequence': '[CLS] سافر الرحالة من مطار الرياض [SEP]', 'score': 0.02606341242790222, 'token': 2200}]
```
### Repository
Please check the [original repository](https://github.com/mawdoo3/Multi-dialect-Arabic-BERT) for more information.
| {"language": "ar", "datasets": ["nadi"], "thumbnail": "https://raw.githubusercontent.com/mawdoo3/Multi-dialect-Arabic-BERT/master/multidialct_arabic_bert.png"} | bashar-talafha/multi-dialect-bert-base-arabic | null | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"ar",
"dataset:nadi",
"arxiv:2007.05612",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | basilw/distilbert-base-uncased-finetuned-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | battatawada/My | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | battatawada/bert-base-uncased-finetuned-bert | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | battatawada/bert-base-uncased-finetuned-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | battatawada/data | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
# BatteryBERT-cased for Battery Abstract Classification
**Language model:** batterybert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batterybert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.29,
"Test accuracy": 96.85,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
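The pipeline also accepts a list of abstracts, which is convenient for batch screening. A short sketch, reusing the `nlp` pipeline defined above (the second abstract is an illustrative non-battery example):
```python
# Sketch: classify several abstracts in one call, reusing the pipeline defined above.
abstracts = [
    'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.',
    'We review recent progress in perovskite solar cells.',  # illustrative non-battery abstract
]
for abstract, prediction in zip(abstracts, nlp(abstracts)):
    print(prediction['label'], round(prediction['score'], 3), abstract[:60])
```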
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | {"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"} | batterydata/batterybert-cased-abstract | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
# BatteryBERT-cased for QA
**Language model:** batterybert-cased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 16
n_epochs = 4
base_LM_model = "batterybert-cased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 81.54,
"f1": 89.16,
```
Evaluated on the battery device dataset.
```
"precision": 70.74,
"recall": 84.19,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-cased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
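For contexts longer than `max_seq_len`, the sliding-window settings used during training can also be passed at call time. A hedged sketch (the long context is illustrative):
```python
# Sketch: pass the training-time window settings when the context is long.
long_context = QA_input['context'] * 20  # illustrative long document
res = nlp(question='What is the electrolyte?', context=long_context,
          max_seq_len=386, doc_stride=128)
print(res['answer'], res['score'])
```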
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
| {"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"} | batterydata/batterybert-cased-squad-v1 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# BatteryBERT-cased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the [bert-base-cased](https://huggingface.co/bert-base-cased) weights. It was introduced in
[this paper](paper_link) and first released in
[this repository](https://github.com/ShuHuang/batterybert). This model is case-sensitive: it makes a difference between english and English.
## Model description
BatteryBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the [bert-base-cased](https://huggingface.co/bert-base-cased) weights. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatteryBERT model was pretrained on the full text of battery papers only, after being initialized from the [bert-base-cased](https://huggingface.co/bert-base-cased) weights. The paper corpus contains a total of 400,366 battery research papers published between 2000 and June 2021 by the Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found on [GitHub](https://github.com/ShuHuang/batterybert/blob/main/corpus.txt).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 28,996. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
The details of the masking procedure for each sentence are the following (see the sketch after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
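Applied to a token position already selected among the 15%, the three rules above amount to the following sketch (illustrative code, not the original pretraining script):
```python
# Illustrative sketch of the 80/10/10 corruption rule for a position already
# selected among the 15% of masked tokens (not the original pretraining code).
import random

def corrupt_token(token_id: int, mask_id: int, vocab_size: int) -> int:
    r = random.random()
    if r < 0.8:                # 80%: replace with [MASK]
        return mask_id
    if r < 0.9:                # 10%: replace with a random token
        return random.randrange(vocab_size)
    return token_id            # 10%: keep the original token
```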
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=batterybert) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='batterydata/batterybert-cased')
>>> unmasker("Hello I'm a <mask> model.")
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batterybert-cased')
model = BertModel.from_pretrained('batterydata/batterybert-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batterybert-cased')
model = TFBertModel.from_pretrained('batterydata/batterybert-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
Final loss: 0.9609.
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
| {"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["batterypapers"]} | batterydata/batterybert-cased | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:batterypapers",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# BatteryBERT-uncased for Battery Abstract Classification
**Language model:** batterybert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batterybert-uncased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.10,
"Test accuracy": 96.94,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | {"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"} | batterydata/batterybert-uncased-abstract | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
# BatteryBERT-uncased for QA
**Language model:** batterybert-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "batterybert-uncased"
max_seq_len = 386
learning_rate = 3e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 81.08,
"f1": 88.41,
```
Evaluated on the battery device dataset.
```
"precision": 68.27,
"recall": 80.88,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-uncased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
| {"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"} | batterydata/batterybert-uncased-squad-v1 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# BatteryBERT-uncased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the [bert-base-uncased](https://huggingface.co/bert-base-uncased) weights. It was introduced in
[this paper](paper_link) and first released in
[this repository](https://github.com/ShuHuang/batterybert). This model is uncased: it does not make a difference
between english and English.
## Model description
BatteryBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the [bert-base-uncased](https://huggingface.co/bert-base-uncased) weights. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatteryBERT model was pretrained on the full text of battery papers only, after being initialized from the [bert-base-uncased](https://huggingface.co/bert-base-uncased) weights. The paper corpus contains a total of 400,366 battery research papers published between 2000 and June 2021 by the Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found on [GitHub](https://github.com/ShuHuang/batterybert/blob/main/corpus.txt).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,522. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
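In `transformers` terms, this corresponds to the standard linear-warmup setup; the sketch below is illustrative (the Adam-with-weight-decay optimizer is realized here as `AdamW`, and the starting checkpoint is assumed):
```python
# Illustrative sketch of the optimizer and schedule described above.
# Adam with weight decay is realized here as AdamW; the starting checkpoint is assumed.
from torch.optim import AdamW
from transformers import BertForMaskedLM, get_linear_schedule_with_warmup

model = BertForMaskedLM.from_pretrained("bert-base-uncased")  # assumed starting weights
optimizer = AdamW(model.parameters(), lr=2e-5, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)
```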
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=batterybert) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='batterydata/batterybert-uncased')
>>> unmasker("Hello I'm a <mask> model.")
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batterybert-uncased')
model = BertModel.from_pretrained('batterydata/batterybert-uncased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batterybert-uncased')
model = TFBertModel.from_pretrained('batterydata/batterybert-uncased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
Final loss: 1.0317.
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
| {"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["batterypapers"]} | batterydata/batterybert-uncased | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:batterypapers",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# BatteryOnlyBERT-cased for Battery Abstract Classification
**Language model:** batteryonlybert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 14
base_LM_model = "batteryonlybert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.33,
"Test accuracy": 97.34,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | {"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"} | batterydata/batteryonlybert-cased-abstract | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
# BatteryOnlyBERT-cased for QA
**Language model:** batteryonlybert-cased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 16
n_epochs = 3
base_LM_model = "batteryonlybert-cased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 79.61,
"f1": 87.30,
```
Evaluated on the battery device dataset.
```
"precision": 64.28,
"recall": 82.72,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-cased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | {"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"} | batterydata/batteryonlybert-cased-squad-v1 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# BatteryOnlyBERT-uncased for Battery Abstract Classification
**Language model:** batteryonlybert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 16
n_epochs = 13
base_LM_model = "batteryonlybert-uncased"
learning_rate = 3e-5
```
## Performance
```
"Validation accuracy": 97.18,
"Test accuracy": 97.08,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | {"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"} | batterydata/batteryonlybert-uncased-abstract | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
# BatteryOnlyBERT-uncased for QA
**Language model:** batteryonlybert-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 16
n_epochs = 2
base_LM_model = "batteryonlybert-uncased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 79.53,
"f1": 87.22,
```
Evaluated on the battery device dataset.
```
"precision": 67.20,
"recall": 83.82,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-uncased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | {"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"} | batterydata/batteryonlybert-uncased-squad-v1 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# BatterySciBERT-cased for Battery Abstract Classification
**Language model:** batteryscibert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batteryscibert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.06,
"Test accuracy": 97.19,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | {"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"} | batterydata/batteryscibert-cased-abstract | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
# BatterySciBERT-cased for QA
**Language model:** batteryscibert-cased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "batteryscibert-cased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 79.66,
"f1": 87.43,
```
Evaluated on the battery device dataset.
```
"precision": 65.09,
"recall": 84.56,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-cased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | {"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"} | batterydata/batteryscibert-cased-squad-v1 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# BatterySciBERT-cased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the [SciBERT-cased](https://huggingface.co/allenai/scibert_scivocab_cased) weights. It was introduced in
[this paper](paper_link) and first released in
[this repository](https://github.com/ShuHuang/batterybert). This model is case-sensitive: it makes a difference between english and English.
## Model description
BatterySciBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the [SciBERT-cased](https://huggingface.co/allenai/scibert_scivocab_cased) weights. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatterySciBERT model was pretrained on the full text of battery papers only, after being initialized from the [SciBERT-cased](https://huggingface.co/allenai/scibert_scivocab_cased) weights. The paper corpus contains a total of 400,366 battery research papers published between 2000 and June 2021 by the Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found on [GitHub](https://github.com/ShuHuang/batterybert/blob/main/corpus.txt).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 31,116. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
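This layout is what the tokenizer produces for a sentence pair; a small sketch with illustrative sentences:
```python
# Sketch: encode an illustrative sentence pair and inspect the [CLS]/[SEP] layout.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('batterydata/batteryscibert-cased')
encoded = tokenizer('LiFePO4 is a common cathode material.',
                    'Its theoretical capacity is about 170 mAh/g.')
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
# ['[CLS]', ..., '[SEP]', ..., '[SEP]']
```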
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=batterybert) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='batterydata/batteryscibert-cased')
>>> unmasker("Hello I'm a <mask> model.")
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batteryscibert-cased')
model = BertModel.from_pretrained('batterydata/batteryscibert-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batteryscibert-cased')
model = TFBertModel.from_pretrained('batterydata/batteryscibert-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
Final loss: 1.0505.
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
| {"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["batterypapers"]} | batterydata/batteryscibert-cased | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:batterypapers",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# BatterySciBERT-uncased for Battery Abstract Classification
**Language model:** batteryscibert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 14
base_LM_model = "batteryscibert-uncased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.12,
"Test accuracy": 97.47,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
| {"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"} | batterydata/batteryscibert-uncased-abstract | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
# BatterySciBERT-uncased for QA
**Language model:** batteryscibert-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "batteryscibert-uncased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 79.81,
"f1": 87.66,
```
Evaluated on the battery device dataset.
```
"precision": 66.65,
"recall": 85.29,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-uncased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | {"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"} | batterydata/batteryscibert-uncased-squad-v1 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# BatterySciBERT-uncased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the [SciBERT-uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) weights. It was introduced in
[this paper](paper_link) and first released in
[this repository](https://github.com/ShuHuang/batterybert). This model is uncased: it does not make a difference
between english and English.
## Model description
BatterySciBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the [SciBERT-uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) weights. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatterySciBERT model was pretrained on the full text of battery papers only, after being initialized from the [SciBERT-uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) weights. The paper corpus contains a total of 400,366 battery research papers published between 2000 and June 2021 by the Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found on [GitHub](https://github.com/ShuHuang/batterybert/blob/main/corpus.txt).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 31,090. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=batterybert) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='batterydata/batteryscibert-uncased')
>>> unmasker("Hello I'm a <mask> model.")
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batteryscibert-uncased')
model = BertModel.from_pretrained('batterydata/batteryscibert-uncased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batteryscibert-uncased')
model = TFBertModel.from_pretrained('batterydata/batteryscibert-uncased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
Final loss: 1.095.
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
| {"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["batterypapers"]} | batterydata/batteryscibert-uncased | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:batterypapers",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# BERT-base-cased for Battery Abstract Classification
**Language model:** bert-base-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 15
base_LM_model = "bert-base-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 96.84,
"Test accuracy": 96.83,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | {"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"} | batterydata/bert-base-cased-abstract | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
# BERT-base-cased for QA
**Language model:** bert-base-cased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 2
base_LM_model = "bert-base-cased"
max_seq_len = 386
learning_rate = 5e-5
doc_stride=128
max_query_length=64
```
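The `max_seq_len`, `doc_stride` and `max_query_length` settings control how long contexts are split into overlapping windows. A sketch of the standard extractive-QA tokenization these values correspond to (illustrative only, not the original training code):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("batterydata/bert-base-cased-squad-v1")

question = "What is the electrolyte?"
context = "The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates. " * 30  # stand-in for a long document

features = tokenizer(
    question,
    context,
    max_length=386,            # max_seq_len
    truncation="only_second",  # truncate only the context, never the question
    stride=128,                # doc_stride: overlap between consecutive windows
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
    padding="max_length",
)
print(len(features["input_ids"]))  # number of overlapping windows produced
```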
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 81.30,
"f1": 88.58,
```
Evaluated on the battery device dataset.
```
"precision": 67.02,
"recall": 80.15,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-cased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | {"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"} | batterydata/bert-base-cased-squad-v1 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# BERT-base-uncased for Battery Abstract Classification
**Language model:** bert-base-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 13
base_LM_model = "bert-base-uncased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 96.79,
"Test accuracy": 96.29,
```
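A sketch of how the validation accuracy could be re-computed with the pipeline (the CSV file name, the column names and the `LABEL_*` output format are assumptions):
```python
import pandas as pd
from transformers import pipeline

nlp = pipeline("text-classification", model="batterydata/bert-base-uncased-abstract")

df = pd.read_csv("val_data.csv")  # assumed columns: "text" and integer "label"
preds = nlp(df["text"].tolist())
pred_labels = [int(p["label"].split("_")[-1]) for p in preds]  # e.g. "LABEL_1" -> 1
print("accuracy:", (pd.Series(pred_labels) == df["label"]).mean())
```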
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | {"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"} | batterydata/bert-base-uncased-abstract | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
# BERT-base-cased for QA
**Language model:** bert-base-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "bert-base-uncased"
max_seq_len = 386
learning_rate = 3e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 80.93,
"f1": 88.20,
```
Evaluated on the battery device dataset.
```
"precision": 62.19,
"recall": 75.00,
```
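The exact-match/F1 numbers above follow the official SQuAD metric; a sketch of computing it with the `datasets` library once predictions have been collected (the id and answer below are placeholders):
```python
from datasets import load_metric

squad_metric = load_metric("squad")

predictions = [{"id": "example-1", "prediction_text": "a solution of LiPF6"}]
references = [{"id": "example-1",
               "answers": {"text": ["a solution of LiPF6"], "answer_start": [68]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```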
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-uncased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | {"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"} | batterydata/bert-base-uncased-squad-v1 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | baursafi/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers |
# ALBERT-Mongolian
[pretraining repo link](https://github.com/bayartsogt-ya/albert-mongolian)
## Model description
Here we provide a pretrained ALBERT model and a trained SentencePiece model for Mongolian text. The training data consists of the Mongolian Wikipedia corpus from Wikipedia Downloads and a Mongolian news corpus.
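A minimal usage sketch with the `transformers` library (the example sentence is only an illustration):
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("bayartsogt/albert-mongolian")
model = AutoModelForMaskedLM.from_pretrained("bayartsogt/albert-mongolian")

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask(f"Улаанбаатар бол Монгол улсын {tokenizer.mask_token} хот юм."))
```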
## Evaluation Result:
```
loss = 1.7478163
masked_lm_accuracy = 0.6838185
masked_lm_loss = 1.6687671
sentence_order_accuracy = 0.998125
sentence_order_loss = 0.007942731
```
## Fine-tuning Result on Eduge Dataset:
```
precision recall f1-score support
байгал орчин 0.85 0.83 0.84 999
боловсрол 0.80 0.80 0.80 873
спорт 0.98 0.98 0.98 2736
технологи 0.88 0.93 0.91 1102
улс төр 0.92 0.85 0.89 2647
урлаг соёл 0.93 0.94 0.94 1457
хууль 0.89 0.87 0.88 1651
эдийн засаг 0.83 0.88 0.86 2509
эрүүл мэнд 0.89 0.92 0.90 1159
accuracy 0.90 15133
macro avg 0.89 0.89 0.89 15133
weighted avg 0.90 0.90 0.90 15133
```
## Reference
1. [ALBERT - official repo](https://github.com/google-research/albert)
2. [WikiExtrator](https://github.com/attardi/wikiextractor)
3. [Mongolian BERT](https://github.com/tugstugi/mongolian-bert)
4. [ALBERT - Japanese](https://github.com/alinear-corp/albert-japanese)
5. [Mongolian Text Classification](https://github.com/sharavsambuu/mongolian-text-classification)
6. [You's paper](https://arxiv.org/abs/1904.00962)
## Citation
```
@misc{albert-mongolian,
author = {Bayartsogt Yadamsuren},
title = {ALBERT Pretrained Model on Mongolian Datasets},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/bayartsogt-ya/albert-mongolian/}}
}
```
## For More Information
Please contact by [email protected]
| {"language": "mn"} | bayartsogt/albert-mongolian | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"mn",
"arxiv:1904.00962",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | |fold|accuracy|
|-|-|
| fold 0 | 0.974197247706422 |
| fold 1 | 0.9627293577981652 |
| fold 2 | 0.9724770642201835 |
| fold 3 | 0.9696100917431193 |
| fold 4 | 0.9684633027522935 |
| OOF Acc | 0.9694954128440367 | | {} | bayartsogt/mlub-bert-base-uncased-tr5meaning | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | |fold|accuracy|
|-|-|
| fold 0 | 0.9730504587155964 |
| fold 1 | 0.9690366972477065 |
| fold 2 | 0.970756880733945 |
| fold 3 | 0.9684633027522935 |
| fold 4 | 0.9719036697247706 |
| OOF Acc | 0.9706422018348624 | | {} | bayartsogt/mlub-bert-large-cased-tr5do30ep25s42 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | |fold|accuracy|
|-|-|
| fold 0 | 0.9753440366972477 |
| fold 1 | 0.9678899082568807 |
| fold 2 | 0.9747706422018348 |
| fold 3 | 0.9690366972477065 |
| fold 4 | 0.9759174311926605 |
| OOF Acc | 0.9725917431192661 | | {} | bayartsogt/mlub-bert-large-uncased-tr5do20ep25s42 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | |fold|accuracy|
|-|-|
| fold 0 | 0.974197247706422 |
| fold 1 | 0.9678899082568807 |
| fold 2 | 0.9724770642201835 |
| fold 3 | 0.9701834862385321 |
| fold 4 | 0.9736238532110092 |
| OOF Acc | 0.9716743119266055 |
```
synset_word
ав 1.000000
ам 0.931507
баг 0.980000
байр 0.943548
бараа 0.964789
гар 0.950210
гол 0.938731
гүн 0.912088
зах 0.946667
зуу 0.995798
зүрх 0.918367
мөнгө 0.973333
нуруу 0.968750
нүд 1.000000
нүүр 0.987805
салбар 0.963636
сар 0.996627
сум 0.816667
тэрэг 0.822581
түүх 0.980237
төр 0.998428
хий 0.993077
хураа 0.858268
хэлбэр 0.727273
хөндий 1.000000
шат 1.000000
эм 1.000000
эрүүл 1.000000
dtype: float64
``` | {} | bayartsogt/mlub-bert-large-uncased-tr5do30ep25 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers | {} | bayartsogt/mongolian-gpt2 | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | bayartsogt/mongolian-roberta-base | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | bayartsogt/mongolian-roberta-large | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | # StructBERT: Unofficial Copy
Official Repository Link: https://github.com/alibaba/AliceMind/tree/main/StructBERT
**Claimer**
* This model card is not produced by [AliceMind Team](https://github.com/alibaba/AliceMind/)
## Reproduce HFHub models:
Download model/tokenizer vocab
```bash
wget https://raw.githubusercontent.com/alibaba/AliceMind/main/StructBERT/config/large_bert_config.json && mv large_bert_config.json config.json
wget https://raw.githubusercontent.com/alibaba/AliceMind/main/StructBERT/config/vocab.txt
wget https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/en_model && mv en_model pytorch_model.bin
```
```python
from transformers import AutoConfig, AutoModelForMaskedLM, AutoTokenizer
config = AutoConfig.from_pretrained("./config.json")
model = AutoModelForMaskedLM.from_pretrained(".", config=config)
tokenizer = AutoTokenizer.from_pretrained(".", config=config)
model.push_to_hub("structbert-large")
tokenizer.push_to_hub("structbert-large")
```
[https://arxiv.org/abs/1908.04577](https://arxiv.org/abs/1908.04577)
# StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
## Introduction
We extend BERT to a new model, StructBERT, by incorporating language structures into pre-training.
Specifically, we pre-train StructBERT with two auxiliary tasks to make the most of the sequential
order of words and sentences, which leverage language structures at the word and sentence levels,
respectively.
## Pre-trained models
|Model | Description | #params | Download |
|------------------------|-------------------------------------------|------|------|
|structbert.en.large | StructBERT using the BERT-large architecture | 340M | [structbert.en.large](https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/en_model) |
|structroberta.en.large | StructRoBERTa continue training from RoBERTa | 355M | Coming soon |
|structbert.ch.large | Chinese StructBERT; BERT-large architecture | 330M | [structbert.ch.large](https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/ch_model) |
## Results
The results of GLUE & CLUE tasks can be reproduced using the hyperparameters listed in the following "Example usage" section.
#### structbert.en.large
[GLUE benchmark](https://gluebenchmark.com/leaderboard)
|Model| MNLI | QNLIv2 | QQP | SST-2 | MRPC |
|--------------------|-------|-------|-------|-------|-------|
|structbert.en.large |86.86% |93.04% |91.67% |93.23% |86.51% |
#### structbert.ch.large
[CLUE benchmark](https://www.cluebenchmarks.com/)
|Model | CMNLI | OCNLI | TNEWS | AFQMC |
|--------------------|-------|-------|-------|-------|
|structbert.ch.large |84.47% |81.28% |68.67% |76.11% |
## Example usage
#### Requirements and Installation
* [PyTorch](https://pytorch.org/) version >= 1.0.1
* Install other libraries via
```
pip install -r requirements.txt
```
* For faster training install NVIDIA's [apex](https://github.com/NVIDIA/apex) library
#### Finetune MNLI
```
python run_classifier_multi_task.py \
--task_name MNLI \
--do_train \
--do_eval \
--do_test \
--amp_type O1 \
--lr_decay_factor 1 \
--dropout 0.1 \
--do_lower_case \
--detach_index -1 \
--core_encoder bert \
--data_dir path_to_glue_data \
--vocab_file config/vocab.txt \
--bert_config_file config/large_bert_config.json \
--init_checkpoint path_to_pretrained_model \
--max_seq_length 128 \
--train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--fast_train \
--gradient_accumulation_steps 1 \
--output_dir path_to_output_dir
```
## Citation
If you use our work, please cite:
```
@article{wang2019structbert,
title={Structbert: Incorporating language structures into pre-training for deep language understanding},
author={Wang, Wei and Bi, Bin and Yan, Ming and Wu, Chen and Bao, Zuyi and Xia, Jiangnan and Peng, Liwei and Si, Luo},
journal={arXiv preprint arXiv:1908.04577},
year={2019}
}
``` | {} | bayartsogt/structbert-large | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"arxiv:1908.04577",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-to-speech | fairseq | # tts_transformer-mn-mbspeech
[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- Mongolian
- Single-speaker male voice
- Trained on [MBSpeech](https://github.com/tugstugi/mongolian-nlp/blob/master/datasets/MBSpeech-1.0-csv.zip)
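A usage sketch following the generic fairseq S^2 hub interface (treat this as a starting point: the exact API and the `arg_overrides`, in particular the vocoder name, are assumptions and may differ between fairseq versions):
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface

models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
    "bayartsogt/tts_transformer-mn-mbspeech",
    arg_overrides={"vocoder": "griffin_lim", "fp16": False},  # vocoder choice is an assumption
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)

text = "миний нэрийг баярцогт гэдэг"
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
```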
| {"language": "mn", "library_name": "fairseq", "tags": ["fairseq", "audio", "text-to-speech"], "datasets": ["mbspeech"], "task": "text-to-speech", "widget": [{"text": "\u043c\u0438\u043d\u0438\u0439 \u043d\u044d\u0440\u0438\u0439\u0433 \u0431\u0430\u044f\u0440\u0446\u043e\u0433\u0442 \u0433\u044d\u0434\u044d\u0433", "example_title": "Say my name!"}, {"text": "\u0431\u0438 \u043c\u043e\u043d\u0433\u043e\u043b \u0443\u043b\u0441\u044b\u043d \u043d\u0438\u0439\u0441\u043b\u044d\u043b, \u0443\u043b\u0430\u0430\u043d\u0431\u0430\u0430\u0442\u0430\u0440 \u0445\u043e\u0442\u043e\u0434 \u0430\u043c\u044c\u0434\u0430\u0440\u0434\u0430\u0433", "example_title": "Where I am from?"}, {"text": "\u044d\u043d\u044d\u0445\u04af\u04af \u04e9\u0433\u04e9\u0433\u0434\u043b\u0438\u0439\u0433 \u043d\u044d\u044d\u043b\u0442\u0442\u044d\u0439 \u0431\u043e\u043b\u0433\u043e\u0441\u043e\u043d, \u0431\u043e\u043b\u043e\u0440 \u0441\u043e\u043e\u0444\u0442\u044b\u043d\u0445\u043e\u043d\u0434 \u0431\u0430\u044f\u0440\u043b\u0430\u043b\u0430\u0430", "example_title": "Thank you!"}, {"text": "\u044d\u043d\u044d\u0445\u04af\u04af \u0430\u0436\u043b\u044b\u043d \u0438\u0445\u044d\u043d\u0445 \u0445\u044d\u0441\u0433\u0438\u0439\u0433, \u0442\u04e9\u0433\u04e9\u043b\u0434\u04e9\u0440 \u0430\u0445 \u0445\u0438\u0439\u0441\u044d\u043d \u0431\u043e\u043b\u043d\u043e", "example_title": "Shout out to original creater"}]} | bayartsogt/tts_transformer-mn-mbspeech | null | [
"fairseq",
"audio",
"text-to-speech",
"mn",
"dataset:mbspeech",
"arxiv:1809.08895",
"arxiv:2109.06912",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Mongolian-v1
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "mn", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian-v1")
model = Wav2Vec2ForCTC.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian-v1")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "mn", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian-v1")
model = Wav2Vec2ForCTC.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian-v1")
model.to("cuda")
chars_to_ignore_regex = '[\!\"\'\,\.\«\»\?\-]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 34.64 %
## Training
The Common Voice `train` dataset was used for training as well as ... and ... | {"language": "mn", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice mn"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Mongolian V1 by Bayartsogt", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice mn", "type": "common_voice", "args": "mn"}, "metrics": [{"type": "wer", "value": 34.64, "name": "Test WER"}]}]}]} | bayartsogt/wav2vec2-large-xlsr-mongolian-v1 | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"mn",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "mn", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "mn", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'h\«\»]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 45.82%
## Training
❌ The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.
❌ The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here.
| {"language": "mn", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Mongolian by Bayartsogt", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice mn", "type": "common_voice", "args": "mn"}, "metrics": [{"type": "wer", "value": 45.82, "name": "Test WER"}]}]}]} | bayartsogt/wav2vec2-large-xlsr-mongolian | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"mn",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | bazena/bart-large-cnn-finetuned-xsum | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bazena/t5-base-finetuned-xsum | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bazena/xlnet-base-cased-finetuned-xsum | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bbenedict/hello_world | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bburnskm/frog-and-toad | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bcartisol/model | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bcbach190/for-bot | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bcghost/GPT2-Chinese | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
sentence-similarity | sentence-transformers |
# bchan007/fnctech
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('bchan007/fnctech')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bchan007/fnctech')
model = AutoModel.from_pretrained('bchan007/fnctech')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=bchan007/fnctech)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | bchan007/fnctech | null | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers | {} | bclee232/custombert | null | [
"transformers",
"albert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bcooper138/dummy-model | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
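Since this is `t5-small` fine-tuned on XSum, the most direct use is abstractive summarization, for example via the pipeline API (a sketch, not an officially documented example):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="bdwjaya/t5-small-finetuned-xsum")

article = "Replace this with the full text of a news article."
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```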
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["xsum"], "model-index": [{"name": "t5-small-finetuned-xsum", "results": []}]} | bdwjaya/t5-small-finetuned-xsum | null | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | be4rr/bert-finetuned-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | be4rr/bert-finetuned-pos | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
RICK!!! | {"tags": ["conversational"]} | beatajackowska/DialoGPT-RickBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# DiLBERT (Disease Language BERT)
The objective of this model was to obtain a specialized disease-related language model, trained **from scratch**. <br>
We created a pre-training corpus starting from **ICD-11** entities and enriched it with documents from **PubMed** and **Wikipedia** related to the same entities. <br>
Results of finetuning show that DiLBERT leads to comparable or higher accuracy scores on various classification tasks compared with other general-purpose or in-domain models (e.g., BioClinicalBERT, RoBERTa, XLNet).
Model released with the paper "**DiLBERT: Cheap Embeddings for Disease Related Medical NLP**". <br>
To summarize the practical implications of our work: we pre-trained and fine-tuned a domain-specific BERT model on a small corpus, with comparable or better performance than state-of-the-art models.
This approach may also simplify the development of models for languages other than English, given the smaller amount of data needed for training.
### Composition of the pretraining corpus
| Source | Documents | Words |
|---|---:|---:|
| ICD-11 descriptions | 34,676 | 1.0 million |
| PubMed Title and Abstracts | 852,550 | 184.6 million |
| Wikipedia pages | 37,074 | 6.1 million |
### Main repository
For more details check the main repo https://github.com/KevinRoitero/dilbert
# Usage
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("beatrice-portelli/DiLBERT")
model = AutoModelForMaskedLM.from_pretrained("beatrice-portelli/DiLBERT")
```
# How to cite
```
@article{roitero2021dilbert,
title={{DilBERT}: Cheap Embeddings for Disease Related Medical NLP},
author={Roitero, Kevin and Portelli, Beatrice and Popescu, Mihai Horia and Della Mea, Vincenzo},
journal={IEEE Access},
volume={},
pages={},
year={2021},
publisher={IEEE},
note = {In Press}
}
```
| {"language": ["en"], "tags": ["medical", "disease", "classification"]} | beatrice-portelli/DiLBERT | null | [
"transformers",
"pytorch",
"tf",
"bert",
"fill-mask",
"medical",
"disease",
"classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | beg99/OPsporilerOrNot | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilgpt2-finetuned", "results": []}]} | begar/distilgpt2-finetuned | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0276
- Mae: 0.5310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1582 | 1.0 | 308 | 1.0625 | 0.5221 |
| 1.0091 | 2.0 | 616 | 1.0276 | 0.5310 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc", "results": []}]} | begar/xlm-roberta-base-finetuned-marc | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | begimayk/Semantir2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | begimayk/task | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | from transformers import pipeline
import json
import requests
API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B"
headers = {"Authorization": "Bearer api_hwKbAMoHAzOVDdCxgfpPxMjjcrdKHMakhg"}
def query(payload):
    data = json.dumps(payload)
    response = requests.request("POST", API_URL, headers=headers, data=data)
    return json.loads(response.content.decode("utf-8"))
data = query("Can you please let us know more details about your ") | {} | begimayk/try1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | beibei/111 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | beiluo/nlpload | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | bella/bert_finetuning_test | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# DaddyBen DialoGPT Model | {"tags": ["conversational"]} | benajtil/DialoGPT-small-Daddyben | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# Rick And Morty Scripts DialoGPT Model | {"tags": ["conversational"]} | benajtil/DialoGPT-small-RickAndMortyScripts | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | benajtil/DialoGPT-small-harrypotter | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | benbeshara/vic_presser_bot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | benfrancis314/amath_563_project | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bengayashan/DialoGPT-small-joshua | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bengayashan/bot | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# GerPT2
German large and small versions of GPT2:
- https://huggingface.co/benjamin/gerpt2
- https://huggingface.co/benjamin/gerpt2-large
See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2.
## Comparison to [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2)
I evaluated both GerPT2-large and the other German GPT2, [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) on the [CC-100](http://data.statmt.org/cc-100/) dataset and on the German Wikipedia:
| | CC-100 (PPL) | Wikipedia (PPL) |
|-------------------|--------------|-----------------|
| dbmdz/german-gpt2 | 49.47 | 62.92 |
| GerPT2 | 24.78 | 35.33 |
| GerPT2-large | __16.08__ | __23.26__ |
| | | |
See the script `evaluate.py` in the [GerPT2 Github repository](https://github.com/bminixhofer/gerpt2) for the code.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2-large")
model = AutoModelForCausalLM.from_pretrained("benjamin/gerpt2-large")
prompt = "<your prompt>"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe(prompt)[0]["generated_text"])
```
Also, two tricks might improve the generated text:
```python
import torch

max_length = 100  # maximum number of tokens to generate

output = model.generate(
# during training an EOS token was used to mark the beginning of each text
# so it can help to insert it at the start
torch.tensor(
[tokenizer.eos_token_id] + tokenizer.encode(prompt)
).unsqueeze(0),
do_sample=True,
# try setting bad_words_ids=[[0]] to disallow generating an EOS token, without this the model is
# prone to ending generation early because a significant number of texts from the training corpus
# is quite short
bad_words_ids=[[0]],
max_length=max_length,
)[0]
print(tokenizer.decode(output))
```
## Training details
GerPT2-large is trained on the entire German data from the [CC-100 Corpus](http://data.statmt.org/cc-100/) and weights were initialized from the [English GPT2 model](https://huggingface.co/gpt2-large).
GerPT2-large was trained with:
- a batch size of 256
- using OneCycle learning rate with a maximum of 5e-3
- with AdamW with a weight decay of 0.01
- for 2 epochs
Training took roughly 12 days on 8 TPUv3 cores.
To train GerPT2-large, follow these steps. Scripts are located in the [Github repository](https://github.com/bminixhofer/gerpt2):
0. Download and unzip training data from http://data.statmt.org/cc-100/.
1. Train a tokenizer using `prepare/train_tokenizer.py`. As training data for the tokenizer I used a random subset of 5% of the CC-100 data.
2. (optionally) generate a German input embedding matrix with `prepare/generate_aligned_wte.py`. This uses a neat trick to semantically map tokens from the English tokenizer to tokens from the German tokenizer using aligned word embeddings. E. g.:
```
ĠMinde -> Ġleast
Ġjed -> Ġwhatsoever
flughafen -> Air
vermittlung -> employment
teilung -> ignment
ĠInterpretation -> Ġinterpretation
Ġimport -> Ġimported
hansa -> irl
genehmigungen -> exempt
ĠAuflist -> Ġlists
Ġverschwunden -> Ġdisappeared
ĠFlyers -> ĠFlyers
Kanal -> Channel
Ġlehr -> Ġteachers
Ġnahelie -> Ġconvenient
gener -> Generally
mitarbeiter -> staff
```
This helped a lot in a trial run I did, although I wasn't able to do a full comparison due to budget and time constraints. To use this WTE matrix it can be passed via the `wte_path` argument to the training script. Credit to [this blogpost](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) for the idea of initializing GPT2 from English weights. A simplified sketch of this mapping is given right after this list.
3. Tokenize the corpus using `prepare/tokenize_text.py`. This generates files for train and validation tokens in JSON Lines format.
4. Run the training script `train.py`! `run.sh` shows how this was executed for the full run with config `configs/tpu_large.json`.
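As an illustration of step 2, here is a simplified sketch of the aligned-embedding initialization: each token of the new German vocabulary is matched to its nearest neighbour in an aligned static embedding space and initialized with that English token's GPT-2 input embedding. This is a reconstruction of the idea, not the actual `generate_aligned_wte.py` script.
```python
import numpy as np

def build_aligned_wte(de_tokenizer, en_tokenizer, en_wte, de_vecs, en_vecs):
    """Initialize a German input-embedding (WTE) matrix from English GPT-2 embeddings.

    de_vecs / en_vecs: dicts mapping (sub)words to aligned cross-lingual word vectors
    (e.g. aligned fastText vectors); en_wte: English GPT-2 WTE matrix of shape (|V_en|, d).
    """
    de_wte = np.random.normal(scale=0.02, size=(len(de_tokenizer), en_wte.shape[1]))

    en_tokens = [t for t in en_tokenizer.get_vocab() if t.lstrip("Ġ") in en_vecs]
    en_ids = [en_tokenizer.convert_tokens_to_ids(t) for t in en_tokens]
    en_matrix = np.stack([en_vecs[t.lstrip("Ġ")] for t in en_tokens])
    en_norms = np.linalg.norm(en_matrix, axis=1) + 1e-8

    for de_token, de_id in de_tokenizer.get_vocab().items():
        key = de_token.lstrip("Ġ")
        if key not in de_vecs:
            continue  # tokens without a static vector keep the random init
        vec = de_vecs[key]
        sims = (en_matrix @ vec) / (en_norms * (np.linalg.norm(vec) + 1e-8))
        de_wte[de_id] = en_wte[en_ids[int(np.argmax(sims))]]
    return de_wte
```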
## License
GerPT2 is licensed under the MIT License.
## Citing
Please cite GerPT2 as follows:
```
@misc{Minixhofer_GerPT2_German_large_2020,
author = {Minixhofer, Benjamin},
doi = {10.5281/zenodo.5509984},
month = {12},
title = {{GerPT2: German large and small versions of GPT2}},
url = {https://github.com/bminixhofer/gerpt2},
year = {2020}
}
```
## Acknowledgements
Thanks to [Hugging Face](https://huggingface.co) for awesome tools and infrastructure.
Huge thanks to [Artus Krohn-Grimberghe](https://twitter.com/artuskg) at [LYTiQ](https://www.lytiq.de/) for making this possible by sponsoring the resources used for training.
| {"language": "de", "license": "mit", "widget": [{"text": "In einer schockierenden Entdeckung fanden Wissenschaftler eine Herde Einh\u00f6rner, die in einem abgelegenen, zuvor unerforschten Tal in den Anden lebten."}]} | benjamin/gerpt2-large | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# GerPT2
German large and small versions of GPT2:
- https://huggingface.co/benjamin/gerpt2
- https://huggingface.co/benjamin/gerpt2-large
See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2.
## Comparison to [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2)
I evaluated both GerPT2-large and the other German GPT2, [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) on the [CC-100](http://data.statmt.org/cc-100/) dataset and on the German Wikipedia:
| | CC-100 (PPL) | Wikipedia (PPL) |
|-------------------|--------------|-----------------|
| dbmdz/german-gpt2 | 49.47 | 62.92 |
| GerPT2 | 24.78 | 35.33 |
| GerPT2-large | __16.08__ | __23.26__ |
| | | |
See the script `evaluate.py` in the [GerPT2 Github repository](https://github.com/bminixhofer/gerpt2) for the code.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2")
model = AutoModelForCausalLM.from_pretrained("benjamin/gerpt2")
prompt = "<your prompt>"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe(prompt)[0]["generated_text"])
```
Also, two tricks might improve the generated text:
```python
import torch

max_length = 100  # maximum number of tokens to generate

output = model.generate(
# during training an EOS token was used to mark the beginning of each text
# so it can help to insert it at the start
torch.tensor(
[tokenizer.eos_token_id] + tokenizer.encode(prompt)
).unsqueeze(0),
do_sample=True,
# try setting bad_words_ids=[[0]] to disallow generating an EOS token, without this the model is
# prone to ending generation early because a significant number of texts from the training corpus
# is quite short
bad_words_ids=[[0]],
max_length=max_length,
)[0]
print(tokenizer.decode(output))
```
## Training details
GerPT2-large is trained on the entire German data from the [CC-100 Corpus](http://data.statmt.org/cc-100/) and weights were initialized from the [English GPT2 model](https://huggingface.co/gpt2-large).
GerPT2-large was trained with:
- a batch size of 256
- using OneCycle learning rate with a maximum of 5e-3
- with AdamW with a weight decay of 0.01
- for 2 epochs
Training took roughly 12 days on 8 TPUv3 cores.
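A sketch of the optimizer and schedule described above in plain PyTorch (the total step count is illustrative; in practice it follows from 2 epochs over the tokenized CC-100 data at batch size 256):
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # weights are initialized from English GPT-2

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-3, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=5e-3, total_steps=100_000  # illustrative step count
)
```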
To train GerPT2-large, follow these steps. Scripts are located in the [Github repository](https://github.com/bminixhofer/gerpt2):
0. Download and unzip training data from http://data.statmt.org/cc-100/.
1. Train a tokenizer using `prepare/train_tokenizer.py`. As training data for the tokenizer I used a random subset of 5% of the CC-100 data.
2. (optionally) generate a German input embedding matrix with `prepare/generate_aligned_wte.py`. This uses a neat trick to semantically map tokens from the English tokenizer to tokens from the German tokenizer using aligned word embeddings. E. g.:
```
ĠMinde -> Ġleast
Ġjed -> Ġwhatsoever
flughafen -> Air
vermittlung -> employment
teilung -> ignment
ĠInterpretation -> Ġinterpretation
Ġimport -> Ġimported
hansa -> irl
genehmigungen -> exempt
ĠAuflist -> Ġlists
Ġverschwunden -> Ġdisappeared
ĠFlyers -> ĠFlyers
Kanal -> Channel
Ġlehr -> Ġteachers
Ġnahelie -> Ġconvenient
gener -> Generally
mitarbeiter -> staff
```
This helped a lot in a trial run I did, although I wasn't able to do a full comparison due to budget and time constraints. To use this WTE matrix it can be passed via the `wte_path` argument to the training script. Credit to [this blogpost](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) for the idea of initializing GPT2 from English weights.
3. Tokenize the corpus using `prepare/tokenize_text.py`. This generates files for train and validation tokens in JSON Lines format.
4. Run the training script `train.py`! `run.sh` shows how this was executed for the full run with config `configs/tpu_large.json`.
## License
GerPT2 is licensed under the MIT License.
## Citing
Please cite GerPT2 as follows:
```
@misc{Minixhofer_GerPT2_German_large_2020,
author = {Minixhofer, Benjamin},
doi = {10.5281/zenodo.5509984},
month = {12},
title = {{GerPT2: German large and small versions of GPT2}},
url = {https://github.com/bminixhofer/gerpt2},
year = {2020}
}
```
## Acknowledgements
Thanks to [Hugging Face](https://huggingface.co) for awesome tools and infrastructure.
Huge thanks to [Artus Krohn-Grimberghe](https://twitter.com/artuskg) at [LYTiQ](https://www.lytiq.de/) for making this possible by sponsoring the resources used for training. | {"language": "de", "license": "mit", "widget": [{"text": "In einer schockierenden Entdeckung fanden Wissenschaftler eine Herde Einh\u00f6rner, die in einem abgelegenen, zuvor unerforschten Tal in den Anden lebten."}]} | benjamin/gerpt2 | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# gpt2-wechsel-chinese
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
| {"language": "zh", "license": "mit"} | benjamin/gpt2-wechsel-chinese | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"zh",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# gpt2-wechsel-french
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
| {"language": "fr", "license": "mit"} | benjamin/gpt2-wechsel-french | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# gpt2-wechsel-german
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
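## How to use
The following is a minimal usage sketch with the `transformers` text-generation pipeline; the German prompt and generation settings below are only illustrative.
```python
from transformers import pipeline

# Load the WECHSEL-transferred German GPT-2 from this repository
generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-german")

# Generate a short continuation; the prompt is only an example
output = generator("Berlin ist die Hauptstadt von", max_new_tokens=20)
print(output[0]["generated_text"])
```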
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
| {"language": "de", "license": "mit"} | benjamin/gpt2-wechsel-german | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# gpt2-wechsel-swahili
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
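## How to use
The following is a minimal usage sketch with the `transformers` text-generation pipeline; the Swahili prompt and generation settings below are only illustrative.
```python
from transformers import pipeline

# Load the WECHSEL-transferred Swahili GPT-2 from this repository
generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-swahili")

# Generate a short continuation; the prompt is only an example
output = generator("Tanzania ni nchi", max_new_tokens=20)
print(output[0]["generated_text"])
```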
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
| {"language": "sw", "license": "mit"} | benjamin/gpt2-wechsel-swahili | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"sw",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# roberta-base-wechsel-chinese
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
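## How to use
The following is a minimal usage sketch with the `transformers` fill-mask pipeline; the mask token is read from the tokenizer rather than assumed, and the example sentence is only illustrative.
```python
from transformers import pipeline

# Load the WECHSEL-transferred Chinese RoBERTa from this repository
unmasker = pipeline("fill-mask", model="benjamin/roberta-base-wechsel-chinese")

# Read the mask token from the tokenizer instead of hard-coding it
mask = unmasker.tokenizer.mask_token
for prediction in unmasker(f"北京是中国的{mask}。"):
    print(prediction["token_str"], prediction["score"])
```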
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
| {"language": "zh", "license": "mit"} | benjamin/roberta-base-wechsel-chinese | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"zh",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# roberta-base-wechsel-french
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
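## How to use
The following is a minimal usage sketch with the `transformers` fill-mask pipeline; the mask token is read from the tokenizer rather than assumed, and the example sentence is only illustrative.
```python
from transformers import pipeline

# Load the WECHSEL-transferred French RoBERTa from this repository
unmasker = pipeline("fill-mask", model="benjamin/roberta-base-wechsel-french")

# Read the mask token from the tokenizer instead of hard-coding it
mask = unmasker.tokenizer.mask_token
for prediction in unmasker(f"Paris est la {mask} de la France."):
    print(prediction["token_str"], prediction["score"])
```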
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
| {"language": "fr", "license": "mit"} | benjamin/roberta-base-wechsel-french | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# roberta-base-wechsel-german
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
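## How to use
The following is a minimal usage sketch with the `transformers` fill-mask pipeline; the mask token is read from the tokenizer rather than assumed, and the example sentence is only illustrative.
```python
from transformers import pipeline

# Load the WECHSEL-transferred German RoBERTa from this repository
unmasker = pipeline("fill-mask", model="benjamin/roberta-base-wechsel-german")

# Read the mask token from the tokenizer instead of hard-coding it
mask = unmasker.tokenizer.mask_token
for prediction in unmasker(f"Berlin ist die {mask} von Deutschland."):
    print(prediction["token_str"], prediction["score"])
```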
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
| {"language": "de", "license": "mit"} | benjamin/roberta-base-wechsel-german | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# roberta-base-wechsel-swahili
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
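## How to use
The following is a minimal usage sketch with the `transformers` fill-mask pipeline; the mask token is read from the tokenizer rather than assumed, and the example sentence is only illustrative.
```python
from transformers import pipeline

# Load the WECHSEL-transferred Swahili RoBERTa from this repository
unmasker = pipeline("fill-mask", model="benjamin/roberta-base-wechsel-swahili")

# Read the mask token from the tokenizer instead of hard-coding it
mask = unmasker.tokenizer.mask_token
for prediction in unmasker(f"Nairobi ni mji mkuu wa {mask}."):
    print(prediction["token_str"], prediction["score"])
```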
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
| {"language": "sw", "license": "mit"} | benjamin/roberta-base-wechsel-swahili | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"sw",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers | {} | benjamin/roberta-large-wechsel-hindi | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | benjamin/roberta-large-wechsel-tamil | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | benjaminbeilharz/bart-base-empatheticdialogues | null | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | benjaminbeilharz/bert-base-uncased-dailydialog-turn-classifier | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |