pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1-900k) | metadata (stringlengths 2-438k) | id (stringlengths 5-122) | last_modified (null) | tags (sequencelengths 1-1.84k) | sha (null) | created_at (stringlengths 25-25) | arxiv (sequencelengths 0-201) | languages (sequencelengths 0-1.83k) | tags_str (stringlengths 17-9.34k) | text_str (stringlengths 0-389k) | text_lists (sequencelengths 0-722) | processed_texts (sequencelengths 1-723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fill-mask | transformers |
# HPLT Bert for Danish
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
# Note: this example loads the English sibling model; substitute "HPLT/hplt_bert_base_da" to use this card's model.
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
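Beyond masked language modeling, a task head can be loaded the same way. The snippet below is a minimal sketch, not part of the original card; the `num_labels` value is an arbitrary placeholder for a downstream task.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_da")
model = AutoModelForSequenceClassification.from_pretrained(
    "HPLT/hplt_bert_base_da",
    trust_remote_code=True,  # required: the classification head comes from the custom LTG-BERT code
    num_labels=2,            # hypothetical label count for an assumed fine-tuning task
)
```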
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["da"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_da | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"da",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:14:47+00:00 | [
"2403.14009"
] | [
"da"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #da #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Danish
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py'; you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelForMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModelForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Danish\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #da #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Danish\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for German
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
# Note: this example loads the English sibling model; substitute "HPLT/hplt_bert_base_de" to use this card's model.
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
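As a further illustration, the token-classification head listed above can be loaded analogously. This is a minimal sketch rather than part of the original card; the label count is a placeholder for whatever tagging task the model would be fine-tuned on.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_de")
model = AutoModelForTokenClassification.from_pretrained(
    "HPLT/hplt_bert_base_de",
    trust_remote_code=True,  # required for the custom LTG-BERT head implementations
    num_labels=9,            # hypothetical, e.g. a CoNLL-style NER tag set
)
```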
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["de"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_de | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"de",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:15:14+00:00 | [
"2403.14009"
] | [
"de"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #de #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for German
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py'; you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelForMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModelForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for German\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #de #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for German\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Greek
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
# Note: this example loads the English sibling model; substitute "HPLT/hplt_bert_base_el" to use this card's model.
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["el"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_el | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"el",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:15:37+00:00 | [
"2403.14009"
] | [
"el"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #el #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Greek
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py'; you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelForMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModelForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Greek\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #el #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Greek\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
null | null | GGUF-IQ-Imatrix quants for NLPark/Test1_SLIDE as requested in [#28](https://huggingface.co/Lewdiculous/Model-Requests/discussions/28).
> [!WARNING]
> Recommended presets [here](https://huggingface.co/Lewdiculous/Model-Requests/tree/main/data/presets/cope-llama-3-0.1) or [here](https://huggingface.co/Virt-io/SillyTavern-Presets). <br>
> Use the latest version of KoboldCpp. **Use the provided presets.** <br>
> This is all still highly experimental; modified configs were used to avoid the tokenizer issues.
"Due to the poor performance of Test0 in Asian Languages, we trained a new preview model."
"It's a merge of https://huggingface.co/NLPark/Test1_SLIDE , https://huggingface.co/vicgalle/Configurable-Llama-3-8B-v0.3"
"The chat template of our chat models is similar as Llama3."
 | {"license": "apache-2.0"} | Lewdiculous/Test2_SLIDE-GGUF-IQ-Imatrix | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-04-22T01:16:02+00:00 | [] | [] | TAGS
#gguf #license-apache-2.0 #region-us
| GGUF-IQ-Imatrix quants for NLPark/Test1_SLIDE as requested in #28.
> [!WARNING]
> Recommended presets here or here. <br>
> Use the latest version of KoboldCpp. Use the provided presets. <br>
> This is all still highly experimental; modified configs were used to avoid the tokenizer issues.
"Due to the poor performance of Test0 in Asian Languages, we trained a new preview model."
"It's a merge of URL , URL
"The chat template of our chat models is similar as Llama3."
!URL | [] | [
"TAGS\n#gguf #license-apache-2.0 #region-us \n"
] |
fill-mask | transformers |
# HPLT Bert for Esperanto
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
# Note: this example loads the English sibling model; substitute "HPLT/hplt_bert_base_eo" to use this card's model.
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["eo"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_eo | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"eo",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:16:06+00:00 | [
"2403.14009"
] | [
"eo"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #eo #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Esperanto
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py'; you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelForMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModelForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Esperanto\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #eo #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Esperanto\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ppo_zephyr1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
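For reference, the two total batch sizes listed above follow from the per-device settings; the arithmetic below is a sketch assuming the usual gradient-accumulation semantics of the Trainer, not something stated in the card.

```python
train_batch_size = 1
eval_batch_size = 8
num_devices = 8
gradient_accumulation_steps = 32

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices
print(total_train_batch_size, total_eval_batch_size)  # 256 64
```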
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "ppo_zephyr1", "results": []}]} | vwxyzjn/ppo_zephyr1 | null | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-22T01:16:21+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #mistral #text-generation #generated_from_trainer #conversational #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ppo_zephyr1
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"# ppo_zephyr1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #generated_from_trainer #conversational #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ppo_zephyr1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] |
fill-mask | transformers |
# HPLT Bert for Spanish
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
# Note: this example loads the English sibling model; substitute "HPLT/hplt_bert_base_es" to use this card's model.
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
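The masked-language-modeling head can also be driven through the high-level `pipeline` API. The snippet below is a sketch, not part of the original card; it assumes the custom LTG-BERT code works with the `fill-mask` pipeline, which has not been verified here, and the Spanish example sentence is illustrative only.

```python
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="HPLT/hplt_bert_base_es",
    trust_remote_code=True,  # required for the custom LTG-BERT wrapper
)
print(unmasker("Madrid es la[MASK] de España."))
```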
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["es"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_es | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"es",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:16:32+00:00 | [
"2403.14009"
] | [
"es"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #es #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Spanish
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py'; you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelForMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModelForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Spanish\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #es #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Spanish\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Estonian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
# Note: this example loads the English sibling model; substitute "HPLT/hplt_bert_base_et" to use this card's model.
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["et"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_et | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"et",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:16:56+00:00 | [
"2403.14009"
] | [
"et"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #et #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Estonian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py'; you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelForMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModelForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Estonian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #et #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Estonian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Basque
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
# Note: this example loads the English sibling model; substitute "HPLT/hplt_bert_base_eu" to use this card's model.
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["eu"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_eu | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"eu",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:17:19+00:00 | [
"2403.14009"
] | [
"eu"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #eu #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Basque
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py'; you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelForMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModelForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Basque\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #eu #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Basque\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
reinforcement-learning | null |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
  To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
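  For orientation, the core of a Unit 4-style Reinforce agent is a small policy network trained with the Monte-Carlo policy-gradient update sketched below. This is a generic, hypothetical sketch (layer sizes, helper names and hyper-parameters are illustrative and not taken from this repository); see the course materials linked above for the reference implementation.

```python
# Generic REINFORCE sketch (illustrative only; not the exact training code of this agent).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

class Policy(nn.Module):
    """Small MLP policy: state -> action probabilities."""
    def __init__(self, state_size, action_size, hidden_size=64):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, action_size)

    def act(self, state):
        state = torch.as_tensor(state, dtype=torch.float32).unsqueeze(0)
        probs = F.softmax(self.fc2(F.relu(self.fc1(state))), dim=-1)
        dist = Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """-sum_t log pi(a_t|s_t) * G_t, with normalized discounted returns."""
    returns, g = [], 0.0
    for r in reversed(rewards):        # discounted return from each timestep
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    return -(torch.cat(log_probs) * returns).sum()
```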
| {"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-Unit4", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "10.20 +/- 13.01", "name": "mean_reward", "verified": false}]}]}]} | Saraaaaaaaaa/Reinforce-Unit4 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-04-22T01:17:36+00:00 | [] | [] | TAGS
#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing Pixelcopter-PLE-v0
This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
| [
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
"TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
fill-mask | transformers |
# HPLT Bert for Persian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
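For downstream use, the other listed auto classes can be loaded from the same checkpoint in the usual way. The snippet below is a minimal, hedged sketch of sequence classification; `num_labels` and the input text are placeholders, and the classification head is randomly initialized until fine-tuned.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_fa")
# num_labels is a placeholder; the freshly initialized head needs fine-tuning
# before its predictions mean anything.
model = AutoModelForSequenceClassification.from_pretrained(
    "HPLT/hplt_bert_base_fa", trust_remote_code=True, num_labels=2
)

inputs = tokenizer("Sample text to classify.", return_tensors="pt")  # placeholder input
print(model(**inputs).logits.argmax(-1))
```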
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["fa"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_fa | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"fa",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:17:43+00:00 | [
"2403.14009"
] | [
"fa"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #fa #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Persian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', so you should load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelForMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModelForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Persian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #fa #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Persian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Finnish
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
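As a rough illustration of the plain `AutoModel` encoder, the sketch below mean-pools the final hidden states into a sentence vector; it assumes the custom code returns a standard `last_hidden_state`, which is worth verifying against `modeling_ltgbert.py`.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_fi")
model = AutoModel.from_pretrained("HPLT/hplt_bert_base_fi", trust_remote_code=True)

inputs = tokenizer("Tämä on esimerkkilause.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state        # (1, seq_len, 768)
mask = inputs.attention_mask.unsqueeze(-1)            # mask out padding tokens
embedding = (hidden * mask).sum(1) / mask.sum(1)      # mean pooling
print(embedding.shape)                                # torch.Size([1, 768])
```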
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["fi"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_fi | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"fi",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:18:04+00:00 | [
"2403.14009"
] | [
"fi"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #fi #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Finnish
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', so you should load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelForMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModelForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Finnish\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #fi #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Finnish\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
text-generation | null |
## Llamacpp iMatrix Quantizations of Meta-Llama-3-8B-Instruct
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2710">b2710</a> for quantization.
Original model: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
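If you are formatting prompts by hand (for example for a plain completion endpoint rather than a chat endpoint that applies the template for you), a small helper along these lines reproduces the template above; this is an illustrative sketch, not part of the release:

```python
def build_llama3_prompt(system_prompt: str, prompt: str) -> str:
    """Assemble a single-turn Llama 3 Instruct prompt exactly as shown above."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_llama3_prompt("You are a helpful assistant.", "Explain GGUF in one sentence."))
```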
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Meta-Llama-3-8B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Meta-Llama-3-8B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Meta-Llama-3-8B-Instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Meta-Llama-3-8B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Meta-Llama-3-8B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Meta-Llama-3-8B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Meta-Llama-3-8B-Instruct-IQ3_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Meta-Llama-3-8B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Meta-Llama-3-8B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Meta-Llama-3-8B-Instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Meta-Llama-3-8B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Meta-Llama-3-8B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Meta-Llama-3-8B-Instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Meta-Llama-3-8B-Instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Meta-Llama-3-8B-Instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Meta-Llama-3-8B-Instruct-IQ1_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Meta-Llama-3-8B-Instruct-IQ1_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
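To make the sizing rule concrete, here is a hypothetical helper that applies the guidance above to this card's table (file sizes copied from the table; the headroom value and the I-quant handling are simplifications of the advice, not an official tool):

```python
# Hypothetical helper: pick the largest quant that leaves ~1-2GB of headroom.
QUANTS = [  # (name, file size in GB) from the table above, largest first
    ("Q8_0", 8.54), ("Q6_K", 6.59), ("Q5_K_M", 5.73), ("Q5_K_S", 5.59),
    ("Q4_K_M", 4.92), ("Q4_K_S", 4.69), ("IQ4_NL", 4.67), ("IQ4_XS", 4.44),
    ("Q3_K_L", 4.32), ("Q3_K_M", 4.01), ("IQ3_M", 3.78), ("IQ3_S", 3.68),
    ("IQ3_XS", 3.51), ("IQ3_XXS", 3.27), ("Q2_K", 3.17), ("IQ2_M", 2.94),
]

def pick_quant(budget_gb, headroom_gb=1.5, allow_i_quants=True):
    # Set allow_i_quants=False on Vulkan, or if you prefer K-quants on CPU/Metal.
    for name, size in QUANTS:
        if name.startswith("IQ") and not allow_i_quants:
            continue
        if size <= budget_gb - headroom_gb:
            return name, size
    return QUANTS[-1]  # fall back to the smallest listed quant

print(pick_quant(8.0))   # e.g. 8GB of VRAM -> ('Q5_K_M', 5.73)
print(pick_quant(24.0))  # e.g. 24GB of VRAM -> ('Q8_0', 8.54)
```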
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
| {"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit", "quantized_by": "bartowski"} | nitsuai/Meta-Llama-3-8B-Instruct-GGUF | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"text-generation",
"en",
"license:other",
"region:us"
] | null | 2024-04-22T01:18:13+00:00 | [] | [
"en"
] | TAGS
#gguf #facebook #meta #pytorch #llama #llama-3 #text-generation #en #license-other #region-us
| Llamacpp iMatrix Quantizations of Meta-Llama-3-8B-Instruct
----------------------------------------------------------
Using <a href="URL release <a href="URL for quantization.
Original model: URL
All quants made using imatrix option with dataset provided by Kalomaze here
Prompt format
-------------
Download a file (not the whole branch) from below:
--------------------------------------------------
Which file should I choose?
---------------------------
A great write up with charts showing various performances is provided by Artefact2 here
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX\_K\_X', like Q5\_K\_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
URL feature matrix
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX\_X, like IQ3\_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: URL
| [] | [
"TAGS\n#gguf #facebook #meta #pytorch #llama #llama-3 #text-generation #en #license-other #region-us \n"
] |
fill-mask | transformers |
# HPLT Bert for French
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
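As one more hedged illustration, `AutoModelForTokenClassification` can be loaded the same way for tagging tasks such as NER; `num_labels` below is a placeholder and the head is untrained until fine-tuned.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_fr")
# num_labels is a placeholder; fine-tune before using the predictions.
model = AutoModelForTokenClassification.from_pretrained(
    "HPLT/hplt_bert_base_fr", trust_remote_code=True, num_labels=5
)

inputs = tokenizer("Paris est la capitale de la France.", return_tensors="pt")
print(model(**inputs).logits.argmax(-1))  # one label id per (sub)token
```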
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["fr"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_fr | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"fr",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:18:33+00:00 | [
"2403.14009"
] | [
"fr"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #fr #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for French
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', so you should load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelForMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModelForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for French\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #fr #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for French\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Irish
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
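Similarly, a hedged sketch of extractive question answering with `AutoModelForQuestionAnswering` is shown below; the question and context strings are placeholders, and the span-prediction head is randomly initialized until fine-tuned on a QA dataset.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_ga")
model = AutoModelForQuestionAnswering.from_pretrained(
    "HPLT/hplt_bert_base_ga", trust_remote_code=True
)

question, context = "How many models?", "The release contains 75 models."  # placeholders
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)
start = out.start_logits.argmax(-1).item()
end = out.end_logits.argmax(-1).item()
print(tokenizer.decode(inputs.input_ids[0][start : end + 1]))
```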
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["ga"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_ga | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ga",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:19:07+00:00 | [
"2403.14009"
] | [
"ga"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ga #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Irish
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', so you should load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelForMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModelForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Irish\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ga #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Irish\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
text-generation | null |
## Llamacpp Quantizations of Meta-Llama-3-70B-Instruct
Since official Llama 3 support has arrived in a llama.cpp release, I will be remaking this entirely and uploading it as soon as it's done.
This model has the <|eot_id|> token set to not-special, which seems to work better with current inference engines.
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> fork from pcuenca <a href="https://github.com/pcuenca/llama.cpp/tree/llama3-conversion">llama3-conversion</a> for quantization.
Original model: https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
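One way to exercise this template without formatting it by hand is through the llama-cpp-python bindings; the sketch below is illustrative (file path, context size and GPU layer count are placeholders), and it assumes the chat template embedded in the GGUF is picked up (otherwise pass `chat_format="llama-3"`).

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="Meta-Llama-3-70B-Instruct-Q4_K_M.gguf",  # placeholder path
    n_ctx=8192,
    n_gpu_layers=-1,  # offload as many layers as fit; lower this on smaller GPUs
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Llama 3 prompt format."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```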
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Meta-Llama-3-70B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q5_K_M.gguf) | Q5_K_M | 49.94GB | High quality, *recommended*. |
| [Meta-Llama-3-70B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q5_K_S.gguf) | Q5_K_S | 48.65GB | High quality, *recommended*. |
| [Meta-Llama-3-70B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q4_K_M.gguf) | Q4_K_M | 42.52GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Meta-Llama-3-70B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q4_K_S.gguf) | Q4_K_S | 40.34GB | Slightly lower quality with more space savings, *recommended*. |
| [Meta-Llama-3-70B-Instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ4_NL.gguf) | IQ4_NL | 40.34GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Meta-Llama-3-70B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ4_XS.gguf) | IQ4_XS | 38.26GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Meta-Llama-3-70B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q3_K_L.gguf) | Q3_K_L | 37.14GB | Lower quality but usable, good for low RAM availability. |
| [Meta-Llama-3-70B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q3_K_M.gguf) | Q3_K_M | 34.26GB | Even lower quality. |
| [Meta-Llama-3-70B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ3_M.gguf) | IQ3_M | 31.93GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Meta-Llama-3-70B-Instruct-IQ3_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ3_S.gguf) | IQ3_S | 30.91GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Meta-Llama-3-70B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q3_K_S.gguf) | Q3_K_S | 30.91GB | Low quality, not recommended. |
| [Meta-Llama-3-70B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ3_XS.gguf) | IQ3_XS | 29.30GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Meta-Llama-3-70B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q2_K.gguf) | Q2_K | 26.37GB | Very low quality but surprisingly usable. |
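If you prefer to script the download, here is a hedged sketch using the `huggingface_hub` Python client to grab a single file from the table above (the chosen filename is just an example pick):
```python
# Download a single quant file rather than cloning the whole repository.
# Swap in any filename from the table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Meta-Llama-3-70B-Instruct-GGUF",
    filename="Meta-Llama-3-70B-Instruct-Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```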
## Which file should I choose?
A great write-up with charts showing various performance comparisons is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
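As a rough illustration of that rule of thumb, here is a hypothetical helper that picks the largest quant fitting a given memory budget (file sizes are copied from the table above; the 48 GB figure is a placeholder):
```python
# Sketch of the sizing heuristic: keep ~1.5 GB of headroom, then take the
# largest quant that still fits. Sizes (GB) are from the table above.
QUANT_SIZES_GB = {
    "Q5_K_M": 49.94, "Q5_K_S": 48.65, "Q4_K_M": 42.52, "Q4_K_S": 40.34,
    "IQ4_XS": 38.26, "Q3_K_L": 37.14, "Q3_K_M": 34.26, "IQ3_M": 31.93,
    "IQ3_XS": 29.30, "Q2_K": 26.37,
}

def pick_quant(memory_gb: float, headroom_gb: float = 1.5) -> str:
    budget = memory_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget}
    # The largest file that fits is the highest quality within the budget.
    return max(fitting, key=fitting.get) if fitting else "no listed quant fits"

print(pick_quant(48.0))  # example: a 48 GB budget -> Q4_K_M
```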
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
| {"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit", "quantized_by": "bartowski"} | nitsuai/Meta-Llama-3-70B-Instruct-GGUF | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"text-generation",
"en",
"license:other",
"region:us"
] | null | 2024-04-22T01:19:26+00:00 | [] | [
"en"
] | TAGS
#gguf #facebook #meta #pytorch #llama #llama-3 #text-generation #en #license-other #region-us
| Llamacpp Quantizations of Meta-Llama-3-70B-Instruct
---------------------------------------------------
Since official Llama 3 support has arrived to URL release, I will be remaking this entirely and uploading as soon as it's done.
This model has the <|eot\_id|> token set to not-special, which seems to work better with current inference engines.
Using <a href="URL fork from pcuenca <a href="URL for quantization.
Original model: URL
Prompt format
-------------
Download a file (not the whole branch) from below:
--------------------------------------------------
Which file should I choose?
---------------------------
A great write up with charts showing various performances is provided by Artefact2 here
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX\_K\_X', like Q5\_K\_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
URL feature matrix
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX\_X, like IQ3\_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: URL
| [] | [
"TAGS\n#gguf #facebook #meta #pytorch #llama #llama-3 #text-generation #en #license-other #region-us \n"
] |
fill-mask | transformers |
# HPLT Bert for Galician
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used the modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
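For instance, here is a minimal sketch of loading the sequence-classification head for fine-tuning; it assumes the custom configuration honors the standard `num_labels` argument, and the label count and example sentence are placeholders:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical two-label setup; the classification head starts untrained.
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_gl")
model = AutoModelForSequenceClassification.from_pretrained(
    "HPLT/hplt_bert_base_gl", trust_remote_code=True, num_labels=2
)

inputs = tokenizer("Exemplo de frase en galego.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (1, 2): one score per label, meaningful only after fine-tuning
```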
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["gl"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_gl | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"gl",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:19:30+00:00 | [
"2403.14009"
] | [
"gl"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #gl #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Galician
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Galician\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #gl #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Galician\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Gujarati
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used the modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
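For instance, here is a minimal sketch of loading the token-classification head for a tagging task; it assumes the custom configuration honors the standard `num_labels` argument, and the label count and input sentence are placeholders:
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical three-tag setup; the token-classification head starts untrained.
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_gu")
model = AutoModelForTokenClassification.from_pretrained(
    "HPLT/hplt_bert_base_gu", trust_remote_code=True, num_labels=3
)

inputs = tokenizer("This is a placeholder sentence.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (1, sequence_length, 3): one score per token per tag
```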
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["gu"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_gu | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"gu",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:19:55+00:00 | [
"2403.14009"
] | [
"gu"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #gu #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Gujarati
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Gujarati\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #gu #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Gujarati\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Serbo-Croatian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used the modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
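As another illustration, here is a sketch that pulls sentence embeddings from the bare encoder via mean pooling; it assumes the custom wrapper follows the standard `last_hidden_state` output convention, and the example sentence is a placeholder:
```python
import torch
from transformers import AutoTokenizer, AutoModel

# Mean-pool the final hidden states over real (non-padding) tokens.
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_hbs")
model = AutoModel.from_pretrained("HPLT/hplt_bert_base_hbs", trust_remote_code=True)

inputs = tokenizer("Ovo je primjer rečenice.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
mask = inputs["attention_mask"].unsqueeze(-1)
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)  # (1, 768)
```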
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["hbs"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_hbs | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"hbs",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:20:19+00:00 | [
"2403.14009"
] | [
"hbs"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #hbs #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Serbo-Croatian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Serbo-Croatian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #hbs #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Serbo-Croatian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
null | null |
```
e88 88e d8
d888 888b 8888 8888 ,"Y88b 888 8e d88
C8888 8888D 8888 8888 "8" 888 888 88b d88888
Y888 888P Y888 888P ,ee 888 888 888 888
"88 88" "88 88" "88 888 888 888 888
b
8b,
e88'Y88 d8 888
d888 'Y ,"Y88b 888,8, d88 ,e e, 888
C8888 "8" 888 888 " d88888 d88 88b 888
Y888 ,d ,ee 888 888 888 888 , 888
"88,d88 "88 888 888 888 "YeeP" 888
PROUDLY PRESENTS
```
# Llama-3-8B-Instruct-DADA-exl2-rpcal
Quantized using 200 samples of 8192 tokens from an RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset.
Branches:
- `main` -- `measurement.json`
- `8b8h` -- 8bpw, 8bit lm_head
- `6b6h` -- 6bpw, 6bit lm_head
- `4b6h` -- 4bpw, 6bit lm_head
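To fetch just one of the branches above instead of the default `main`, here is a hedged sketch with the `huggingface_hub` Python client (the branch and target directory are example choices):
```python
# Download a single quantization branch rather than the default `main`.
# The 6bpw branch and local path are example choices.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Quant-Cartel/Llama-3-8B-Instruct-DADA-exl2-rpcal",
    revision="6b6h",  # one of the branches listed above
    local_dir="Llama-3-8B-Instruct-DADA-6b6h",
)
```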
Original model link: [Envoid/Llama-3-8B-Instruct-DADA](https://huggingface.co/Envoid/Llama-3-8B-Instruct-DADA)
Original model README below.
-----
## Llama-3-8B-Instruct-DADA

# Warning: This model is experimental and thus potentially unpredictable.
This model employs the same strategy as [Mixtral Instruct ITR DADA](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-DADA-8x7B)
I trained [Llama-3-8B-Instruct](meta-llama/Meta-Llama-3-8B-Instruct) on the Alpaca-DADA dataset for 10 epochs at a 1e-6 learning rate.
I then did a 50/50 SLERP merge of the resulting model back onto Llama-3-8B-Instruct.
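For intuition, here is a toy sketch of what a 50/50 SLERP does to a single pair of weight tensors; this is only an illustration of the interpolation, not the merge tooling that was actually used:
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_dir @ b_dir, -1.0, 1.0))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    out = (torch.sin((1.0 - t) * omega) / sin_omega) * a_flat + (
        torch.sin(t * omega) / sin_omega
    ) * b_flat
    return out.reshape(a.shape)

# Toy 50/50 merge of one weight matrix from each model (random stand-ins).
w_instruct, w_dada = torch.randn(4, 4), torch.randn(4, 4)
merged = slerp(0.5, w_instruct, w_dada)
print(merged.shape)  # torch.Size([4, 4])
```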
This model may require custom stopping strings to tame it, due to current issues surrounding Llama-3 EOS tokens with various back-ends.
It certainly gives some interesting answers using an assistant template/card in SillyTavern, though.
The below answer is one of the more interesting answers I've gotten out of an LLM on the same query, although there was an indentation error (indicated by the red circle).

Training was done using [qlora-pipe](https://github.com/tdrussell/qlora-pipe)
[GGUFs care of Quant Cartel](https://huggingface.co/Quant-Cartel/Llama-3-8B-Instruct-DADA-iMat-GGUF) | {"license": "cc-by-nc-4.0"} | Quant-Cartel/Llama-3-8B-Instruct-DADA-exl2-rpcal | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-22T01:20:37+00:00 | [] | [] | TAGS
#license-cc-by-nc-4.0 #region-us
|
# Llama-3-8B-Instruct-DADA-exl2-rpcal
Quantized using 200 samples of 8192 tokens from an RP-oriented PIPPA dataset.
Branches:
- 'main' -- 'URL'
- '8b8h' -- 8bpw, 8bit lm_head
- '6b6h' -- 6bpw, 6bit lm_head
- '4b6h' -- 4bpw, 6bit lm_head
Original model link: Envoid/Llama-3-8B-Instruct-DADA
Original model README below.
-----
## Llama-3-8B-Instruct-DADA

# HPLT Bert for Hebrew

<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>

This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used the modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
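For instance, here is a minimal sketch of loading the question-answering head; the span-prediction head starts untrained, so the indices below are meaningless until the model is fine-tuned, and the question/context pair is a placeholder:
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Hypothetical extractive-QA setup with an untrained span-prediction head.
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_he")
model = AutoModelForQuestionAnswering.from_pretrained(
    "HPLT/hplt_bert_base_he", trust_remote_code=True
)

question = "Who released the model?"
context = "The model was released by the HPLT project."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
print(start, end)  # predicted answer span indices; only meaningful after fine-tuning
```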
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["he"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_he | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"he",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:20:45+00:00 | [
"2403.14009"
] | [
"he"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #he #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Hebrew
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Hebrew\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #he #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Hebrew\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | epiverseai/test1 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T01:21:09+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
fill-mask | transformers |
# HPLT Bert for Hindi
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
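As a brief sketch (not taken from the HPLT report), any of these heads can be loaded in the same way; the `num_labels` value and the example sentence below are placeholders:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "HPLT/hplt_bert_base_hi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The custom LTG-BERT wrapper still requires trust_remote_code=True.
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, num_labels=2, trust_remote_code=True
)

inputs = tokenizer("यह एक उदाहरण वाक्य है।", return_tensors="pt")
logits = model(**inputs).logits  # classification head is randomly initialised, ready for fine-tuning
```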
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["hi"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_hi | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"hi",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:21:12+00:00 | [
"2403.14009"
] | [
"hi"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #hi #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Hindi
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Hindi\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #hi #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Hindi\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Hungarian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["hu"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_hu | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"hu",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:21:38+00:00 | [
"2403.14009"
] | [
"hu"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #hu #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Hungarian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Hungarian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #hu #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Hungarian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
text-generation | transformers |
## Llamacpp iMatrix Quantizations of llama-3-neural-chat-v1-8b
This model has the <|eot_id|> token set to not-special, which seems to work better with current inference engines.
Using pcuenca's <a href="https://github.com/pcuenca/llama.cpp/tree/llama3-conversion">llama3-conversion</a> fork of <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> for quantization.
Original model: https://huggingface.co/Locutusque/llama-3-neural-chat-v1-8b
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
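As a minimal sketch of applying this format (any GGUF-capable runtime works; the `llama-cpp-python` bindings, the filename and the prompts below are just one assumed setup):

```python
from llama_cpp import Llama

# Point model_path at one of the quant files from the table below.
llm = Llama(model_path="llama-3-neural-chat-v1-8b-Q4_K_M.gguf", n_ctx=8192)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
# Stop on the end-of-turn marker so the reply does not run into a new turn.
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```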
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-3-neural-chat-v1-8b-Q8_0.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [llama-3-neural-chat-v1-8b-Q6_K.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [llama-3-neural-chat-v1-8b-Q5_K_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [llama-3-neural-chat-v1-8b-Q5_K_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [llama-3-neural-chat-v1-8b-Q4_K_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [llama-3-neural-chat-v1-8b-Q4_K_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [llama-3-neural-chat-v1-8b-IQ4_NL.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [llama-3-neural-chat-v1-8b-IQ4_XS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [llama-3-neural-chat-v1-8b-Q3_K_L.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [llama-3-neural-chat-v1-8b-Q3_K_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [llama-3-neural-chat-v1-8b-IQ3_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [llama-3-neural-chat-v1-8b-IQ3_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [llama-3-neural-chat-v1-8b-Q3_K_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [llama-3-neural-chat-v1-8b-IQ3_XS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [llama-3-neural-chat-v1-8b-IQ3_XXS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [llama-3-neural-chat-v1-8b-Q2_K.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [llama-3-neural-chat-v1-8b-IQ2_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [llama-3-neural-chat-v1-8b-IQ2_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [llama-3-neural-chat-v1-8b-IQ2_XS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [llama-3-neural-chat-v1-8b-IQ2_XXS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [llama-3-neural-chat-v1-8b-IQ1_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [llama-3-neural-chat-v1-8b-IQ1_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
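To fetch a single quant from the table without cloning the whole repository, one option (a sketch, not the only way) is `hf_hub_download` from the `huggingface_hub` library; swap in whichever filename you picked:

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/llama-3-neural-chat-v1-8b-GGUF",
    filename="llama-3-neural-chat-v1-8b-Q4_K_M.gguf",  # any filename from the table above
    local_dir=".",
)
print(path)  # local path to the downloaded GGUF file
```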
## Which file should I choose?
A great write-up with charts comparing the performance of the various quant types is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
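As a toy illustration of that sizing rule (the sizes are copied from the table above; the ~1.5GB headroom figure is only an assumption):

```python
# Largest quant whose file still fits in (available memory - headroom).
QUANT_SIZES_GB = {
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q4_K_M": 4.92,
    "IQ4_XS": 4.44, "Q3_K_M": 4.01, "IQ3_M": 3.78, "Q2_K": 3.17, "IQ1_S": 2.01,
}

def pick_quant(memory_gb: float, headroom_gb: float = 1.5) -> str:
    budget = memory_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget}
    return max(fitting, key=fitting.get) if fitting else "IQ1_S"

print(pick_quant(8.0))   # 8GB of VRAM -> "Q5_K_M"
print(pick_quant(24.0))  # plenty of room -> "Q8_0"
```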
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD GPUs, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
| {"license": "other", "library_name": "transformers", "datasets": ["mlabonne/orpo-dpo-mix-40k", "Open-Orca/SlimOrca-Dedup", "jondurbin/airoboros-3.2", "microsoft/orca-math-word-problems-200k", "m-a-p/Code-Feedback", "MaziyarPanahi/WizardLM_evol_instruct_V2_196k"], "base_model": "meta-llama/Meta-Llama-3-8B", "quantized_by": "bartowski", "pipeline_tag": "text-generation"} | nitsuai/llama-3-neural-chat-v1-8b-GGUF | null | [
"transformers",
"gguf",
"text-generation",
"dataset:mlabonne/orpo-dpo-mix-40k",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:jondurbin/airoboros-3.2",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/Code-Feedback",
"dataset:MaziyarPanahi/WizardLM_evol_instruct_V2_196k",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T01:21:42+00:00 | [] | [] | TAGS
#transformers #gguf #text-generation #dataset-mlabonne/orpo-dpo-mix-40k #dataset-Open-Orca/SlimOrca-Dedup #dataset-jondurbin/airoboros-3.2 #dataset-microsoft/orca-math-word-problems-200k #dataset-m-a-p/Code-Feedback #dataset-MaziyarPanahi/WizardLM_evol_instruct_V2_196k #base_model-meta-llama/Meta-Llama-3-8B #license-other #endpoints_compatible #region-us
| Llamacpp iMatrix Quantizations of llama-3-neural-chat-v1-8b
-----------------------------------------------------------
This model has the <|eot\_id|> token set to not-special, which seems to work better with current inference engines.
Using <a href="URL fork from pcuenca <a href="URL for quantization.
Original model: URL
All quants made using imatrix option with dataset provided by Kalomaze here
Prompt format
-------------
Download a file (not the whole branch) from below:
--------------------------------------------------
Which file should I choose?
---------------------------
A great write up with charts showing various performances is provided by Artefact2 here
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX\_K\_X', like Q5\_K\_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
URL feature matrix
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX\_X, like IQ3\_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: URL
| [] | [
"TAGS\n#transformers #gguf #text-generation #dataset-mlabonne/orpo-dpo-mix-40k #dataset-Open-Orca/SlimOrca-Dedup #dataset-jondurbin/airoboros-3.2 #dataset-microsoft/orca-math-word-problems-200k #dataset-m-a-p/Code-Feedback #dataset-MaziyarPanahi/WizardLM_evol_instruct_V2_196k #base_model-meta-llama/Meta-Llama-3-8B #license-other #endpoints_compatible #region-us \n"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
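Expressed in code, the config above corresponds roughly to the following sketch; the base model and adapter IDs are taken from this repository's metadata, and the rest is an assumed, not verified, loading recipe:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror of the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

# Attach the fine-tuned adapter from this repository on top of the base model.
model = PeftModel.from_pretrained(
    base, "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Gpt4_t1_tiny_Seed103"
)
```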
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Gpt4_t1_tiny_Seed103 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-22T01:21:59+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
fill-mask | transformers |
# HPLT Bert for Armenian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["hy"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_hy | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"hy",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:22:02+00:00 | [
"2403.14009"
] | [
"hy"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #hy #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Armenian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Armenian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #hy #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Armenian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Gpt4_t1_tiny_Seed103 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-22T01:22:04+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | relu-ntnu/bart-large-cnn_v4_trained_on_1000_lr_1e-4 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T01:22:19+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
fill-mask | transformers |
# HPLT Bert for Indonesian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModeltForMultipleChoice`.
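As a minimal sketch, any of these heads can be loaded the same way; for example, a sequence-classification head for a hypothetical two-label downstream task:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_id")
# trust_remote_code is still required because the head builds on the custom LTG-BERT implementation
model = AutoModelForSequenceClassification.from_pretrained(
    "HPLT/hplt_bert_base_id",
    trust_remote_code=True,
    num_labels=2,  # hypothetical label count for the downstream task
)
```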
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["id"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_id | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"id",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:22:26+00:00 | [
"2403.14009"
] | [
"id"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #id #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Indonesian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Indonesian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #id #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Indonesian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Icelandic
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModeltForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["is"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_is | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"is",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:22:54+00:00 | [
"2403.14009"
] | [
"is"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #is #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Icelandic
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Icelandic\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #is #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Icelandic\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Italian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModeltForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["it"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_it | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"it",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:23:18+00:00 | [
"2403.14009"
] | [
"it"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #it #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Italian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Italian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #it #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Italian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Japanese
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModeltForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["ja"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_ja | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ja",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:23:44+00:00 | [
"2403.14009"
] | [
"ja"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ja #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Japanese
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Japanese\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ja #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Japanese\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Georgian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModeltForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["ka"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_ka | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ka",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:24:12+00:00 | [
"2403.14009"
] | [
"ka"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ka #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Georgian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Georgian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ka #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Georgian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
visual-question-answering | transformers |
[Comes with two line fixes for multi-GPU inference](https://github.com/OpenGVLab/InternVL/issues/96)
# Original Model Card for InternVL-Chat-V1.5
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/D60YzQBIzvoCvLRp2gZ0A.jpeg" alt="Image Description" width="300" height="300" />
</p>
> _Two interns holding hands, symbolizing the integration of InternViT and InternLM._
\[[Paper](https://arxiv.org/abs/2312.14238)\] \[[GitHub](https://github.com/OpenGVLab/InternVL)\] \[[Chat Demo](https://internvl.opengvlab.com/)\] \[[中文解读](https://zhuanlan.zhihu.com/p/675877376)]
We introduce InternVL 1.5, an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding.
We introduce three simple designs:
1. Strong Vision Encoder: we explored a continuous learning strategy for the large-scale vision foundation model---InternViT-6B, boosting its visual understanding capabilities and making it transferable and reusable across different LLMs.
2. Dynamic High-Resolution: we divide images into 1 to 40 tiles of 448 × 448 pixels according to the aspect ratio and resolution of the input images, which supports inputs up to 4K resolution.
3. High-Quality Bilingual Dataset: we carefully collected a high-quality bilingual dataset that covers common scenes and document images, and annotated it with English and Chinese question-answer pairs, significantly enhancing performance in OCR- and Chinese-related tasks.
## Model Details
- **Model Type:** multimodal large language model (MLLM)
- **Model Stats:**
- Architecture: [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) + MLP + [InternLM2-Chat-20B](https://huggingface.co/internlm/internlm2-chat-20b)
- Image size: dynamic resolution, up to 40 tiles of 448 x 448 (4K resolution).
- Params: 25.5B
- **Training Strategy:**
- Pretraining Stage
- Learnable Component: ViT + MLP
- Data: Please see our technical report.
- SFT Stage
- Learnable Component: ViT + MLP + LLM
- Data: Please see our technical report.
## Released Models
| Model | Vision Foundation Model | Release Date |Note |
| :---------------------------------------------------------:|:--------------------------------------------------------------------------: |:----------------------:| :---------------------------------- |
| InternVL-Chat-V1.5(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5)) | InternViT-6B-448px-V1-5(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5)) |2024.04.18 | support 4K image; super strong OCR; Approaching the performance of GPT-4V and Gemini Pro on various benchmarks like MMMU, DocVQA, ChartQA, MathVista, etc. (🔥new)|
| InternVL-Chat-V1.2-Plus(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2-Plus) ) |InternViT-6B-448px-V1-2(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) |2024.02.21 | more SFT data and stronger performance |
| InternVL-Chat-V1.2(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2) ) |InternViT-6B-448px-V1-2(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) |2024.02.11 | scaling up LLM to 34B |
| InternVL-Chat-V1.1(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-1)) |InternViT-6B-448px-V1-0(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0)) |2024.01.24 | support Chinese and stronger OCR |
## Performance


## Examples











## Model Usage
We provide example code to run InternVL-Chat-V1.5 using `transformers`.
You can also use our [online demo](https://internvl.opengvlab.com/) to quickly try out this model.
```python
import json
import os
from transformers import AutoTokenizer, AutoModel
from tqdm import tqdm
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
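    # Standard ImageNet-style preprocessing: force RGB, bicubic-resize to a square of input_size, tensorize, normalize.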
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
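    # Pick the candidate (cols, rows) grid whose aspect ratio is closest to the image's;
    # on ties, prefer the larger grid when the image has enough pixels to fill it.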
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
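# Resize the image to the chosen grid, crop it into image_size x image_size tiles,
# and optionally append a low-resolution thumbnail of the whole image.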
def dynamic_preprocess(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
    # enumerate the candidate tile grids (columns x rows) allowed by min_num and max_num
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
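# Convenience wrapper: open an image, tile it (plus thumbnail), and return a
# (num_tiles, 3, input_size, input_size) tensor ready to feed to the model.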
def load_image(image_file, input_size=448, max_num=6):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
path = "OpenGVLab/InternVL-Chat-V1-5"
# If you have an 80G A100 GPU, you can put the entire model on a single GPU.
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True).eval().cuda()
# Otherwise, you need to set device_map='auto' to use multiple GPUs for inference.
# model = AutoModel.from_pretrained(
# path,
# torch_dtype=torch.bfloat16,
# low_cpu_mem_usage=True,
# trust_remote_code=True,
# device_map='auto').eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=6).to(torch.bfloat16).cuda()
generation_config = dict(
num_beams=1,
max_new_tokens=512,
do_sample=False,
)
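# num_beams=1 with do_sample=False corresponds to plain greedy decoding, capped at 512 new tokens.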
# single-round single-image conversation
question = "请详细描述图片"
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(question, response)
# multi-round single-image conversation
question = "请详细描述图片"
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(question, response)
question = "请根据图片写一首诗"
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(question, response)
# multi-round multi-image conversation
pixel_values1 = load_image('./examples/image1.jpg', max_num=6).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=6).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
question = "详细描述这两张图片"
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(question, response)
# 第一张图片是一只红熊猫,它有着独特的橙红色皮毛,脸部、耳朵和四肢的末端有白色斑块。红熊猫的眼睛周围有深色的环,它的耳朵是圆形的,上面有白色的毛。它正坐在一个木制的结构上,看起来像是一个平台或休息的地方。背景中有树木和竹子,这表明红熊猫可能在一个模拟自然环境的动物园或保护区内。
#
# 第二张图片是一只大熊猫,它是中国的国宝,以其黑白相间的皮毛而闻名。大熊猫的眼睛、耳朵和四肢的末端是黑色的,而它的脸部、耳朵内侧和身体其他部分是白色的。大熊猫正坐在地上,周围有竹子,这是它们的主要食物来源。背景中也有树木,这表明大熊猫可能在一个为它们提供自然栖息地模拟的动物园或保护区内。
question = "这两张图片的相同点和区别分别是什么"
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(question, response)
# 这两张图片的相同点:
#
# 1. 都展示了熊猫,这是两种不同的熊猫物种。
# 2. 熊猫都处于一个看起来像是模拟自然环境的场所,可能是动物园或保护区。
# 3. 熊猫周围都有竹子,这是它们的主要食物来源。
#
# 这两张图片的区别:
#
# 1. 熊猫的种类不同:第一张图片是一只红熊猫,第二张图片是一只大熊猫。
# 2. 熊猫的皮毛颜色和图案不同:红熊猫的皮毛是橙红色,脸部、耳朵和四肢的末端有白色斑块;而大熊猫的皮毛是黑白相间的,眼睛、耳朵和四肢的末端是黑色的,脸部、耳朵内侧和身体其他部分是白色的。
# 3. 熊猫的姿态和位置不同:红熊猫坐在一个木制的结构上,而大熊猫坐在地上。
# 4. 背景中的植被和环境细节略有不同,但都包含树木和竹子。
```
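Since this fork is labeled "quantable", a quantized load may also be of interest when the bf16 weights do not fit on your GPUs. The snippet below is only a hedged sketch based on the standard `bitsandbytes` integration in `transformers` (nothing in it is specific to this repository), so please verify it on your own setup:

```python
# Hedged sketch: 8-bit loading via bitsandbytes to reduce GPU memory.
# `load_in_8bit=True` and `device_map='auto'` are standard transformers options;
# whether InternVL's custom modeling code tolerates them should be checked locally.
model = AutoModel.from_pretrained(
    path,
    load_in_8bit=True,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map='auto').eval()
```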
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{chen2023internvl,
title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
journal={arXiv preprint arXiv:2312.14238},
year={2023}
}
```
## License
This project is released under the MIT license.
## Acknowledgement
InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work! | {"license": "mit", "datasets": ["laion/laion2B-en", "laion/laion-coco", "laion/laion2B-multi", "kakaobrain/coyo-700m", "conceptual_captions", "wanng/wukong100m"], "pipeline_tag": "visual-question-answering"} | failspy/InternVL-Chat-V1-5-quantable | null | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"visual-question-answering",
"custom_code",
"dataset:laion/laion2B-en",
"dataset:laion/laion-coco",
"dataset:laion/laion2B-multi",
"dataset:kakaobrain/coyo-700m",
"dataset:conceptual_captions",
"dataset:wanng/wukong100m",
"arxiv:2312.14238",
"license:mit",
"region:us"
] | null | 2024-04-22T01:24:14+00:00 | [
"2312.14238"
] | [] | TAGS
#transformers #safetensors #internvl_chat #feature-extraction #visual-question-answering #custom_code #dataset-laion/laion2B-en #dataset-laion/laion-coco #dataset-laion/laion2B-multi #dataset-kakaobrain/coyo-700m #dataset-conceptual_captions #dataset-wanng/wukong100m #arxiv-2312.14238 #license-mit #region-us
| Comes with two line fixes for multi-GPUs
Original Model Card for InternVL-Chat-V1.5
==========================================

>
> *Two interns holding hands, symbolizing the integration of InternViT and InternLM.*
>
>
>
[Paper] [GitHub] [Chat Demo] [中文解读]
We introduce InternVL 1.5, an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding.
We introduce three simple designs:
1. Strong Vision Encoder: we explored a continuous learning strategy for the large-scale vision foundation model---InternViT-6B, boosting its visual understanding capabilities, and making it can be transferred and reused in different LLMs.
2. Dynamic High-Resolution: we divide images into tiles ranging from 1 to 40 of 448 × 448 pixels according to the aspect ratio and resolution of the input images, which supports up to 4K resolution input.
3. High-Quality Bilingual Dataset: we carefully collected a high-quality bilingual dataset that covers common scenes, document images, and annotated them with English and Chinese question-answer pairs, significantly enhancing performance in OCR- and Chinese-related tasks.
Model Details
-------------
* Model Type: multimodal large language model (MLLM)
* Model Stats:
+ Architecture: InternViT-6B-448px-V1-5 + MLP + InternLM2-Chat-20B
+ Image size: dynamic resolution, max to 40 tiles of 448 x 448 (4K resolution).
+ Params: 25.5B
* Training Strategy:
+ Pretraining Stage
- Learnable Component: ViT + MLP
- Data: Please see our technical report.
+ SFT Stage
- Learnable Component: ViT + MLP + LLM
- Data: Please see our technical report.
Released Models
---------------
Performance
-----------
!image/png
!image/png
Examples
--------
!image/png
!image/png
!image/png
!image/png
!image/png
!image/png
!image/png
!image/png
!image/png
!image/png
!image/png
Model Usage
-----------
We provide an example code to run InternVL-Chat-V1.5 using 'transformers'.
You also can use our online demo for a quick experience of this model.
If you find this project useful in your research, please consider citing:
License
-------
This project is released under the MIT license.
Acknowledgement
---------------
InternVL is built with reference to the code of the following projects: OpenAI CLIP, Open CLIP, CLIP Benchmark, EVA, InternImage, ViT-Adapter, MMSegmentation, Transformers, DINOv2, BLIP-2, Qwen-VL, and LLaVA-1.5. Thanks for their awesome work!
| [] | [
"TAGS\n#transformers #safetensors #internvl_chat #feature-extraction #visual-question-answering #custom_code #dataset-laion/laion2B-en #dataset-laion/laion-coco #dataset-laion/laion2B-multi #dataset-kakaobrain/coyo-700m #dataset-conceptual_captions #dataset-wanng/wukong100m #arxiv-2312.14238 #license-mit #region-us \n"
] |
fill-mask | transformers |
# HPLT Bert for Kazakh
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["kk"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_kk | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"kk",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:24:41+00:00 | [
"2403.14009"
] | [
"kk"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #kk #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Kazakh
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Kazakh\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #kk #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Kazakh\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Kannada
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["kn"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_kn | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"kn",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:25:08+00:00 | [
"2403.14009"
] | [
"kn"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #kn #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Kannada
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Kannada\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #kn #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Kannada\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Korean
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["ko"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_ko | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ko",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:25:31+00:00 | [
"2403.14009"
] | [
"ko"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ko #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Korean
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Korean\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ko #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Korean\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged sketch of these values as `TrainingArguments` follows the list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
- mixed_precision_training: Native AMP
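
For reference, here is a minimal sketch of how the values above could be expressed as `transformers.TrainingArguments`. This is an assumption for illustration only; the actual training script is not part of this card, and names such as `output_dir` are placeholders:

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="outputs",               # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,      # total train batch size: 4 * 4 = 16
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=10,
    seed=42,
    fp16=True,                          # mixed_precision_training: Native AMP
)
```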
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "outputs", "results": []}]} | franknnind/outputs | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-22T01:25:31+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
|
# outputs
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# outputs\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- training_steps: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n",
"# outputs\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- training_steps: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers | # About this model
This model can handle (limited) TSF content. If your character card has a complex plot, you may want to try another model (maybe one with more parameters?).
- Another version of the [main model](https://huggingface.co/Alsebay/Narumashi-11B), trained with a LoRA rank of 128 to reduce underfitting.
Do you know TSF, TS, TG? A lot of models don't really know about these themes, so I ran some experiments finetuning on a TSF dataset.
- **Finetuned with a Chinese novels dataset (R18) to increase accuracy on the TSF theme, which is not very common.
You should include the Chinese/Japanese words for the tags you want (search for them on pixiv) in your character card to trigger them.
This finetuning idea is better suited to Chinese roleplay than English (because I could only find good Chinese datasets about it 🙃; it would be nice if you could open a discussion about English TSF datasets). But it still affects the model's writing style, so responses may be a bit less GPT-like in both Chinese and English.**
- **Finetuned from model:** Sao10K/Fimbulvetr-11B-v2. Thanks a lot, Sao10K :)
## 8k Context Length
BTW, the original Fimbulvetr and Solar have only a 4k context length, so I extended it 😆.
## GGUF version?
[Here it is](https://huggingface.co/Alsebay/Narumashi-11B-v1.1-GGUF).
## Dataset
All of the data are Chinese novels:
```
Dataset(all are novels):
60% skinsuit
25% possession
5% transform(shapeshift)
10% other
```
# Thanks to Unsloth for a good finetuning tool. This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft", "Roleplay", "roleplay"], "base_model": "Sao10K/Fimbulvetr-11B-v2"} | Alsebay/Narumashi-11B-v1.1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"Roleplay",
"roleplay",
"en",
"base_model:Sao10K/Fimbulvetr-11B-v2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T01:25:45+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #Roleplay #roleplay #en #base_model-Sao10K/Fimbulvetr-11B-v2 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
| # About this model
This model can handle (limited) TSF content. If you Character Card have complex plot, maybe you should try other model (maybe bigger parameter?).
- Another version of main model, which have lora rank is 128 to reduce underfitting.
Do you know TSF, TS, TG? A lot of model don't really know about that, so I do some experiment to finetune TSF dataset.
- Finetuned with Chinese Novels dataset, to increase the accuracy in TSF theme, which is not quite popular.
(R18 dataset). You should include chinese/japanese word about tag you want(search it in pixiv) in your character card to trigger it.
This finetune idea is suitable for Chinese Roleplay than English (Becaue I could only find good Chinese datasets about it , it is nice that if you can open a discussion about English TSF datasets). But it still affect the models writing styles, so maybe less GPT-like response in both Chinese and English?.
- Finetuned from model : Sao10K/Fimbulvetr-11B-v2 . Thank Sao10K a lot :)
## 8k Context Length BTW, the original Fimbulvetr and Solar have only 4k context length, so I extended it .
## GGUF version? here is it.
## Dataset
All chinese novels dataset
# Thank Unsloth for good finetuning tool. This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/> | [
"# About this model\n\nThis model can handle (limited) TSF content. If you Character Card have complex plot, maybe you should try other model (maybe bigger parameter?).\n\n- Another version of main model, which have lora rank is 128 to reduce underfitting.\n\nDo you know TSF, TS, TG? A lot of model don't really know about that, so I do some experiment to finetune TSF dataset.\n\n- Finetuned with Chinese Novels dataset, to increase the accuracy in TSF theme, which is not quite popular.\n (R18 dataset). You should include chinese/japanese word about tag you want(search it in pixiv) in your character card to trigger it.\n This finetune idea is suitable for Chinese Roleplay than English (Becaue I could only find good Chinese datasets about it , it is nice that if you can open a discussion about English TSF datasets). But it still affect the models writing styles, so maybe less GPT-like response in both Chinese and English?.\n- Finetuned from model : Sao10K/Fimbulvetr-11B-v2 . Thank Sao10K a lot :)",
"## 8k Context Length BTW, the original Fimbulvetr and Solar have only 4k context length, so I extended it .",
"## GGUF version? here is it.",
"## Dataset\nAll chinese novels dataset",
"# Thank Unsloth for good finetuning tool. This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #Roleplay #roleplay #en #base_model-Sao10K/Fimbulvetr-11B-v2 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# About this model\n\nThis model can handle (limited) TSF content. If you Character Card have complex plot, maybe you should try other model (maybe bigger parameter?).\n\n- Another version of main model, which have lora rank is 128 to reduce underfitting.\n\nDo you know TSF, TS, TG? A lot of model don't really know about that, so I do some experiment to finetune TSF dataset.\n\n- Finetuned with Chinese Novels dataset, to increase the accuracy in TSF theme, which is not quite popular.\n (R18 dataset). You should include chinese/japanese word about tag you want(search it in pixiv) in your character card to trigger it.\n This finetune idea is suitable for Chinese Roleplay than English (Becaue I could only find good Chinese datasets about it , it is nice that if you can open a discussion about English TSF datasets). But it still affect the models writing styles, so maybe less GPT-like response in both Chinese and English?.\n- Finetuned from model : Sao10K/Fimbulvetr-11B-v2 . Thank Sao10K a lot :)",
"## 8k Context Length BTW, the original Fimbulvetr and Solar have only 4k context length, so I extended it .",
"## GGUF version? here is it.",
"## Dataset\nAll chinese novels dataset",
"# Thank Unsloth for good finetuning tool. This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
fill-mask | transformers |
# HPLT Bert for Kyrgyz
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["ky"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_ky | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ky",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:25:56+00:00 | [
"2403.14009"
] | [
"ky"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ky #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Kyrgyz
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Kyrgyz\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ky #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Kyrgyz\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Latin
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["la"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_la | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"la",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:26:20+00:00 | [
"2403.14009"
] | [
"la"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #la #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Latin
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Latin\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #la #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Latin\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Lithuanian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
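The task-specific auto classes can be loaded the same way, as long as `trust_remote_code=True` is passed. Below is a minimal, untested sketch of attaching the sequence-classification head for fine-tuning; the `num_labels` value and the example sentence are placeholder assumptions, not part of this release.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_lt")
model = AutoModelForSequenceClassification.from_pretrained(
    "HPLT/hplt_bert_base_lt",
    trust_remote_code=True,  # needed for the custom LTG-BERT wrapper
    num_labels=3,            # assumption: a hypothetical 3-class task
)

inputs = tokenizer("Example input text.", return_tensors="pt")
logits = model(**inputs).logits  # shape (1, 3); the head is randomly initialised until fine-tuned
```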
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["lt"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_lt | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"lt",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:26:43+00:00 | [
"2403.14009"
] | [
"lt"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #lt #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Lithuanian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Lithuanian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #lt #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Lithuanian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Latvian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
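The plain `AutoModel` class exposes the encoder itself, which can be used for sentence embeddings. The sketch below assumes the custom wrapper returns a standard `last_hidden_state`; the mean-pooling strategy and the example sentences are illustrative choices, not an official recommendation.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_lv")
model = AutoModel.from_pretrained("HPLT/hplt_bert_base_lv", trust_remote_code=True)

sentences = ["First example sentence.", "Second example sentence."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state            # (batch, seq_len, 768)

mask = batch.attention_mask.unsqueeze(-1)                 # zero out padding positions
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1) # mean-pooled sentence vectors
```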
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["lv"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_lv | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"lv",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:27:09+00:00 | [
"2403.14009"
] | [
"lv"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #lv #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Latvian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Latvian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #lv #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Latvian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Macedonian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
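Token-level tasks such as NER go through the token-classification wrapper. The snippet below is a rough sketch only: the tag-set size and the input string are assumptions, and the classification head stays untrained until fine-tuned.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_mk")
model = AutoModelForTokenClassification.from_pretrained(
    "HPLT/hplt_bert_base_mk",
    trust_remote_code=True,
    num_labels=9,  # assumption: e.g. a CoNLL-style BIO tag set
)

inputs = tokenizer("Example input text.", return_tensors="pt")
logits = model(**inputs).logits  # (1, seq_len, num_labels), one prediction per subword
```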
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["mk"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_mk | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"mk",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:27:32+00:00 | [
"2403.14009"
] | [
"mk"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #mk #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Macedonian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Macedonian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #mk #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Macedonian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Malayalam
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
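Beyond the argmax decoding shown above, the masked-LM head can also return ranked candidates for the masked position. The snippet below is a small variation on the card's own example; it keeps the English test sentence from that snippet purely for illustration, and a Malayalam sentence would be the natural input for this checkpoint.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_ml")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_ml", trust_remote_code=True)

mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
# Substitute a Malayalam sentence here; the English string just mirrors the card's snippet.
inputs = tokenizer("It's a beautiful[MASK].", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs.input_ids[0] == mask_id).nonzero(as_tuple=True)[0]
top5 = torch.topk(logits[0, mask_pos], k=5, dim=-1).indices[0]
print([tokenizer.decode([int(token_id)]) for token_id in top5])  # five most likely fillers
```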
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["ml"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_ml | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ml",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:27:56+00:00 | [
"2403.14009"
] | [
"ml"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ml #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Malayalam
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Malayalam\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ml #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Malayalam\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
text-to-image | null | # SDXL Lightning - Onnx Olive DirectML Optimized
## Original Model
https://huggingface.co/ByteDance/SDXL-Lightning
## C# Inference Demo
https://github.com/saddam213/OnnxStack
```csharp
// Create Pipeline
var pipeline = StableDiffusionXLPipeline.CreatePipeline("D:\\Repositories\\SDXL-Lightning-onnx");
// Prompt
var promptOptions = new PromptOptions
{
Prompt = "photo of a cat drinking at a bar"
};
// Scheduler Options
var schedulerOptions = pipeline.DefaultSchedulerOptions with
{
SchedulerType = SchedulerType.Euler,
InferenceSteps = 8,
GuidanceScale = 0,
BetaSchedule = BetaScheduleType.Linear
};
// Run pipeline
var result = await pipeline.GenerateImageAsync(promptOptions, schedulerOptions);
// Save Image Result
await result.SaveAsync("Result.png");
```
## Inference Result
 | {"pipeline_tag": "text-to-image"} | saddam213/SDXL-Lightning-onnx | null | [
"onnx",
"text-to-image",
"region:us"
] | null | 2024-04-22T01:28:16+00:00 | [] | [] | TAGS
#onnx #text-to-image #region-us
| # SDXL Lightning - Onnx Olive DirectML Optimized
## Original Model
URL
## C# Inference Demo
URL
## Inference Result
!Intro Image | [
"# SDXL Lightning - Onnx Olive DirectML Optimized",
"## Original Model\nURL",
"## C# Inference Demo\nURL",
"## Inference Result\n!Intro Image"
] | [
"TAGS\n#onnx #text-to-image #region-us \n",
"# SDXL Lightning - Onnx Olive DirectML Optimized",
"## Original Model\nURL",
"## C# Inference Demo\nURL",
"## Inference Result\n!Intro Image"
] |
fill-mask | transformers |
# HPLT Bert for Mongolian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
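The extractive question-answering head follows the usual `start_logits`/`end_logits` interface. The sketch below is illustrative only: the question/context pair is made up, and the predicted span is meaningless until the head has been fine-tuned on a QA dataset.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_mn")
model = AutoModelForQuestionAnswering.from_pretrained("HPLT/hplt_bert_base_mn", trust_remote_code=True)

question = "Who trained the model?"  # hypothetical example pair
context = "The HPLT project trained a monolingual model for Mongolian."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits)) + 1
print(tokenizer.decode(inputs.input_ids[0][start:end]))  # predicted answer span
```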
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["mn"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_mn | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"mn",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:28:20+00:00 | [
"2403.14009"
] | [
"mn"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #mn #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Mongolian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Mongolian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #mn #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Mongolian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Marathi
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
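Multiple-choice scoring expects inputs shaped `(batch, num_choices, seq_len)`. The sketch below is written under the assumption that the card's multiple-choice wrapper is registered under the standard auto class; the prompt, the two continuations, and the untrained head are all placeholders.

```python
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_mr")
model = AutoModelForMultipleChoice.from_pretrained("HPLT/hplt_bert_base_mr", trust_remote_code=True)

prompt = "Example prompt."  # hypothetical inputs
choices = ["First continuation.", "Second continuation."]

encoded = tokenizer([prompt] * len(choices), choices, padding=True, return_tensors="pt")
inputs = {name: tensor.unsqueeze(0) for name, tensor in encoded.items()}  # add the batch dimension
logits = model(**inputs).logits  # (1, num_choices); higher score = preferred choice
```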
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["mr"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_mr | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"mr",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:28:48+00:00 | [
"2403.14009"
] | [
"mr"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #mr #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Marathi
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Marathi\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #mr #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Marathi\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Malay
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
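For larger workloads, the usual device-placement and batching patterns apply unchanged. A brief sketch follows, assuming a CUDA device may or may not be available; the example sentences are placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_ms")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_ms", trust_remote_code=True)
model = model.to(device).eval()

batch = tokenizer(
    ["Example sentence one.", "Example sentence two."],
    padding=True,
    return_tensors="pt",
).to(device)

with torch.no_grad():
    logits = model(**batch).logits

print(logits.shape)  # (batch_size, seq_len, vocab_size)
```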
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["ms"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_ms | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ms",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:29:16+00:00 | [
"2403.14009"
] | [
"ms"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ms #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Malay
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Malay\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ms #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Malay\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Maltese
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
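Masked language models are also commonly used as scorers via pseudo-log-likelihood: each position is masked in turn and the log-probability of the true token is summed. The function below is an unoptimised sketch of that idea (one forward pass per token) and is not part of any official evaluation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_mt")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_mt", trust_remote_code=True).eval()
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")

def pseudo_log_likelihood(sentence: str) -> float:
    enc = tokenizer(sentence, return_tensors="pt")
    ids = enc.input_ids
    total = 0.0
    for position in range(1, ids.size(1) - 1):   # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[0, position] = mask_id
        with torch.no_grad():
            logits = model(input_ids=masked, attention_mask=enc.attention_mask).logits
        log_probs = torch.log_softmax(logits[0, position], dim=-1)
        total += log_probs[ids[0, position]].item()
    return total

print(pseudo_log_likelihood("Example sentence."))  # higher (less negative) = more fluent
```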
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["mt"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_mt | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"mt",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:29:42+00:00 | [
"2403.14009"
] | [
"mt"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #mt #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Maltese
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Maltese\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #mt #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Maltese\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Burmese
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
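As an unofficial variation on the example above, the sketch below inspects the top-5 candidates for the masked position instead of taking only the argmax. It reuses the English demo checkpoint from the example; to try this model instead, swap in `HPLT/hplt_bert_base_my` together with a Burmese input sentence.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
inputs = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# locate the masked position and decode the five highest-scoring candidate tokens
mask_position = (inputs.input_ids[0] == mask_id).nonzero(as_tuple=True)[0]
top5_ids = logits[0, mask_position].topk(5, dim=-1).indices[0]
print([tokenizer.decode([token_id]) for token_id in top5_ids])
```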
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["my"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_my | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"my",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:30:04+00:00 | [
"2403.14009"
] | [
"my"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #my #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Burmese
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Burmese\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #my #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Burmese\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Norwegian Bokmål
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
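Purely as an illustration, the following sketch loads the checkpoint with a token-classification head (e.g. for NER-style tagging). The number of labels is a placeholder and the head starts untrained, so it needs fine-tuning on labelled Norwegian data; the snippet assumes the standard `logits` output format.
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_nb")
# num_labels=5 is a placeholder for an assumed five-tag scheme.
model = AutoModelForTokenClassification.from_pretrained("HPLT/hplt_bert_base_nb", trust_remote_code=True, num_labels=5)
inputs = tokenizer("Oslo er hovedstaden i Norge.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # expected shape: (1, sequence_length, 5)
predicted_label_ids = logits.argmax(-1)
print(predicted_label_ids)
```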
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["nb"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_nb | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"nb",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:30:28+00:00 | [
"2403.14009"
] | [
"nb"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #nb #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Norwegian Bokmål
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Norwegian Bokmål\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #nb #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Norwegian Bokmål\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Nepali
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["ne"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_ne | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ne",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:30:52+00:00 | [
"2403.14009"
] | [
"ne"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ne #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Nepali
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Nepali\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ne #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Nepali\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Dutch
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
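A minimal, unofficial sketch of using the plain `AutoModel` encoder for fixed-size sentence embeddings: it assumes the custom model returns a standard output with `last_hidden_state`, and mean pooling over non-padding tokens is just one common choice, not a prescribed recipe.
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_nl")
model = AutoModel.from_pretrained("HPLT/hplt_bert_base_nl", trust_remote_code=True)
sentences = ["Dit is een zin.", "Dit is nog een zin."]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # assumed shape: (batch, seq_len, 768)
mask = inputs.attention_mask.unsqueeze(-1)  # (batch, seq_len, 1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)  # mean pooling over real tokens -> (batch, 768)
print(embeddings.shape)
```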
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["nl"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_nl | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"nl",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:31:23+00:00 | [
"2403.14009"
] | [
"nl"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #nl #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Dutch
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Dutch\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #nl #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Dutch\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Norwegian Nynorsk
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["nn"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_nn | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"nn",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:31:46+00:00 | [
"2403.14009"
] | [
"nn"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #nn #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Norwegian Nynorsk
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Norwegian Nynorsk\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #nn #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Norwegian Nynorsk\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Panjabi
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["pa"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_pa | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"pa",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:32:09+00:00 | [
"2403.14009"
] | [
"pa"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #pa #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Panjabi
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Panjabi\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #pa #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Panjabi\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
null | null | This is the SmartEdit-7B checkpoint.
---
license: apache-2.0
---
| {} | TencentARC/SmartEdit-7B | null | [
"region:us"
] | null | 2024-04-22T01:32:10+00:00 | [] | [] | TAGS
#region-us
| This is SmartEdit-7B checkpoint.
---
license: apache-2.0
---
| [] | [
"TAGS\n#region-us \n"
] |
fill-mask | transformers |
# HPLT Bert for Polish
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so called masked language models. In particular, we used the modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
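As an illustrative, unofficial sketch, the checkpoint can also be loaded with an extractive question-answering head. The head is untrained, so the extracted span is meaningless until the model is fine-tuned on a Polish QA dataset; the question/context pair is a placeholder and the snippet assumes the standard `start_logits`/`end_logits` output format.
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_pl")
model = AutoModelForQuestionAnswering.from_pretrained("HPLT/hplt_bert_base_pl", trust_remote_code=True)
question = "Jak nazywa się stolica Polski?"
context = "Stolicą Polski jest Warszawa."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# pick the most likely start/end positions and decode the corresponding span
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs.input_ids[0, start:end + 1]))
```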
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["pl"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_pl | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"pl",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:32:32+00:00 | [
"2403.14009"
] | [
"pl"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #pl #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Polish
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Polish\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #pl #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Polish\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
null | null | This is the SmartEdit-13B checkpoint.
---
license: apache-2.0
---
| {} | TencentARC/SmartEdit-13B | null | [
"region:us"
] | null | 2024-04-22T01:32:41+00:00 | [] | [] | TAGS
#region-us
| This is SmartEdit-13B checkpoint.
---
license: apache-2.0
---
| [] | [
"TAGS\n#region-us \n"
] |
fill-mask | transformers |
# HPLT Bert for Pushto
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["ps"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_ps | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ps",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:32:54+00:00 | [
"2403.14009"
] | [
"ps"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ps #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Pushto
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Pushto\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ps #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Pushto\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-qlora
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0585
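Not part of the original card: a minimal usage sketch, assuming the QLoRA adapter weights in this repository can be attached to the listed base model with PEFT; the prompt is purely illustrative.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter from this repository.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "SF-Foundation/zephyr-7b-sft-qlora")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "Write a short haiku about the sea."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```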
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 1.0469 | 1.0000 | 17428 | 1.0585 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "zephyr-7b-sft-qlora", "results": []}]} | SF-Foundation/zephyr-7b-sft-qlora | null | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2024-04-22T01:32:55+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #mistral #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #4-bit #region-us
| zephyr-7b-sft-qlora
===================
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0585
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* PEFT 0.7.1
* Transformers 4.40.0
* Pytorch 2.1.2+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.0\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #mistral #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #4-bit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.0\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral7binstruct_summarize
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4155
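Not part of the original card: a minimal usage sketch, assuming the adapter in this repository loads onto the listed base model; the `[INST]` prompt format is borrowed from the base instruct model and may differ from the format used during fine-tuning.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "santhoshml/mistral7binstruct_summarize")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

prompt = "[INST] Summarize the following text:\nLong input text goes here. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```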
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6497 | 0.2083 | 25 | 1.4642 |
| 1.5558 | 0.4167 | 50 | 1.4155 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral7binstruct_summarize", "results": []}]} | santhoshml/mistral7binstruct_summarize | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-22T01:33:01+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
| mistral7binstruct\_summarize
============================
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4155
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 1
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: constant
* lr\_scheduler\_warmup\_steps: 0.03
* training\_steps: 50
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
fill-mask | transformers |
# HPLT Bert for Portuguese
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
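As an illustration (not part of the original card), a minimal sketch for the token-classification head; the head is randomly initialised until fine-tuned and `num_labels=5` is an arbitrary placeholder:
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_pt")
model = AutoModelForTokenClassification.from_pretrained(
    "HPLT/hplt_bert_base_pt", num_labels=5, trust_remote_code=True
)

inputs = tokenizer("Uma frase de exemplo em português.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)
print(logits.argmax(-1))  # per-token label ids (placeholders until fine-tuned)
```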
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["pt"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_pt | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"pt",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:33:18+00:00 | [
"2403.14009"
] | [
"pt"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #pt #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Portuguese
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Portuguese\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #pt #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Portuguese\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertin_base_EXIST_detection_spa
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5205
- Accuracy: 0.7903
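Not part of the original card: a minimal inference sketch with the `transformers` pipeline; the label names come from the model config (likely the default `LABEL_0`/`LABEL_1`).
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Gerard-1705/bertin_base_EXIST_detection_spa")
print(clf("Un ejemplo de texto en español para clasificar."))
```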
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 437 | 0.5033 | 0.7590 |
| 0.5573 | 2.0 | 874 | 0.5205 | 0.7903 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bertin-project/bertin-roberta-base-spanish", "model-index": [{"name": "bertin_base_EXIST_detection_spa", "results": []}]} | Gerard-1705/bertin_base_EXIST_detection_spa | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:bertin-project/bertin-roberta-base-spanish",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T01:33:29+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-bertin-project/bertin-roberta-base-spanish #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
| bertin\_base\_EXIST\_detection\_spa
===================================
This model is a fine-tuned version of bertin-project/bertin-roberta-base-spanish on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5205
* Accuracy: 0.7903
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-bertin-project/bertin-roberta-base-spanish #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
fill-mask | transformers |
# HPLT Bert for Romanian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
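As an illustration (not part of the original card), a minimal sketch for extracting sentence embeddings with `AutoModel`, assuming the custom encoder returns a standard `last_hidden_state`:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_ro")
model = AutoModel.from_pretrained("HPLT/hplt_bert_base_ro", trust_remote_code=True)

inputs = tokenizer("O propoziție de exemplu în limba română.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)

# Mean-pool over non-padding tokens to get one 768-dimensional sentence vector.
mask = inputs.attention_mask.unsqueeze(-1)
embedding = (hidden * mask).sum(1) / mask.sum(1)
print(embedding.shape)
```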
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["ro"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_ro | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ro",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:33:43+00:00 | [
"2403.14009"
] | [
"ro"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ro #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Romanian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Romanian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ro #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Romanian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Russian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
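As an illustration (not part of the original card), the masked-language-modelling head can also be queried for its top predictions; the Russian example sentence below is ours, not from the card:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_ru")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_ru", trust_remote_code=True)

inputs = tokenizer("Россия очень красивая[MASK].", return_tensors="pt")
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
mask_pos = (inputs.input_ids == mask_id).nonzero(as_tuple=True)[1]

with torch.no_grad():
    logits = model(**inputs).logits
top5 = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top5.tolist()))
```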
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["ru"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_ru | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ru",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:34:02+00:00 | [
"2403.14009"
] | [
"ru"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ru #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Russian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Russian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ru #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Russian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Sinhala
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
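As an illustration (not part of the original card), a minimal sketch for the question-answering head; until it is fine-tuned on a QA dataset, the predicted span is arbitrary:
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_si")
model = AutoModelForQuestionAnswering.from_pretrained("HPLT/hplt_bert_base_si", trust_remote_code=True)

inputs = tokenizer("An example question?", "An example context passage.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
print(tokenizer.decode(inputs.input_ids[0, start:end + 1]))  # arbitrary until fine-tuned
```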
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["si"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_si | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"si",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:34:26+00:00 | [
"2403.14009"
] | [
"si"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #si #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Sinhala
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Sinhala\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #si #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Sinhala\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the file actually stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub("CoderMan-O/ppo-BipedalWalker-v3", "ppo-BipedalWalker-v3.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["BipedalWalker-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "BipedalWalker-v3", "type": "BipedalWalker-v3"}, "metrics": [{"type": "mean_reward", "value": "-58.54 +/- 86.20", "name": "mean_reward", "verified": false}]}]}]} | CoderMan-O/ppo-BipedalWalker-v3 | null | [
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-22T01:34:47+00:00 | [] | [] | TAGS
#stable-baselines3 #BipedalWalker-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing BipedalWalker-v3
This is a trained model of a PPO agent playing BipedalWalker-v3
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing BipedalWalker-v3\nThis is a trained model of a PPO agent playing BipedalWalker-v3\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #BipedalWalker-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing BipedalWalker-v3\nThis is a trained model of a PPO agent playing BipedalWalker-v3\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
fill-mask | transformers |
# HPLT Bert for Slovak
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
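As an illustration (not part of the original card), a minimal sketch for the multiple-choice head; the Slovak prompt and choices are ours, and the untrained head picks an arbitrary option:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_sk")
model = AutoModelForMultipleChoice.from_pretrained("HPLT/hplt_bert_base_sk", trust_remote_code=True)

prompt = "Hlavné mesto Slovenska je"
choices = ["Bratislava.", "Praha."]
enc = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
# The multiple-choice head expects inputs of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_choices)
print(choices[logits.argmax(-1).item()])
```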
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["sk"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_sk | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"sk",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:34:53+00:00 | [
"2403.14009"
] | [
"sk"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #sk #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Slovak
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Slovak\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #sk #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Slovak\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Slovenian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
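As an illustration (not part of the original card), the example above can be adapted to the Slovenian checkpoint itself; the sentence and its expected completion are ours:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_sl")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_sl", trust_remote_code=True)

mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
inputs = tokenizer("Ljubljana je glavno mesto[MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
ids = torch.where(inputs.input_ids == mask_id, logits.argmax(-1), inputs.input_ids)
print(tokenizer.decode(ids[0].tolist()))
```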
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["sl"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_sl | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"sl",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:35:16+00:00 | [
"2403.14009"
] | [
"sl"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #sl #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Slovenian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Slovenian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #sl #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Slovenian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Somali
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["so"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_so | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"so",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:35:44+00:00 | [
"2403.14009"
] | [
"so"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #so #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Somali
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Somali\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #so #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Somali\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Albanian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
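For instance, a minimal sketch of loading the token-classification head for fine-tuning; the label count and the Albanian example sentence are illustrative assumptions, and the classification head starts randomly initialised:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_sq")
model = AutoModelForTokenClassification.from_pretrained(
    "HPLT/hplt_bert_base_sq",
    trust_remote_code=True,
    num_labels=9,  # assumed label set size, e.g. a CoNLL-style NER scheme
)

inputs = tokenizer("Tirana është kryeqyteti i Shqipërisë.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, sequence_length, num_labels)
print(logits.shape)
```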
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["sq"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_sq | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"sq",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:36:07+00:00 | [
"2403.14009"
] | [
"sq"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #sq #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Albanian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Albanian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #sq #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Albanian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Swedish
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
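For example, a minimal sketch of loading the sequence-classification head for fine-tuning; the label count and the Swedish example sentence are illustrative assumptions, and the classification head starts randomly initialised:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_sv")
model = AutoModelForSequenceClassification.from_pretrained(
    "HPLT/hplt_bert_base_sv",
    trust_remote_code=True,
    num_labels=2,  # assumed binary classification task
)

inputs = tokenizer("Det här är en mening.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2])
```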
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["sv"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_sv | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"sv",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:36:38+00:00 | [
"2403.14009"
] | [
"sv"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #sv #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Swedish
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Swedish\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #sv #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Swedish\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
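
As a rough illustration, and only as an assumption since the card does not show the actual construction, the values above map onto a `transformers.BitsAndBytesConfig` along these lines:

```python
import torch
from transformers import BitsAndBytesConfig

# Illustrative reconstruction of the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```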
### Framework versions
- PEFT 0.7.0.dev0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_ChatGPT_tiny_Seed103 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-22T01:36:53+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
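
A minimal sketch of loading this repository for inference, under the assumption that it stores PEFT adapter weights for the base model named in the metadata (`TinyLlama/TinyLlama-1.1B-Chat-v1.0`):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumption: the repo holds a PEFT adapter; quantization mirrors the config above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0", quantization_config=bnb_config
)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(
    base, "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_ChatGPT_tiny_Seed103"
)
model.eval()
```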
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_ChatGPT_tiny_Seed103 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-22T01:36:59+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
fill-mask | transformers |
# HPLT Bert for Swahili
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
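As another illustration, a minimal sketch of using the plain encoder (`AutoModel`) to produce sentence embeddings by mean-pooling the last hidden states; the pooling choice, the Swahili example sentence, and the assumption that the wrapper exposes standard `last_hidden_state` outputs are all illustrative, not prescribed by the card:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_sw")
model = AutoModel.from_pretrained("HPLT/hplt_bert_base_sw", trust_remote_code=True)

inputs = tokenizer("Habari ya asubuhi.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state             # (1, seq_len, hidden_size)
mask = inputs.attention_mask.unsqueeze(-1).float()          # (1, seq_len, 1)
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # (1, hidden_size)
print(embedding.shape)
```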
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["sw"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_sw | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"sw",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:37:06+00:00 | [
"2403.14009"
] | [
"sw"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #sw #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Swahili
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Swahili\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #sw #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Swahili\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Tamil
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
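The snippet above demonstrates the masked-LM head with the English sibling checkpoint; as a sketch of one of the other heads, the example below loads this card's own checkpoint (`HPLT/hplt_bert_base_ta`) behind `AutoModelForSequenceClassification`. It assumes the custom wrapper follows the standard `num_labels`/`logits` conventions; the classification head is randomly initialized, so the outputs only become meaningful after fine-tuning on labelled Tamil data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_ta")
model = AutoModelForSequenceClassification.from_pretrained(
    "HPLT/hplt_bert_base_ta", trust_remote_code=True, num_labels=2
)

# Replace the placeholder with real Tamil input text.
batch = tokenizer("Your Tamil sentence here.", return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits

print(logits.shape)  # (1, 2): one score per (still untrained) class
```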
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["ta"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_ta | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ta",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:37:30+00:00 | [
"2403.14009"
] | [
"ta"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ta #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Tamil
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Tamil\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ta #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Tamil\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Telugu
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
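As an illustration of the token-level head, the sketch below loads this card's checkpoint (`HPLT/hplt_bert_base_te`) behind `AutoModelForTokenClassification`; the five-label tag set is a hypothetical example and the head is randomly initialized until fine-tuned, so treat this purely as an API sketch.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_te")
model = AutoModelForTokenClassification.from_pretrained(
    "HPLT/hplt_bert_base_te", trust_remote_code=True, num_labels=5  # hypothetical tag set size
)

# Replace the placeholder with real Telugu input text.
batch = tokenizer("Your Telugu sentence here.", return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits  # (1, sequence_length, 5)

# One predicted tag id per subword token; meaningful only after fine-tuning.
print(logits.argmax(-1))
```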
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["te"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_te | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"te",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:37:54+00:00 | [
"2403.14009"
] | [
"te"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #te #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Telugu
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Telugu\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #te #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Telugu\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Thai
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
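For the extractive question-answering head, a minimal sketch with this card's checkpoint (`HPLT/hplt_bert_base_th`) is shown below. It assumes the wrapper returns the standard `start_logits`/`end_logits`, and the span heads are untrained, so the decoded answer is arbitrary until the model is fine-tuned on a Thai QA dataset.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_th")
model = AutoModelForQuestionAnswering.from_pretrained(
    "HPLT/hplt_bert_base_th", trust_remote_code=True
)

# Extractive QA encodes a (question, context) pair; use real Thai text here.
batch = tokenizer("Your question?", "Your Thai context paragraph.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)

start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
print(tokenizer.decode(batch.input_ids[0, start:end + 1]))
```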
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["th"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_th | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"th",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:38:17+00:00 | [
"2403.14009"
] | [
"th"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #th #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Thai
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Thai\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #th #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Thai\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Tagalog
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
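Beyond the task-specific heads, the bare encoder can be used for sentence embeddings. The sketch below loads this card's checkpoint (`HPLT/hplt_bert_base_tl`) with `AutoModel` and mean-pools the hidden states; it assumes the wrapper exposes `last_hidden_state` like a standard BERT encoder.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_tl")
model = AutoModel.from_pretrained("HPLT/hplt_bert_base_tl", trust_remote_code=True)

sentences = ["Magandang umaga.", "Salamat po."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (2, sequence_length, 768)

# Mean-pool over real tokens only (padding positions are masked out).
mask = batch.attention_mask.unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)
print(embeddings.shape)  # (2, 768)
```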
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["tl"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_tl | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"tl",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:38:43+00:00 | [
"2403.14009"
] | [
"tl"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #tl #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Tagalog
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Tagalog\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #tl #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Tagalog\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Turkish
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
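The card's example above uses the English sibling checkpoint and keeps only the single best token; the sketch below applies the same masked-LM pattern to this card's Turkish checkpoint and ranks the top five candidates for the masked position. The Turkish prompt is only an illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_tr")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_tr", trust_remote_code=True)

mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
batch = tokenizer("Bugün hava çok[MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits

# Find the masked position and list the five highest-scoring vocabulary items.
position = (batch.input_ids[0] == mask_id).nonzero(as_tuple=True)[0]
top5 = logits[0, position].topk(5, dim=-1).indices[0]
print([tokenizer.decode([i]) for i in top5.tolist()])
```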
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["tr"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_tr | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"tr",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:39:09+00:00 | [
"2403.14009"
] | [
"tr"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #tr #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Turkish
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Turkish\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #tr #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Turkish\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
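Until the card is filled in, here is a minimal sketch under the assumption that the checkpoint follows the standard `transformers` text-classification API (the repository tags list `bert` and `text-classification`); the label set is not documented, so the example only prints whatever `id2label` mapping ships with the config, and the sample query is an illustrative guess.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("mrinoyb2/bankbert")
model = AutoModelForSequenceClassification.from_pretrained("mrinoyb2/bankbert")

# Example banking-domain query; the intended inputs are not documented.
batch = tokenizer("I would like to close my savings account.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**batch).logits, dim=-1)

print(model.config.id2label)
print(probs)
```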
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | mrinoyb2/bankbert | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T01:39:10+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
fill-mask | transformers |
# HPLT Bert for Tatar
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
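If the custom wrapper registers cleanly with the high-level `pipeline` API, masked-token prediction for this card's Tatar checkpoint can also be run in a few lines, as sketched below; otherwise, fall back to the manual example above (which, note, loads the English sibling checkpoint). The Tatar prompt is only an illustration.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="HPLT/hplt_bert_base_tt", trust_remote_code=True)

# Each candidate dict carries the predicted token string and its score.
for candidate in fill("Казан бик матур[MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```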
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["tt"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_tt | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"tt",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:39:34+00:00 | [
"2403.14009"
] | [
"tt"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #tt #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Tatar
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Tatar\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #tt #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Tatar\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_starcoder2
This model is a fine-tuned version of [bigcode/starcoder2-7b](https://huggingface.co/bigcode/starcoder2-7b) on an unknown dataset.
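This repository contains PEFT adapter weights rather than a full model, so a plausible way to try it is to load the base checkpoint and attach the adapter with `peft`, as sketched below; the prompt, dtype and device settings are assumptions, and the adapter's intended prompting style is not documented.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigcode/starcoder2-7b"
adapter_id = "Coolian/finetune_starcoder2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"  # device_map needs the accelerate package
)

# Attach the adapter trained in this repository on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```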
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 1000
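For reference, the values listed above map roughly onto a `transformers.TrainingArguments` configuration as sketched below; `output_dir` is a placeholder, the Adam betas/epsilon are the optimizer defaults, and the actual TRL `SFTTrainer` invocation is not reproduced here.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="finetune_starcoder2",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
    lr_scheduler_type="cosine",
    warmup_steps=20,
    max_steps=1000,
    seed=0,
)
```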
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.1.2+git98a6632
- Datasets 2.17.1
- Tokenizers 0.19.1 | {"license": "bigcode-openrail-m", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "bigcode/starcoder2-7b", "model-index": [{"name": "finetune_starcoder2", "results": []}]} | Coolian/finetune_starcoder2 | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:bigcode/starcoder2-7b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2024-04-22T01:39:45+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-bigcode/starcoder2-7b #license-bigcode-openrail-m #region-us
|
# finetune_starcoder2
This model is a fine-tuned version of bigcode/starcoder2-7b on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 1000
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.1.2+git98a6632
- Datasets 2.17.1
- Tokenizers 0.19.1 | [
"# finetune_starcoder2\n\nThis model is a fine-tuned version of bigcode/starcoder2-7b on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 20\n- training_steps: 1000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.1.2+git98a6632\n- Datasets 2.17.1\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-bigcode/starcoder2-7b #license-bigcode-openrail-m #region-us \n",
"# finetune_starcoder2\n\nThis model is a fine-tuned version of bigcode/starcoder2-7b on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 20\n- training_steps: 1000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.1.2+git98a6632\n- Datasets 2.17.1\n- Tokenizers 0.19.1"
] |
fill-mask | transformers |
# HPLT Bert for Ukrainian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModeltForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["uk"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_uk | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"uk",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:40:00+00:00 | [
"2403.14009"
] | [
"uk"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #uk #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Ukrainian
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Ukrainian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #uk #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Ukrainian\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Urdu
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModeltForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["ur"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_ur | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ur",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:40:25+00:00 | [
"2403.14009"
] | [
"ur"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ur #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Urdu
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Urdu\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #ur #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Urdu\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Uzbek
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModeltForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["uz"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_uz | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"uz",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:40:51+00:00 | [
"2403.14009"
] | [
"uz"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #uz #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Uzbek
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Uzbek\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #uz #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Uzbek\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Vietnamese
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModeltForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["vi"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_vi | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"vi",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:41:15+00:00 | [
"2403.14009"
] | [
"vi"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #vi #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Vietnamese
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Vietnamese\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #vi #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Vietnamese\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
fill-mask | transformers |
# HPLT Bert for Chinese
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_en")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_en", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModeltForMultipleChoice`.
## Cite us
```bibtex
@misc{degibert2024new,
title={A New Massive Multilingual Dataset for High-Performance Language Technologies},
author={Ona de Gibert and Graeme Nail and Nikolay Arefyev and Marta Bañón and Jelmer van der Linde and Shaoxiong Ji and Jaume Zaragoza-Bernabeu and Mikko Aulamo and Gema Ramírez-Sánchez and Andrey Kutuzov and Sampo Pyysalo and Stephan Oepen and Jörg Tiedemann},
year={2024},
eprint={2403.14009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["zh"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "datasets": ["HPLT/hplt_monolingual_v1_2"], "inference": false} | HPLT/hplt_bert_base_zh | null | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"zh",
"dataset:HPLT/hplt_monolingual_v1_2",
"arxiv:2403.14009",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-22T01:41:41+00:00 | [
"2403.14009"
] | [
"zh"
] | TAGS
#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #zh #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us
|
# HPLT Bert for Chinese
<img src="URL width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the HPLT project.
It is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.
A monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our language model training report.
The training code.
The training statistics of all 75 runs
## Example usage
This model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.
The following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.
## Cite us
| [
"# HPLT Bert for Chinese\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] | [
"TAGS\n#transformers #pytorch #fill-mask #BERT #HPLT #encoder #custom_code #zh #dataset-HPLT/hplt_monolingual_v1_2 #arxiv-2403.14009 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# HPLT Bert for Chinese\n\n<img src=\"URL width=12.5%>\n\nThis is one of the encoder-only monolingual language models trained as a first release by the HPLT project.\nIt is a so called masked language models. In particular, we used the modification of the classic BERT model named LTG-BERT.\n\nA monolingual LTG-BERT model is trained for every major language in the HPLT 1.2 data release (*75* models total).\n\nAll the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:\n- hidden size: 768\n- attention heads: 12\n- layers: 12\n- vocabulary size: 32768\n\nEvery model uses its own tokenizer trained on language-specific HPLT data. \nSee sizes of the training corpora, evaluation results and more in our language model training report.\n\nThe training code.\n\nThe training statistics of all 75 runs",
"## Example usage\n\nThis model currently needs a custom wrapper from 'modeling_ltgbert.py', you should therefore load the model with 'trust_remote_code=True'.\n\n\n\nThe following classes are currently implemented: 'AutoModel', 'AutoModelMaskedLM', 'AutoModelForSequenceClassification', 'AutoModelForTokenClassification', 'AutoModelForQuestionAnswering' and 'AutoModeltForMultipleChoice'.",
"## Cite us"
] |
null | null |
<p align="center">
<img style="width: 20%;" src="llasmol.png">
</p>
**Paper**: [LlaSMol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset]()
**Page**: [https://osu-nlp-group.github.io/LlaSMol](https://osu-nlp-group.github.io/LlaSMol)
**Code**: [https://github.com/OSU-NLP-Group/LlaSMol](https://github.com/OSU-NLP-Group/LlaSMol)
**Models**:
- LlaSMol-Galactica-6.7B: [https://huggingface.co/osunlp/LlaSMol-Galactica-6.7B](https://huggingface.co/osunlp/LlaSMol-Galactica-6.7B)
- LlaSMol-Llama2-7B: [https://huggingface.co/osunlp/LlaSMol-Llama2-7B](https://huggingface.co/osunlp/LlaSMol-Llama2-7B)
- LlaSMol-CodeLlama-7B: [https://huggingface.co/osunlp/LlaSMol-CodeLlama-7B](https://huggingface.co/osunlp/LlaSMol-CodeLlama-7B)
- LlaSMol-Mistral-7B: [https://huggingface.co/osunlp/LlaSMol-Mistral-7B](https://huggingface.co/osunlp/LlaSMol-Mistral-7B)
LlaSMol-Llama2-7B is an LLM for chemistry. It is based on [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) and tuned on our [SMolInstruct](https://huggingface.co/datasets/osunlp/SMolInstruct) dataset with LoRA. This repo contains the weight of the low-rank adapter.
## ⚔️ Usage
For instructions to run the model, please refer to our [repository](https://github.com/OSU-NLP-Group/LlaSMol).
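For a rough illustration only (the repository above has the authoritative instructions), a minimal sketch of loading this LoRA adapter on top of the Llama-2-7B base with `peft` might look like the following; the prompt template here is a guess and not the format defined by SMolInstruct.

```python
# Hedged sketch: load the base model, attach the LlaSMol LoRA adapter, and generate.
# The prompt format is an assumption -- consult the LlaSMol repository for the real template.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "osunlp/LlaSMol-Llama2-7B"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires `accelerate`
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Query: What is the molecular formula of aspirin?\nResponse:"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```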
## 🚨 Limitations
While the model is carefully trained, we do not guarantee its effectiveness. The model may output incorrect or inaccurate information. Please use it at your own risk.
Additionally, the model is built not as a mature product but solely for research purposes. It may generate harmful or biased information. We emphatically urge all users to adhere to the highest ethical standards when using the model, including maintaining fairness, transparency, and responsibility in their research. Any usage of the dataset that may lead to harm or pose a detriment to society is strictly **forbidden**.
## 📚 Citation
If our paper or related resources prove valuable to your research, we kindly ask for citation. Please feel free to contact us with any inquiries.
```
```
| {"language": ["en"], "license": "cc-by-4.0", "tags": ["instruction tuning", "chemistry", "molecule", "small molecule"]} | osunlp/LlaSMol-Llama2-7B | null | [
"instruction tuning",
"chemistry",
"molecule",
"small molecule",
"en",
"license:cc-by-4.0",
"region:us"
] | null | 2024-04-22T01:45:30+00:00 | [] | [
"en"
] | TAGS
#instruction tuning #chemistry #molecule #small molecule #en #license-cc-by-4.0 #region-us
|
<p align="center">
<img style="width: 20%;" src="URL">
</p>
Paper: [LlaSMol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset]()
Page: URL
Code: URL
Models:
- LlaSMol-Galactica-6.7B: URL
- LlaSMol-Llama2-7B: URL
- LlaSMol-CodeLlama-7B: URL
- LlaSMol-Mistral-7B: URL
LlaSMol-Llama2-7B is an LLM for chemistry. It is based on meta-llama/Llama-2-7b-hf and tuned on our SMolInstruct dataset with LoRA. This repo contains the weight of the low-rank adapter.
## ️ Usage
For instructions to run the model, please refer to our repository.
## Limitations
While the model is carefully trained, we do not guarantee its effectiveness. The model may output incorrect or inaccurate information. Please use it at your own risk.
Additionally, the model is built as a mature product but solely for research purpose. It may generate harmful or biased information. We emphatically urge all users to adhere to the highest ethical standards when using the model, including maintaining fairness, transparency, and responsibility in their research. Any usage of the dataset that may lead to harm or pose a detriment to society is strictly forbidden.
## Citation
If our paper or related resources prove valuable to your research, we kindly ask for citation. Please feel free to contact us with any inquiries.
| [
"## ️ Usage\n\nFor instructions to run the model, please refer to our repository.",
"## Limitations\n\nWhile the model is carefully trained, we do not guarantee its effectiveness. The model may output incorrect or inaccurate information. Please use it at your own risk.\n\nAdditionally, the model is built as a mature product but solely for research purpose. It may generate harmful or biased information. We emphatically urge all users to adhere to the highest ethical standards when using the model, including maintaining fairness, transparency, and responsibility in their research. Any usage of the dataset that may lead to harm or pose a detriment to society is strictly forbidden.",
"## Citation\nIf our paper or related resources prove valuable to your research, we kindly ask for citation. Please feel free to contact us with any inquiries."
] | [
"TAGS\n#instruction tuning #chemistry #molecule #small molecule #en #license-cc-by-4.0 #region-us \n",
"## ️ Usage\n\nFor instructions to run the model, please refer to our repository.",
"## Limitations\n\nWhile the model is carefully trained, we do not guarantee its effectiveness. The model may output incorrect or inaccurate information. Please use it at your own risk.\n\nAdditionally, the model is built as a mature product but solely for research purpose. It may generate harmful or biased information. We emphatically urge all users to adhere to the highest ethical standards when using the model, including maintaining fairness, transparency, and responsibility in their research. Any usage of the dataset that may lead to harm or pose a detriment to society is strictly forbidden.",
"## Citation\nIf our paper or related resources prove valuable to your research, we kindly ask for citation. Please feel free to contact us with any inquiries."
] |
null | null | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/Llama-3-13B-Instruct-v0.1-GGUF-smashed-smashed and below it, a specific filename to download, such as: Llama-3-13B-Instruct-v0.1.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/Llama-3-13B-Instruct-v0.1-GGUF-smashed-smashed Llama-3-13B-Instruct-v0.1.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/Llama-3-13B-Instruct-v0.1-GGUF-smashed-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/Llama-3-13B-Instruct-v0.1-GGUF-smashed-smashed Llama-3-13B-Instruct-v0.1.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Llama-3-13B-Instruct-v0.1.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Llama-3-13B-Instruct-v0.1.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Llama-3-13B-Instruct-v0.1.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
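As a minimal hedged sketch (module names follow recent LangChain releases and may differ in older versions), wiring one of the GGUF files from this repo into LangChain through llama-cpp-python could look like this:

```python
# Requires `pip install llama-cpp-python langchain-community`.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Llama-3-13B-Instruct-v0.1.IQ3_M.gguf",  # any quant file from this repo
    n_ctx=8192,        # context length; adjust to your prompts and available RAM
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

print(llm.invoke("Write a haiku about model compression."))
```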
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
| {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/Llama-3-13B-Instruct-v0.1-GGUF-smashed | null | [
"gguf",
"pruna-ai",
"region:us"
] | null | 2024-04-22T01:46:24+00:00 | [] | [] | TAGS
#gguf #pruna-ai #region-us
|
[](URL target=)
:
* Step 1: We recommend using the 'huggingface-hub' Python library:
* Step 2: Then you can download any individual model file to the current directory, at high speed, with a command like this:
More advanced huggingface-cli download usage (click to read)
Alternatively, you can also download multiple files at once with a pattern:
For more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI.
To accelerate downloads on fast connections (1Gbit/s or higher), install 'hf\_transfer':
And set environment variable 'HF\_HUB\_ENABLE\_HF\_TRANSFER' to '1':
Windows Command Line users: You can set the environment variable by running 'set HF\_HUB\_ENABLE\_HF\_TRANSFER=1' before the download command.
How to run model in GGUF format?
--------------------------------
* Option A - Introductory example with 'URL' command
Make sure you are using 'URL' from commit d0cee0d or later.
Change '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change '-c 32768' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the '-p ' argument with '-i -ins'
For other parameters and how to use them, please refer to the URL documentation
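As a purely illustrative sketch of those flags (the binary name, model path and prompt template below are assumptions; adjust them to your own llama.cpp build and to this model's chat format):

```python
import subprocess

# Illustrative only: run the llama.cpp example binary with the flags discussed above.
subprocess.run(
    [
        "./main",  # llama.cpp example binary (commit d0cee0d or later)
        "-m", "Llama-3-13B-Instruct-v0.1.IQ3_M.gguf",  # path to the downloaded GGUF file
        "-ngl", "32",  # layers to offload to GPU; drop if you have no GPU acceleration
        "-c", "32768",  # sequence length; longer contexts need far more memory
        "-p", "<s>[INST] Write a story about llamas. [/INST]",  # prompt; use '-i -ins' instead for chat-style use
    ],
    check=True,
)
```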
* Option B - Running in 'text-generation-webui'
Further instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL.
* Option C - Running from Python code
You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
```
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: llama-cpp-python docs.
#### First install the package
Run one of the following commands, according to your system:
#### Simple llama-cpp-python example code
```
* Option D - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* LangChain + llama-cpp-python
* LangChain + ctransformers
Configurations
--------------
The configuration info is in 'smash\_config.json'.
Credits & License
-----------------
The license of the smashed model follows the license of the original model. Please check the license of the original model that provided the base model before using this one. The license of the 'pruna-engine' is here on PyPI.
Want to compress other models?
------------------------------
* Contact us and tell us which model to compress next here.
* Request access to easily compress your own AI models here.
| [
"### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.",
"#### First install the package\n\nRun one of the following commands, according to your system:",
"#### Simple llama-cpp-python example code\n\n```\n\n* Option D - Running with LangChain\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers\n\n\nConfigurations\n--------------\n\n\nThe configuration info are in 'smash\\_config.json'.\n\n\nCredits & License\n-----------------\n\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.\n\n\nWant to compress other models?\n------------------------------\n\n\n* Contact us and tell us which model to compress next here.\n* Request access to easily compress your own AI models here."
] | [
"TAGS\n#gguf #pruna-ai #region-us \n",
"### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.",
"#### First install the package\n\nRun one of the following commands, according to your system:",
"#### Simple llama-cpp-python example code\n\n```\n\n* Option D - Running with LangChain\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers\n\n\nConfigurations\n--------------\n\n\nThe configuration info are in 'smash\\_config.json'.\n\n\nCredits & License\n-----------------\n\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.\n\n\nWant to compress other models?\n------------------------------\n\n\n* Contact us and tell us which model to compress next here.\n* Request access to easily compress your own AI models here."
] |
null | null | ---
license: openrail
---
This model is converted to CoreML for use in odysseyapp.io or other Mac-based Stable Diffusion apps. To add this model to Odyssey, simply follow these instructions: https://odysseyapp.io/guides/custom-models
More information about the model can be found here: https://civitai.com/models/133005/juggernaut-xl | {} | odyssey-ai/juggernautX | null | [
"region:us"
] | null | 2024-04-22T01:46:29+00:00 | [] | [] | TAGS
#region-us
| ---
license: openrail
---
This model is converted to CoreML for use in URL or other Mac-based Stable Diffusion apps. To add this model to Odyssey, simply follow these instructions: URL
More information about the model can be found here: URL | [] | [
"TAGS\n#region-us \n"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
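
A purely illustrative sketch, based only on the repository name `relu-ntnu/bart-large-cnn_v4_trained_on_1500_lr_1e-4` (which suggests a BART summarization fine-tune); the card itself does not confirm the task, so verify before relying on it:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical usage: the repo name suggests a BART summarization fine-tune, but the card does not confirm it.
model_id = "relu-ntnu/bart-large-cnn_v4_trained_on_1500_lr_1e-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Your article text goes here."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```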
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | relu-ntnu/bart-large-cnn_v4_trained_on_1500_lr_1e-4 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T01:46:39+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | relu-ntnu/bart-large-xsum_v4_trained_on_5_lr_1e-4 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T01:47:01+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | relu-ntnu/bart-large-xsum_v4_trained_on_10_lr_1e-4 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T01:47:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | relu-ntnu/bart-large-xsum_v4_trained_on_15_lr_1e-4 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T01:47:33+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | relu-ntnu/bart-large-xsum_v4_trained_on_25_lr_1e-4 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T01:47:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
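Since the card leaves this section empty, the snippet below is a minimal sketch rather than the author's own example. It assumes the checkpoint loads through the standard 🤗 Transformers causal-LM API, using the repository id from this row's metadata (`jsunster/OrpoLlama-3-8B`); the dtype, device placement, and chat-template usage are illustrative assumptions based on the `text-generation` and `conversational` tags.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "jsunster/OrpoLlama-3-8B"  # repository id taken from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative choice; use float32 on CPU-only setups
    device_map="auto",
)

# The "conversational" tag suggests a chat template is defined for this repository;
# if it is not, pass a plain prompt string to the tokenizer instead.
messages = [{"role": "user", "content": "Summarize what ORPO fine-tuning does in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```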
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jsunster/OrpoLlama-3-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-22T01:48:05+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
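The card provides no snippet here, so the following is a minimal sketch under stated assumptions: the repository id comes from this row's metadata, and the name `bart-large-xsum_v4_trained_on_50_lr_1e-4` suggests a BART-large-XSum summarization fine-tune, which the card itself does not confirm.

```python
from transformers import pipeline

model_id = "relu-ntnu/bart-large-xsum_v4_trained_on_50_lr_1e-4"  # repository id from this card's metadata

# The model name suggests a bart-large-xsum fine-tune, so a summarization pipeline is assumed here.
summarizer = pipeline("summarization", model=model_id)

article = (
    "The committee met on Tuesday to discuss the new budget proposal, which "
    "includes increased funding for public transport and local schools."
)
summary = summarizer(article, max_length=40, min_length=5, do_sample=False)
print(summary[0]["summary_text"])
```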
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | relu-ntnu/bart-large-xsum_v4_trained_on_50_lr_1e-4 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T01:48:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |